diff --git a/.buildinfo b/.buildinfo
new file mode 100644
index 00000000..8be9899b
--- /dev/null
+++ b/.buildinfo
@@ -0,0 +1,4 @@
+# Sphinx build info version 1
+# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
+config: dea68f348907df21f0e5f2679bb2671c
+tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/.doctrees/algorithms/algorithms.doctree b/.doctrees/algorithms/algorithms.doctree
new file mode 100644
index 00000000..26b63da2
Binary files /dev/null and b/.doctrees/algorithms/algorithms.doctree differ
diff --git a/.doctrees/algorithms/modules.doctree b/.doctrees/algorithms/modules.doctree
new file mode 100644
index 00000000..d73514cc
Binary files /dev/null and b/.doctrees/algorithms/modules.doctree differ
diff --git a/.doctrees/classes/classes.doctree b/.doctrees/classes/classes.doctree
new file mode 100644
index 00000000..01af737c
Binary files /dev/null and b/.doctrees/classes/classes.doctree differ
diff --git a/.doctrees/classes/modules.doctree b/.doctrees/classes/modules.doctree
new file mode 100644
index 00000000..c4e091fa
Binary files /dev/null and b/.doctrees/classes/modules.doctree differ
diff --git a/.doctrees/core.doctree b/.doctrees/core.doctree
new file mode 100644
index 00000000..813a7fa2
Binary files /dev/null and b/.doctrees/core.doctree differ
diff --git a/.doctrees/drawing/drawing.doctree b/.doctrees/drawing/drawing.doctree
new file mode 100644
index 00000000..463b343e
Binary files /dev/null and b/.doctrees/drawing/drawing.doctree differ
diff --git a/.doctrees/drawing/modules.doctree b/.doctrees/drawing/modules.doctree
new file mode 100644
index 00000000..2af3ff3a
Binary files /dev/null and b/.doctrees/drawing/modules.doctree differ
diff --git a/.doctrees/environment.pickle b/.doctrees/environment.pickle
new file mode 100644
index 00000000..18ef3d96
Binary files /dev/null and b/.doctrees/environment.pickle differ
diff --git a/.doctrees/glossary.doctree b/.doctrees/glossary.doctree
new file mode 100644
index 00000000..b191ce7e
Binary files /dev/null and b/.doctrees/glossary.doctree differ
diff --git a/.doctrees/hypconstructors.doctree b/.doctrees/hypconstructors.doctree
new file mode 100644
index 00000000..52db6c08
Binary files /dev/null and b/.doctrees/hypconstructors.doctree differ
diff --git a/.doctrees/hypergraph101.doctree b/.doctrees/hypergraph101.doctree
new file mode 100644
index 00000000..ea38f6bf
Binary files /dev/null and b/.doctrees/hypergraph101.doctree differ
diff --git a/.doctrees/index.doctree b/.doctrees/index.doctree
new file mode 100644
index 00000000..6a172855
Binary files /dev/null and b/.doctrees/index.doctree differ
diff --git a/.doctrees/install.doctree b/.doctrees/install.doctree
new file mode 100644
index 00000000..c06526c6
Binary files /dev/null and b/.doctrees/install.doctree differ
diff --git a/.doctrees/license.doctree b/.doctrees/license.doctree
new file mode 100644
index 00000000..d941916a
Binary files /dev/null and b/.doctrees/license.doctree differ
diff --git a/.doctrees/modularity.doctree b/.doctrees/modularity.doctree
new file mode 100644
index 00000000..709c6346
Binary files /dev/null and b/.doctrees/modularity.doctree differ
diff --git a/.doctrees/overview/index.doctree b/.doctrees/overview/index.doctree
new file mode 100644
index 00000000..161659e6
Binary files /dev/null and b/.doctrees/overview/index.doctree differ
diff --git a/.doctrees/publications.doctree b/.doctrees/publications.doctree
new file mode 100644
index 00000000..a1a0f82f
Binary files /dev/null and b/.doctrees/publications.doctree differ
diff --git a/.doctrees/reports/modules.doctree b/.doctrees/reports/modules.doctree
new file mode 100644
index 00000000..a154acc9
Binary files /dev/null and b/.doctrees/reports/modules.doctree differ
diff --git a/.doctrees/reports/reports.doctree b/.doctrees/reports/reports.doctree
new file mode 100644
index 00000000..c0ff0d31
Binary files /dev/null and b/.doctrees/reports/reports.doctree differ
diff --git a/.doctrees/widget.doctree b/.doctrees/widget.doctree
new file mode 100644
index 00000000..2402beef
Binary files /dev/null and b/.doctrees/widget.doctree differ
diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 00000000..e69de29b
diff --git a/_images/ModularityScreenShot.png b/_images/ModularityScreenShot.png
new file mode 100644
index 00000000..5978e604
Binary files /dev/null and b/_images/ModularityScreenShot.png differ
diff --git a/_images/WidgetScreenShot.png b/_images/WidgetScreenShot.png
new file mode 100644
index 00000000..6fd160d9
Binary files /dev/null and b/_images/WidgetScreenShot.png differ
diff --git a/_images/biblio_hg.png b/_images/biblio_hg.png
new file mode 100644
index 00000000..8b301423
Binary files /dev/null and b/_images/biblio_hg.png differ
diff --git a/_images/bicolored1.png b/_images/bicolored1.png
new file mode 100644
index 00000000..8b9dae77
Binary files /dev/null and b/_images/bicolored1.png differ
diff --git a/_images/dual.png b/_images/dual.png
new file mode 100644
index 00000000..06dac85d
Binary files /dev/null and b/_images/dual.png differ
diff --git a/_images/dual2.png b/_images/dual2.png
new file mode 100644
index 00000000..2b61434f
Binary files /dev/null and b/_images/dual2.png differ
diff --git a/_images/ex.png b/_images/ex.png
new file mode 100644
index 00000000..1318c4a0
Binary files /dev/null and b/_images/ex.png differ
diff --git a/_images/exgraph.png b/_images/exgraph.png
new file mode 100644
index 00000000..2c4680d1
Binary files /dev/null and b/_images/exgraph.png differ
diff --git a/_images/harrypotter_basic_hyp.png b/_images/harrypotter_basic_hyp.png
new file mode 100644
index 00000000..d546722a
Binary files /dev/null and b/_images/harrypotter_basic_hyp.png differ
diff --git a/_images/hnxbasics.png b/_images/hnxbasics.png
new file mode 100644
index 00000000..054888f2
Binary files /dev/null and b/_images/hnxbasics.png differ
diff --git a/_images/simplicial.png b/_images/simplicial.png
new file mode 100644
index 00000000..87371590
Binary files /dev/null and b/_images/simplicial.png differ
diff --git a/_images/swalks.png b/_images/swalks.png
new file mode 100644
index 00000000..cd8d7a8a
Binary files /dev/null and b/_images/swalks.png differ
diff --git a/_modules/algorithms/contagion.html b/_modules/algorithms/contagion.html
new file mode 100644
index 00000000..6e888041
--- /dev/null
+++ b/_modules/algorithms/contagion.html
@@ -0,0 +1,1274 @@
+algorithms.contagion — HyperNetX 2.0.4 documentation

+Source code for algorithms.contagion

+import random
+import numpy as np
+from collections import defaultdict
+from collections import Counter
+import hypernetx as hnx
+
+
+
+def contagion_animation(
+    fig,
+    H,
+    transition_events,
+    node_state_color_dict,
+    edge_state_color_dict,
+    node_radius=1,
+    fps=1,
+):
+    """
+    Animate discrete-time contagion models for hypergraphs. Currently only a
+    circular layout is supported.
+
+    Parameters
+    ----------
+    fig : matplotlib Figure object
+    H : HyperNetX Hypergraph object
+    transition_events : dictionary
+        The dictionary output by the discrete_SIS and discrete_SIR functions
+        when return_full_data=True
+    node_state_color_dict : dictionary
+        Dictionary specifying the color of each node state. All node states
+        must be specified.
+    edge_state_color_dict : dictionary
+        Dictionary with edge states as keys and colors as values (an alpha
+        channel may be specified). All edge-dependent transition states must
+        be specified (most commonly "I"), and there must be a default "OFF"
+        entry.
+    node_radius : float, default: 1
+        The radius of the nodes to draw
+    fps : int > 0, default: 1
+        Frames per second of the animation
+
+    Returns
+    -------
+    matplotlib Animation object
+
+    Notes
+    -----
+
+    Example::
+
+        >>> import hypernetx.algorithms.contagion as contagion
+        >>> import random
+        >>> import hypernetx as hnx
+        >>> import matplotlib.pyplot as plt
+        >>> from IPython.display import HTML
+        >>> n = 1000
+        >>> m = 10000
+        >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+        >>> H = hnx.Hypergraph(hyperedgeList)
+        >>> tau = {2:0.1, 3:0.1}
+        >>> gamma = 0.1
+        >>> tmax = 100
+        >>> dt = 0.1
+        >>> transition_events = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt, return_full_data=True)
+        >>> node_state_color_dict = {"S":"green", "I":"red", "R":"blue"}
+        >>> edge_state_color_dict = {"S":(0, 1, 0, 0.3), "I":(1, 0, 0, 0.3), "R":(0, 0, 1, 0.3), "OFF": (1, 1, 1, 0)}
+        >>> fps = 1
+        >>> fig = plt.figure()
+        >>> animation = contagion.contagion_animation(fig, H, transition_events, node_state_color_dict, edge_state_color_dict, node_radius=1, fps=fps)
+        >>> HTML(animation.to_jshtml())
+    """
+
+    try:
+        from celluloid import Camera
+    except ModuleNotFoundError as e:
+        print(
+            f" {e}. If you need to use {__name__}, please install additional packages by running the following command: pip install .['all']"
+        )
+        raise
+
+    nodeState = defaultdict(lambda: "S")
+
+    camera = Camera(fig)
+
+    for t in sorted(transition_events.keys()):
+        edgeState = defaultdict(lambda: "OFF")
+
+        # update edge and node states
+        for event in transition_events[t]:
+            status = event[0]
+            node = event[1]
+
+            # update node states
+            nodeState[node] = status
+
+            # update the edge transmitters for neighbor-dependent transitions;
+            # recovery events carry no edge entry
+            if len(event) > 2 and event[2] is not None:
+                edgeState[event[2]] = status
+
+        kwargs = {"layout_kwargs": {"seed": 39}}
+
+        nodeStyle = {
+            "facecolors": [node_state_color_dict[nodeState[node]] for node in H.nodes]
+        }
+        edgeStyle = {
+            "facecolors": [edge_state_color_dict[edgeState[edge]] for edge in H.edges],
+            "edgecolors": "black",
+        }
+
+        # draw hypergraph
+        hnx.draw(
+            H,
+            node_radius=node_radius,
+            nodes_kwargs=nodeStyle,
+            edges_kwargs=edgeStyle,
+            with_edge_labels=False,
+            with_node_labels=False,
+            **kwargs,
+        )
+        camera.snap()
+
+    return camera.animate(interval=1000 / fps)
+
+
+# Canned Contagion Functions
+
+def collective_contagion(node, status, edge):
+    """
+    The collective contagion mechanism described in
+    "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo
+    https://doi.org/10.1063/5.0020034
+
+    Parameters
+    ----------
+    node : hashable
+        The node uid to infect (if it doesn't have status "S", the function automatically returns False)
+    status : dictionary
+        Keys are node uids and values are statuses (the infected state is denoted by "I")
+    edge : iterable
+        Iterable of node uids (the node must be in the edge or the function automatically returns False)
+
+    Returns
+    -------
+    bool
+        True if the node can be infected through this edge, False otherwise.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
+        >>> collective_contagion(0, status, (0, 1, 2))
+        True
+        >>> collective_contagion(1, status, (0, 1, 2))
+        False
+        >>> collective_contagion(3, status, (0, 1, 2))
+        False
+    """
+    if status[node] != "S" or node not in edge:
+        return False
+
+    neighbors = set(edge).difference({node})
+    for i in neighbors:
+        if status[i] != "I":
+            return False
+    return True
+
+
+def individual_contagion(node, status, edge):
+    """
+    The individual contagion mechanism described in
+    "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo
+    https://doi.org/10.1063/5.0020034
+
+    Parameters
+    ----------
+    node : hashable
+        The node uid to infect (if it doesn't have status "S", the function automatically returns False)
+    status : dictionary
+        Keys are node uids and values are statuses (the infected state is denoted by "I")
+    edge : iterable
+        Iterable of node uids (the node must be in the edge or the function automatically returns False)
+
+    Returns
+    -------
+    bool
+        True if the node can be infected through this edge, False otherwise.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
+        >>> individual_contagion(0, status, (0, 1, 3))
+        True
+        >>> individual_contagion(1, status, (0, 1, 2))
+        False
+        >>> individual_contagion(3, status, (0, 3, 4))
+        False
+    """
+    if status[node] != "S" or node not in edge:
+        return False
+
+    neighbors = set(edge).difference({node})
+    for i in neighbors:
+        if status[i] == "I":
+            return True
+    return False
+
+
+def threshold(node, status, edge, tau=0.1):
+    """
+    The threshold contagion mechanism
+
+    Parameters
+    ----------
+    node : hashable
+        The node uid to infect (if it doesn't have status "S", the function automatically returns False)
+    status : dictionary
+        Keys are node uids and values are statuses (the infected state is denoted by "I")
+    edge : iterable
+        Iterable of node uids (the node must be in the edge or the function automatically returns False)
+    tau : float between 0 and 1, default: 0.1
+        The fraction of nodes in an edge that must be infected for the edge to be able to transmit to the node
+
+    Returns
+    -------
+    bool
+        True if the node can be infected through this edge, False otherwise.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
+        >>> threshold(0, status, (0, 2, 3, 4), tau=0.2)
+        True
+        >>> threshold(0, status, (0, 2, 3, 4), tau=0.5)
+        False
+        >>> threshold(3, status, (1, 2, 3), tau=1)
+        False
+    """
+    if status[node] != "S" or node not in edge:
+        return False
+
+    neighbors = set(edge).difference({node})
+    if len(neighbors) > 0:
+        fraction_infected = sum(status[i] == "I" for i in neighbors) / len(neighbors)
+    else:
+        # the isolated-node case
+        fraction_infected = 0
+    return fraction_infected >= tau
+
+
+def majority_vote(node, status, edge):
+    """
+    The majority vote contagion mechanism. If a majority of neighbors are
+    infected, it is possible for the node to change its opinion; if opinions
+    are divided equally, the outcome is chosen at random.
+
+    Parameters
+    ----------
+    node : hashable
+        The node uid to infect (if it doesn't have status "S", the function automatically returns False)
+    status : dictionary
+        Keys are node uids and values are statuses (the infected state is denoted by "I")
+    edge : iterable
+        Iterable of node uids (the node must be in the edge or the function automatically returns False)
+
+    Returns
+    -------
+    bool
+        True if the node can be infected through this edge, False otherwise.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
+        >>> majority_vote(0, status, (0, 1, 2))
+        True
+        >>> majority_vote(0, status, (0, 1, 2, 3))
+        True
+        >>> majority_vote(1, status, (0, 1, 2))
+        False
+        >>> majority_vote(3, status, (0, 1, 2))
+        False
+    """
+
+    if status[node] != "S" or node not in edge:
+        return False
+
+    neighbors = set(edge).difference({node})
+    if len(neighbors) > 0:
+        fraction_infected = sum(status[i] == "I" for i in neighbors) / len(neighbors)
+    else:
+        fraction_infected = 0
+
+    if fraction_infected < 0.5:
+        return False
+    elif fraction_infected > 0.5:
+        return True
+    else:
+        return random.choice([False, True])
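The four mechanisms above share the same guard (the node must be susceptible and a member of the edge) and differ only in how they aggregate the neighbors' states. A minimal standalone sketch of the first two, re-implemented here for illustration (the real functions live in `hypernetx.algorithms.contagion` and do not use these names):

```python
def collective(node, status, edge):
    # infect only if *every* other member of the edge is infected
    if status[node] != "S" or node not in edge:
        return False
    return all(status[i] == "I" for i in set(edge) - {node})


def individual(node, status, edge):
    # infect if *any* other member of the edge is infected
    if status[node] != "S" or node not in edge:
        return False
    return any(status[i] == "I" for i in set(edge) - {node})


status = {0: "S", 1: "I", 2: "I", 3: "S", 4: "R"}
print(collective(0, status, (0, 1, 2)))  # True: both neighbors are infected
print(collective(0, status, (0, 1, 3)))  # False: node 3 is still susceptible
print(individual(0, status, (0, 1, 3)))  # True: one infected neighbor suffices
```

`threshold` and `majority_vote` follow the same pattern, comparing the infected fraction of the neighbors against a cutoff instead.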
+
+
+# Auxiliary functions
+
+# The ListDict class is copied from Joel Miller's Github repository Mathematics-of-Epidemics-on-Networks
+class _ListDict_(object):
+    r"""
+    The Gillespie algorithm will involve a step that samples a random element
+    from a set based on its weight. This is awkward in Python.
+
+    So I'm introducing a new class based on a stack overflow answer by
+    Amber (http://stackoverflow.com/users/148870/amber)
+    for a question by
+    tba (http://stackoverflow.com/users/46521/tba)
+    found at
+    http://stackoverflow.com/a/15993515/2966723
+
+    This will allow me to select a random element uniformly, and then use
+    rejection sampling to make sure it's been selected with the appropriate
+    weight.
+    """
+
+    def __init__(self, weighted=False):
+        self.item_to_position = {}
+        self.items = []
+
+        self.weighted = weighted
+        if self.weighted:
+            self.weight = defaultdict(int)  # presume all weights positive
+            self.max_weight = 0
+            self._total_weight = 0
+            self.max_weight_count = 0
+
+    def __len__(self):
+        return len(self.items)
+
+    def __contains__(self, item):
+        return item in self.item_to_position
+
+    def _update_max_weight(self):
+        # there may be a faster way to do this; we only need to count the max
+        C = Counter(self.weight.values())
+        self.max_weight = max(C.keys())
+        self.max_weight_count = C[self.max_weight]
+
+    def insert(self, item, weight=None):
+        r"""
+        If not present, inserts the item (with weight, if appropriate);
+        if already present, replaces the weight unless the weight is 0.
+
+        If weight is 0, removes the item and doesn't replace it.
+
+        WARNING:
+        replaces the weight if the item is already present; does not increment it.
+        """
+        if self.__contains__(item):
+            self.remove(item)
+        if weight != 0:
+            self.update(item, weight_increment=weight)
+
+    def update(self, item, weight_increment=None):
+        r"""
+        If not present, inserts the item (with weight, if appropriate);
+        if already present, increments its weight.
+
+        WARNING:
+        increments the weight if the item is already present; cannot overwrite it.
+        """
+        if (
+            weight_increment is not None
+        ):  # will break if passing a weight to unweighted case
+            if weight_increment > 0 or self.weight[item] != self.max_weight:
+                self.weight[item] = self.weight[item] + weight_increment
+                self._total_weight += weight_increment
+                if self.weight[item] > self.max_weight:
+                    self.max_weight_count = 1
+                    self.max_weight = self.weight[item]
+                elif self.weight[item] == self.max_weight:
+                    self.max_weight_count += 1
+            else:  # it's a negative increment and the item was at max weight
+                self.weight[item] = self.weight[item] + weight_increment
+                self._total_weight += weight_increment
+                self.max_weight_count -= 1
+                if self.max_weight_count == 0:
+                    self._update_max_weight()
+        elif self.weighted:
+            raise Exception("if weighted, must assign weight_increment")
+
+        if item in self:  # we've already got it, do nothing else
+            return
+        self.items.append(item)
+        self.item_to_position[item] = len(self.items) - 1
+
+    def remove(self, choice):
+        position = self.item_to_position.pop(choice)
+        last_item = self.items.pop()
+        if position != len(self.items):
+            self.items[position] = last_item
+            self.item_to_position[last_item] = position
+
+        if self.weighted:
+            weight = self.weight.pop(choice)
+            self._total_weight -= weight
+            if weight == self.max_weight:
+                # if we find ourselves in this case often
+                # it may be better just to let max_weight be the
+                # largest weight *ever* encountered, even if all remaining weights are less
+                self.max_weight_count -= 1
+                if self.max_weight_count == 0 and len(self) > 0:
+                    self._update_max_weight()
+
+    def choose_random(self):
+        r"""chooses a random item. If weighted, uses rejection
+        sampling to choose a random item until it succeeds"""
+        if self.weighted:
+            while True:
+                choice = random.choice(self.items)
+                if random.random() < self.weight[choice] / self.max_weight:
+                    return choice
+        else:
+            return random.choice(self.items)
+
+    def random_removal(self):
+        r"""uses other class methods to choose and then remove a random item"""
+        choice = self.choose_random()
+        self.remove(choice)
+        return choice
+
+    def total_weight(self):
+        if self.weighted:
+            return self._total_weight
+        else:
+            return len(self)
+
+    def update_total_weight(self):
+        self._total_weight = sum(self.weight[item] for item in self.items)
+
+
+# Contagion Functions
+
+def discrete_SIR(
+    H,
+    tau,
+    gamma,
+    transmission_function=threshold,
+    initial_infecteds=None,
+    initial_recovereds=None,
+    rho=None,
+    tmin=0,
+    tmax=float("Inf"),
+    dt=1.0,
+    return_full_data=False,
+    **args,
+):
+    """
+    A discrete-time SIR model for hypergraphs similar to the construction described in
+    "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo
+    https://doi.org/10.1063/5.0020034 and
+    "Simplicial models of social contagion" by Iacopini et al.
+    https://doi.org/10.1038/s41467-019-10431-6
+
+    Parameters
+    ----------
+    H : HyperNetX Hypergraph object
+    tau : dictionary
+        Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)
+    gamma : float
+        The healing rate
+    transmission_function : function, default: threshold
+        A function with required arguments (node, status, edge) and optional arguments
+    initial_infecteds : list or numpy array, default: None
+        Iterable of initially infected node uids
+    initial_recovereds : list or numpy array, default: None
+        An iterable of initially recovered node uids
+    rho : float from 0 to 1, default: None
+        The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.
+    tmin : float, default: 0
+        Time at the start of the simulation
+    tmax : float, default: float('Inf')
+        Time at which the simulation is terminated if it hasn't ended already.
+    dt : float > 0, default: 1.0
+        The time step the simulation takes at each iteration.
+    return_full_data : bool, default: False
+        If True, returns all the infection and recovery events at each time.
+    **args : Optional arguments to the transmission function
+        This allows user-defined transmission functions with extra parameters.
+
+    Returns
+    -------
+    if return_full_data
+        dictionary
+            Time as the keys and events that happen as the values.
+    else
+        t, S, I, R : numpy arrays
+            time (t), number of susceptible (S), infected (I), and recovered (R) at each time.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> import hypernetx.algorithms.contagion as contagion
+        >>> import random
+        >>> import hypernetx as hnx
+        >>> n = 1000
+        >>> m = 10000
+        >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+        >>> H = hnx.Hypergraph(hyperedgeList)
+        >>> tau = {2:0.1, 3:0.1}
+        >>> gamma = 0.1
+        >>> tmax = 100
+        >>> dt = 0.1
+        >>> t, S, I, R = contagion.discrete_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
+    """
+
+    if rho is not None and initial_infecteds is not None:
+        raise Exception("Cannot define both initial_infecteds and rho")
+
+    if initial_infecteds is None:
+        if rho is None:
+            initial_number = 1
+        else:
+            initial_number = int(round(H.number_of_nodes() * rho))
+        initial_infecteds = random.sample(list(H.nodes), initial_number)
+    else:
+        # check to make sure that the initially infected nodes are in the hypergraph
+        initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds)))
+
+    if initial_recovereds is None:
+        initial_recovereds = []
+    else:
+        # check to make sure that the initially recovered nodes are in the hypergraph
+        initial_recovereds = list(set(H.nodes).intersection(set(initial_recovereds)))
+
+    status = defaultdict(lambda: "S")
+
+    if return_full_data:
+        transition_events = dict()
+        transition_events[tmin] = list()
+
+    for node in initial_infecteds:
+        status[node] = "I"
+        if return_full_data:
+            transition_events[tmin].append(("I", node, None))
+
+    for node in initial_recovereds:
+        status[node] = "R"
+        if return_full_data:
+            transition_events[tmin].append(("R", node))
+
+    I = [len(initial_infecteds)]
+    R = [len(initial_recovereds)]
+    S = [H.number_of_nodes() - I[-1] - R[-1]]
+
+    t = tmin
+    times = [t]
+    newStatus = status.copy()
+
+    edge_neighbors = lambda node: H.edges.memberships[node]
+
+    while t < tmax and I[-1] != 0:
+        # initialize the next step with the same numbers of S, I, and R as the
+        # last step before computing the changes
+        S.append(S[-1])
+        I.append(I[-1])
+        R.append(R[-1])
+
+        if return_full_data:
+            transition_events[t + dt] = list()
+
+        for node in H.nodes:
+            if status[node] == "I":
+                # attempt recovery; if it fails, the node stays infected
+                if random.random() <= gamma * dt:
+                    newStatus[node] = "R"
+                    I[-1] += -1
+                    R[-1] += 1
+                    if return_full_data:
+                        transition_events[t + dt].append(("R", node))
+            elif status[node] == "S":
+                for edge_id in edge_neighbors(node):
+                    members = H.edges[edge_id]
+                    if (
+                        random.random()
+                        <= tau[len(members)]
+                        * transmission_function(node, status, members, **args)
+                        * dt
+                    ):
+                        newStatus[node] = "I"
+                        S[-1] += -1
+                        I[-1] += 1
+                        if return_full_data:
+                            transition_events[t + dt].append(("I", node, edge_id))
+                        break
+                else:
+                    # for-else: runs only when the loop completes without a
+                    # break, i.e. no edge transmitted an infection
+                    newStatus[node] = "S"
+        status = newStatus.copy()
+        t += dt
+        times.append(t)
+    if return_full_data:
+        return transition_events
+    else:
+        return np.array(times), np.array(S), np.array(I), np.array(R)
+
+
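In both discrete-time models, an infected node heals with probability `gamma * dt` at each step, so the infectious period is geometrically distributed with mean `1 / gamma` time units. A standalone sanity check of that recovery rule, with illustrative parameter values (not taken from the module):

```python
import random

random.seed(0)
gamma, dt = 0.1, 1.0

# Each step an infected node recovers with probability gamma * dt, so the
# time spent infected is geometric with mean 1 / gamma time units.
durations = []
for _ in range(5000):
    t = 0.0
    while random.random() > gamma * dt:
        t += dt
    durations.append(t + dt)

mean_duration = sum(durations) / len(durations)
# mean_duration should be close to 1 / gamma = 10
```

In the limit `dt -> 0` this geometric distribution converges to the exponential infectious period of the continuous-time Gillespie models further below.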
+def discrete_SIS(
+    H,
+    tau,
+    gamma,
+    transmission_function=threshold,
+    initial_infecteds=None,
+    rho=None,
+    tmin=0,
+    tmax=100,
+    dt=1.0,
+    return_full_data=False,
+    **args,
+):
+    """
+    A discrete-time SIS model for hypergraphs as implemented in
+    "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo
+    https://doi.org/10.1063/5.0020034 and
+    "Simplicial models of social contagion" by Iacopini et al.
+    https://doi.org/10.1038/s41467-019-10431-6
+
+    Parameters
+    ----------
+    H : HyperNetX Hypergraph object
+    tau : dictionary
+        Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)
+    gamma : float
+        The healing rate
+    transmission_function : function, default: threshold
+        A function with required arguments (node, status, edge) and optional arguments
+    initial_infecteds : list or numpy array, default: None
+        Iterable of initially infected node uids
+    rho : float from 0 to 1, default: None
+        The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.
+    tmin : float, default: 0
+        Time at the start of the simulation
+    tmax : float, default: 100
+        Time at which the simulation is terminated if it hasn't ended already.
+    dt : float > 0, default: 1.0
+        The time step the simulation takes at each iteration.
+    return_full_data : bool, default: False
+        If True, returns all the infection and recovery events at each time.
+    **args : Optional arguments to the transmission function
+        This allows user-defined transmission functions with extra parameters.
+
+    Returns
+    -------
+    if return_full_data
+        dictionary
+            Time as the keys and events that happen as the values.
+    else
+        t, S, I : numpy arrays
+            time (t), number of susceptible (S), and infected (I) at each time.
+
+    Notes
+    -----
+
+    Example::
+
+        >>> import hypernetx.algorithms.contagion as contagion
+        >>> import random
+        >>> import hypernetx as hnx
+        >>> n = 1000
+        >>> m = 10000
+        >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+        >>> H = hnx.Hypergraph(hyperedgeList)
+        >>> tau = {2:0.1, 3:0.1}
+        >>> gamma = 0.1
+        >>> tmax = 100
+        >>> dt = 0.1
+        >>> t, S, I = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
+    """
+
+    if rho is not None and initial_infecteds is not None:
+        raise Exception("Cannot define both initial_infecteds and rho")
+
+    if initial_infecteds is None:
+        if rho is None:
+            initial_number = 1
+        else:
+            initial_number = int(round(H.number_of_nodes() * rho))
+        initial_infecteds = random.sample(list(H.nodes), initial_number)
+    else:
+        # check to make sure that the initially infected nodes are in the hypergraph
+        initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds)))
+
+    status = defaultdict(lambda: "S")
+
+    if return_full_data:
+        transition_events = dict()
+        transition_events[tmin] = list()
+
+    for node in initial_infecteds:
+        status[node] = "I"
+        if return_full_data:
+            transition_events[tmin].append(("I", node, None))
+
+    I = [len(initial_infecteds)]
+    S = [H.number_of_nodes() - I[-1]]
+
+    t = tmin
+    times = [t]
+    newStatus = status.copy()
+
+    edge_neighbors = lambda node: H.edges.memberships[node]
+
+    while t < tmax and I[-1] != 0:
+        # initialize the next step with the same numbers of S and I as the
+        # last step before computing the changes
+        S.append(S[-1])
+        I.append(I[-1])
+        if return_full_data:
+            transition_events[t + dt] = list()
+
+        for node in H.nodes:
+            if status[node] == "I":
+                # attempt recovery; if it fails, the node stays infected
+                if random.random() <= gamma * dt:
+                    newStatus[node] = "S"
+                    I[-1] += -1
+                    S[-1] += 1
+                    if return_full_data:
+                        transition_events[t + dt].append(("S", node))
+            elif status[node] == "S":
+                for edge_id in edge_neighbors(node):
+                    members = H.edges[edge_id]
+
+                    if (
+                        random.random()
+                        <= tau[len(members)]
+                        * transmission_function(node, status, members, **args)
+                        * dt
+                    ):
+                        newStatus[node] = "I"
+                        S[-1] += -1
+                        I[-1] += 1
+                        if return_full_data:
+                            transition_events[t + dt].append(("I", node, edge_id))
+                        break
+                else:
+                    # for-else: runs only when the loop completes without a
+                    # break, i.e. no edge transmitted an infection
+                    newStatus[node] = "S"
+        status = newStatus.copy()
+        t += dt
+        times.append(t)
+    if return_full_data:
+        return transition_events
+    else:
+        return np.array(times), np.array(S), np.array(I)
+
+
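The Gillespie variants below repeatedly need a random element drawn proportionally to its weight, which is what `_ListDict_` provides via rejection sampling: draw uniformly, then accept with probability `weight / max_weight`. A minimal standalone sketch of that scheme (illustrative names and weights, not the class itself):

```python
import random


def weighted_choice_rejection(items, weight, max_weight):
    # draw uniformly, then accept with probability weight / max_weight;
    # accepted draws are distributed proportionally to their weights
    while True:
        choice = random.choice(items)
        if random.random() < weight[choice] / max_weight:
            return choice


random.seed(0)
items = ["a", "b", "c"]
weight = {"a": 1, "b": 2, "c": 4}
counts = {k: 0 for k in items}
for _ in range(7000):
    counts[weighted_choice_rejection(items, weight, max_weight=4)] += 1
# counts roughly follow the 1:2:4 weight ratio
```

Tracking `max_weight` incrementally, as `_ListDict_` does, keeps both insertion and sampling O(1) on average.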
[docs]def Gillespie_SIR( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + initial_recovereds=None, + rho=None, + tmin=0, + tmax=float("Inf"), + **args, +): + """ + A continuous-time SIR model for hypergraphs similar to the model in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + implemented for networks in the EoN package by Joel C. Miller + https://epidemicsonnetworks.readthedocs.io/en/latest/ + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + initial_recovereds : list or numpy array, default: None + An iterable of initially recovered node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. Both rho and initially infected cannot be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: float('Inf') + Time at which the simulation should be terminated if it hasn't already. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + t, S, I, R : numpy arrays + time (t), number of susceptible (S), infected (I), and recovered (R) at each time. 
+ + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> t, S, I, R = contagion.Gillespie_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax) + """ + # Initial infecteds and recovereds should be lists or None. Add a check here. + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + if initial_recovereds is None: + initial_recovereds = [] + else: + # check to make sure that the initially recovered nodes are in the hypergraph + initial_recovereds = list(set(H.nodes).intersection(set(initial_recovereds))) + + status = defaultdict(lambda: "S") + + size_dist = np.unique(H.edge_size_dist()) + + for node in initial_infecteds: + status[node] = "I" + + for node in initial_recovereds: + status[node] = "R" + + I = [len(initial_infecteds)] + R = [len(initial_recovereds)] + S = [H.number_of_nodes() - I[-1] - R[-1]] + + edge_neighbors = lambda node: H.edges.memberships[node] + + t = tmin + times = [t] + + infecteds = _ListDict_() + + infectious_edges = dict() + for size in size_dist: + infectious_edges[size] = _ListDict_() + + for node in initial_infecteds: + infecteds.update(node) + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + 
if is_infectious: + infectious_edges[len(members)].update((edge_id, node)) + + total_rates = dict() + total_rates[1] = gamma * infecteds.total_weight() + + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + dt = random.expovariate(total_rate) + t += dt + + while t < tmax and I[-1] != 0: + # choose type of event that happens + while True: + choice = random.choice(list(total_rates.keys())) + if random.random() <= total_rates[choice] / total_rate: + break + + if choice == 1: # recover + recovering_node = infecteds.random_removal() + status[recovering_node] = "R" + + # remove edges that are no longer infectious because of this node recovering + for edge_id in edge_neighbors(recovering_node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].remove((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1]) + I.append(I[-1] - 1) + R.append(R[-1] + 1) + + else: + _, recipient = infectious_edges[choice].choose_random() + status[recipient] = "I" + + infecteds.update(recipient) + + # remove the infectious links, because they can't infect an infected node. 
+ for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + try: + infectious_edges[len(members)].remove((edge_id, recipient)) + except: + pass + + # add edges that are infectious because of this node being infected + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].update((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] - 1) + I.append(I[-1] + 1) + R.append(R[-1]) + + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + if total_rate > 0: + dt = random.expovariate(total_rate) + else: + dt = float("Inf") + t += dt + return np.array(times), np.array(S), np.array(I), np.array(R)
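Both Gillespie routines lean on a `_ListDict_` helper for O(1) insertion, removal, and uniform random selection of events; its internals are not shown on this page. A minimal sketch of such a structure, with method names chosen to mirror the calls above (the real implementation may differ, e.g. it also supports weights):

```python
import random


class ListDict:
    """Set with O(1) add, remove, and uniform random choice.

    A dense list holds the items; a dict maps each item to its list index.
    Removal swaps the target with the last element so the list stays dense.
    """

    def __init__(self):
        self._items = []
        self._pos = {}

    def update(self, item):
        if item not in self._pos:
            self._pos[item] = len(self._items)
            self._items.append(item)

    def remove(self, item):
        idx = self._pos.pop(item)
        last = self._items.pop()
        if idx < len(self._items):
            self._items[idx] = last
            self._pos[last] = idx

    def choose_random(self):
        return random.choice(self._items)

    def random_removal(self):
        item = self.choose_random()
        self.remove(item)
        return item

    def total_weight(self):
        # unweighted case: every item counts 1
        return len(self._items)
```

The dict gives membership tests and removals in constant time, while the dense list makes `random.choice` uniform without scanning.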
+ + +
[docs]def Gillespie_SIS( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + rho=None, + tmin=0, + tmax=float("Inf"), + return_full_data=False, + sim_kwargs=None, + **args, +): + """ + A continuous-time SIS model for hypergraphs similar to the model in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + implemented for networks in the EoN package by Joel C. Miller + https://epidemicsonnetworks.readthedocs.io/en/latest/ + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. Both rho and initially infected cannot be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: 100 + Time at which the simulation should be terminated if it hasn't already. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + t, S, I : numpy arrays + time (t), number of susceptible (S), and infected (I) at each time. 
+ + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> t, S, I = contagion.Gillespie_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax) + """ + # Initial infecteds and recovereds should be lists or None. Add a check here. + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + status = defaultdict(lambda: "S") + + size_dist = np.unique(H.edge_size_dist()) + + for node in initial_infecteds: + status[node] = "I" + + I = [len(initial_infecteds)] + S = [H.number_of_nodes() - I[-1]] + + edge_neighbors = lambda node: H.edges.memberships[node] + + t = tmin + times = [t] + + infecteds = _ListDict_() + + infectious_edges = dict() + for size in size_dist: + infectious_edges[size] = _ListDict_() + + for node in initial_infecteds: + infecteds.update(node) + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + infectious_edges[len(members)].update((edge_id, node)) + + total_rates = dict() + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + dt = 
random.expovariate(total_rate) + t += dt + + while t < tmax and I[-1] != 0: + # choose type of event that happens + # this can be improved + while True: + choice = random.choice(list(total_rates.keys())) + if random.random() <= total_rates[choice] / total_rate: + break + + if choice == 1: # recover + recovering_node = infecteds.random_removal() + status[recovering_node] = "S" + + # remove edges that are no longer infectious because of this node recovering + for edge_id in edge_neighbors(recovering_node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].remove((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] + 1) + I.append(I[-1] - 1) + + else: + _, recipient = infectious_edges[choice].choose_random() + status[recipient] = "I" + + infecteds.update(recipient) + + # remove the infectious links, because they can't infect an infected node. + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + try: + infectious_edges[len(members)].remove((edge_id, recipient)) + except: + pass + + # add edges that are infectious because of this node being infected + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].update((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] - 1) + I.append(I[-1] + 1) + + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + if total_rate > 0: + dt = random.expovariate(total_rate) + else: + dt = float("Inf") + t += dt + + return np.array(times), np.array(S), np.array(I)
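The `while True` loop that picks which event class fires is rejection sampling: propose a class uniformly, then accept it with probability equal to its share of the total rate. Isolated, with hypothetical names, the idea is:

```python
import random


def choose_event_type(rates):
    """Return a key of `rates` with probability rates[key] / sum(rates.values()).

    Uniformly propose a key, then accept it with probability equal to its
    share of the total rate; rejected proposals are simply redrawn.
    """
    total = sum(rates.values())
    keys = list(rates)
    while True:
        key = random.choice(keys)
        if random.random() <= rates[key] / total:
            return key
```

Rejection sampling keeps each draw O(1) in the number of event classes once a proposal is accepted, at the cost of occasional redraws when rates are very uneven.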
+ + + + \ No newline at end of file diff --git a/_modules/algorithms/generative_models.html b/_modules/algorithms/generative_models.html new file mode 100644 index 00000000..adfb45bf --- /dev/null +++ b/_modules/algorithms/generative_models.html @@ -0,0 +1,367 @@ + + + + + + algorithms.generative_models — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
Source code for algorithms.generative_models

+import random
+import math
+import warnings
+from collections import defaultdict
+import numpy as np
+import pandas as pd
+from hypernetx import Hypergraph
+
+
+
[docs]def erdos_renyi_hypergraph(n, m, p, node_labels=None, edge_labels=None): + """ + A function to generate an Erdos-Renyi hypergraph as implemented by Mirah Shi and described for + bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001 + + Parameters + ---------- + n: int + Number of nodes + m: int + Number of edges + p: float + The probability that a bipartite edge is created + node_labels: list, default=None + Vertex labels + edge_labels: list, default=None + Hyperedge labels + + Returns + ------- + HyperNetX Hypergraph object + + + Example:: + + >>> import hypernetx.algorithms.generative_models as gm + >>> n = 1000 + >>> m = n + >>> p = 0.01 + >>> H = gm.erdos_renyi_hypergraph(n, m, p) + + """ + + if node_labels is not None and edge_labels is not None: + get_node_label = lambda index: node_labels[index] + get_edge_label = lambda index: edge_labels[index] + else: + get_node_label = lambda index: index + get_edge_label = lambda index: index + + bipartite_edges = [] + for u in range(n): + v = 0 + while v < m: + # identify next pair + r = random.random() + v = v + math.floor(math.log(r) / math.log(1 - p)) + if v < m: + # add vertex hyperedge pair + bipartite_edges.append((get_edge_label(v), get_node_label(u))) + v = v + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
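The inner loop above avoids testing all m candidate pairs per node: for independent Bernoulli(p) trials, the gap to the next success is geometrically distributed, so `math.floor(math.log(r) / math.log(1 - p))` jumps straight to it. A standalone sketch of the same trick (assuming 0 < p < 1):

```python
import math
import random


def bernoulli_indices(m, p):
    """Yield each index in range(m) independently with probability p,
    skipping over failures in geometrically distributed jumps.

    Assumes 0 < p < 1.
    """
    v = -1
    while True:
        r = random.random()
        # number of failures before the next success is Geometric(p)
        v += 1 + math.floor(math.log(r) / math.log(1.0 - p))
        if v >= m:
            return
        yield v
```

Each draw of `random.random()` produces one selected index, so a node incident to k of the m edges costs O(k) work rather than O(m).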
+ + +
[docs]def chung_lu_hypergraph(k1, k2): + """ + A function to generate an extension of Chung-Lu hypergraph as implemented by Mirah Shi and described for + bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001 + + Parameters + ---------- + k1 : dictionary + This a dictionary where the keys are node ids and the values are node degrees. + k2 : dictionary + This a dictionary where the keys are edge ids and the values are edge degrees also known as edge sizes. + Returns + ------- + HyperNetX Hypergraph object + + Notes + ----- + The sums of k1 and k2 should be roughly the same. If they are not the same, this function returns a warning but still runs. + The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported. + + Example:: + + >>> import hypernetx.algorithms.generative_models as gm + >>> import random + >>> n = 100 + >>> k1 = {i : random.randint(1, 100) for i in range(n)} + >>> k2 = {i : sorted(k1.values())[i] for i in range(n)} + >>> H = gm.chung_lu_hypergraph(k1, k2) + """ + + # sort dictionary by degree in decreasing order + Nlabels = [n for n, _ in sorted(k1.items(), key=lambda d: d[1], reverse=True)] + Mlabels = [m for m, _ in sorted(k2.items(), key=lambda d: d[1], reverse=True)] + + m = len(k2) + + if sum(k1.values()) != sum(k2.values()): + warnings.warn( + "The sum of the degree sequence does not match the sum of the size sequence" + ) + + S = sum(k1.values()) + + bipartite_edges = [] + for u in Nlabels: + j = 0 + v = Mlabels[j] # start from beginning every time + p = min((k1[u] * k2[v]) / S, 1) + + while j < m: + if p != 1: + r = random.random() + j = j + math.floor(math.log(r) / math.log(1 - p)) + if j < m: + v = Mlabels[j] + q = min((k1[u] * k2[v]) / S, 1) + r = random.random() + if r < q / p: + # no duplicates + bipartite_edges.append((v, u)) + + p = q + j = j + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
+ + +
[docs]def dcsbm_hypergraph(k1, k2, g1, g2, omega): + """ + A function to generate an extension of DCSBM hypergraph as implemented by Mirah Shi and described for + bipartite networks by Larremore et al. in https://doi.org/10.1103/PhysRevE.90.012805 + + Parameters + ---------- + k1 : dictionary + This a dictionary where the keys are node ids and the values are node degrees. + k2 : dictionary + This a dictionary where the keys are edge ids and the values are edge degrees also known as edge sizes. + g1 : dictionary + This a dictionary where the keys are node ids and the values are the group ids to which the node belongs. + The keys must match the keys of k1. + g2 : dictionary + This a dictionary where the keys are edge ids and the values are the group ids to which the edge belongs. + The keys must match the keys of k2. + omega : 2D numpy array + This is a matrix with entries which specify the number of edges between a given node community and edge community. + The number of rows must match the number of node communities and the number of columns + must match the number of edge communities. + + + Returns + ------- + HyperNetX Hypergraph object + + Notes + ----- + The sums of k1 and k2 should be the same. If they are not the same, this function returns a warning but still runs. + The sum of k1 (and k2) and omega should be the same. If they are not the same, this function returns a warning + but still runs and the number of entries in the incidence matrix is determined by the omega matrix. + + The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported. 
+ + Example:: + + >>> n = 100 + >>> k1 = {i : random.randint(1, 100) for i in range(n)} + >>> k2 = {i : sorted(k1.values())[i] for i in range(n)} + >>> g1 = {i : random.choice([0, 1]) for i in range(n)} + >>> g2 = {i : random.choice([0, 1]) for i in range(n)} + >>> omega = np.array([[100, 10], [10, 100]]) + >>> H = gm.dcsbm_hypergraph(k1, k2, g1, g2, omega) + """ + + # sort dictionary by degree in decreasing order + Nlabels = [n for n, _ in sorted(k1.items(), key=lambda d: d[1], reverse=True)] + Mlabels = [m for m, _ in sorted(k2.items(), key=lambda d: d[1], reverse=True)] + + # these checks verify that the sum of node and edge degrees and the sum of node degrees + # and the sum of community connection matrix differ by less than a single edge. + if abs(sum(k1.values()) - sum(k2.values())) > 1: + warnings.warn( + "The sum of the degree sequence does not match the sum of the size sequence" + ) + + if abs(sum(k1.values()) - np.sum(omega)) > 1: + warnings.warn( + "The sum of the degree sequence does not match the entries in the omega matrix" + ) + + # get indices for each community + community1Indices = defaultdict(list) + for label in Nlabels: + group = g1[label] + community1Indices[group].append(label) + + community2Indices = defaultdict(list) + for label in Mlabels: + group = g2[label] + community2Indices[group].append(label) + + bipartite_edges = list() + + kappa1 = defaultdict(lambda: 0) + kappa2 = defaultdict(lambda: 0) + for id, g in g1.items(): + kappa1[g] += k1[id] + for id, g in g2.items(): + kappa2[g] += k2[id] + + for group1 in community1Indices.keys(): + for group2 in community2Indices.keys(): + # for each constant probability patch + try: + groupConstant = omega[group1, group2] / ( + kappa1[group1] * kappa2[group2] + ) + except: + groupConstant = 0 + + for u in community1Indices[group1]: + j = 0 + v = community2Indices[group2][j] # start from beginning every time + # max probability + p = min(k1[u] * k2[v] * groupConstant, 1) + while j < 
len(community2Indices[group2]): + if p != 1: + r = random.random() + try: + j = j + math.floor(math.log(r) / math.log(1 - p)) + except: + j = np.inf + if j < len(community2Indices[group2]): + v = community2Indices[group2][j] + q = min((k1[u] * k2[v]) * groupConstant, 1) + r = random.random() + if r < q / p: + # no duplicates + bipartite_edges.append((v, u)) + + p = q + j = j + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
+
+ + + + \ No newline at end of file diff --git a/_modules/algorithms/homology_mod2.html b/_modules/algorithms/homology_mod2.html new file mode 100644 index 00000000..97c87add --- /dev/null +++ b/_modules/algorithms/homology_mod2.html @@ -0,0 +1,1012 @@ + + + + + + algorithms.homology_mod2 — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +

Source code for algorithms.homology_mod2

+"""
+Homology and Smith Normal Form
+==============================
+The purpose of computing the Homology groups for data-generated
+hypergraphs is to identify data sources that correspond to interesting
+features in the topology of the hypergraph.
+
+The elements of one of these Homology groups are generated by
+$k$-dimensional cycles of relationships in the original data that are not
+bound together by higher order relationships. Ideally, we want the
+briefest description of these cycles; we want a minimal set of
+relationships exhibiting interesting cyclic behavior. This minimal set
+will be a basis for the Homology group.
+
+The cyclic relationships in the data are discovered using a **boundary
+map** represented as a matrix. To discover the bases we compute the
+**Smith Normal Form** of the boundary map.
+
+Homology Mod2
+-------------
+This module computes the homology groups for data represented as an
+abstract simplicial complex with chain groups $\{C_k\}$ and $Z_2$ addition.
+The boundary matrices are represented as rectangular matrices over $Z_2$.
+These matrices are diagonalized and represented in Smith
+Normal Form. The kernel and image bases are computed and the Betti
+numbers and homology bases are returned.
+
+Methods for obtaining SNF for Z/2Z are based on Ferrario's work:
+http://www.dlfer.xyz/post/2016-10-27-smith-normal-form/
+
+"""
+
+import numpy as np
+import hypernetx as hnx
+import warnings
+import copy
+from hypernetx import HyperNetXError
+from collections import defaultdict
+import itertools as it
+from scipy.sparse import csr_matrix
+
+
+
[docs]def kchainbasis(h, k): + """ + Compute the set of k dimensional cells in the abstract simplicial + complex associated with the hypergraph. + + Parameters + ---------- + h : hnx.Hypergraph + k : int + dimension of cell + + Returns + ------- + : list + an ordered list of kchains represented as tuples of length k+1 + + See also + -------- + hnx.hypergraph.toplexes + + Notes + ----- + - Method works best if h is simple [Berge], i.e. no edge contains another and there are no duplicate edges (toplexes). + - Hypergraph node uids must be sortable. + + """ + + import itertools as it + + kchains = set() + for e in h.edges(): + en = sorted(h.edges[e]) + if len(en) == k + 1: + kchains.add(tuple(en)) + elif len(en) > k + 1: + kchains.update(set(it.combinations(en, k + 1))) + return sorted(list(kchains))
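For instance, a single hyperedge on four nodes contributes every 2-node subset as a 1-cell of the associated simplicial complex. A quick check of the k = 1 case, reproducing just the combinatorics above without hnx:

```python
import itertools as it

# 1-chains contributed by one hyperedge on four sortable node uids:
# every sorted pair, deduplicated, exactly as kchainbasis collects them
edge = ["b", "d", "a", "c"]
one_chains = sorted(set(it.combinations(sorted(edge), 2)))
# 4 choose 2 = 6 cells: ab, ac, ad, bc, bd, cd
```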
+ + +
[docs]def bkMatrix(km1basis, kbasis): + """ + Compute the boundary map from $C_{k-1}$-basis to $C_k$ basis with + respect to $Z_2$ + + Parameters + ---------- + km1basis : indexable iterable + Ordered list of $k-1$ dimensional cell + kbasis : indexable iterable + Ordered list of $k$ dimensional cells + + Returns + ------- + bk : np.array + boundary matrix in $Z_2$ stored as boolean + + """ + bk = np.zeros((len(km1basis), len(kbasis)), dtype=int) + for cell in kbasis: + for idx in range(len(cell)): + face = cell[:idx] + cell[idx + 1 :] + row = km1basis.index(face) + col = kbasis.index(cell) + bk[row, col] = 1 + return bk
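As a worked example, the filled triangle on nodes a, b, c (a single 3-node hyperedge) has three vertices, three edges, and one 2-cell. Rebuilding the boundary construction standalone shows the defining property that the boundary of a boundary vanishes mod 2:

```python
import itertools as it

import numpy as np

# k-chain bases for the simplicial closure of the single edge {a, b, c}
nodes = ("a", "b", "c")
C0 = sorted((v,) for v in nodes)        # vertices
C1 = sorted(it.combinations(nodes, 2))  # edges
C2 = [tuple(sorted(nodes))]             # the 2-cell itself


def boundary(km1basis, kbasis):
    """Mod-2 boundary matrix: one column per k-cell, one row per (k-1)-face."""
    bk = np.zeros((len(km1basis), len(kbasis)), dtype=int)
    for col, cell in enumerate(kbasis):
        for idx in range(len(cell)):
            face = cell[:idx] + cell[idx + 1 :]
            bk[km1basis.index(face), col] = 1
    return bk


b1 = boundary(C0, C1)  # 3x3: each edge hits its two endpoints
b2 = boundary(C1, C2)  # 3x1: the triangle is bounded by all three edges
```

Composing the two maps gives the zero matrix over $Z_2$, i.e. $\partial_1 \partial_2 = 0$, which is what makes the homology quotient well defined.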
+ + +def _rswap(i, j, S): + """ + Swaps ith and jth row of copy of S + + Parameters + ---------- + i : int + j : int + S : np.array + + Returns + ------- + N : np.array + """ + N = copy.deepcopy(S) + row = copy.deepcopy(N[i]) + N[i] = copy.deepcopy(N[j]) + N[j] = row + return N + + +def _cswap(i, j, S): + """ + Swaps ith and jth column of copy of S + + Parameters + ---------- + i : int + j : int + S : np.array + matrix + + Returns + ------- + N : np.array + """ + N = _rswap(i, j, S.transpose()).transpose() + return N + + +
[docs]def swap_rows(i, j, *args): + """ + Swaps ith and jth row of each matrix in args + Returns a list of new matrices + + Parameters + ---------- + i : int + j : int + args : np.arrays + + Returns + ------- + list + list of copies of args with ith and jth row swapped + """ + output = list() + for M in args: + output.append(_rswap(i, j, M)) + return output
+ + +
[docs]def swap_columns(i, j, *args): + """ + Swaps ith and jth column of each matrix in args + Returns a list of new matrices + + Parameters + ---------- + i : int + j : int + args : np.arrays + + Returns + ------- + list + list of copies of args with ith and jth row swapped + """ + output = list() + for M in args: + output.append(_cswap(i, j, M)) + return output
+ + +
[docs]def add_to_row(M, i, j): + """ + Replaces row i with logical xor between row i and j + + Parameters + ---------- + M : np.array + i : int + index of row being altered + j : int + index of row being added to altered + + Returns + ------- + N : np.array + """ + N = copy.deepcopy(M) + N[i] = 1 * np.logical_xor(N[i], N[j]) + return N
+ + +
[docs]def add_to_column(M, i, j): + """ + Replaces column i (of M) with logical xor between column i and j + + Parameters + ---------- + M : np.array + matrix + i : int + index of column being altered + j : int + index of column being added to altered + + Returns + ------- + N : np.array + """ + N = M.transpose() + return add_to_row(N, i, j).transpose()
+ + +
[docs]def logical_dot(ar1, ar2): + """ + Returns the boolean equivalent of the dot product mod 2 on two 1-d arrays of + the same length. + + Parameters + ---------- + ar1 : numpy.ndarray + 1-d array + ar2 : numpy.ndarray + 1-d array + + Returns + ------- + : bool + boolean value associated with dot product mod 2 + + Raises + ------ + HyperNetXError + If arrays are not of the same length an error will be raised. + """ + if len(ar1) != len(ar2): + raise HyperNetXError("logical_dot requires two 1-d arrays of the same length") + else: + return 1 * np.logical_xor.reduce(np.logical_and(ar1, ar2))
+ + +
[docs]def logical_matmul(mat1, mat2): + """ + Returns the boolean equivalent of matrix multiplication mod 2 on two + binary arrays stored as type boolean + + Parameters + ---------- + mat1 : np.ndarray + 2-d array of boolean values + mat2 : np.ndarray + 2-d array of boolean values + + Returns + ------- + mat : np.ndarray + boolean matrix equivalent to the mod 2 matrix multiplication of the + matrices as matrices over Z/2Z + + Raises + ------ + HyperNetXError + If inner dimensions are not equal an error will be raised. + + """ + L1, R1 = mat1.shape + L2, R2 = mat2.shape + if R1 != L2: + raise HyperNetXError( + "logical_matmul called for matrices with inner dimensions mismatched" + ) + + mat = np.zeros((L1, R2), dtype=int) + mat2T = mat2.transpose() + for i in range(L1): + if np.any(mat1[i]): + for j in range(R2): + mat[i, j] = logical_dot(mat1[i], mat2T[j]) + else: + mat[i] = np.zeros((1, R2), dtype=int) + return mat
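Since all entries are 0/1, the xor-of-ands product computed by `logical_dot` and `logical_matmul` agrees with ordinary integer matrix multiplication reduced modulo 2. A quick numpy cross-check of that equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(4, 6))
B = rng.integers(0, 2, size=(6, 3))

# xor-of-ands product, entry by entry, as logical_matmul computes it
xor_prod = np.array(
    [[np.logical_xor.reduce(np.logical_and(row, col)) * 1 for col in B.T] for row in A]
)

# the same product via plain integer matmul reduced mod 2
mod2_prod = np.mod(A @ B, 2)
```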
+ + +
[docs]def matmulreduce(arr, reverse=False): + """ + Recursively applies a 'logical multiplication' to a list of boolean arrays. + + For arr = [arr[0],arr[1],arr[2]...arr[n]] returns product arr[0]arr[1]...arr[n] + If reverse = True, returns product arr[n]arr[n-1]...arr[0] + + Parameters + ---------- + arr : list of np.array + list of nxm matrices represented as np.array + reverse : bool, optional + order to multiply the matrices + + Returns + ------- + P : np.array + Product of matrices in the list + """ + if reverse: + items = range(len(arr) - 1, -1, -1) + else: + items = range(len(arr)) + P = arr[items[0]] + for i in items[1:]: + P = logical_matmul(P, arr[i]) * 1 + return P
+ + +
[docs]def logical_matadd(mat1, mat2): + """ + Returns the boolean equivalent of matrix addition mod 2 on two + binary arrays stored as type boolean + + Parameters + ---------- + mat1 : np.ndarray + 2-d array of boolean values + mat2 : np.ndarray + 2-d array of boolean values + + Returns + ------- + mat : np.ndarray + boolean matrix equivalent to the mod 2 matrix addition of the + matrices as matrices over Z/2Z + + Raises + ------ + HyperNetXError + If dimensions are not equal an error will be raised. + + """ + S1 = mat1.shape + S2 = mat2.shape + mat = np.zeros(S1, dtype=int) + if S1 != S2: + raise HyperNetXError( + "logical_matadd called for matrices with different dimensions" + ) + if len(S1) == 1: + for idx in range(S1[0]): + mat[idx] = 1 * np.logical_xor(mat1[idx], mat2[idx]) + else: + for idx in range(S1[0]): + for jdx in range(S1[1]): + mat[idx, jdx] = 1 * np.logical_xor(mat1[idx, jdx], mat2[idx, jdx]) + return mat
+ + +# Convenience methods for computing Smith Normal Form +# All of these operations have themselves as inverses + + +def _sr(i, j, M, L): + return swap_rows(i, j, M, L) + + +def _sc(i, j, M, R): + return swap_columns(i, j, M, R) + + +def _ar(i, j, M, L): + return add_to_row(M, i, j), add_to_row(L, i, j) + + +def _ac(i, j, M, R): + return add_to_column(M, i, j), add_to_column(R, i, j) + + +def _get_next_pivot(M, s1, s2=None): + """ + Determines the first r,c indices in the submatrix of M starting + with row s1 and column s2 index (row,col) that is nonzero, + if it exists. + + Search starts with the s2th column and looks for the first nonzero + s1 row. If none is found, search continues to the next column and so + on. + + Parameters + ---------- + M : np.array + matrix represented as np.array + s1 : int + index of row position to start submatrix of M + s2 : int, optional, default = s1 + index of column position to start submatrix of M + + Returns + ------- + (r,c) : tuple of int or None + + """ + # find the next nonzero pivot to put in s,s spot for Smith Normal Form + m, n = M.shape + if not s2: + s2 = s1 + for c in range(s2, n): + for r in range(s1, m): + if M[r, c]: + return (r, c) + return None + + +
[docs]def smith_normal_form_mod2(M): + """ + Computes the invertible transformation matrices needed to compute the + Smith Normal Form of M modulo 2 + + Parameters + ---------- + M : np.array + a rectangular matrix with data type bool + track : bool + if track=True will print out the transformation as Z/2Z matrix as it + discovers L[i] and R[j] + + Returns + ------- + L, R, S, Linv : np.arrays + LMR = S is the Smith Normal Form of the matrix M. + + Note + ---- + Given a mxn matrix $M$ with + entries in $Z_2$ we start with the equation: $L M R = S$, where + $L = I_m$, and $R=I_n$ are identity matrices and $S = M$. We + repeatedly apply actions to the left and right side of the equation + to transform S into a diagonal matrix. + For each action applied to the left side we apply its inverse + action to the right side of I_m to generate $L^{-1}$. + Finally we verify: + $L M R = S$ and $LLinv = I_m$. + """ + + S = copy.copy(M) + dimL, dimR = M.shape + + # initialize left and right transformations with identity matrices + L = np.eye(dimL, dtype=int) + R = np.eye(dimR, dtype=int) + Linv = np.eye(dimL, dtype=int) + for s in range(min(dimL, dimR)): + # Find index pair (rdx,cdx) with value 1 in submatrix M[s:,s:] + pivot = _get_next_pivot(S, s) + if not pivot: + break + else: + rdx, cdx = pivot + # Swap rows and columns as needed so that 1 is in the s,s position + if rdx > s: + S, L = _sr(s, rdx, S, L) + Linv = swap_columns(rdx, s, Linv)[0] + if cdx > s: + S, R = _sc(s, cdx, S, R) + # add sth row to every row with 1 in sth column & sth column to every column with 1 in sth row + row_indices = [idx for idx in range(s + 1, dimL) if S[idx, s] == 1] + for rdx in row_indices: + S, L = _ar(rdx, s, S, L) + Linv = add_to_column(Linv, s, rdx) + column_indices = [jdx for jdx in range(s + 1, dimR) if S[s, jdx] == 1] + for cdx in column_indices: + S, R = _ac(cdx, s, S, R) + return L, R, S, Linv
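A hand-worked instance of the $LMR = S$ identity on a 2x2 mod-2 matrix, with the elementary matrices written out explicitly (here $L$ is a row swap, so it is its own inverse and serves as `Linv` too):

```python
import numpy as np

M = np.array([[0, 1],
              [1, 1]])

# row operation: swap the two rows (applied by left-multiplication)
L = np.array([[0, 1],
              [1, 0]])

# column operation: add column 0 to column 1 mod 2 (right-multiplication)
R = np.array([[1, 1],
              [0, 1]])

# Smith Normal Form of M over Z/2Z: here the 2x2 identity
S = np.mod(L @ M @ R, 2)

# a row swap is an involution, so Linv = L
Linv = L
```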
+ + +
[docs]def reduced_row_echelon_form_mod2(M): + """ + Computes the invertible transformation matrices needed to compute + the reduced row echelon form of M modulo 2 + + Parameters + ---------- + M : np.array + a rectangular matrix with elements in $Z_2$ + + Returns + ------- + L, S, Linv : np.arrays + LM = S where S is the reduced echelon form of M + and M = LinvS + """ + S = copy.deepcopy(M) + dimL, dimR = M.shape + + # method with numpy + Linv = np.eye(dimL, dtype=int) + L = np.eye(dimL, dtype=int) + + s2 = 0 + s1 = 0 + while s2 <= dimR and s1 <= dimL: + # Find index pair (rdx,cdx) with value 1 in submatrix M[s1:,s2:] + # look for the first 1 in the s2 column + pivot = _get_next_pivot(S, s1, s2) + + if not pivot: + return L, S, Linv + else: + rdx, cdx = pivot + if rdx > s1: + # Swap rows as needed so that 1 leads the row + S, L = _sr(s1, rdx, S, L) + Linv = swap_columns(rdx, s1, Linv)[0] + # add s1th row to every nonzero row + row_indices = [ + idx for idx in range(0, dimL) if idx != s1 and S[idx, cdx] == 1 + ] + for idx in row_indices: + S, L = _ar(idx, s1, S, L) + Linv = add_to_column(Linv, s1, idx) + s1, s2 = s1 + 1, cdx + 1 + + return L, S, Linv
+ + +
[docs]def boundary_group(image_basis): + """ + Returns a csr_matrix with rows corresponding to the elements of the + group generated by image basis over $\mathbb{Z}_2$ + + Parameters + ---------- + image_basis : numpy.ndarray or scipy.sparse.csr_matrix + 2d-array of basis elements + + Returns + ------- + : scipy.sparse.csr_matrix + """ + if len(image_basis) > 10: + msg = """ + This method is inefficient for large image bases. + """ + warnings.warn(msg, stacklevel=2) + if np.sum(image_basis) == 0: + return None + dim = image_basis.shape[0] + itm = csr_matrix(list(it.product([0, 1], repeat=dim))) + return csr_matrix(np.mod(itm * image_basis, 2))
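The group enumeration above is just every $Z_2$ coefficient vector applied to the basis rows, which is what the `it.product([0, 1], repeat=dim)` step produces. A dense-array version for a 2-element basis:

```python
import itertools as it

import numpy as np

basis = np.array([[1, 1, 0],
                  [0, 1, 1]])

# all {0,1} coefficient vectors, one per subset of basis rows
coeffs = np.array(list(it.product([0, 1], repeat=basis.shape[0])))

# mod-2 span of the basis: 2**2 = 4 group elements, including the zero vector
group = np.mod(coeffs @ basis, 2)
```

The exponential blow-up in the number of rows (2^dim) is exactly why the function warns when the image basis has more than 10 elements.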
+ + +def _compute_matrices_for_snf(bd): + """ + Helper method for smith normal form decomposition for boundary maps + associated to chain complex + + Parameters + ---------- + bd : dict + dict of k-boundary matrices keyed on dimension of domain + + Returns + ------- + L,R,S,Linv : dict + dict of matrices ranging over krange + + """ + L, R, S, Linv = [dict() for i in range(4)] + + for kdx in bd: + L[kdx], R[kdx], S[kdx], Linv[kdx] = smith_normal_form_mod2(bd[kdx]) + return L, R, S, Linv + + +def _get_krange(max_dim, k=None): + """ + Helper method to compute range of appropriate k dimensions for homology + computations given k and the max dimension of a simplicial complex + """ + if k is None: + krange = [1, max_dim] + elif isinstance(k, int): + if k == 0: + msg = ( + "Only kth simplicial homology groups for k>0 may be computed." + "If you are interested in k=0, compute the number connected components." + ) + print(msg) + return + if k > max_dim: + msg = f"No simplices of dim {k} exist. k adjusted to max dim." + print(msg) + krange = [min([k, max_dim])] * 2 + elif not len(k) == 2: + msg = f"Please enter krange as a positive integer or list of integers: [<min k>,<max k>] inclusive." + print(msg) + return None + elif not k[0] <= k[1]: + msg = f"k must be an integer or a list of two integers [min,max] with min <=max" + print(msg) + return None + else: + krange = k + + if krange[1] > max_dim: + msg = f"No simplices of dim {krange[1]} exist. Range adjusted to max dim." + print(msg) + krange[1] = max_dim + if krange[0] < 1: + msg = ( + "Only kth simplicial homology groups for k>0 may be computed." + "If you are interested in k=0, compute the number of connected components." + ) + print(msg) + krange[0] = 1 + return krange + + +
[docs]def chain_complex(h, k=None): + """ + Compute the k-chains and k-boundary maps required to compute homology + for all values in k + + Parameters + ---------- + h : hnx.Hypergraph + k : int or list of length 2, optional, default=None + k must be an integer greater than 0 or a list of + length 2 indicating min and max dimensions to be + computed. eg. if k = [1,2] then 0,1,2,3-chains + and boundary maps for k=1,2,3 will be returned, + if None than k = [1,max dimension of edge in h] + + Returns + ------- + C, bd : dict + C is a dictionary of lists + bd is a dictionary of numpy arrays + """ + max_dim = np.max([len(h.edges[e]) for e in h.edges()]) - 1 + krange = _get_krange(max_dim, k) + if not krange: + return + # Compute chain complex + + C = defaultdict(list) + C[krange[0] - 1] = kchainbasis(h, krange[0] - 1) + bd = dict() + for kdx in range(krange[0], krange[1] + 2): + C[kdx] = kchainbasis(h, kdx) + bd[kdx] = bkMatrix(C[kdx - 1], C[kdx]) + return C, bd
+ + +
[docs]def betti(bd, k=None): + """ + Generate the kth-betti numbers for a chain complex with boundary + matrices given by bd + + Parameters + ---------- + bd: dict of k-boundary matrices keyed on dimension of domain + k : int, list or tuple, optional, default=None + list must be min value and max value of k values inclusive + if None, then all betti numbers for dimensions of existing cells will be + computed. + + Returns + ------- + betti : dict + A dictionary of betti numbers keyed by dimension + """ + rank = defaultdict(int) + if k: + max_dim = max(bd.keys()) + krange = _get_krange(max_dim, k) + if not krange: + return + kvals = sorted(set(range(krange[0], krange[1] + 2)).intersection(bd.keys())) + else: + kvals = bd.keys() + for kdx in kvals: + _, S, _ = hnx.reduced_row_echelon_form_mod2(bd[kdx]) + rank[kdx] = np.sum(np.sum(S, axis=1).astype(bool)) + + betti = dict() + for kdx in kvals: + if kdx + 1 in kvals: + betti[kdx] = bd[kdx].shape[1] - rank[kdx] - rank[kdx + 1] + else: + continue + + return betti
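The rank used above comes from row reduction mod 2. A self-contained NumPy sketch of the same idea (`rank_mod2` is a hypothetical helper, not the library's `reduced_row_echelon_form_mod2`) recovers b1 = 1 for the hollow triangle, whose single 1-dimensional hole is the one the formula `dim C_k - rank bd_k - rank bd_{k+1}` detects:

```python
import numpy as np

def rank_mod2(M):
    # Gaussian elimination over GF(2); returns the matrix rank mod 2.
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# boundary map bd_1 for the hollow triangle: columns = edges ab, ac, bc
bd1 = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
b1 = bd1.shape[1] - rank_mod2(bd1) - 0  # no 2-cells, so rank bd_2 = 0
print(b1)  # 1
```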
+ + +
[docs]def betti_numbers(h, k=None): + """ + Return the kth betti numbers for the simplicial homology of the ASC + associated to h + + Parameters + ---------- + h : hnx.Hypergraph + Hypergraph to compute the betti numbers from + k : int or list, optional, default=None + list must be min value and max value of k values inclusive + if None, then all betti numbers for dimensions of existing cells will be + computed. + + Returns + ------- + betti : dict + A dictionary of betti numbers keyed by dimension + """ + _, bd = chain_complex(h, k) + return betti(bd)
+ + +
[docs]def homology_basis(bd, k=None, boundary=False, **kwargs): + """ + Compute a basis for the kth-simplicial homology group, $H_k$, defined by a + chain complex $C$ with boundary maps given by bd $= \{k:\partial_k \}$ + + Parameters + ---------- + bd : dict + dict of boundary matrices on k-chains to k-1 chains keyed on k + if krange is a tuple then all boundary matrices k \in [krange[0],..,krange[1]] + inclusive must be in the dictionary + k : int or list of ints, optional, default=None + k must be a positive integer or a list of + 2 integers indicating min and max dimensions to be + computed, if none given all homology groups will be computed from + available boundary matrices in bd + boundary : bool + option to return a basis for the boundary group from each dimension. + Needed to compute the shortest generators in the homology group. + + Returns + ------- + basis : dict + dict of generators as 0-1 tuples keyed by dim + basis for dimension k will be returned only if bd[k] and bd[k+1] have + been provided. 
+ im : dict + dict of boundary group generators keyed by dim + """ + max_dim = max(bd.keys()) + if k: + krange = _get_krange(max_dim, k) + kvals = sorted( + set(bd.keys()).intersection(range(krange[0], krange[1] + 2)) + ) # to get kth dim need k+1 bdry matrix + else: + kvals = bd.keys() + + L, R, S, Linv = _compute_matrices_for_snf( + {k: v for k, v in bd.items() if k in kvals} + ) + + rank = dict() + for kdx in kvals: + rank[kdx] = np.sum(S[kdx]) + + basis = dict() + im = dict() + for kdx in kvals: + if kdx + 1 not in kvals: + continue + rank1 = rank[kdx] + rank2 = rank[kdx + 1] + ker1 = R[kdx][:, rank1:] + im2 = Linv[kdx + 1][:, :rank2] + cokernel2 = Linv[kdx + 1][:, rank2:] + cokproj2 = L[kdx + 1][rank2:, :] + + proj = matmulreduce([cokernel2, cokproj2, ker1]).transpose() + _, proj, _ = reduced_row_echelon_form_mod2(proj) + # proj = np.array(proj) + basis[kdx] = np.array([row for row in proj if np.any(row)]) + if boundary: + im[kdx] = im2.transpose() + if boundary: + return basis, im + else: + return basis
+ + +
[docs]def hypergraph_homology_basis(h, k=None, shortest=False, interpreted=True): + """ + Computes the kth-homology groups mod 2 for the ASC + associated with the hypergraph h for k in krange inclusive + + Parameters + ---------- + h : hnx.Hypergraph + k : int or list of length 2, optional, default = None + k must be an integer greater than 0 or a list of + length 2 indicating min and max dimensions to be + computed + shortest : bool, optional, default=False + option to look for shortest representative for each coset in the + homology group, only good for relatively small examples + interpreted : bool, optional, default = True + if True will return an explicit basis in terms of the k-chains + + Returns + ------- + basis : list + list of generators as k-chains as boolean vectors + interpreted_basis : + lists of kchains in basis + + """ + C, bd = chain_complex(h, k) + if shortest: + basis = defaultdict(list) + tbasis, im = homology_basis(bd, boundary=True) + for kdx in tbasis: + imgrp = boundary_group(im[kdx]) + if imgrp is None: + basis[kdx] = tbasis[kdx] + else: + for b in tbasis[kdx]: + coset = np.array( + np.mod(imgrp + b, 2) + ) # dimensions appear to be wrong. See tests2 cell 5 + idx = np.argmin(np.sum(coset, axis=1)) + basis[kdx].append(coset[idx]) + basis = dict(basis) + + else: + basis = homology_basis(bd) + + if interpreted: + interpreted_basis = dict() + for kdx in basis: + interpreted_basis[kdx] = interpret(C[kdx], basis[kdx], labels=None) + return basis, interpreted_basis + else: + return basis
+ + +
[docs]def interpret(Ck, arr, labels=None): + """ + Returns the data as represented in Ck associated with the arr + + Parameters + ---------- + Ck : list + a list of k-cells being referenced by arr + arr : np.array + array of 0-1 vectors + labels : dict, optional + dictionary of labels to associate to the nodes in the cells + + Returns + ---- + : list + list of k-cells referenced by data in Ck + + """ + + def translate(cell, labels=labels): + if not labels: + return cell + else: + temp = list() + for node in cell: + temp.append(labels[node]) + return tuple(temp) + + output = list() + for vec in arr: + if len(Ck) != len(vec): + raise HyperNetXError("elements of arr must have the same length as Ck") + output.append([translate(Ck[idx]) for idx in range(len(vec)) if vec[idx] == 1]) + return output
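With `labels=None`, `interpret` reduces to a masked lookup into `Ck`; an equivalent stand-alone sketch of that case:

```python
Ck = [("a", "b"), ("a", "c"), ("b", "c")]
arr = [[1, 0, 1]]
# keep the k-cells whose positions are flagged in each 0-1 vector
cells = [[Ck[i] for i, bit in enumerate(vec) if bit == 1] for vec in arr]
print(cells)  # [[('a', 'b'), ('b', 'c')]]
```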
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/algorithms/hypergraph_modularity.html b/_modules/algorithms/hypergraph_modularity.html new file mode 100644 index 00000000..1050009d --- /dev/null +++ b/_modules/algorithms/hypergraph_modularity.html @@ -0,0 +1,710 @@ + + + + + + algorithms.hypergraph_modularity — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for algorithms.hypergraph_modularity

+"""
+Hypergraph_Modularity
+---------------------
+Modularity and clustering for hypergraphs using HyperNetX.
+Adapted from F. Théberge's GitHub repository: `Hypergraph Clustering <https://github.com/ftheberge/Hypergraph_Clustering>`_
+See Tutorial 13 in the tutorials folder for library usage.
+
+References
+----------
+.. [1] Kumar T., Vaidyanathan S., Ananthapadmanabhan H., Parthasarathy S. and Ravindran B. "A New Measure of Modularity in Hypergraphs: Theoretical Insights and Implications for Effective Clustering". In: Cherifi H., Gaito S., Mendes J., Moro E., Rocha L. (eds) Complex Networks and Their Applications VIII. COMPLEX NETWORKS 2019. Studies in Computational Intelligence, vol 881. Springer, Cham. https://doi.org/10.1007/978-3-030-36687-2_24
+.. [2] Kamiński  B., Prałat  P. and Théberge  F. "Community Detection Algorithm Using Hypergraph Modularity". In: Benito R.M., Cherifi C., Cherifi H., Moro E., Rocha L.M., Sales-Pardo M. (eds) Complex Networks & Their Applications IX. COMPLEX NETWORKS 2020. Studies in Computational Intelligence, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-65347-7_13
+.. [3] Kamiński  B., Poulin V., Prałat  P., Szufel P. and Théberge  F. "Clustering via hypergraph modularity", Plos ONE 2019, https://doi.org/10.1371/journal.pone.0224307
+"""
+
+from collections import Counter
+import numpy as np
+import itertools
+from scipy.special import comb
+
+try:
+    import igraph as ig
+except ModuleNotFoundError as e:
+    print(
+        f" {e}. If you need to use {__name__}, please install additional packages by running the following command: pip install .['all']"
+    )
+################################################################################
+
+# we use 2 representations for partitions (0-based part ids):
+# (1) dictionary or (2) list of sets
+
+
+
[docs]def dict2part(D): + """ + Given a dictionary mapping the part for each vertex, return a partition as a list of sets; inverse function to part2dict + + Parameters + ---------- + D : dict + Dictionary keyed by vertices with values equal to integer + index of the partition the vertex belongs to + + Returns + ------- + : list + List of sets; one set for each part in the partition + """ + P = [] + k = list(D.keys()) + v = list(D.values()) + for x in range(max(D.values()) + 1): + P.append(set([k[i] for i in range(len(k)) if v[i] == x])) + return P
+ + +
[docs]def part2dict(A): + """ + Given a partition (list of sets), returns a dictionary mapping the part for each vertex; inverse function + to dict2part + + Parameters + ---------- + A : list of sets + a partition of the vertices + + Returns + ------- + : dict + a dictionary with {vertex: partition index} + """ + x = [] + for i in range(len(A)): + x.extend([(a, i) for a in A[i]]) + return {k: v for k, v in x}
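`dict2part` and `part2dict` are inverses of one another. A compact round-trip sketch restating the logic outside the library (it assumes, as the library does, that every part index 0..max occurs in the dictionary):

```python
def dict2part(D):
    # one set per 0-based part index
    parts = [set() for _ in range(max(D.values()) + 1)]
    for v, i in D.items():
        parts[i].add(v)
    return parts

def part2dict(A):
    return {v: i for i, part in enumerate(A) for v in part}

D = {"a": 0, "b": 0, "c": 1}
P = dict2part(D)
assert P == [{"a", "b"}, {"c"}]
assert part2dict(P) == D  # the two converters are inverses
```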
+ + +################################################################################ + + +
[docs]def precompute_attributes(H): + """ + Precompute some values on hypergraph H for faster computation of hypergraph modularity. + This needs to be run before calling either modularity() or last_step(). + + Note + ---- + + If H is unweighted, e.weight is set to 1 for each edge e in H. + The weighted degree for each vertex v is stored in v.strength. + The total edge weight for each edge cardinality is stored in H.d_weights. + Binomial coefficients to speed up modularity computation are stored in H.bin_coef. + Isolated vertices found only in edge(s) of size 1 are dropped. + + Parameters + ---------- + H : Hypergraph + + Returns + ------- + H : Hypergraph + New hypergraph with added attributes + + """ + # 1. compute node strengths (weighted degrees) + for v in H.nodes: + H.nodes[v].strength = 0 + for e in H.edges: + try: + w = H.edges[e].weight + except: + w = 1 + # add unit weight if none to simplify other functions + H.edges[e].weight = 1 + for v in list(H.edges[e]): + H.nodes[v].strength += w + # 2. compute d-weights + ctr = Counter([len(H.edges[e]) for e in H.edges]) + for k in ctr.keys(): + ctr[k] = 0 + for e in H.edges: + ctr[len(H.edges[e])] += H.edges[e].weight + H.d_weights = ctr + H.total_weight = sum(ctr.values()) + # 3. compute binomial coefficients (modularity speed-up) + bin_coef = {} + for n in H.d_weights.keys(): + for k in np.arange(n // 2 + 1, n + 1): + bin_coef[(n, k)] = comb(n, k, exact=True) + H.bin_coef = bin_coef + return H
+ + +################################################################################ + + +
[docs]def linear(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This is the default choice for modularity() and last_step() functions. + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : float + c/d if c>d/2 else 0 + """ + return c / d if c > d / 2 else 0
+ + +# majority + + +
[docs]def majority(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This corresponds to the majority rule [3]_ + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : bool + 1 if c>d/2 else 0 + + """ + return 1 if c > d / 2 else 0
+ + +# strict + + +
[docs]def strict(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This corresponds to the strict rule [3]_ + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : bool + 1 if c==d else 0 + """ + return 1 if c == d else 0
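The three supplied hyperparameter rules differ only in how much weight a strict-majority edge contributes; a quick stand-alone comparison (the three one-liners are restated verbatim from the functions above):

```python
def linear(d, c):
    return c / d if c > d / 2 else 0

def majority(d, c):
    return 1 if c > d / 2 else 0

def strict(d, c):
    return 1 if c == d else 0

# a 4-edge with 3 vertices in the majority class
print(linear(4, 3), majority(4, 3), strict(4, 3))  # 0.75 1 0
# no strict majority (c <= d/2): every rule contributes 0
print(linear(4, 2), majority(4, 2), strict(4, 2))  # 0 0 0
```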
+ + +######################################### + + +def _compute_partition_probas(HG, A): + """ + Compute vol(A_i)/vol(V) for each part A_i in A (list of sets) + + Parameters + ---------- + HG : Hypergraph + A : list of sets + + Returns + ------- + : list + normalized distribution of strengths in partition elements + """ + p = [] + for part in A: + vol = 0 + for v in part: + vol += HG.nodes[v].strength + p.append(vol) + s = sum(p) + return [i / s for i in p] + + +def _degree_tax(HG, Pr, wdc): + """ + Computes the expected fraction of edges falling in + the partition as per [2]_ + + Parameters + ---------- + HG : Hypergraph + + Pr : list + Probability distribution + wdc : func + weight function for edge contribution (ex: strict, majority, linear) + + Returns + ------- + float + + """ + DT = 0 + for d in HG.d_weights.keys(): + tax = 0 + for c in np.arange(d // 2 + 1, d + 1): + for p in Pr: + tax += p**c * (1 - p) ** (d - c) * HG.bin_coef[(d, c)] * wdc(d, c) + tax *= HG.d_weights[d] + DT += tax + DT /= HG.total_weight + return DT + + +def _edge_contribution(HG, A, wdc): + """ + Edge contribution from hypergraph with respect + to partion A. + + Parameters + ---------- + HG : Hypergraph + + A : list of sets + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + + """ + EC = 0 + for e in HG.edges: + d = HG.size(e) + for part in A: + hgs = HG.size(e, part) + if hgs > d / 2: + EC += wdc(d, hgs) * HG.edges[e].weight + break + EC /= HG.total_weight + return EC + + +# HG: HNX hypergraph +# A: partition (list of sets) +# wcd: weight function (ex: strict, majority, linear) + + +
[docs]def modularity(HG, A, wdc=linear): + """ + Computes modularity of hypergraph HG with respect to partition A. + + Parameters + ---------- + HG : Hypergraph + The hypergraph with some precomputed attributes via: precompute_attributes(HG) + A : list of sets + Partition of the vertices in HG + wdc : func, optional + Hyperparameter for hypergraph modularity [2]_ + + Note + ---- + For 'wdc', any function of the format w(d,c) that returns 0 when c <= d/2 and value in [0,1] otherwise can be used. + Default is 'linear'; other supplied choices are 'majority' and 'strict'. + + Returns + ------- + : float + The modularity function for partition A on HG + """ + Pr = _compute_partition_probas(HG, A) + return _edge_contribution(HG, A, wdc) - _degree_tax(HG, Pr, wdc)
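Putting `_edge_contribution` and `_degree_tax` together, here is a compact unweighted re-derivation of the modularity formula. This is a hypothetical sketch, not the library code: it assumes unit edge weights and uses plain degrees for vertex strengths, so it skips `precompute_attributes` entirely.

```python
from collections import Counter
from math import comb

def linear(d, c):
    return c / d if c > d / 2 else 0

def modularity(edges, A, wdc=linear):
    # edges: list of sets (unit weights); A: partition as a list of sets
    total = len(edges)
    # edge contribution: at most one part can hold a strict majority of an edge
    EC = sum(wdc(len(e), len(e & part)) for e in edges for part in A
             if len(e & part) > len(e) / 2)
    # volume fraction of each part (strength = plain degree here)
    deg = Counter(v for e in edges for v in e)
    vol = sum(deg.values())
    Pr = [sum(deg[v] for v in part) / vol for part in A]
    # degree tax: expected majority weight under the null model
    DT = 0
    d_weights = Counter(len(e) for e in edges)
    for d, w in d_weights.items():
        for p in Pr:
            for c in range(d // 2 + 1, d + 1):
                DT += w * comb(d, c) * p**c * (1 - p)**(d - c) * wdc(d, c)
    return (EC - DT) / total

edges = [{"a", "b", "c"}, {"c", "d"}]
q = modularity(edges, [{"a", "b", "c"}, {"d"}])
```

Grouping everything into a single part yields modularity 0, as expected for the trivial partition.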
+ + +################################################################################ + + +
[docs]def two_section(HG): + """ + Creates a random walk based [1]_ 2-section igraph Graph with transition weights defined by the + weights of the hyperedges. + + Parameters + ---------- + HG : Hypergraph + + Returns + ------- + : igraph.Graph + The 2-section graph built from HG + """ + s = [] + for e in HG.edges: + E = HG.edges[e] + # random-walk 2-section (preserve nodes' weighted degrees) + if len(E) > 1: + try: + w = HG.edges[e].weight / (len(E) - 1) + except: + w = 1 / (len(E) - 1) + s.extend([(k[0], k[1], w) for k in itertools.combinations(E, 2)]) + G = ig.Graph.TupleList(s, weights=True).simplify(combine_edges="sum") + return G
+ + +################################################################################ + + +
[docs]def kumar(HG, delta=0.01): + """ + Compute a partition of the vertices in hypergraph HG as per Kumar's algorithm [1]_ + + Parameters + ---------- + HG : Hypergraph + + delta : float, optional + convergence stopping criterion + + Returns + ------- + : list of sets + A partition of the vertices in HG + + """ + # weights will be modified -- store initial weights + W = { + e: HG.edges[e].weight for e in HG.edges + } # uses edge id for reference instead of int + # build graph + G = two_section(HG) + # apply clustering + CG = G.community_multilevel(weights="weight") + CH = [] + for comm in CG.as_cover(): + CH.append(set([G.vs[x]["name"] for x in comm])) + + # LOOP + diff = 1 + ctr = 0 + while diff > delta: + # re-weight + diff = 0 + for e in HG.edges: + edge = HG.edges[e] + reweight = ( + sum([1 / (1 + HG.size(e, c)) for c in CH]) + * (HG.size(e) + len(CH)) + / HG.number_of_edges() + ) + diff = max(diff, 0.5 * abs(edge.weight - reweight)) + edge.weight = 0.5 * edge.weight + 0.5 * reweight + # re-run louvain + # build graph + G = two_section(HG) + # apply clustering + CG = G.community_multilevel(weights="weight") + CH = [] + for comm in CG.as_cover(): + CH.append(set([G.vs[x]["name"] for x in comm])) + ctr += 1 + if ctr > 50: # this process sometimes gets stuck -- set limit + break + G.vs["part"] = CG.membership + for e in HG.edges: + HG.edges[e].weight = W[e] + return dict2part({v["name"]: v["part"] for v in G.vs})
+ + +################################################################################ + + +def _delta_ec(HG, P, v, a, b, wdc): + """ + Computes change in edge contribution -- + partition P, node v going from P[a] to P[b] + + Parameters + ---------- + HG : Hypergraph + + P : list of sets + + v : int or str + node identifier + a : int + + b : int + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + """ + Pm = P[a] - {v} + Pn = P[b].union({v}) + ec = 0 + + # TODO: Verify the data shape of `memberships` (ie. what are the keys and values) + for e in list(HG.nodes.memberships[v]): + d = HG.size(e) + w = HG.edges[e].weight + ec += w * ( + wdc(d, HG.size(e, Pm)) + + wdc(d, HG.size(e, Pn)) + - wdc(d, HG.size(e, P[a])) + - wdc(d, HG.size(e, P[b])) + ) + return ec / HG.total_weight + + +def _bin_ppmf(d, c, p): + """ + exponential part of the binomial pmf + + Parameters + ---------- + d : int + + c : int + + p : float + + + Returns + ------- + : float + + """ + return p**c * (1 - p) ** (d - c) + + +def _delta_dt(HG, P, v, a, b, wdc): + """ + Compute change in degree tax -- + partition P (list), node v going from P[a] to P[b] + + Parameters + ---------- + HG : Hypergraph + + P : list of sets + + v : int or str + node identifier + a : int + + b : int + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + + """ + s = HG.nodes[v].strength + vol = sum([HG.nodes[v].strength for v in HG.nodes]) + vola = sum([HG.nodes[v].strength for v in P[a]]) + volb = sum([HG.nodes[v].strength for v in P[b]]) + volm = (vola - s) / vol + voln = (volb + s) / vol + vola /= vol + volb /= vol + DT = 0 + + for d in HG.d_weights.keys(): + x = 0 + for c in np.arange(int(np.floor(d / 2)) + 1, d + 1): + x += ( + HG.bin_coef[(d, c)] + * wdc(d, c) + * ( + _bin_ppmf(d, c, voln) + + _bin_ppmf(d, c, volm) + - _bin_ppmf(d, c, vola) + - _bin_ppmf(d, c, volb) + ) + ) + DT += x * HG.d_weights[d] + return DT / HG.total_weight + + +
[docs]def last_step(HG, L, wdc=linear, delta=0.01): + """ + Given some initial partition L, compute a new partition of the vertices in HG as per Last-Step algorithm [2]_ + + Note + ---- + This is a very simple algorithm that tries moving nodes between communities to improve hypergraph modularity. + It requires an initial non-trivial partition which can be obtained for example via graph clustering on the 2-section of HG, + or via Kumar's algorithm. + + Parameters + ---------- + HG : Hypergraph + + L : list of sets + some initial partition of the vertices in HG + + wdc : func, optional + Hyperparameter for hypergraph modularity [2]_ + + delta : float, optional + convergence stopping criterion + + Returns + ------- + : list of sets + A new partition for the vertices in HG + """ + A = L[:] # we will modify this, copy + D = part2dict(A) + qH = 0 + while True: + for v in list(np.random.permutation(list(HG.nodes))): + c = D[v] + s = list(set([c] + [D[i] for i in HG.neighbors(v)])) + M = [] + if len(s) > 0: + for i in s: + if c == i: + M.append(0) + else: + M.append( + _delta_ec(HG, A, v, c, i, wdc) + - _delta_dt(HG, A, v, c, i, wdc) + ) + i = s[np.argmax(M)] + if c != i: + A[c] = A[c] - {v} + A[i] = A[i].union({v}) + D[v] = i + Pr = _compute_partition_probas(HG, A) + q2 = _edge_contribution(HG, A, wdc) - _degree_tax(HG, Pr, wdc) + if (q2 - qH) < delta: + break + qH = q2 + return [a for a in A if len(a) > 0]
+ + +################################################################################ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/algorithms/laplacians_clustering.html b/_modules/algorithms/laplacians_clustering.html new file mode 100644 index 00000000..63152f91 --- /dev/null +++ b/_modules/algorithms/laplacians_clustering.html @@ -0,0 +1,348 @@ + + + + + + algorithms.laplacians_clustering — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for algorithms.laplacians_clustering

+# Copyright © 2021 Battelle Memorial Institute
+# All rights reserved.
+
+"""
+
+Hypergraph Probability Transition Matrices, Laplacians, and Clustering
+======================================================================
+We construct hypergraph random walks utilizing optional "edge-dependent vertex weights", which are
+weights associated with each vertex-hyperedge pair (i.e. cell weights on the incidence matrix).
+The probability transition matrix of this random walk is used to construct a normalized Laplacian
+matrix for the hypergraph. That normalized Laplacian then serves as the input for a spectral clustering
+algorithm. This spectral clustering algorithm, as well as the normalized Laplacian and other details of
+this methodology are described in
+
+K. Hayashi, S. Aksoy, C. Park, H. Park, "Hypergraph random walks, Laplacians, and clustering",
+Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020.
+https://doi.org/10.1145/3340531.3412034
+
+Please direct any inquiries concerning the clustering module to Sinan Aksoy, sinan.aksoy@pnnl.gov
+
+"""
+
+import numpy as np
+import sys
+from scipy.sparse import csr_matrix, diags, identity
+from scipy.sparse.linalg import eigs
+from sklearn.cluster import KMeans
+from sklearn import preprocessing
+from hypernetx import HyperNetXError
+
+try:
+    import nwhy
+
+    nwhy_available = True
+except:
+    nwhy_available = False
+
+sys.setrecursionlimit(10000)
+
+
+
[docs]def prob_trans(H, weights=False, index=True, check_connected=True): + """ + The probability transition matrix of a random walk on the vertices of a hypergraph. + At each step in the walk, the next vertex is chosen by: + + 1. Selecting a hyperedge e containing the vertex with probability proportional to w(e) + 2. Selecting a vertex v within e with probability proportional to a \gamma(v,e) + + If weights are not specified, then all weights are uniform and the walk is equivalent + to a simple random walk. + If weights are specified, the hyperedge weights w(e) are determined from the weights + \gamma(v,e). + + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + weights : bool, optional, default : False + Use the cell_weights associated with the hypergraph + If False, uniform weights are utilized. + index : bool, optional + Whether to return matrix index to vertex label mapping + + Returns + ------- + P : scipy.sparse.csr.csr_matrix + Probability transition matrix of the random walk on the hypergraph + index: list + contains list of index of node ids for rows + """ + # hypergraph must be connected + if check_connected: + if not H.is_connected(): + raise HyperNetXError("hypergraph must be connected") + + R = H.incidence_matrix(index=index, weights=weights) + if index: + R, rdx, _ = R + + # transpose incidence matrix for notational convenience + R = R.transpose() + + # generates hyperedge weight matrix, has same nonzero pattern as incidence matrix, + # with values determined by the edge-dependent vertex weight standard deviation + edgeScore = { + i: np.std(R.getrow(i).data) + 1 for i in range(R.shape[0]) + } # hyperedge weights + vals = [edgeScore[i] for i in R.nonzero()[0]] + W = csr_matrix( + (vals, (R.nonzero()[1], R.nonzero()[0])), shape=(R.shape[1], R.shape[0]) + ) + + # generate diagonal degree matrices used to normalize probability transition matrix + [rowSums] = 
R.sum(axis=1).flatten().tolist() + D_E = diags([1 / x for x in rowSums]) + + [rowSums] = W.sum(axis=1).flatten().tolist() + D_V = diags([1 / x for x in rowSums]) + + # probability transition matrix P + P = D_V * W * D_E * R + + if index: + return P, rdx + return P
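The matrix product at the end of `prob_trans` can be sketched densely with plain NumPy. This is a toy example with uniform weights (so the walk is a simple random walk), not the library's sparse implementation:

```python
import numpy as np

# toy incidence matrix: rows = vertices a, b, c; columns = hyperedges
M = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

R = M.T                               # edges x vertices, as in prob_trans
W = M.copy()                          # uniform gamma(v,e) -> simple random walk
D_E = np.diag(1 / R.sum(axis=1))      # normalize within each hyperedge
D_V = np.diag(1 / W.sum(axis=1))      # normalize over each vertex's edges

P = D_V @ W @ D_E @ R                 # vertex-to-vertex transition matrix
assert np.allclose(P.sum(axis=1), 1)  # every row is a probability distribution
```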
+ + +
[docs]def get_pi(P): + """ + Returns the eigenvector corresponding to the largest eigenvalue (in magnitude), + normalized so its entries sum to 1. Intended for the probability transition matrix + of a random walk on a (connected) hypergraph, in which case the output can + be interpreted as the stationary distribution. + + Parameters + ---------- + P : csr matrix + Probability transition matrix + + Returns + ------- + pi : numpy.ndarray + Stationary distribution of random walk defined by P + """ + rho, pi = eigs( + np.transpose(P), k=1, return_eigenvectors=True + ) # dominant eigenvector + pi = np.real(pi / np.sum(pi)).flatten() # normalize as prob distribution + return pi
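As a sanity check of the idea behind `get_pi`, here is a minimal NumPy version on a 2-state chain, using dense `numpy.linalg.eig` in place of the sparse `eigs` call:

```python
import numpy as np

# transition matrix of an ergodic 2-state chain (rows sum to 1)
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

vals, vecs = np.linalg.eig(P.T)               # left eigenvectors of P
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                            # normalize to a probability vector

assert np.allclose(pi @ P, pi)                # pi is stationary
```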
+ + +
[docs]def norm_lap(H, weights=False, index=True): + """ + Normalized Laplacian matrix of the hypergraph. Symmetrizes the probability transition + matrix of a hypergraph random walk using the stationary distribution, using the digraph + Laplacian defined in: + + Chung, Fan. "Laplacians and the Cheeger inequality for directed graphs." + Annals of Combinatorics 9.1 (2005): 1-19. + + and studied in the context of hypergraphs in: + + Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. + Hypergraph random walks, laplacians, and clustering. + In Proceedings of CIKM 2020, (2020): 495-504. + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + weight : bool, optional, default : False + Uses cell_weights, if False, uniform weights are utilized. + index : bool, optional + Whether to return matrix-index to vertex-label mapping + + Returns + ------- + P : scipy.sparse.csr.csr_matrix + Probability transition matrix of the random walk on the hypergraph + id: list + contains list of index of node ids for rows + """ + P = prob_trans(H, weights=weights, index=index) + if index: + P, idx = P + + pi = get_pi(P) + gamma = diags(np.power(pi, 1 / 2)) * P * diags(np.power(pi, -1 / 2)) + L = identity(gamma.shape[0]) - (1 / 2) * (gamma + gamma.transpose()) + + if index: + return L, idx + return L
+ + +
[docs]def spec_clus(H, k, existing_lap=None, weights=False): + """ + Hypergraph spectral clustering of the vertex set into k disjoint clusters + using the normalized hypergraph Laplacian. Equivalent to the "RDC-Spec" + Algorithm 1 in: + + Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. + Hypergraph random walks, laplacians, and clustering. + In Proceedings of CIKM 2020, (2020): 495-504. + + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + k : int + Number of clusters + existing_lap : csr matrix, optional + An existing Laplacian to use; otherwise, the normalized hypergraph Laplacian + will be computed + weights : bool, optional + Use the cell_weights of the hypergraph. If False uniform weights are used. + + Returns + ------- + clusters : dict + Vertex cluster dictionary, keyed by integers 0,...,k-1, with lists of + vertices as values. + """ + if existing_lap is None: + if weights is None: + L, index = norm_lap(H) + else: + L, index = norm_lap(H, weights=weights) + else: + L = existing_lap + + # compute top eigenvectors + e, v = eigs(identity(L.shape[0]) - L, k=k, which="LM", return_eigenvectors=True) + v = np.real(v) # ignore zero complex parts + v = preprocessing.normalize(v, norm="l2", axis=1) # normalize + U = np.array(v) + km = KMeans(init="k-means++", n_clusters=k, random_state=0) # k-means + km.fit(U) + d = km.labels_ + + # organize cluster assignments in dictionary of form cluster #: members + clusters = {i: [] for i in range(k)} + for i in range(len(index)): + clusters[d[i]].append(index[i]) + + return clusters
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/algorithms/s_centrality_measures.html b/_modules/algorithms/s_centrality_measures.html new file mode 100644 index 00000000..ead5ebe8 --- /dev/null +++ b/_modules/algorithms/s_centrality_measures.html @@ -0,0 +1,455 @@ + + + + + + algorithms.s_centrality_measures — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for algorithms.s_centrality_measures

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+"""
+
+S-Centrality Measures
+=====================
+We generalize graph metrics to s-metrics for a hypergraph by using its s-connected
+components. This is accomplished by computing the s edge-adjacency matrix and
+constructing the corresponding graph of the matrix. We then use existing graph metrics
+on this representation of the hypergraph. In essence we construct an *s*-line graph
+corresponding to the hypergraph on which to apply our methods.
+
+S-Metrics for hypergraphs are discussed in depth in:
+*Aksoy, S.G., Joslyn, C., Ortiz Marrero, C. et al. Hypernetwork science via high-order hypergraph walks.
+EPJ Data Sci. 9, 16 (2020). https://doi.org/10.1140/epjds/s13688-020-00231-0*
+
+"""
+
+import networkx as nx
+import warnings
+import sys
+from functools import partial
+
+try:
+    import nwhy
+
+    nwhy_available = True
+except:
+    nwhy_available = False
+
+sys.setrecursionlimit(10000)
+
+
+def _s_centrality(func, H, s=1, edges=True, f=None, return_singletons=True, **kwargs):
+    """
+    Wrapper for computing s-centrality either in NetworkX or in NWHy
+
+    Parameters
+    ----------
+    func : function
+        Function or partial function from NetworkX or NWHy
+    H : hnx.Hypergraph
+
+    s : int, optional
+        s-width for computation
+    edges : bool, optional
+        If True, an edge linegraph will be used, otherwise a node linegraph will be used
+    f : str, optional
+        Identifier of node or edge of interest for computing centrality
+    return_singletons : bool, optional
+        If True will return 0 value for each singleton in the s-linegraph
+    **kwargs
+        Centrality metric specific keyword arguments to be passed to func
+
+    Returns
+    -------
+    dict
+        dictionary of centrality scores keyed by names
+    """
+    comps = H.s_component_subgraphs(
+        s=s, edges=edges, return_singletons=return_singletons
+    )
+    if f is not None:
+        for cps in comps:
+            if (edges and f in cps.edges) or (not edges and f in cps.nodes):
+                comps = [cps]
+                break
+        else:
+            return {f: 0}
+
+    stats = dict()
+    for h in comps:
+        if edges:
+            vertices = h.edges
+        else:
+            vertices = h.nodes
+
+        if h.shape[edges * 1] == 1:
+            stats = {v: 0 for v in vertices}
+        else:
+            g = h.get_linegraph(s=s, edges=edges)
+            stats.update({k: v for k, v in func(g, **kwargs).items()})
+        if f:
+            return {f: stats[f]}
+
+    return stats
+
+
+
[docs]def s_betweenness_centrality( + H, s=1, edges=True, normalized=True, return_singletons=True +): + r""" + A centrality measure for an s-edge(node) subgraph of H based on shortest paths. + Equals the betweenness centrality of vertices in the edge(node) s-linegraph. + + In a graph (2-uniform hypergraph) the betweenness centrality of a vertex $v$ + is the ratio of the number of non-trivial shortest paths between any pair of + vertices in the graph that pass through $v$ divided by the total number of + non-trivial shortest paths in the graph. + + The centrality of an edge with respect to all shortest s-edge paths: + $V$ = the set of vertices in the linegraph. + $\sigma(s,t)$ = the number of shortest paths between vertices $s$ and $t$. + $\sigma(s,t|v)$ = the number of those paths that pass through vertex $v$. + + .. math:: + + c_B(v) = \sum_{s \neq t \in V} \frac{\sigma(s, t|v)}{\sigma(s,t)} + + Parameters + ---------- + H : hnx.Hypergraph + s : int + s connectedness requirement + edges : bool, optional + determines if edge or node linegraph + normalized : bool, optional, default=True + If true the betweenness values are normalized by `2/((n-1)(n-2))`, + where n is the number of edges in H + return_singletons : bool, optional + if False will ignore singleton components of linegraph + + Returns + ------- + dict + A dictionary of s-betweenness centrality values keyed by the edges (or nodes). + + """ + func = partial(nx.betweenness_centrality, normalized=False) + result = _s_centrality( + func, + H, + s=s, + edges=edges, + return_singletons=return_singletons, + ) + + if normalized and H.shape[edges * 1] > 2: + n = H.shape[edges * 1] + return {k: v * 2 / ((n - 1) * (n - 2)) for k, v in result.items()} + else: + return result
+ + +
[docs]def s_closeness_centrality(H, s=1, edges=True, return_singletons=True, source=None): + r""" + In a connected component, the number of edges(nodes) in the component minus 1, + divided by the sum of the distances between an edge(node) and all other + edges(nodes) in the component. + + $V$ = the set of vertices in the linegraph. + $n = |V|$ + $d$ = shortest path distance + + .. math:: + + C(u) = \frac{n - 1}{\sum_{v \neq u \in V} d(v, u)} + + + Parameters + ---------- + H : hnx.Hypergraph + + s : int, optional + + edges : bool, optional + Indicates if method should compute edge linegraph (default) or node linegraph. + return_singletons : bool, optional + Indicates if method should return values for singleton components. + source : str, optional + Identifier of node or edge of interest for computing centrality + + Returns + ------- + dict or float + returns the s-closeness centrality value of the edges(nodes). + If source=None a dictionary of values for each s-edge in H is returned. + If source is given, a single value is returned. + """ + func = nx.closeness_centrality + return _s_centrality( + func, + H, + s=s, + edges=edges, + return_singletons=return_singletons, + f=source, + )
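The formula in the docstring can be checked with a minimal sketch; the distance values below are hypothetical, standing in for shortest-path distances on a connected s-linegraph:

```python
# Minimal sketch of the closeness formula C(u) = (n - 1) / sum_v d(v, u),
# assuming the shortest-path distances from u to every other vertex of a
# connected s-linegraph are already known (hypothetical values below).
distances_from_u = {"a": 1, "b": 2, "c": 1}  # d(v, u) for each v != u

n = len(distances_from_u) + 1  # vertices in the component, including u
closeness = (n - 1) / sum(distances_from_u.values())
print(closeness)  # 3 / 4 = 0.75
```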
+ + +
[docs]def s_harmonic_closeness_centrality(H, s=1, edge=None): + msg = """ + s_harmonic_closeness_centrality is being replaced with s_harmonic_centrality + and will not be available in future releases. + """ + warnings.warn(msg) + return s_harmonic_centrality(H, s=s, edges=True, normalized=True, source=edge)
+ + +
[docs]def s_harmonic_centrality( + H, + s=1, + edges=True, + source=None, + normalized=False, + return_singletons=True, +): + r""" + A centrality measure for an s-edge subgraph of H. A value equal to 1 means the s-edge + intersects every other s-edge in H. All values range between 0 and 1. + Edges of size less than s return 0. If H contains only one s-edge a 0 is returned. + + The sum of the reciprocals of the distances from $u$ to all other vertices. + $V$ = the set of vertices in the linegraph. + $d$ = shortest path distance + + .. math:: + + C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)} + + Normalized this becomes: + $$C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)}\cdot\frac{2}{(n-1)(n-2)}$$ + where $n$ is the number of vertices. + + Parameters + ---------- + H : hnx.Hypergraph + + s : int, optional + + edges : bool, optional + Indicates if method should compute edge linegraph (default) or node linegraph. + return_singletons : bool, optional + Indicates if method should return values for singleton components. + source : str, optional + Identifier of node or edge of interest for computing centrality + + Returns + ------- + dict or float + returns the s-harmonic closeness centrality value of the edges, a number between 0 and 1 inclusive. + If source=None a dictionary of values for each s-edge in H is returned. + If source is given, a single value is returned. + + """ + + g = H.get_linegraph(s=s, edges=edges) + result = nx.harmonic_centrality(g) + + if normalized and H.shape[edges * 1] > 2: + n = H.shape[edges * 1] + factor = 2 / ((n - 1) * (n - 2)) + else: + factor = 1 + + if source: + return result[source] * factor + else: + return {k: v * factor for k, v in result.items()}
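Both the raw and the normalized harmonic values can be sketched directly from the formulas; the distances and `n` below are hypothetical:

```python
# Sketch of the harmonic formula C(u) = sum_v 1 / d(v, u) and its normalized
# variant, assuming hypothetical distances on a linegraph with n = 4 vertices.
distances_from_u = {"a": 1, "b": 1, "c": 2}  # d(v, u) for each v != u

harmonic = sum(1 / d for d in distances_from_u.values())  # 1 + 1 + 0.5 = 2.5
n = 4
normalized = harmonic * 2 / ((n - 1) * (n - 2))  # rescaled into [0, 1]
print(harmonic, normalized)
```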
+ + +
[docs]def s_eccentricity(H, s=1, edges=True, source=None, return_singletons=True): + r""" + The length of the longest shortest path from a vertex $u$ to every other vertex in + the s-linegraph. + $V$ = set of vertices in the s-linegraph + $d$ = shortest path distance + + .. math:: + + \text{s-ecc}(u) = \text{max}\{d(u,v): v \in V\} + + Parameters + ---------- + H : hnx.Hypergraph + + s : int, optional + + edges : bool, optional + Indicates if method should compute edge linegraph (default) or node linegraph. + return_singletons : bool, optional + Indicates if method should return values for singleton components. + source : str, optional + Identifier of node or edge of interest for computing centrality + + Returns + ------- + dict or float + returns the s-eccentricity value of the edges(nodes). + If source=None a dictionary of values for each s-edge in H is returned. + If source then a single value is returned. + If the s-linegraph is disconnected, np.inf is returned. + + """ + + g = H.get_linegraph(s=s, edges=edges) + result = nx.eccentricity(g) + if source: + return result[source] + else: + return result
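The eccentricity computed by `nx.eccentricity` on the linegraph amounts to the largest shortest-path distance from each vertex; a self-contained BFS sketch on a hypothetical toy linegraph:

```python
from collections import deque

# Sketch of s-eccentricity on a toy linegraph given as an adjacency dict
# (hypothetical): ecc(u) is the largest BFS distance from u to any vertex.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}

def eccentricity(adj, u):
    dist = {u: 0}
    queue = deque([u])
    while queue:  # breadth-first search from u
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return max(dist.values())

print({v: eccentricity(adj, v) for v in adj})  # {'a': 3, 'b': 2, 'c': 2, 'd': 3}
```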
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/classes/entity.html b/_modules/classes/entity.html new file mode 100644 index 00000000..670ac894 --- /dev/null +++ b/_modules/classes/entity.html @@ -0,0 +1,1739 @@ + + + + + + classes.entity — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for classes.entity

+from __future__ import annotations
+
+import warnings
+from ast import literal_eval
+from collections import OrderedDict, defaultdict
+from collections.abc import Hashable, Mapping, Sequence, Iterable
+from typing import Union, TypeVar, Optional, Any
+
+import numpy as np
+import pandas as pd
+from scipy.sparse import csr_matrix
+
+from hypernetx.classes.helpers import (
+    AttrList,
+    assign_weights,
+    remove_row_duplicates,
+    dict_depth,
+)
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class Entity: + """Base class for handling N-dimensional data when building network-like models, + i.e., :class:`Hypergraph` + + Parameters + ---------- + entity : pandas.DataFrame, dict of lists or sets, list of lists or sets, optional + If a ``DataFrame`` with N columns, + represents N-dimensional entity data (data table). + Otherwise, represents 2-dimensional entity data (system of sets). + TODO: Test for compatibility with list of Entities and update docs + data : numpy.ndarray, optional + 2D M x N ``ndarray`` of ``ints`` (data table); + sparse representation of an N-dimensional incidence tensor with M nonzero cells. + Ignored if `entity` is provided. + static : bool, default=False + If ``True``, entity data may not be altered, + and the :attr:`state_dict <_state_dict>` will never be cleared. + Otherwise, rows may be added to and removed from the data table, + and updates will clear the :attr:`state_dict <_state_dict>`. + labels : collections.OrderedDict of lists, optional + User-specified labels in corresponding order to ``ints`` in `data`. + Ignored if `entity` is provided or `data` is not provided. + uid : hashable, optional + A unique identifier for the object + weights : str or sequence of float, optional + User-specified cell weights corresponding to entity data. + If sequence of ``floats`` and `entity` or `data` defines a data table, + length must equal the number of rows. + If sequence of ``floats`` and `entity` defines a system of sets, + length must equal the total sum of the sizes of all sets. + If ``str`` and `entity` is a ``DataFrame``, + must be the name of a column in `entity`. + Otherwise, weight for all cells is assumed to be 1. + aggregateby : {'sum', 'last', 'count', 'mean', 'median', 'max', 'min', 'first', None} + Name of function to use for aggregating cell weights of duplicate rows when + `entity` or `data` defines a data table, default is "sum". + If None, duplicate rows will be dropped without aggregating cell weights. 
+ Effectively ignored if `entity` defines a system of sets. + properties : pandas.DataFrame or doubly-nested dict, optional + User-specified properties to be assigned to individual items in the data, i.e., + cell entries in a data table; sets or set elements in a system of sets. + See Notes for detailed explanation. + If ``DataFrame``, each row gives + ``[optional item level, item label, optional named properties, + {property name: property value}]`` + (order of columns does not matter; see note for an example). + If doubly-nested dict, + ``{item level: {item label: {property name: property value}}}``. + misc_props_col, level_col, id_col : str, default="properties", "level", "id" + Column names for miscellaneous properties, level index, and item name in + :attr:`properties`; see Notes for explanation. + + Notes + ----- + A property is a named attribute assigned to a single item in the data. + + You can pass a **table of properties** to `properties` as a ``DataFrame``: + + +------------+---------+----------------+-------+------------------+ + | Level | ID | [explicit | [...] | misc. properties | + | (optional) | | property type] | | | + +============+=========+================+=======+==================+ + | 0 | level 0 | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + | 1 | level 1 | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + | ... | ... | ... | ... | ... | + +------------+---------+----------------+-------+------------------+ + | N | level N | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + + The Level column is optional. 
If not provided, properties will be assigned by ID + (i.e., if an ID appears at multiple levels, the same properties will be assigned to + all occurrences). + + The names of the Level (if provided) and ID columns must be specified by `level_col` + and `id_col`. `misc_props_col` can be used to specify the name of the column to be used + for miscellaneous properties; if no column by that name is found, + a new column will be created and populated with empty ``dicts``. + All other columns will be considered explicit property types. + The order of the columns does not matter. + + This method assumes that there are no rows with the same (Level, ID); + if duplicates are found, all but the first occurrence will be dropped. + + """ + + def __init__( + self, + entity: Optional[ + pd.DataFrame | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] + ] = None, + data_cols: Sequence[T] = [0, 1], + data: Optional[np.ndarray] = None, + static: bool = False, + labels: Optional[OrderedDict[T, Sequence[T]]] = None, + uid: Optional[Hashable] = None, + weight_col: Optional[str | int] = "cell_weights", + weights: Optional[Sequence[float] | float | int | str] = 1, + aggregateby: Optional[str | dict] = "sum", + properties: Optional[pd.DataFrame | dict[int, dict[T, dict[Any, Any]]]] = None, + misc_props_col: str = "properties", + level_col: str = "level", + id_col: str = "id", + ): + # set unique identifier + self._uid = uid or None + + # if static, the original data cannot be altered + # the state dict stores all computed values that may need to be updated + # if the data is altered - the dict will be cleared when data is added + # or removed + self._static = static + self._state_dict = {} + + # entity data is stored in a DataFrame for basic access without the + # need for any label encoding lookups + if isinstance(entity, pd.DataFrame): + self._dataframe = entity.copy() + + # if the entity data is passed as a dict of lists or a list of lists, + # we convert it to a 2-column dataframe by 
exploding each list to cover + # one row per element for a dict of lists, the first level/column will + # be filled in with dict keys for a list of N lists, 0,1,...,N will be + # used to fill the first level/column + elif isinstance(entity, (dict, list)): + # convert dict of lists to 2-column dataframe + entity = pd.Series(entity).explode() + self._dataframe = pd.DataFrame( + {data_cols[0]: entity.index.to_list(), data_cols[1]: entity.values} + ) + + # if a 2d numpy ndarray is passed, store it as both a DataFrame and an + # ndarray in the state dict + elif isinstance(data, np.ndarray) and data.ndim == 2: + self._state_dict["data"] = data + self._dataframe = pd.DataFrame(data) + # if a dict of labels was passed, use keys as column names in the + # DataFrame, translate the dataframe, and store the dict of labels + # in the state dict + if isinstance(labels, dict) and len(labels) == len(self._dataframe.columns): + self._dataframe.columns = labels.keys() + self._state_dict["labels"] = labels + + for col in self._dataframe: + self._dataframe[col] = pd.Categorical.from_codes( + self._dataframe[col], categories=labels[col] + ) + + # create an empty Entity + else: + self._dataframe = pd.DataFrame() + + # assign a new or existing column of the dataframe to hold cell weights + self._dataframe, self._cell_weight_col = assign_weights( + self._dataframe, weights=weights, weight_col=weight_col + ) + # import ipdb; ipdb.set_trace() + # store a list of columns that hold entity data (not properties or + # weights) + # self._data_cols = list(self._dataframe.columns.drop(self._cell_weight_col)) + self._data_cols = [] + for col in data_cols: + # TODO: default arguments fail for empty Entity; data_cols has two elements but _dataframe has only one element + if isinstance(col, int): + self._data_cols.append(self._dataframe.columns[col]) + else: + self._data_cols.append(col) + + # each entity data column represents one dimension of the data + # (data updates can only add or remove rows, 
so this isn't stored in + # state dict) + self._dimsize = len(self._data_cols) + + # remove duplicate rows and aggregate cell weights as needed + # import ipdb; ipdb.set_trace() + self._dataframe, _ = remove_row_duplicates( + self._dataframe, + self._data_cols, + weight_col=self._cell_weight_col, + aggregateby=aggregateby, + ) + + # set the dtype of entity data columns to categorical (simplifies + # encoding, etc.) + ### This is automatically done in remove_row_duplicates + # self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + # "category" + # ) + + # create properties + item_levels = [ + (level, item) + for level, col in enumerate(self._data_cols) + for item in self.dataframe[col].cat.categories + ] + index = pd.MultiIndex.from_tuples(item_levels, names=[level_col, id_col]) + data = [(i, 1, {}) for i in range(len(index))] + self._properties = pd.DataFrame( + data=data, index=index, columns=["uid", "weight", misc_props_col] + ).sort_index() + self._misc_props_col = misc_props_col + if properties is not None: + self.assign_properties(properties) + + @property + def data(self): + """Sparse representation of the data table as an incidence tensor + + This can also be thought of as an encoding of `dataframe`, where items in each column of + the data table are translated to their int position in the `self.labels[column]` list + Returns + ------- + numpy.ndarray + 2D array of ints representing rows of the underlying data table as indices in an incidence tensor + + See Also + -------- + labels, dataframe + + """ + # generate if not already stored in state dict + if "data" not in self._state_dict: + if self.empty: + self._state_dict["data"] = np.zeros((0, 0), dtype=int) + else: + # assumes dtype of data cols is already converted to categorical + # and state dict has been properly cleared after updates + self._state_dict["data"] = ( + self._dataframe[self._data_cols] + .apply(lambda x: x.cat.codes) + .to_numpy() + ) + + return 
self._state_dict["data"] + + @property + def labels(self): + """Labels of all items in each column of the underlying data table + + Returns + ------- + dict of lists + dict of {column name: [item labels]} + The order of [item labels] corresponds to the int encoding of each item in `self.data`. + + See Also + -------- + data, dataframe + """ + # generate if not already stored in state dict + if "labels" not in self._state_dict: + # assumes dtype of data cols is already converted to categorical + # and state dict has been properly cleared after updates + self._state_dict["labels"] = { + col: self._dataframe[col].cat.categories.to_list() + for col in self._data_cols + } + + return self._state_dict["labels"] + + @property + def cell_weights(self): + """Cell weights corresponding to each row of the underlying data table + + Returns + ------- + dict of {tuple: int or float} + Keyed by row of data table (as a tuple) + """ + # generate if not already stored in state dict + if "cell_weights" not in self._state_dict: + if self.empty: + self._state_dict["cell_weights"] = {} + else: + self._state_dict["cell_weights"] = self._dataframe.set_index( + self._data_cols + )[self._cell_weight_col].to_dict() + + return self._state_dict["cell_weights"] + + @property + def dimensions(self): + """Dimensions of data i.e., the number of distinct items in each level (column) of the underlying data table + + Returns + ------- + tuple of ints + Length and order corresponds to columns of `self.dataframe` (excluding cell weight column) + """ + # generate if not already stored in state dict + if "dimensions" not in self._state_dict: + if self.empty: + self._state_dict["dimensions"] = tuple() + else: + self._state_dict["dimensions"] = tuple( + self._dataframe[self._data_cols].nunique() + ) + + return self._state_dict["dimensions"] + + @property + def dimsize(self): + """Number of levels (columns) in the underlying data table + + Returns + ------- + int + Equal to length of `self.dimensions` + """ 
+ return self._dimsize + + @property + def properties(self) -> pd.DataFrame: + # Dev Note: Not sure what this contains, when running tests it contained an empty pandas series + """Properties assigned to items in the underlying data table + + Returns + ------- + pandas.DataFrame + """ + + return self._properties + + @property + def uid(self): + # Dev Note: This also returned nothing in my harry potter dataset, not sure if it was supposed to contain anything + """User-defined unique identifier for the `Entity` + + Returns + ------- + hashable + """ + return self._uid + + @property + def uidset(self): + """Labels of all items in level 0 (first column) of the underlying data table + + Returns + ------- + frozenset + + See Also + -------- + children : Labels of all items in level 1 (second column) + uidset_by_level, uidset_by_column : + Labels of all items in any level (column); specified by level index or column name + """ + return self.uidset_by_level(0) + + @property + def children(self): + """Labels of all items in level 1 (second column) of the underlying data table + + Returns + ------- + frozenset + + See Also + -------- + uidset : Labels of all items in level 0 (first column) + uidset_by_level, uidset_by_column : + Labels of all items in any level (column); specified by level index or column name + """ + return self.uidset_by_level(1) + +
[docs] def uidset_by_level(self, level): + """Labels of all items in a particular level (column) of the underlying data table + + Parameters + ---------- + level : int + + Returns + ------- + frozenset + + See Also + -------- + uidset : Labels of all items in level 0 (first column) + children : Labels of all items in level 1 (second column) + uidset_by_column : Same functionality, takes the column name instead of level index + """ + if self.is_empty(level): + return {} + col = self._data_cols[level] + return self.uidset_by_column(col)
+ +
[docs] def uidset_by_column(self, column): + # Dev Note: This threw an error when trying it on the harry potter dataset, + # when trying 0, or 1 for column. I'm not sure how this should be used + """Labels of all items in a particular column (level) of the underlying data table + + Parameters + ---------- + column : Hashable + Name of a column in `self.dataframe` + + Returns + ------- + frozenset + + See Also + -------- + uidset : Labels of all items in level 0 (first column) + children : Labels of all items in level 1 (second column) + uidset_by_level : Same functionality, takes the level index instead of column name + """ + # generate if not already stored in state dict + if "uidset" not in self._state_dict: + self._state_dict["uidset"] = {} + if column not in self._state_dict["uidset"]: + self._state_dict["uidset"][column] = set( + self._dataframe[column].dropna().unique() + ) + + return self._state_dict["uidset"][column]
+ + @property + def elements(self): + """System of sets representation of the first two levels (columns) of the underlying data table + + Each item in level 0 (first column) defines a set containing all the level 1 + (second column) items with which it appears in the same row of the underlying + data table + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {level 0 item : AttrList(level 1 items)} + + See Also + -------- + incidence_dict : same data as dict of list + memberships : + dual of this representation, + i.e., each item in level 1 (second column) defines a set + elements_by_level, elements_by_column : + system of sets representation of any two levels (columns); specified by level index or column name + + """ + if self._dimsize < 2: + return {k: AttrList(entity=self, key=(0, k)) for k in self.uidset} + + return self.elements_by_level(0, 1) + + @property + def incidence_dict(self) -> dict[T, list[T]]: + """System of sets representation of the first two levels (columns) of the underlying data table + + Returns + ------- + dict of list + System of sets representation as dict of {level 0 item : AttrList(level 1 items)} + + See Also + -------- + elements : same data as dict of AttrList + + """ + return {item: elements.data for item, elements in self.elements.items()} + + @property + def memberships(self): + """System of sets representation of the first two levels (columns) of the + underlying data table + + Each item in level 1 (second column) defines a set containing all the level 0 + (first column) items with which it appears in the same row of the underlying + data table + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {level 1 item : AttrList(level 0 items)} + + See Also + -------- + elements : dual of this representation i.e., each item in level 0 (first column) defines a set + elements_by_level, elements_by_column : + system of sets representation of any two levels (columns); specified 
by level index or column name + + """ + + return self.elements_by_level(1, 0) + +
[docs] def elements_by_level(self, level1, level2): + """System of sets representation of two levels (columns) of the underlying data table + + Each item in level1 defines a set containing all the level2 items + with which it appears in the same row of the underlying data table + + Properties can be accessed and assigned to items in level1 + + Parameters + ---------- + level1 : int + index of level whose items define sets + level2 : int + index of level whose items are elements in the system of sets + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {level1 item : AttrList(level2 items)} + + See Also + -------- + elements, memberships : dual system of sets representations of the first two levels (columns) + elements_by_column : same functionality, takes column names instead of level indices + + """ + col1 = self._data_cols[level1] + col2 = self._data_cols[level2] + return self.elements_by_column(col1, col2)
+ +
[docs] def elements_by_column(self, col1, col2): + + """System of sets representation of two columns (levels) of the underlying data table + + Each item in col1 defines a set containing all the col2 items + with which it appears in the same row of the underlying data table + + Properties can be accessed and assigned to items in col1 + + Parameters + ---------- + col1 : Hashable + name of column whose items define sets + col2 : Hashable + name of column whose items are elements in the system of sets + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {col1 item : AttrList(col2 items)} + + See Also + -------- + elements, memberships : dual system of sets representations of the first two columns (levels) + elements_by_level : same functionality, takes level indices instead of column names + + """ + if "elements" not in self._state_dict: + self._state_dict["elements"] = defaultdict(dict) + if col2 not in self._state_dict["elements"][col1]: + level = self.index(col1) + elements = self._dataframe.groupby(col1)[col2].unique().to_dict() + self._state_dict["elements"][col1][col2] = { + item: AttrList(entity=self, key=(level, item), initlist=elem) + for item, elem in elements.items() + } + + return self._state_dict["elements"][col1][col2]
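The grouping that `elements_by_column` delegates to pandas can be sketched in plain Python; the row data below is hypothetical:

```python
from collections import defaultdict

# Sketch of the grouping behind elements_by_column: for each item in one
# column of a (hypothetical) data table, collect the unique items it
# co-occurs with in another column, preserving first-seen order as
# pandas' groupby(...).unique() does.
rows = [("e1", "a"), ("e1", "b"), ("e2", "b"), ("e1", "a")]  # (col1, col2)

elements = defaultdict(list)
for k, v in rows:
    if v not in elements[k]:  # keep only the first occurrence per group
        elements[k].append(v)
print(dict(elements))  # {'e1': ['a', 'b'], 'e2': ['b']}
```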
+ + @property + def dataframe(self): + """The underlying data table stored by the Entity + + Returns + ------- + pandas.DataFrame + """ + return self._dataframe + + @property + def isstatic(self): + # Dev Note: I'm guessing this is no longer necessary? + """Whether to treat the underlying data as static or not + + If True, the underlying data may not be altered, and the state_dict will never be cleared + Otherwise, rows may be added to and removed from the data table, and updates will clear the state_dict + + Returns + ------- + bool + """ + return self._static + +
[docs] def size(self, level=0): + """The number of items in a level of the underlying data table + + Equivalent to ``self.dimensions[level]`` + + Parameters + ---------- + level : int, default=0 + + Returns + ------- + int + + See Also + -------- + dimensions + """ + # TODO: Since `level` is not validated, we assume that self.dimensions should be an array large enough to access index `level` + return self.dimensions[level]
+ + @property + def empty(self): + """Whether the underlying data table is empty or not + + Returns + ------- + bool + + See Also + -------- + is_empty : for checking whether a specified level (column) is empty + dimsize : 0 if empty + """ + return self._dimsize == 0 + +
[docs] def is_empty(self, level=0): + """Whether a specified level (column) of the underlying data table is empty or not + + Returns + ------- + bool + + See Also + -------- + empty : for checking whether the underlying data table is empty + size : number of items in a level (columns); 0 if level is empty + """ + return self.empty or self.size(level) == 0
+ + def __len__(self): + """Number of items in level 0 (first column) + + Returns + ------- + int + """ + return self.dimensions[0] + + def __contains__(self, item): + """Whether an item is contained within any level of the data + + Parameters + ---------- + item : str + + Returns + ------- + bool + """ + for labels in self.labels.values(): + if item in labels: + return True + return False + + def __getitem__(self, item): + """Access into the system of sets representation of the first two levels (columns) given by `elements` + + Can be used to access and assign properties to an ``item`` in level 0 (first column) + + Parameters + ---------- + item : str + label of an item in level 0 (first column) + + Returns + ------- + AttrList : + list of level 1 items in the set defined by ``item`` + + See Also + -------- + uidset, elements + """ + return self.elements[item] + + def __iter__(self): + """Iterates over items in level 0 (first column) of the underlying data table + + Returns + ------- + Iterator + + See Also + -------- + uidset, elements + """ + return iter(self.elements) + + def __call__(self, label_index=0): + # Dev Note (Madelyn) : I don't think this is the intended use of __call__, can we change/deprecate? + """Iterates over items labels in a specified level (column) of the underlying data table + + Parameters + ---------- + label_index : int + level index + + Returns + ------- + Iterator + + See Also + -------- + labels + """ + return iter(self.labels[self._data_cols[label_index]]) + + # def __repr__(self): + # """String representation of the Entity + + # e.g., "Entity(uid, [level 0 items], {item: {property name: property value}})" + + # Returns + # ------- + # str + # """ + # return "hypernetx.classes.entity.Entity" + + # def __str__(self): + # return "<class 'hypernetx.classes.entity.Entity'>" + +
[docs] def index(self, column, value=None): + """Get level index corresponding to a column and (optionally) the index of a value in that column + + The index of ``value`` is its position in the list given by ``self.labels[column]``, which is used + in the integer encoding of the data table ``self.data`` + + Parameters + ---------- + column: str + name of a column in self.dataframe + value : str, optional + label of an item in the specified column + + Returns + ------- + int or (int, int) + level index corresponding to column, index of value if provided + + See Also + -------- + indices : for finding indices of multiple values in a column + level : same functionality, search for the value without specifying column + """ + if "keyindex" not in self._state_dict: + self._state_dict["keyindex"] = {} + if column not in self._state_dict["keyindex"]: + self._state_dict["keyindex"][column] = self._dataframe[ + self._data_cols + ].columns.get_loc(column) + + if value is None: + return self._state_dict["keyindex"][column] + + if "index" not in self._state_dict: + self._state_dict["index"] = defaultdict(dict) + if value not in self._state_dict["index"][column]: + self._state_dict["index"][column][value] = self._dataframe[ + column + ].cat.categories.get_loc(value) + + return ( + self._state_dict["keyindex"][column], + self._state_dict["index"][column][value], + )
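The two lookups that `index` caches can be sketched without pandas; the column names and category order below are hypothetical:

```python
# Sketch of the two lookups cached by index(): the level index is the
# position of the column among the data columns, and the value index is the
# position of the label among that column's categories (hypothetical data).
data_cols = ["edges", "nodes"]           # hypothetical data columns
categories = {"nodes": ["a", "b", "c"]}  # per-column category order

level = data_cols.index("nodes")
value_index = categories["nodes"].index("b")
print((level, value_index))  # (1, 1)
```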
+ +
[docs] def indices(self, column, values): + """Get indices of one or more value(s) in a column + + Parameters + ---------- + column : str + values : str or iterable of str + + Returns + ------- + list of int + indices of values + + See Also + -------- + index : for finding level index of a column and index of a single value + """ + if isinstance(values, Hashable): + values = [values] + + if "index" not in self._state_dict: + self._state_dict["index"] = defaultdict(dict) + for v in values: + if v not in self._state_dict["index"][column]: + self._state_dict["index"][column][v] = self._dataframe[ + column + ].cat.categories.get_loc(v) + + return [self._state_dict["index"][column][v] for v in values]
+ +
[docs] def translate(self, level, index): + """Given indices of a level and value(s), return the corresponding value label(s) + + Parameters + ---------- + level : int + level index + index : int or list of int + value index or indices + + Returns + ------- + str or list of str + label(s) corresponding to value index or indices + + See Also + -------- + translate_arr : translate a full row of value indices across all levels (columns) + """ + column = self._data_cols[level] + + if isinstance(index, (int, np.integer)): + return self.labels[column][index] + + return [self.labels[column][i] for i in index]
+ +
[docs] def translate_arr(self, coords): + """Translate a full encoded row of the data table e.g., a row of ``self.data`` + + Parameters + ---------- + coords : tuple of ints + encoded value indices, with one value index for each level of the data + + Returns + ------- + list of str + full row of translated value labels + """ + assert len(coords) == self._dimsize + translation = [] + for level, index in enumerate(coords): + translation.append(self.translate(level, index)) + + return translation
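Decoding an encoded row, as `translate`/`translate_arr` do via the categorical labels, reduces to list indexing; the label lists here are hypothetical:

```python
# Sketch of translate/translate_arr: decode an encoded row of the data table
# by looking up each integer code in the label list for its level
# (hypothetical labels below).
labels = [["e1", "e2"], ["a", "b", "c"]]  # label lists, one per level

def translate_arr(coords):
    return [labels[level][code] for level, code in enumerate(coords)]

print(translate_arr((1, 2)))  # ['e2', 'c']
```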
+ +
[docs] def level(self, item, min_level=0, max_level=None, return_index=True): + """First level containing the given item label + + Order of levels corresponds to order of columns in `self.dataframe` + + Parameters + ---------- + item : str + min_level, max_level : int, optional + inclusive bounds on range of levels to search for item + return_index : bool, default=True + If True, return index of item within the level + + Returns + ------- + int, (int, int), or None + index of first level containing the item, index of item if `return_index=True` + returns None if item is not found + + See Also + -------- + index, indices : for finding level and/or value indices when the column is known + """ + if max_level is None or max_level >= self._dimsize: + max_level = self._dimsize - 1 + + columns = self._data_cols[min_level : max_level + 1] + levels = range(min_level, max_level + 1) + + for col, lev in zip(columns, levels): + if item in self.labels[col]: + if return_index: + return self.index(col, item) + + return lev + + print(f'"{item}" not found.') + return None
+ +
[docs] def add(self, *args): + """Updates the underlying data table with new entity data from multiple sources + + Parameters + ---------- + *args + variable length argument list of Entity and/or representations of entity data + + Returns + ------- + self : Entity + + Warnings + -------- + Adding an element directly to an Entity will not add the + element to any Hypergraphs constructed from that Entity, and will cause an error. Use + :func:`Hypergraph.add_edge <classes.hypergraph.Hypergraph.add_edge>` or + :func:`Hypergraph.add_node_to_edge <classes.hypergraph.Hypergraph \ + .add_node_to_edge>` instead. + + See Also + -------- + add_element : update from a single source + Hypergraph.add_edge, Hypergraph.add_node_to_edge : for adding elements to a Hypergraph + + """ + for item in args: + self.add_element(item) + return self
+ +
[docs] def add_elements_from(self, arg_set): + """Adds arguments from an iterable to the data table one at a time + + .. deprecated:: 2.0.0 + Duplicates `add` + + Parameters + ---------- + arg_set : iterable + list of Entity and/or representations of entity data + + Returns + ------- + self : Entity + + """ + for item in arg_set: + self.add_element(item) + return self
+ +
[docs] def add_element(self, data): + """Updates the underlying data table with new entity data + + Supports adding from either an existing Entity or a representation of entity + (data table or labeled system of sets are both supported representations) + + Parameters + ---------- + data : Entity, `pandas.DataFrame`, or dict of lists or sets + new entity data + + Returns + ------- + self : Entity + + Warnings + -------- + Adding an element directly to an Entity will not add the + element to any Hypergraphs constructed from that Entity, and will cause an error. Use + `Hypergraph.add_edge` or `Hypergraph.add_node_to_edge` instead. + + See Also + -------- + add : takes multiple sources of new entity data as variable length argument list + Hypergraph.add_edge, Hypergraph.add_node_to_edge : for adding elements to a Hypergraph + + """ + if isinstance(data, Entity): + df = data.dataframe + self.__add_from_dataframe(df) + + if isinstance(data, dict): + df = pd.DataFrame.from_dict(data) + self.__add_from_dataframe(df) + + if isinstance(data, pd.DataFrame): + self.__add_from_dataframe(data) + + return self
+ + def __add_from_dataframe(self, df): + """Helper function to append rows to `self.dataframe` + + Parameters + ---------- + df : pd.DataFrame + + Returns + ------- + None + + """ + if all(col in df for col in self._data_cols): + new_data = pd.concat((self._dataframe, df), ignore_index=True) + new_data[self._cell_weight_col] = new_data[self._cell_weight_col].fillna(1) + + self._dataframe, _ = remove_row_duplicates( + new_data, + self._data_cols, + weights=self._cell_weight_col, + ) + + self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + "category" + ) + + self._state_dict.clear() +
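The append-then-deduplicate step above can be sketched with plain pandas. This assumes, as a stand-in for `remove_row_duplicates`, that duplicate rows have their weights summed; the column names are hypothetical:

```python
import pandas as pd

# Existing data table with a weight column, plus new rows without weights
existing = pd.DataFrame({"edges": ["e1"], "nodes": ["a"], "cell_weights": [1.0]})
new = pd.DataFrame({"edges": ["e1", "e2"], "nodes": ["a", "b"]})

# Append, then default missing weights to 1, as __add_from_dataframe does
combined = pd.concat([existing, new], ignore_index=True)
combined["cell_weights"] = combined["cell_weights"].fillna(1)

# Duplicate (edges, nodes) rows are merged, aggregating their weights
deduped = combined.groupby(["edges", "nodes"], as_index=False)["cell_weights"].sum()
print(deduped)  # (e1, a) now carries weight 2.0; (e2, b) carries weight 1.0
```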
[docs] def remove(self, *args): + """Removes all rows containing specified item(s) from the underlying data table + + Parameters + ---------- + *args + variable length argument list of item labels + + Returns + ------- + self : Entity + + See Also + -------- + remove_element : remove all rows containing a single specified item + + """ + for item in args: + self.remove_element(item) + return self
+ +
[docs] def remove_elements_from(self, arg_set): + """Removes all rows containing specified item(s) from the underlying data table + + .. deprecated:: 2.0.0 + Duplicates `remove` + + Parameters + ---------- + arg_set : iterable + list of item labels + + Returns + ------- + self : Entity + + """ + for item in arg_set: + self.remove_element(item) + return self
+ +
[docs] def remove_element(self, item): + """Removes all rows containing a specified item from the underlying data table + + Parameters + ---------- + item + item label + + Returns + ------- + self : Entity + + See Also + -------- + remove : same functionality, accepts variable length argument list of item labels + + """ + updated_dataframe = self._dataframe + + for column in self._dataframe: + updated_dataframe = updated_dataframe[updated_dataframe[column] != item] + + self._dataframe, _ = remove_row_duplicates( + updated_dataframe, + self._data_cols, + weights=self._cell_weight_col, + ) + self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + "category" + ) + + self._state_dict.clear() + for col in self._data_cols: + self._dataframe[col] = self._dataframe[col].cat.remove_unused_categories()
+ +
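The filtering done by `remove_element` can be sketched with plain pandas (the toy table and column names are hypothetical): every row containing the item in any column is dropped, and the item's label is then pruned from the categorical dtype so it no longer appears in `labels`:

```python
import pandas as pd

# Sketch of remove_element: drop every row containing the item in any column,
# then prune the now-unused categorical labels (column names hypothetical)
df = pd.DataFrame(
    {"edges": ["e1", "e1", "e2"], "nodes": ["a", "b", "b"]}
).astype("category")

item = "e1"
for column in df:
    df = df[df[column] != item]
df = df.copy()

for col in df.columns:
    df[col] = df[col].cat.remove_unused_categories()

print(list(df["edges"].cat.categories))  # -> ['e2']
```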
[docs] def encode(self, data): + """ + Encode a dataframe of categorical columns as a numpy array of integer codes + + Parameters + ---------- + data : pandas.DataFrame + dataframe whose columns are of categorical dtype + + Returns + ------- + numpy.ndarray + + """ + encoded_array = data.apply(lambda x: x.cat.codes).to_numpy() + return encoded_array
+ +
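A short worked example of the encoding (toy table and column names hypothetical): each categorical value is replaced by its integer code, in sorted category order:

```python
import pandas as pd

# encode() replaces each categorical value with its integer code
df = pd.DataFrame(
    {"edges": ["e1", "e2", "e1"], "nodes": ["a", "a", "b"]}
).astype("category")

encoded = df.apply(lambda x: x.cat.codes).to_numpy()
print(encoded)
# codes follow sorted category order: e1 -> 0, e2 -> 1; a -> 0, b -> 1
```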
[docs] def incidence_matrix( + self, level1=0, level2=1, weights=False, aggregateby=None, index=False + ) -> csr_matrix | None: + """Incidence matrix representation for two levels (columns) of the underlying data table + + If `level1` and `level2` contain N and M distinct items, respectively, the incidence matrix will be M x N. + In other words, the items in `level1` and `level2` correspond to the columns and rows of the incidence matrix, + respectively, in the order in which they appear in `self.labels[column1]` and `self.labels[column2]` + (`column1` and `column2` are the column labels of `level1` and `level2`) + + Parameters + ---------- + level1 : int, default=0 + index of first level (column) + level2 : int, default=1 + index of second level + weights : bool or dict, default=False + If False, all nonzero entries are 1. + If True, nonzero entries are filled with the corresponding `self.cell_weight` + values; use :code:`aggregateby` to specify how the weights of duplicate + entries should be aggregated. + If dict of {(level1 item, level2 item): weight value} form, + only nonzero cells in the incidence matrix will be updated by the dictionary, + i.e., `level1 item` and `level2 item` must appear in the same row at least once in the underlying data table + aggregateby : {'first', 'last', 'count', 'sum', 'mean', 'median', 'max', 'min', None}, optional + Method to aggregate weights of duplicate rows in data table. + If None, then all cell weights will be set to 1. + + Returns + ------- + scipy.sparse.csr_matrix + sparse representation of incidence matrix (i.e. 
Compressed Sparse Row matrix) + + Other Parameters + ---------------- + index : bool, optional + Not used + + Note + ---- + In the context of Hypergraphs, think `level1 = edges, level2 = nodes` + """ + if self.dimsize < 2: + warnings.warn("Incidence matrix requires two levels of data.") + return None + + data_cols = [self._data_cols[level2], self._data_cols[level1]] + weights = self._cell_weight_col if weights else None + + df, weight_col = remove_row_duplicates( + self._dataframe, + data_cols, + weights=weights, + aggregateby=aggregateby, + ) + + return csr_matrix( + (df[weight_col], tuple(df[col].cat.codes for col in data_cols)) + )
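The matrix construction at the end of the method can be sketched directly with pandas and scipy (toy table and column names hypothetical): level 2 codes become rows, level 1 codes become columns, and with `weights=False` every incidence contributes a 1:

```python
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix

# Two-column data table: each row is one (edge, node) incidence
df = pd.DataFrame(
    {"edges": ["e1", "e1", "e2"], "nodes": ["a", "b", "b"]}
).astype("category")

rows = df["nodes"].cat.codes.to_numpy()  # level2 -> matrix rows
cols = df["edges"].cat.codes.to_numpy()  # level1 -> matrix columns
weights = np.ones(len(df))               # weights=False: all nonzero entries are 1

M = csr_matrix((weights, (rows, cols)))
print(M.toarray())
# [[1. 0.]    node 'a' is in edge 'e1'
#  [1. 1.]]   node 'b' is in edges 'e1' and 'e2'
```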
+ +
[docs] def restrict_to_levels( + self, + levels: int | Iterable[int], + weights: bool = False, + aggregateby: str | None = "sum", + **kwargs, + ) -> Entity: + """Create a new Entity by restricting to a subset of levels (columns) in the + underlying data table + + Parameters + ---------- + levels : array-like of int + indices of a subset of levels (columns) of data + weights : bool, default=False + If True, aggregate existing cell weights to get new cell weights + Otherwise, all new cell weights will be 1 + aggregateby : {'sum', 'first', 'last', 'count', 'mean', 'median', 'max', \ + 'min', None}, optional + Method to aggregate weights of duplicate rows in data table + If None or `weights`=False then all new cell weights will be 1 + **kwargs + Extra arguments to `Entity` constructor + + Returns + ------- + Entity + + Raises + ------ + KeyError + If `levels` contains any invalid values + + See Also + -------- + EntitySet + """ + + levels = np.asarray(levels) + invalid_levels = (levels < 0) | (levels >= self.dimsize) + if invalid_levels.any(): + raise KeyError(f"Invalid levels: {levels[invalid_levels]}") + + cols = [self._data_cols[lev] for lev in levels] + + if weights: + weights = self._cell_weight_col + cols.append(weights) + kwargs.update(weights=weights) + + properties = self.properties.loc[levels] + properties.index = properties.index.remove_unused_levels() + level_map = {old: new for new, old in enumerate(levels)} + new_levels = properties.index.levels[0].map(level_map) + properties.index = properties.index.set_levels(new_levels, level=0) + level_col, id_col = properties.index.names + + return self.__class__( + entity=self.dataframe[cols], + data_cols=cols, + aggregateby=aggregateby, + properties=properties, + misc_props_col=self._misc_props_col, + level_col=level_col, + id_col=id_col, + **kwargs, + )
+ +
[docs] def restrict_to_indices(self, indices, level=0, **kwargs): + """Create a new Entity by restricting the data table to rows containing specific items in a given level + + Parameters + ---------- + indices : int or iterable of int + indices of item label(s) in `level` to restrict to + level : int, default=0 + level index + **kwargs + Extra arguments to `Entity` constructor + + Returns + ------- + Entity + """ + column = self._dataframe[self._data_cols[level]] + values = self.translate(level, indices) + entity = self._dataframe.loc[column.isin(values)].copy() + + for col in self._data_cols: + entity[col] = entity[col].cat.remove_unused_categories() + restricted = self.__class__( + entity=entity, misc_props_col=self._misc_props_col, **kwargs + ) + + if not self.properties.empty: + prop_idx = [ + (lv, uid) + for lv in range(restricted.dimsize) + for uid in restricted.uidset_by_level(lv) + ] + properties = self.properties.loc[prop_idx] + restricted.assign_properties(properties) + return restricted
+ +
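The row filtering that `restrict_to_indices` performs can be sketched with plain pandas (toy table and column names hypothetical): indices are translated to labels, the table is filtered with `isin`, and unused categories are pruned:

```python
import pandas as pd

# Sketch of restrict_to_indices at level 0 (the "edges" column)
df = pd.DataFrame(
    {"edges": ["e1", "e2", "e3"], "nodes": ["a", "b", "c"]}
).astype("category")

indices = [0, 2]  # positions in the sorted edge labels
values = [df["edges"].cat.categories[i] for i in indices]  # ['e1', 'e3']

restricted = df.loc[df["edges"].isin(values)].copy()
for col in restricted.columns:
    restricted[col] = restricted[col].cat.remove_unused_categories()

print(list(restricted["edges"]))  # -> ['e1', 'e3']
```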
[docs] def assign_properties( + self, + props: pd.DataFrame | dict[int, dict[T, dict[Any, Any]]], + misc_col: Optional[str] = None, + level_col=0, + id_col=1, + ) -> None: + """Assign new properties to items in the data table, update :attr:`properties` + + Parameters + ---------- + props : pandas.DataFrame or doubly-nested dict + See documentation of the `properties` parameter in :class:`Entity` + level_col, id_col, misc_col : str, optional + column names corresponding to the levels, items, and misc. properties; + if None, default to :attr:`_level_col`, :attr:`_id_col`, :attr:`_misc_props_col`, + respectively. + + See Also + -------- + properties + """ + # mapping from user-specified level, id, misc column names to internal names + ### This will fail if there isn't a level column + + if isinstance(props, pd.DataFrame): + ### Fix to check the shape of properties or redo properties format + column_map = { + old: new + for old, new in zip( + (level_col, id_col, misc_col), + (*self.properties.index.names, self._misc_props_col), + ) + if old is not None + } + props = props.rename(columns=column_map) + props = props.rename_axis(index=column_map) + self._properties_from_dataframe(props) + + if isinstance(props, dict): + ### Expects nested dictionary with keys corresponding to level and id + self._properties_from_dict(props)
+ + def _properties_from_dataframe(self, props: pd.DataFrame) -> None: + """Private handler for updating :attr:`properties` from a DataFrame + + Parameters + ---------- + props + + Notes + ----- + For clarity in in-line developer comments: + + idx-level + refers generally to a level of a MultiIndex + level + refers specifically to the idx-level in the MultiIndex of :attr:`properties` + that stores the level/column id for the item + """ + # names of property table idx-levels for level and item id, respectively + # ``item`` used instead of ``id`` to avoid redefining python built-in func `id` + level, item = self.properties.index.names + if props.index.nlevels > 1: # props has MultiIndex + # drop all idx-levels from props other than level and id (if present) + extra_levels = [ + idx_lev for idx_lev in props.index.names if idx_lev not in (level, item) + ] + props = props.reset_index(level=extra_levels) + + try: + # if props index is already in the correct format, + # enforce the correct idx-level ordering + props.index = props.index.reorder_levels((level, item)) + except AttributeError: # props is not in (level, id) MultiIndex format + # if the index matches level or id, drop index to column + if props.index.name in (level, item): + props = props.reset_index() + index_cols = [item] + if level in props: + index_cols.insert(0, level) + try: + props = props.set_index(index_cols, verify_integrity=True) + except ValueError: + warnings.warn( + "duplicate (level, ID) rows will be dropped after first occurrence" + ) + props = props.drop_duplicates(index_cols) + props = props.set_index(index_cols) + + if self._misc_props_col in props: + try: + props[self._misc_props_col] = props[self._misc_props_col].apply( + literal_eval + ) + except ValueError: + pass # data already parsed, no literal eval needed + else: + warnings.warn("parsed property dict column from string literal") + + if props.index.nlevels == 1: + props = props.reindex(self.properties.index, level=1) + + # combine with 
existing properties + # non-null values in new props override existing value + properties = props.combine_first(self.properties) + # update misc. column to combine existing and new misc. property dicts + # new props override existing value for overlapping misc. property dict keys + properties[self._misc_props_col] = self.properties[ + self._misc_props_col + ].combine( + properties[self._misc_props_col], + lambda x, y: {**(x if pd.notna(x) else {}), **(y if pd.notna(y) else {})}, + fill_value={}, + ) + self._properties = properties.sort_index() + + def _properties_from_dict(self, props: dict[int, dict[T, dict[Any, Any]]]) -> None: + """Private handler for updating :attr:`properties` from a doubly-nested dict + + Parameters + ---------- + props + """ + # TODO: there may be a more efficient way to convert this to a dataframe instead + # of updating one-by-one via nested loop, but checking whether each prop_name + # belongs in a designated existing column or the misc. property dict column + # makes it more challenging + # For now: only use nested loop update if non-misc. 
columns currently exist + if len(self.properties.columns) > 1: + for level in props: + for item in props[level]: + for prop_name, prop_val in props[level][item].items(): + self.set_property(item, prop_name, prop_val, level) + else: + item_keys = pd.MultiIndex.from_tuples( + [(level, item) for level in props for item in props[level]], + names=self.properties.index.names, + ) + props_data = [props[level][item] for level, item in item_keys] + props = pd.DataFrame({self._misc_props_col: props_data}, index=item_keys) + self._properties_from_dataframe(props) + + def _property_loc(self, item: T) -> tuple[int, T]: + """Get index in :attr:`properties` of an item of unspecified level + + Parameters + ---------- + item : hashable + name of an item + + Returns + ------- + item_key : tuple of (int, hashable) + ``(level, item)`` + + Raises + ------ + KeyError + If `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `item` appears in multiple levels, returns the first (closest to 0) + + """ + try: + item_loc = self.properties.xs(item, level=1, drop_level=False).index + except KeyError as ex: # item not in df + raise KeyError(f"no properties initialized for 'item': {item}") from ex + + try: + item_key = item_loc.item() + except ValueError: + item_loc, _ = item_loc.sortlevel(sort_remaining=False) + item_key = item_loc[0] + warnings.warn(f"item found in multiple levels: {tuple(item_loc)}") + return item_key + +
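The lookup in `_property_loc` can be sketched with a small properties table (the MultiIndex contents here are hypothetical): `xs` selects all `(level, id)` keys matching the item, and when the item appears in more than one level, the lowest level wins after sorting:

```python
import warnings
import pandas as pd

# Sketch of _property_loc: find an item by name in the (level, id) MultiIndex
# of the properties table; if it appears in several levels, the lowest wins
idx = pd.MultiIndex.from_tuples(
    [(0, "x"), (1, "x"), (1, "y")], names=["level", "id"]
)
props = pd.DataFrame({"properties": [{}, {}, {}]}, index=idx)

item_loc = props.xs("x", level=1, drop_level=False).index
if len(item_loc) > 1:
    warnings.warn(f"item found in multiple levels: {tuple(item_loc)}")
sorted_loc, _ = item_loc.sortlevel(sort_remaining=False)
item_key = sorted_loc[0]
print(item_key)  # -> (0, 'x')
```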
[docs] def set_property( + self, + item: T, + prop_name: Any, + prop_val: Any, + level: Optional[int] = None, + ) -> None: + """Set a property of an item + + Parameters + ---------- + item : hashable + name of an item + prop_name : hashable + name of the property to set + prop_val : any + value of the property to set + level : int, optional + level index of the item; + required if `item` is not already in :attr:`properties` + + Raises + ------ + ValueError + If `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_property, get_properties + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError as ex: + raise ValueError( + "cannot infer 'level' when initializing 'item' properties" + ) from ex + + if prop_name in self.properties: + self._properties.loc[item_key, prop_name] = prop_val + else: + try: + self._properties.loc[item_key, self._misc_props_col].update( + {prop_name: prop_val} + ) + except KeyError: + self._properties.loc[item_key, :] = { + self._misc_props_col: {prop_name: prop_val} + }
+ +
[docs] def get_property(self, item: T, prop_name: Any, level: Optional[int] = None) -> Any: + """Get a property of an item + + Parameters + ---------- + item : hashable + name of an item + prop_name : hashable + name of the property to get + level : int, optional + level index of the item + + Returns + ------- + prop_val : any + value of the property + + Raises + ------ + KeyError + if (`level`, `item`) is not in :attr:`properties`, + or if `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_properties, set_property + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError: + raise # item not in properties + + try: + prop_val = self.properties.loc[item_key, prop_name] + except KeyError as ex: + if ex.args[0] == prop_name: + prop_val = self.properties.loc[item_key, self._misc_props_col].get( + prop_name + ) + else: + raise KeyError( + f"no properties initialized for ('level','item'): {item_key}" + ) from ex + + return prop_val
+ +
[docs] def get_properties(self, item: T, level: Optional[int] = None) -> dict[Any, Any]: + """Get all properties of an item + + Parameters + ---------- + item : hashable + name of an item + level : int, optional + level index of the item + + Returns + ------- + prop_vals : dict + ``{named property: property value, ..., + misc. property column name: {property name: property value}}`` + + Raises + ------ + KeyError + if (`level`, `item`) is not in :attr:`properties`, + or if `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_property, set_property + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError: + raise + + try: + prop_vals = self.properties.loc[item_key].to_dict() + except KeyError as ex: + raise KeyError( + f"no properties initialized for ('level','item'): {item_key}" + ) from ex + + return prop_vals
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/classes/entityset.html b/_modules/classes/entityset.html new file mode 100644 index 00000000..8515cd0d --- /dev/null +++ b/_modules/classes/entityset.html @@ -0,0 +1,773 @@ + + + + + + classes.entityset — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for classes.entityset

+from __future__ import annotations
+
+import warnings
+from ast import literal_eval
+from collections import OrderedDict
+from collections.abc import Iterable, Sequence
+from typing import Mapping
+from typing import Optional, Any, TypeVar, Union
+from pprint import pformat
+
+import numpy as np
+import pandas as pd
+
+from hypernetx.classes import Entity
+from hypernetx.classes.helpers import AttrList
+
+# from hypernetx.utils.log import get_logger
+
+# _log = get_logger("entity_set")
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class EntitySet(Entity): + """Class for handling 2-dimensional (i.e., system of sets, bipartite) data when + building network-like models, i.e., :class:`Hypergraph` + + Parameters + ---------- + entity : Entity, pandas.DataFrame, dict of lists or sets, or list of lists or sets, optional + If an ``Entity`` with N levels or a ``DataFrame`` with N columns, + represents N-dimensional entity data (data table). + If N > 2, only considers levels (columns) `level1` and `level2`. + Otherwise, represents 2-dimensional entity data (system of sets). + data : numpy.ndarray, optional + 2D M x N ``ndarray`` of ``ints`` (data table); + sparse representation of an N-dimensional incidence tensor with M nonzero cells. + If N > 2, only considers levels (columns) `level1` and `level2`. + Ignored if `entity` is provided. + labels : collections.OrderedDict of lists, optional + User-specified labels in corresponding order to ``ints`` in `data`. + For M x N `data`, N > 2, `labels` must contain either 2 or N keys. + If N keys, only considers labels for levels (columns) `level1` and `level2`. + Ignored if `entity` is provided or `data` is not provided. + level1, level2 : str or int, default=0,1 + Each item in `level1` defines a set containing all the `level2` items with which + it appears in the same row of the underlying data table. + If ``int``, gives the index of a level; + if ``str``, gives the name of a column in `entity`. + Ignored if `entity`, `data` (if `entity` not provided), and `labels` all (if + provided) represent 1- or 2-dimensional data (set or system of sets). + weights : str or sequence of float, optional + User-specified cell weights corresponding to entity data. + If sequence of ``floats`` and `entity` or `data` defines a data table, + length must equal the number of rows. + If sequence of ``floats`` and `entity` defines a system of sets, + length must equal the total sum of the sizes of all sets. 
+ If ``str`` and `entity` is a ``DataFrame``, + must be the name of a column in `entity`. + Otherwise, weight for all cells is assumed to be 1. + Ignored if `entity` is an ``Entity`` and `keep_weights`=True. + keep_weights : bool, default=True + Whether to preserve any existing cell weights; + ignored if `entity` is not an ``Entity``. + cell_properties : str, list of str, pandas.DataFrame, or doubly-nested dict, optional + User-specified properties to be assigned to cells of the incidence matrix, i.e., + rows in a data table; pairs of (set, element of set) in a system of sets. + See Notes for detailed explanation. + Ignored if underlying data is 1-dimensional (set). + If doubly-nested dict, + ``{level1 item: {level2 item: {cell property name: cell property value}}}``. + misc_cell_props_col : str, default='cell_properties' + Column name for miscellaneous cell properties; see Notes for explanation. + kwargs + Keyword arguments passed to the ``Entity`` constructor, e.g., `static`, + `uid`, `aggregateby`, `properties`, etc. See :class:`Entity` for documentation + of these parameters. + + Notes + ----- + A **cell property** is a named attribute assigned jointly to a set and one of its + elements, i.e, a cell of the incidence matrix. + + When an ``Entity`` or ``DataFrame`` is passed to the `entity` parameter of the + constructor, it should represent a data table: + + +--------------+--------------+--------------+-------+--------------+ + | Column_1 | Column_2 | Column_3 | [...] | Column_N | + +==============+==============+==============+=======+==============+ + | level 1 item | level 2 item | level 3 item | ... | level N item | + +--------------+--------------+--------------+-------+--------------+ + | ... | ... | ... | ... | ... | + +--------------+--------------+--------------+-------+--------------+ + + Assuming the default values for parameters `level1`, `level2`, the data table will + be restricted to the set system defined by Column 1 and Column 2. 
+ Since each row of the data table represents an incidence or cell, values from other + columns may contain data that should be converted to cell properties. + + By passing a **column name or list of column names** as `cell_properties`, each + given column will be preserved in the :attr:`cell_properties` as an explicit cell + property type. An additional column in :attr:`cell_properties` will be created to + store a ``dict`` of miscellaneous cell properties, which will store cell properties + of types that have not been explicitly defined and do not have a dedicated column + (which may be assigned after construction). The name of the miscellaneous column is + determined by `misc_cell_props_col`. + + You can also pass a **pre-constructed table** to `cell_properties` as a + ``DataFrame``: + + +----------+----------+----------------------------+-------+-----------------------+ + | Column_1 | Column_2 | [explicit cell prop. type] | [...] | misc. cell properties | + +==========+==========+============================+=======+=======================+ + | level 1 | level 2 | cell property value | ... | {cell property name: | + | item | item | | | cell property value} | + +----------+----------+----------------------------+-------+-----------------------+ + | ... | ... | ... | ... | ... | + +----------+----------+----------------------------+-------+-----------------------+ + + Column 1 and Column 2 must have the same names as the corresponding columns in the + `entity` data table, and `misc_cell_props_col` can be used to specify the name of the + column to be used for miscellaneous cell properties. If no column by that name is + found, a new column will be created and populated with empty ``dicts``. All other + columns will be considered explicit cell property types. The order of the columns + does not matter. 
+ + Both of these methods assume that there are no row duplicates in the tables passed + to `entity` and/or `cell_properties`; if duplicates are found, all but the first + occurrence will be dropped. + + """ + + def __init__( + self, + entity: Optional[ + pd.DataFrame + | np.ndarray + | Mapping[T, Iterable[T]] + | Iterable[Iterable[T]] + | Mapping[T, Mapping[T, Mapping[T, Any]]] + ] = None, + data: Optional[np.ndarray] = None, + labels: Optional[OrderedDict[T, Sequence[T]]] = None, + level1: str | int = 0, + level2: str | int = 1, + weight_col: str | int = "cell_weights", + weights: Sequence[float] | float | int | str = 1, + # keep_weights: bool = True, + cell_properties: Optional[ + Sequence[T] | pd.DataFrame | dict[T, dict[T, dict[Any, Any]]] + ] = None, + misc_cell_props_col: str = "cell_properties", + uid: Optional[Hashable] = None, + aggregateby: Optional[str] = "sum", + properties: Optional[pd.DataFrame | dict[int, dict[T, dict[Any, Any]]]] = None, + misc_props_col: str = "properties", + # level_col: str = "level", + # id_col: str = "id", + **kwargs, + ): + self._misc_cell_props_col = misc_cell_props_col + + # if the entity data is passed as an Entity, get its underlying data table and + # proceed to the case for entity data passed as a DataFrame + # if isinstance(entity, Entity): + # # _log.info(f"Changing entity from type {Entity} to {type(entity.dataframe)}") + # if keep_weights: + # # preserve original weights + # weights = entity._cell_weight_col + # entity = entity.dataframe + + # if the entity data is passed as a DataFrame, restrict to two columns if needed + if isinstance(entity, pd.DataFrame) and len(entity.columns) > 2: + # _log.info(f"Processing parameter of 'entity' of type {type(entity)}...") + # metadata columns are not considered levels of data, + # remove them before indexing by level + # if isinstance(cell_properties, str): + # cell_properties = [cell_properties] + + prop_cols = [] + if isinstance(cell_properties, Sequence): + for col in 
{*cell_properties, self._misc_cell_props_col}: + if col in entity: + # _log.debug(f"Adding column to prop_cols: {col}") + prop_cols.append(col) + + # meta_cols = prop_cols + # if weights in entity and weights not in meta_cols: + # meta_cols.append(weights) + # # _log.debug(f"meta_cols: {meta_cols}") + if weight_col in prop_cols: + prop_cols.remove(weight_col) + if not weight_col in entity: + entity[weight_col] = weights + + # if both levels are column names, no need to index by level + if isinstance(level1, int): + level1 = entity.columns[level1] + if isinstance(level2, int): + level2 = entity.columns[level2] + # if isinstance(level1, str) and isinstance(level2, str): + columns = [level1, level2, weight_col] + prop_cols + # if one or both of the levels are given by index, get column name + # else: + # all_columns = entity.columns.drop(meta_cols) + # columns = [ + # all_columns[lev] if isinstance(lev, int) else lev + # for lev in (level1, level2) + # ] + + # if there is a column for cell properties, convert to separate DataFrame + # if len(prop_cols) > 0: + # cell_properties = entity[[*columns, *prop_cols]] + + # if there is a column for weights, preserve it + # if weights in entity and weights not in prop_cols: + # columns.append(weights) + # _log.debug(f"columns: {columns}") + + # pass level1, level2, and weights (optional) to Entity constructor + entity = entity[columns] + + # if a 2D ndarray is passed, restrict to two columns if needed + elif isinstance(data, np.ndarray) and data.ndim == 2 and data.shape[1] > 2: + # _log.info(f"Processing parameter 'data' of type {type(data)}...") + data = data[:, (level1, level2)] + + # if a dict of labels is provided, restrict to labels for two columns if needed + if isinstance(labels, dict) and len(labels) > 2: + label_keys = list(labels) + columns = (label_keys[level1], label_keys[level2]) + labels = {col: labels[col] for col in columns} + # _log.debug(f"Restricted labels to columns:\n{pformat(labels)}") + + # _log.info( + # 
f"Creating instance of {Entity} using reformatted params: \n\tentity: {type(entity)} \n\tdata: {type(data)} \n\tlabels: {type(labels)}, \n\tweights: {weights}, \n\tkwargs: {kwargs}" + # ) + # _log.debug(f"entity:\n{pformat(entity)}") + # _log.debug(f"data: {pformat(data)}") + super().__init__( + entity=entity, + data=data, + labels=labels, + uid=uid, + weight_col=weight_col, + weights=weights, + aggregateby=aggregateby, + properties=properties, + misc_props_col=misc_props_col, + **kwargs, + ) + + # if underlying data is 2D (system of sets), create and assign cell properties + if self.dimsize == 2: + # self._cell_properties = pd.DataFrame( + # columns=[*self._data_cols, self._misc_cell_props_col] + # ) + self._cell_properties = pd.DataFrame(self._dataframe) + self._cell_properties.set_index(self._data_cols, inplace=True) + if isinstance(cell_properties, (dict, pd.DataFrame)): + self.assign_cell_properties(cell_properties) + else: + self._cell_properties = None + + @property + def cell_properties(self) -> Optional[pd.DataFrame]: + """Properties assigned to cells of the incidence matrix + + Returns + ------- + pandas.Series, optional + Returns None if :attr:`dimsize` < 2 + """ + return self._cell_properties + + @property + def memberships(self) -> dict[str, AttrList[str]]: + """Extends :attr:`Entity.memberships` + + Each item in level 1 (second column) defines a set containing all the level 0 + (first column) items with which it appears in the same row of the underlying + data table. + + Returns + ------- + dict of AttrList + System of sets representation as dict of + ``{level 1 item: AttrList(level 0 items)}``. + + See Also + -------- + elements : dual of this representation, + i.e., each item in level 0 (first column) defines a set + restrict_to_levels : for more information on how memberships work for + 1-dimensional (set) data + """ + if self._dimsize == 1: + return self._state_dict.get("memberships") + + return super().memberships + +
[docs] def restrict_to_levels( + self, + levels: int | Iterable[int], + weights: bool = False, + aggregateby: Optional[str] = "sum", + keep_memberships: bool = True, + **kwargs, + ) -> EntitySet: + """Extends :meth:`Entity.restrict_to_levels` + + Parameters + ---------- + levels : array-like of int + indices of a subset of levels (columns) of data + weights : bool, default=False + If True, aggregate existing cell weights to get new cell weights. + Otherwise, all new cell weights will be 1. + aggregateby : {'sum', 'first', 'last', 'count', 'mean', 'median', 'max', \ + 'min', None}, optional + Method to aggregate weights of duplicate rows in data table + If None or `weights`=False then all new cell weights will be 1 + keep_memberships : bool, default=True + Whether to preserve membership information for the discarded level when + the new ``EntitySet`` is restricted to a single level + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + EntitySet + + Raises + ------ + KeyError + If `levels` contains any invalid values + """ + restricted = super().restrict_to_levels( + levels, + weights, + aggregateby, + misc_cell_props_col=self._misc_cell_props_col, + **kwargs, + ) + + if keep_memberships: + # use original memberships to set memberships for the new EntitySet + # TODO: This assumes levels=[1], add explicit checks for other cases + restricted._state_dict["memberships"] = self.memberships + + return restricted
+ +
[docs] def restrict_to(self, indices: int | Iterable[int], **kwargs) -> EntitySet: + """Alias of :meth:`restrict_to_indices` with default parameter `level`=0 + + Parameters + ---------- + indices : array_like of int + indices of item label(s) in `level` to restrict to + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + EntitySet + + See Also + -------- + restrict_to_indices + """ + restricted = self.restrict_to_indices( + indices, misc_cell_props_col=self._misc_cell_props_col, **kwargs + ) + if not self.cell_properties.empty: + cell_properties = self.cell_properties.loc[ + list(restricted.uidset) + ].reset_index() + restricted.assign_cell_properties(cell_properties) + return restricted
+ +
[docs]    def assign_cell_properties(
+        self,
+        cell_props: pd.DataFrame | dict[T, dict[T, dict[Any, Any]]],
+        misc_col: Optional[str] = None,
+        replace: bool = False,
+    ) -> None:
+        """Assign new properties to cells of the incidence matrix and update
+        :attr:`properties`
+
+        Parameters
+        ----------
+        cell_props : pandas.DataFrame, dict of iterables, or doubly-nested dict
+            See documentation of the `cell_properties` parameter in :class:`EntitySet`
+        misc_col: str, optional
+            name of column to be used for miscellaneous cell property dicts
+        replace: bool, default=False
+            If True, replace existing :attr:`cell_properties` with result;
+            otherwise update with new values from result
+
+        Raises
+        ------
+        AttributeError
+            Not supported for :attr:`dimsize`=1
+        """
+        if self.dimsize < 2:
+            raise AttributeError(
+                f"cell properties are not supported for 'dimsize'={self.dimsize}"
+            )
+
+        misc_col = misc_col or self._misc_cell_props_col
+        try:
+            cell_props = cell_props.rename(
+                columns={misc_col: self._misc_cell_props_col}
+            )
+        except AttributeError:  # handle cell props in nested dict format
+            self._cell_properties_from_dict(cell_props)
+        else:  # handle cell props in DataFrame format
+            self._cell_properties_from_dataframe(cell_props)
+
+    def _cell_properties_from_dataframe(self, cell_props: pd.DataFrame) -> None:
+        """Private handler for updating :attr:`cell_properties` from a DataFrame
+
+        Parameters
+        ----------
+        cell_props : DataFrame
+        """
+        if cell_props.index.nlevels > 1:
+            extra_levels = [
+                idx_lev
+                for idx_lev in cell_props.index.names
+                if idx_lev not in self._data_cols
+            ]
+            cell_props = cell_props.reset_index(level=extra_levels)
+
+        misc_col = self._misc_cell_props_col
+
+        try:
+            cell_props.index = cell_props.index.reorder_levels(self._data_cols)
+        except AttributeError:
+            if cell_props.index.name in self._data_cols:
+                cell_props = cell_props.reset_index()
+
+        try:
+            cell_props = cell_props.set_index(
+                self._data_cols, verify_integrity=True
+            )
+        except ValueError:
+            warnings.warn(
+                "duplicate cell rows will be dropped after first occurrence"
+            )
+            cell_props = cell_props.drop_duplicates(self._data_cols)
+            cell_props = cell_props.set_index(self._data_cols)
+
+        if misc_col in cell_props:
+            try:
+                cell_props[misc_col] = cell_props[misc_col].apply(literal_eval)
+            except ValueError:
+                pass  # data already parsed, no literal eval needed
+            else:
+                warnings.warn("parsed cell property dict column from string literal")
+
+        cell_properties = cell_props.combine_first(self.cell_properties)
+
+        self._cell_properties = cell_properties.sort_index()
+
+    def _cell_properties_from_dict(
+        self, cell_props: dict[T, dict[T, dict[Any, Any]]]
+    ) -> None:
+        """Private handler for updating :attr:`cell_properties` from a doubly-nested dict
+
+        Parameters
+        ----------
+        cell_props
+        """
+        # TODO: there may be a more efficient way to convert this to a dataframe
+        # instead of updating one-by-one via nested loop, but checking whether
+        # each prop_name belongs in a designated existing column or the misc.
+        # property dict column makes it more challenging.
+        # For now: only use nested loop update if non-misc. columns currently exist
+        if len(self.cell_properties.columns) > 1:
+            for item1 in cell_props:
+                for item2 in cell_props[item1]:
+                    for prop_name, prop_val in cell_props[item1][item2].items():
+                        self.set_cell_property(item1, item2, prop_name, prop_val)
+        else:
+            cells = pd.MultiIndex.from_tuples(
+                [(item1, item2) for item1 in cell_props for item2 in cell_props[item1]],
+                names=self._data_cols,
+            )
+            props_data = [cell_props[item1][item2] for item1, item2 in cells]
+            cell_props = pd.DataFrame(
+                {self._misc_cell_props_col: props_data}, index=cells
+            )
+            self._cell_properties_from_dataframe(cell_props)
+
+
[docs] def collapse_identical_elements( + self, return_equivalence_classes: bool = False, **kwargs + ) -> EntitySet | tuple[EntitySet, dict[str, list[str]]]: + """Create a new :class:`EntitySet` by collapsing sets with the same set elements + + Each item in level 0 (first column) defines a set containing all the level 1 + (second column) items with which it appears in the same row of the underlying + data table. + + Parameters + ---------- + return_equivalence_classes : bool, default=False + If True, return a dictionary of equivalence classes keyed by new edge names + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + new_entity : EntitySet + new :class:`EntitySet` with identical sets collapsed; + if all sets are unique, the system of sets will be the same as the original. + equivalence_classes : dict of lists, optional + if `return_equivalence_classes`=True, + ``{collapsed set label: [level 0 item labels]}``. + """ + # group by level 0 (set), aggregate level 1 (set elements) as frozenset + collapse = ( + self._dataframe[self._data_cols] + .groupby(self._data_cols[0], as_index=False) + .agg(frozenset) + ) + + # aggregation method to rename equivalence classes as [first item]: [# items] + agg_kwargs = {"name": (self._data_cols[0], lambda x: f"{x.iloc[0]}: {len(x)}")} + if return_equivalence_classes: + # aggregation method to list all items in each equivalence class + agg_kwargs.update(equivalence_class=(self._data_cols[0], list)) + # group by frozenset of level 1 items (set elements), aggregate to get names of + # equivalence classes and (optionally) list of level 0 items (sets) in each + collapse = collapse.groupby(self._data_cols[1], as_index=False).agg( + **agg_kwargs + ) + # convert to nested dict representation of collapsed system of sets + collapse = collapse.set_index("name") + new_entity_dict = collapse[self._data_cols[1]].to_dict() + # construct new EntitySet from system of sets + new_entity = EntitySet(new_entity_dict, 
**kwargs) + + if return_equivalence_classes: + # lists of equivalent sets, keyed by equivalence class name + equivalence_classes = collapse.equivalence_class.to_dict() + return new_entity, equivalence_classes + return new_entity
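The grouping idea behind ``collapse_identical_elements`` — bucket sets by the frozenset of their members, then rename each equivalence class as ``[first item]: [# items]`` — can be sketched without pandas. This is an illustrative pure-Python sketch, not HyperNetX's implementation; ``collapse_identical`` is a hypothetical name.

```python
# Illustrative sketch only: collapse a dict-of-sets system by grouping
# sets whose member sets are identical, mirroring the groupby/frozenset
# logic of EntitySet.collapse_identical_elements.

def collapse_identical(setsystem):
    """Return (collapsed system, equivalence classes) for a dict of sets."""
    by_members = {}  # frozenset of set elements -> list of original set ids
    for set_id, members in setsystem.items():
        by_members.setdefault(frozenset(members), []).append(set_id)

    collapsed, equivalence_classes = {}, {}
    for members, ids in by_members.items():
        name = f"{ids[0]}: {len(ids)}"  # class named "[first item]: [# items]"
        collapsed[name] = set(members)
        equivalence_classes[name] = ids
    return collapsed, equivalence_classes

collapsed, eq_classes = collapse_identical(
    {"e1": {1, 2}, "e2": {1, 2}, "e3": {1, 2, 3}}
)
```

Here ``e1`` and ``e2`` contain the same elements, so they collapse into a single class named ``"e1: 2"``, while ``e3`` remains its own class ``"e3: 1"``.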
+ +
[docs] def set_cell_property( + self, item1: T, item2: T, prop_name: Any, prop_val: Any + ) -> None: + """Set a property of a cell i.e., incidence between items of different levels + + Parameters + ---------- + item1 : hashable + name of an item in level 0 + item2 : hashable + name of an item in level 1 + prop_name : hashable + name of the cell property to set + prop_val : any + value of the cell property to set + + See Also + -------- + get_cell_property, get_cell_properties + """ + if item2 in self.elements[item1]: + if prop_name in self.properties: + self._cell_properties.loc[(item1, item2), prop_name] = pd.Series( + [prop_val] + ) + else: + try: + self._cell_properties.loc[ + (item1, item2), self._misc_cell_props_col + ].update({prop_name: prop_val}) + except KeyError: + self._cell_properties.loc[(item1, item2), :] = { + self._misc_cell_props_col: {prop_name: prop_val} + }
+ +
[docs]    def get_cell_property(self, item1: T, item2: T, prop_name: Any) -> Any:
+        """Get a property of a cell i.e., incidence between items of different levels
+
+        Parameters
+        ----------
+        item1 : hashable
+            name of an item in level 0
+        item2 : hashable
+            name of an item in level 1
+        prop_name : hashable
+            name of the cell property to get
+
+        Returns
+        -------
+        prop_val : any
+            value of the cell property
+
+        See Also
+        --------
+        get_cell_properties, set_cell_property
+        """
+        try:
+            cell_props = self.cell_properties.loc[(item1, item2)]
+        except KeyError as ex:
+            raise KeyError(
+                f"cell ({item1}, {item2}) has no assigned cell properties"
+            ) from ex
+
+        try:
+            prop_val = cell_props.loc[prop_name]
+        except KeyError:
+            prop_val = cell_props.loc[self._misc_cell_props_col].get(prop_name)
+
+        return prop_val
+ +
[docs]    def get_cell_properties(self, item1: T, item2: T) -> dict[Any, Any]:
+        """Get all properties of a cell, i.e., incidence between items of different
+        levels
+
+        Parameters
+        ----------
+        item1 : hashable
+            name of an item in level 0
+        item2 : hashable
+            name of an item in level 1
+
+        Returns
+        -------
+        dict
+            ``{named cell property: cell property value, ..., misc. cell property
+            column name: {cell property name: cell property value}}``
+
+        See Also
+        --------
+        get_cell_property, set_cell_property
+        """
+        try:
+            cell_props = self.cell_properties.loc[(item1, item2)]
+        except KeyError as ex:
+            raise KeyError(
+                f"cell ({item1}, {item2}) has no assigned cell properties"
+            ) from ex
+
+        return cell_props.to_dict()
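The lookup order used by ``get_cell_property`` above — check a dedicated property column first, then fall back to the misc. property dict — can be sketched in pure Python. This is an illustrative sketch, not HyperNetX code: ``MISC_COL`` and ``get_cell_prop`` are hypothetical stand-ins for ``EntitySet._misc_cell_props_col`` and the DataFrame row lookup.

```python
# Illustrative sketch only: the two-step lookup performed by
# get_cell_property, with a plain dict standing in for a row of the
# cell-properties DataFrame.

MISC_COL = "cell_properties"  # hypothetical misc. property column name

def get_cell_prop(row, prop_name):
    # 1) a dedicated (named) property column wins if present
    if prop_name in row and prop_name != MISC_COL:
        return row[prop_name]
    # 2) otherwise fall back to the misc. property dict (None if absent)
    return row.get(MISC_COL, {}).get(prop_name)

row = {"cell_weights": 0.5, MISC_COL: {"name": "related_to"}}
```

Under this sketch, named properties and misc.-dict properties are retrieved through the same call, mirroring the try/except fallback in the real method.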
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/classes/helpers.html b/_modules/classes/helpers.html new file mode 100644 index 00000000..995372ae --- /dev/null +++ b/_modules/classes/helpers.html @@ -0,0 +1,387 @@ + + + + + + classes.helpers — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for classes.helpers

+from __future__ import annotations
+
+from typing import Any, Optional
+import warnings
+import numpy as np
+import pandas as pd
+from collections import UserList
+from collections.abc import Hashable, Iterable
+from pandas.api.types import CategoricalDtype
+from ast import literal_eval
+
+from hypernetx.classes.entity import *
+
+
+
[docs]class AttrList(UserList): + """Custom list wrapper for integrated property storage in :class:`Entity` + + Parameters + ---------- + entity : hypernetx.Entity + key : tuple of (int, str or int) + ``(level, item)`` + initlist : list, optional + list of elements, passed to ``UserList`` constructor + """ + + def __init__( + self, + entity: Entity, + key: tuple[int, str | int], + initlist: Optional[list] = None, + ): + self._entity = entity + self._key = key + super().__init__(initlist) + + def __getattr__(self, attr: str) -> Any: + """Get attribute value from properties of :attr:`entity` + + Parameters + ---------- + attr : str + + Returns + ------- + any + attribute value; None if not found + """ + if attr == "uidset": + return frozenset(self.data) + if attr in ["memberships", "elements"]: + return self._entity.__getattribute__(attr).get(self._key[1]) + return self._entity.get_property(self._key[1], attr, self._key[0]) + + def __setattr__(self, attr: str, val: Any) -> None: + """Set attribute value in properties of :attr:`entity` + + Parameters + ---------- + attr : str + val : any + """ + if attr in ["_entity", "_key", "data"]: + object.__setattr__(self, attr, val) + else: + self._entity.set_property(self._key[1], attr, val, level=self._key[0])
+ + +
[docs]def encode(data: pd.DataFrame): + """ + Encode dataframe to numpy array + + Parameters + ---------- + data : dataframe + + Returns + ------- + numpy.array + + """ + encoded_array = data.apply(lambda x: x.cat.codes).to_numpy() + return encoded_array
+ + +
[docs]def assign_weights(df, weights=1, weight_col="cell_weights"):
+    """
+    Parameters
+    ----------
+    df : pandas.DataFrame
+        A DataFrame to assign a weight column to
+    weights : array-like or Hashable, optional
+        If numpy.ndarray with the same length as df, create a new weight column
+        with these values.
+        If Hashable, must be the name of a column of df to assign as the weight
+        column.
+        Otherwise, create a new weight column assigning the scalar `weights`
+        (default 1) to every row
+    weight_col : Hashable
+        Name for new column if one is created (not used if the name of an existing
+        column is passed as weights)
+
+    Returns
+    -------
+    df : pandas.DataFrame
+        The original DataFrame with a new column added if needed
+    weight_col : str
+        Name of the column assigned to hold weights
+
+    Note
+    ----
+    TODO: move logic for default weights inside this method
+    """
+
+    if isinstance(weights, (list, np.ndarray)):
+        df[weight_col] = weights
+    else:
+        if weight_col not in df:
+            df[weight_col] = weights
+    return df, weight_col
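The branching in ``assign_weights`` can be sketched with a dict-of-lists standing in for the DataFrame. This is an illustrative sketch only; ``assign_weights_sketch`` is a hypothetical name and the real helper operates on pandas objects.

```python
# Illustrative sketch only: the three cases of assign_weights --
# a sequence becomes a new per-row weight column, an existing column name
# is left untouched, and anything else is broadcast as a scalar default.

def assign_weights_sketch(table, n_rows, weights=1, weight_col="cell_weights"):
    if isinstance(weights, (list, tuple)):
        table[weight_col] = list(weights)       # per-row weights supplied
    elif weight_col not in table:
        table[weight_col] = [weights] * n_rows  # broadcast scalar default
    # if weight_col already exists, it is kept as the weight column
    return table, weight_col

t1, col = assign_weights_sketch({"edges": ["e1", "e2"]}, 2)
t2, _ = assign_weights_sketch({"edges": ["e1", "e2"]}, 2, weights=[0.5, 0.2])
t3, _ = assign_weights_sketch({"w": [3, 4]}, 2, weights=9, weight_col="w")
```

The returned ``weight_col`` name is what downstream helpers such as ``remove_row_duplicates`` use to find the weights.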
+ + +
[docs]def create_properties( + props: pd.DataFrame + | dict[str | int, Iterable[str | int]] + | dict[str | int, dict[str | int, dict[Any, Any]]] + | None, + index_cols: list[str], + misc_col: str, +) -> pd.DataFrame: + """Helper function for initializing properties and cell properties + + Parameters + ---------- + props : pandas.DataFrame, dict of iterables, doubly-nested dict, or None + See documentation of the `properties` parameter in :class:`Entity`, + `cell_properties` parameter in :class:`EntitySet` + index_cols : list of str + names of columns to be used as levels of the MultiIndex + misc_col : str + name of column to be used for miscellaneous property dicts + + Returns + ------- + pandas.DataFrame + with ``MultiIndex`` on `index_cols`; + each entry of the miscellaneous column holds dict of + ``{property name: property value}`` + """ + + if isinstance(props, pd.DataFrame) and not props.empty: + try: + data = props.set_index(index_cols, verify_integrity=True) + except ValueError: + warnings.warn( + "duplicate (level, ID) rows will be dropped after first occurrence" + ) + props = props.drop_duplicates(index_cols) + data = props.set_index(index_cols) + + if misc_col not in data: + data[misc_col] = [{} for _ in range(len(data))] + try: + data[misc_col] = data[misc_col].apply(literal_eval) + except ValueError: + pass # data already parsed, no literal eval needed + else: + warnings.warn("parsed property dict column from string literal") + + return data.sort_index() + + # build MultiIndex from dict of {level: iterable of items} + try: + item_levels = [(level, item) for level in props for item in props[level]] + index = pd.MultiIndex.from_tuples(item_levels, names=index_cols) + # empty MultiIndex if props is None or other unexpected type + except TypeError: + index = pd.MultiIndex(levels=([], []), codes=([], []), names=index_cols) + + # get inner data from doubly-nested dict of {level: {item: {prop: val}}} + try: + data = [props[level][item] for level, item in 
index] + # empty prop dict for each (level, ID) if iterable of items is not a dict + except (TypeError, IndexError): + data = [{} for _ in index] + + return pd.DataFrame({misc_col: data}, index=index).sort_index()
+ + +
[docs]def remove_row_duplicates(
+    df, data_cols, weights=1, weight_col="cell_weights", aggregateby=None
+):
+    """
+    Removes and aggregates duplicate rows of a DataFrame using groupby
+
+    Parameters
+    ----------
+    df : pandas.DataFrame
+        A DataFrame to remove or aggregate duplicate rows from
+    data_cols : list
+        A list of column names in df to perform the groupby on / remove
+        duplicates from
+    weights : array-like or Hashable, optional
+        Argument passed to assign_weights
+    weight_col : Hashable, default='cell_weights'
+        Argument passed to assign_weights; name of the column holding weights
+    aggregateby : str or dict, optional, default=None
+        A valid aggregation method for pandas groupby
+        If None, drop duplicates without aggregating weights
+
+    Returns
+    -------
+    df : pandas.DataFrame
+        The DataFrame with duplicate rows removed or aggregated
+    weight_col : Hashable
+        The name of the column holding the (possibly aggregated) weights
+    """
+    df = df.copy()
+    categories = {}
+    for col in data_cols:
+        if df[col].dtype.name == "category":
+            categories[col] = df[col].cat.categories
+            df[col] = df[col].astype(categories[col].dtype)
+    df, weight_col = assign_weights(
+        df, weights=weights, weight_col=weight_col
+    )  # TODO: reconcile this with default weights.
+    if not aggregateby:
+        df = df.drop_duplicates(subset=data_cols)
+        df[data_cols] = df[data_cols].astype("category")
+        return df, weight_col
+
+    else:
+        aggby = {col: "first" for col in df.columns}
+        if isinstance(aggregateby, str):
+            aggby[weight_col] = aggregateby
+        else:
+            aggby.update(aggregateby)
+        df = df.groupby(data_cols, as_index=False, sort=False).agg(aggby)
+
+        df[data_cols] = df[data_cols].astype("category")
+
+        return df, weight_col
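The effect of ``remove_row_duplicates`` — keep the first occurrence of each (edge, node) pair, optionally summing the weights of duplicates — can be shown without pandas. This is an illustrative sketch only; ``dedupe_rows`` is a hypothetical name for the same idea on a list of tuples.

```python
# Illustrative sketch only: drop or aggregate duplicate (edge, node) rows,
# mirroring remove_row_duplicates. aggregateby=None keeps the first
# occurrence; aggregateby='sum' accumulates duplicate weights.

def dedupe_rows(rows, aggregateby=None):
    out = {}
    for edge, node, w in rows:
        key = (edge, node)
        if key not in out:
            out[key] = w          # first occurrence kept either way
        elif aggregateby == "sum":
            out[key] += w         # aggregate weights of duplicates
    return [(e, n, w) for (e, n), w in out.items()]

rows = [("e1", 1, 1), ("e1", 1, 1), ("e1", 2, 1)]
```

With ``aggregateby=None`` the duplicate ``("e1", 1)`` row is simply dropped; with ``"sum"`` its weights are combined into one row of weight 2.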
+ + +# https://stackoverflow.com/a/7205107 +
[docs]def merge_nested_dicts(a, b, path=None): + "merges b into a" + if path is None: + path = [] + for key in b: + if key in a: + if isinstance(a[key], dict) and isinstance(b[key], dict): + merge_nested_dicts(a[key], b[key], path + [str(key)]) + elif a[key] == b[key]: + pass # same leaf value + else: + warnings.warn( + f'Conflict at {",".join(path + [str(key)])}, keys ignored' + ) + else: + a[key] = b[key] + return a
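A usage sketch for the recursive merge above (the function body is reproduced so the example is self-contained): ``b`` is merged into ``a`` in place, recursing into nested dicts and warning on conflicting leaf values.

```python
import warnings

# Self-contained copy of merge_nested_dicts for demonstration:
# merges b into a in place and returns a.

def merge_nested_dicts(a, b, path=None):
    if path is None:
        path = []
    for key in b:
        if key in a:
            if isinstance(a[key], dict) and isinstance(b[key], dict):
                merge_nested_dicts(a[key], b[key], path + [str(key)])
            elif a[key] == b[key]:
                pass  # same leaf value, nothing to do
            else:
                warnings.warn(
                    f'Conflict at {",".join(path + [str(key)])}, keys ignored'
                )
        else:
            a[key] = b[key]
    return a

a = {"e1": {1: {"w": 0.5}}, "e2": {1: {"w": 0.1}}}
b = {"e1": {2: {"w": 0.2}}, "e3": {1: {"w": 1.0}}}
merged = merge_nested_dicts(a, b)
```

Note the merge mutates and returns ``a``: the inner dict for ``"e1"`` gains key ``2`` and the new top-level key ``"e3"`` is added, while conflicting non-dict values in ``a`` would be kept (with a warning) rather than overwritten.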
+ + +## https://www.geeksforgeeks.org/python-find-depth-of-a-dictionary/ +
[docs]def dict_depth(dic, level=0): + ### checks if there is a nested dict, quits once level > 2 + if level > 2: + return level + if not isinstance(dic, dict) or not dic: + return level + return min(dict_depth(dic[key], level + 1) for key in dic)
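A usage sketch for ``dict_depth`` (body reproduced so the example is self-contained): ``Hypergraph.__init__`` relies on ``dict_depth(setsystem) > 2`` to distinguish a nested dict of cell properties from a plain dict of iterables.

```python
# Self-contained copy of dict_depth for demonstration: recursion is capped
# once level > 2 because only depths up to 3 matter to the caller, and min
# over branches gives the shallowest nesting.

def dict_depth(dic, level=0):
    if level > 2:
        return level
    if not isinstance(dic, dict) or not dic:
        return level
    return min(dict_depth(dic[key], level + 1) for key in dic)
```

A dict of iterables such as ``{'e1': [1, 2]}`` has depth 1, while a doubly-nested dict of cell properties such as ``{'e1': {1: {'w': 0.5}}}`` reaches the cap at depth 3, which is what triggers the nested-dict branch of the constructor.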
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/classes/hypergraph.html b/_modules/classes/hypergraph.html new file mode 100644 index 00000000..8e46c1a7 --- /dev/null +++ b/_modules/classes/hypergraph.html @@ -0,0 +1,2502 @@ + + + + + + classes.hypergraph — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for classes.hypergraph

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+from __future__ import annotations
+
+import pickle
+import warnings
+from copy import deepcopy
+from collections import defaultdict
+from collections.abc import Sequence, Iterable
+from typing import Optional, Any, TypeVar, Union, Mapping, Hashable
+
+import networkx as nx
+import numpy as np
+import pandas as pd
+from networkx.algorithms import bipartite
+from scipy.sparse import coo_matrix, csr_matrix
+
+from hypernetx.classes import Entity, EntitySet
+from hypernetx.exception import HyperNetXError
+from hypernetx.utils.decorators import warn_nwhy
+from hypernetx.classes.helpers import merge_nested_dicts, dict_depth
+
+__all__ = ["Hypergraph"]
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class Hypergraph: + """ + Parameters + ---------- + + setsystem : (optional) dict of iterables, dict of dicts,iterable of iterables, + pandas.DataFrame, numpy.ndarray, default = None + See SetSystem above for additional setsystem requirements. + + edge_col : (optional) str | int, default = 0 + column index (or name) in pandas.dataframe or numpy.ndarray, + used for (hyper)edge ids. Will be used to reference edgeids for + all set systems. + + node_col : (optional) str | int, default = 1 + column index (or name) in pandas.dataframe or numpy.ndarray, + used for node ids. Will be used to reference nodeids for all set systems. + + cell_weight_col : (optional) str | int, default = None + column index (or name) in pandas.dataframe or numpy.ndarray used for + referencing cell weights. For a dict of dicts references key in cell + property dicts. + + cell_weights : (optional) Sequence[float,int] | int | float , default = 1.0 + User specified cell_weights or default cell weight. + Sequential values are only used if setsystem is a + dataframe or ndarray in which case the sequence must + have the same length and order as these objects. + Sequential values are ignored for dataframes if cell_weight_col is already + a column in the data frame. + If cell_weights is assigned a single value + then it will be used as default for missing values or when no cell_weight_col + is given. + + cell_properties : (optional) Sequence[int | str] | Mapping[T,Mapping[T,Mapping[str,Any]]], + default = None + Column names from pd.DataFrame to use as cell properties + or a dict assigning cell_property to incidence pairs of edges and + nodes. Will generate a misc_cell_properties, which may have variable lengths per cell. + + misc_cell_properties : (optional) str | int, default = None + Column name of dataframe corresponding to a column of variable + length property dictionaries for the cell. Ignored for other setsystem + types. 
+ + aggregateby : (optional) str, dict, default = 'first' + By default duplicate edge,node incidences will be dropped unless + specified with `aggregateby`. + See pandas.DataFrame.agg() methods for additional syntax and usage + information. + + edge_properties : (optional) pd.DataFrame | dict, default = None + Properties associated with edge ids. + First column of dataframe or keys of dict link to edge ids in + setsystem. + + node_properties : (optional) pd.DataFrame | dict, default = None + Properties associated with node ids. + First column of dataframe or keys of dict link to node ids in + setsystem. + + properties : (optional) pd.DataFrame | dict, default = None + Concatenation/union of edge_properties and node_properties. + By default, the object id is used and should be the first column of + the dataframe, or key in the dict. If there are nodes and edges + with the same ids and different properties then use the edge_properties + and node_properties keywords. + + misc_properties : (optional) int | str, default = None + Column of property dataframes with dtype=dict. Intended for variable + length property dictionaries for the objects. + + edge_weight_prop : (optional) str, default = None, + Name of property in edge_properties to use for weight. + + node_weight_prop : (optional) str, default = None, + Name of property in node_properties to use for weight. + + weight_prop : (optional) str, default = None + Name of property in properties to use for 'weight' + + default_edge_weight : (optional) int | float, default = 1 + Used when edge weight property is missing or undefined. + + default_node_weight : (optional) int | float, default = 1 + Used when node weight property is missing or undefined + + name : (optional) str, default = None + Name assigned to hypergraph + + + ====================== + Hypergraphs in HNX 2.0 + ====================== + + An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: + V = nodes (vertices) and E = (hyper)edges. 
+ + HNX allows for multi-edges by distinguishing edges by + their identifiers instead of their contents. For example, if + V = {1,2,3} and E = {e1,e2,e3}, + where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, + the edges e1 and e2 contain the same set of nodes and yet + are distinct and are distinguishable within H = (V,E). + + New as of version 2.0, HNX provides methods to easily store and + access additional metadata such as cell, edge, and node weights. + Metadata associated with (edge,node) incidences + are referenced as **cell_properties**. + Metadata associated with a single edge or node is referenced + as its **properties**. + + The fundamental object needed to create a hypergraph is a **setsystem**. The + setsystem defines the many-to-many relationships between edges and nodes in + the hypergraph. Cell properties for the incidence pairs can be defined within + the setsystem or in a separate pandas.Dataframe or dict. + Edge and node properties are defined with a pandas.DataFrame or dict. + + SetSystems + ---------- + There are five types of setsystems currently accepted by the library. + + 1. **iterable of iterables** : Barebones hypergraph uses Pandas default + indexing to generate hyperedge ids. Elements must be hashable.: :: + + >>> H = Hypergraph([{1,2},{1,2},{1,2,3}]) + + 2. **dictionary of iterables** : the most basic way to express many-to-many + relationships providing edge ids. The elements of the iterables must be + hashable): :: + + >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]}) + + 3. **dictionary of dictionaries** : allows cell properties to be assigned + to a specific (edge, node) incidence. 
This is particularly useful when + there are variable length dictionaries assigned to each pair: :: + + >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.1, 'name': 'related_to', + >>> 'startdate': '05.13.2020'}}, + >>> 'e2':{ 1: {'w':0.52, 'name': 'owned_by'}, + >>> 2: {'w':0.2}}, + >>> 'e3':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.2, 'name': 'owner_of'}, + >>> 3: {'w':1, 'type': 'relationship'}} + + >>> H = Hypergraph(d, cell_weight_col='w') + + 4. **pandas.DataFrame** For large datasets and for datasets with cell + properties it is most efficient to construct a hypergraph directly from + a pandas.DataFrame. Incidence pairs are in the first two columns. + Cell properties shared by all incidence pairs can be placed in their own + column of the dataframe. Variable length dictionaries of cell properties + particular to only some of the incidence pairs may be placed in a single + column of the dataframe. Representing the data above as a dataframe df: + + +-----------+-----------+-----------+-----------------------------------+ + | col1 | col2 | w | col3 | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 1 | 0.5 | {'name':'related_to'} | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 2 | 0.1 | {"name":"related_to", | + | | | | "startdate":"05.13.2020"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 1 | 0.52 | {"name":"owned_by"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 2 | 0.2 | | + +-----------+-----------+-----------+-----------------------------------+ + | ... | ... | ... | {...} | + +-----------+-----------+-----------+-----------------------------------+ + + The first row of the dataframe is used to reference each column. :: + + >>> H = Hypergraph(df,edge_col="col1",node_col="col2", + >>> cell_weight_col="w",misc_cell_properties="col3") + + 5. 
**numpy.ndarray** For homogeneous datasets given in an ndarray a
+    pandas dataframe is generated and column names are added from the
+    edge_col and node_col arguments. Cell properties containing multiple data
+    types are added with a separate dataframe or dict and passed through the
+    cell_properties keyword. ::
+
+        >>> arr = np.array([['e1','1'],['e1','2'],
+        >>>                 ['e2','1'],['e2','2'],
+        >>>                 ['e3','1'],['e3','2'],['e3','3']])
+        >>> H = hnx.Hypergraph(arr, column_names=['col1','col2'])
+
+
+    Edge and Node Properties
+    ------------------------
+    Properties specific to a single edge or node are passed through the
+    keywords: **edge_properties, node_properties, properties**.
+    Properties may be passed as dataframes or dicts.
+    The first column or index of the dataframe, or the keys of the dict,
+    correspond to the edge and/or node identifiers.
+    If identifiers are shared among edges and nodes, or are distinct
+    for edges and nodes, properties may be combined into a single
+    object and passed to the **properties** keyword. For example:
+
+    +-----------+-----------+---------------------------------------+
+    | id        | weight    | properties                            |
+    +-----------+-----------+---------------------------------------+
+    | e1        | 5.0       | {'type':'event'}                      |
+    +-----------+-----------+---------------------------------------+
+    | e2        | 0.52      | {"name":"owned_by"}                   |
+    +-----------+-----------+---------------------------------------+
+    | ...       | ...       | {...}                                 |
+    +-----------+-----------+---------------------------------------+
+    | 1         | 1.2       | {'color':'red'}                       |
+    +-----------+-----------+---------------------------------------+
+    | 2         | .003      | {'name':'Fido','color':'brown'}       |
+    +-----------+-----------+---------------------------------------+
+    | 3         | 1.0       | {}                                    |
+    +-----------+-----------+---------------------------------------+
+
+    A properties dictionary should have the format: ::
+
+        dp = {id1 : {prop1:val1, prop2:val2, ...}, id2 : ...
+        }
+
+    A properties dataframe may be used for nodes and edges sharing ids
+    but differing in cell properties by adding a level index using 0
+    for edges and 1 for nodes:
+
+    +-----------+-----------+-----------+---------------------------------+
+    | level     | id        | weight    | properties                      |
+    +-----------+-----------+-----------+---------------------------------+
+    | 0         | e1        | 5.0       | {'type':'event'}                |
+    +-----------+-----------+-----------+---------------------------------+
+    | 0         | e2        | 0.52      | {"name":"owned_by"}             |
+    +-----------+-----------+-----------+---------------------------------+
+    | ...       | ...       | ...       | {...}                           |
+    +-----------+-----------+-----------+---------------------------------+
+    | 1         | 1         | 1.2       | {'color':'red'}                 |
+    +-----------+-----------+-----------+---------------------------------+
+    | 1         | 2         | .003      | {'name':'Fido','color':'brown'} |
+    +-----------+-----------+-----------+---------------------------------+
+    | ...       | ...       | ...       | {...}                           |
+    +-----------+-----------+-----------+---------------------------------+
+
+
+    Weights
+    -------
+    The default key for cell and object weights is "weight". The default value
+    is 1. Weights may be assigned and/or a new default prescribed in the
+    constructor using **cell_weight_col** and **cell_weights** for incidence
+    pairs, and using **edge_weight_prop, node_weight_prop, weight_prop,
+    default_edge_weight,** and **default_node_weight** for node and edge weights.
+ + """ + + @warn_nwhy + def __init__( + self, + setsystem: Optional[ + pd.DataFrame + | np.ndarray + | Mapping[T, Iterable[T]] + | Iterable[Iterable[T]] + | Mapping[T, Mapping[T, Mapping[str, Any]]] + ] = None, + edge_col: str | int = 0, + node_col: str | int = 1, + cell_weight_col: Optional[str | int] = "cell_weights", + cell_weights: Sequence[float] | float = 1.0, + cell_properties: Optional[ + Sequence[str | int] | Mapping[T, Mapping[T, Mapping[str, Any]]] + ] = None, + misc_cell_properties_col: Optional[str | int] = None, + aggregateby: str | dict[str, str] = "first", + edge_properties: Optional[pd.DataFrame | dict[T, dict[Any, Any]]] = None, + node_properties: Optional[pd.DataFrame | dict[T, dict[Any, Any]]] = None, + properties: Optional[ + pd.DataFrame | dict[T, dict[Any, Any]] | dict[T, dict[T, dict[Any, Any]]] + ] = None, + misc_properties_col: Optional[str | int] = None, + edge_weight_prop_col: str | int = "weight", + node_weight_prop_col: str | int = "weight", + weight_prop_col: str | int = "weight", + default_edge_weight: Optional[float | None] = None, + default_node_weight: Optional[float | None] = None, + default_weight: float = 1.0, + name: Optional[str] = None, + **kwargs, + ): + self.name = name or "" + self.misc_cell_properties_col = misc_cell_properties = ( + misc_cell_properties_col or "cell_properties" + ) + self.misc_properties_col = misc_properties_col = ( + misc_properties_col or "properties" + ) + self.default_edge_weight = default_edge_weight = ( + default_edge_weight or default_weight + ) + self.default_node_weight = default_node_weight = ( + default_node_weight or default_weight + ) + ### cell properties + + if setsystem is None: #### Empty Case + + self._edges = EntitySet({}) + self._nodes = EntitySet({}) + self._state_dict = {} + + else: #### DataFrame case + if isinstance(setsystem, pd.DataFrame): + if isinstance(edge_col, int): + self._edge_col = edge_col = setsystem.columns[edge_col] + if isinstance(edge_col, int): + setsystem = 
setsystem.rename(columns={edge_col: "edges"}) + self._edge_col = edge_col = "edges" + else: + self._edge_col = edge_col + + if isinstance(node_col, int): + self._node_col = node_col = setsystem.columns[node_col] + if isinstance(node_col, int): + setsystem = setsystem.rename(columns={node_col: "nodes"}) + self._node_col = node_col = "nodes" + else: + self._node_col = node_col + + entity = setsystem.copy() + + if isinstance(cell_weight_col, int): + self._cell_weight_col = setsystem.columns[cell_weight_col] + else: + self._cell_weight_col = cell_weight_col + + if cell_weight_col in entity: + entity = entity.fillna({cell_weight_col: cell_weights}) + else: + entity[cell_weight_col] = cell_weights + + if isinstance(cell_properties, Sequence): + cell_properties = [ + c + for c in cell_properties + if not c in [edge_col, node_col, cell_weight_col] + ] + cols = [edge_col, node_col, cell_weight_col] + cell_properties + entity = entity[cols] + elif isinstance(cell_properties, dict): + cp = [] + for idx in entity.index: + edge, node = entity.iloc[idx][[edge_col, node_col]].values + cp.append(cell_properties[edge][node]) + entity["cell_properties"] = cp + + else: ### Cases Other than DataFrame + self._edge_col = edge_col = edge_col or "edges" + if node_col == 1: + self._node_col = node_col = "nodes" + else: + self._node_col = node_col + self._cell_weight_col = cell_weight_col + + if isinstance(setsystem, np.ndarray): + if setsystem.shape[1] != 2: + raise HyperNetXError("Numpy array must have exactly 2 columns.") + entity = pd.DataFrame(setsystem, columns=[edge_col, node_col]) + entity[cell_weight_col] = cell_weights + + elif isinstance(setsystem, dict): + ## check if it is a dict of iterables or a nested dict. if the latter then pull + ## out the nested dicts as cell properties. 
+ ## cell properties must be of the same type as setsystem + + entity = pd.Series(setsystem).explode() + entity = pd.DataFrame( + {edge_col: entity.index.to_list(), node_col: entity.values} + ) + + if dict_depth(setsystem) > 2: + cell_props = dict(setsystem) + if isinstance(cell_properties, dict): + ## if setsystem is a dict then cell properties must be a dict + cell_properties = merge_nested_dicts( + cell_props, cell_properties + ) + else: + cell_properties = cell_props + + df = setsystem + cp = [] + wt = [] + for idx in entity.index: + edge, node = entity.values[idx][[0, 1]] + wt.append(df[edge][node].get(cell_weight_col, cell_weights)) + cp.append(df[edge][node]) + entity[self._cell_weight_col] = wt + entity["cell_properties"] = cp + + else: + entity[self._cell_weight_col] = cell_weights + + elif isinstance(setsystem, Iterable): + entity = pd.Series(setsystem).explode() + entity = pd.DataFrame( + {edge_col: entity.index.to_list(), node_col: entity.values} + ) + entity["cell_weights"] = cell_weights + + else: + raise HyperNetXError( + "setsystem is not supported or is in the wrong format." 
+ ) + + def props2dict(df=None): + if df is None: + return {} + elif isinstance(df, pd.DataFrame): + return df.set_index(df.columns[0]).to_dict(orient="index") + else: + return dict(df) + + if properties is None: + if edge_properties is not None or node_properties is not None: + if edge_properties is not None: + edge_properties = props2dict(edge_properties) + for e in entity[edge_col].unique(): + if not e in edge_properties: + edge_properties[e] = {} + for v in edge_properties.values(): + v.setdefault(edge_weight_prop_col, default_edge_weight) + else: + edge_properties = {} + if node_properties is not None: + node_properties = props2dict(node_properties) + for nd in entity[node_col].unique(): + if not nd in node_properties: + node_properties[nd] = {} + for v in node_properties.values(): + v.setdefault(node_weight_prop_col, default_node_weight) + else: + node_properties = {} + properties = {0: edge_properties, 1: node_properties} + else: + if isinstance(properties, pd.DataFrame): + if weight_prop_col in properties.columns: + properties = properties.fillna( + {weight_prop_col: default_weight} + ) + elif misc_properties_col in properties.columns: + for idx in properties.index: + if not isinstance( + properties[misc_properties_col][idx], dict + ): + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + else: + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + else: + properties[weight_prop_col] = default_weight + if isinstance(properties, dict): + if dict_depth(properties) <= 2: + properties = pd.DataFrame( + [ + {"id": k, misc_properties_col: v} + for k, v in properties.items() + ] + ) + for idx in properties.index: + if isinstance(properties[misc_properties_col][idx], dict): + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + else: + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + elif set(properties.keys()) == {0, 1}: + 
edge_properties = properties[0] + for e in entity[edge_col].unique(): + if not e in edge_properties: + edge_properties[e] = { + edge_weight_prop_col: default_edge_weight + } + else: + edge_properties[e].setdefault( + edge_weight_prop_col, default_edge_weight + ) + node_properties = properties[1] + for nd in entity[node_col].unique(): + if not nd in node_properties: + node_properties[nd] = { + node_weight_prop_col: default_node_weight + } + else: + node_properties[nd].setdefault( + node_weight_prop_col, default_node_weight + ) + for idx in properties.index: + if not isinstance( + properties[misc_properties_col][idx], dict + ): + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + else: + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + + self.E = EntitySet( + entity=entity, + level1=edge_col, + level2=node_col, + weight_col=cell_weight_col, + weights=cell_weights, + cell_properties=cell_properties, + misc_cell_props_col=misc_cell_properties_col or "cell_properties", + aggregateby=aggregateby or "sum", + properties=properties, + misc_props_col=misc_properties_col, + ) + + self._edges = self.E + self._nodes = self.E.restrict_to_levels([1]) + self._dataframe = self.E.cell_properties.reset_index() + self._data_cols = data_cols = [self._edge_col, self._node_col] + self._dataframe[data_cols] = self._dataframe[data_cols].astype("category") + + self.__dict__.update(locals()) + self._set_default_state() + + @property + def edges(self): + """ + Object associated with self._edges. + + Returns + ------- + EntitySet + """ + return self._edges + + @property + def nodes(self): + """ + Object associated with self._nodes. + + Returns + ------- + EntitySet + """ + return self._nodes + + @property + def dataframe(self): + """Returns dataframe of incidence pairs and their properties. 
+ + Returns + ------- + pd.DataFrame + """ + return self._dataframe + + @property + def properties(self): + """Returns dataframe of edge and node properties. + + Returns + ------- + pd.DataFrame + """ + return self.E.properties + + @property + def edge_props(self): + """Dataframe of edge properties + indexed on edge ids + + Returns + ------- + pd.DataFrame + """ + return self.E.properties.loc[0] + + @property + def node_props(self): + """Dataframe of node properties + indexed on node ids + + Returns + ------- + pd.DataFrame + """ + return self.E.properties.loc[1] + + @property + def incidence_dict(self): + """ + Dictionary keyed by edge uids with values the uids of nodes in each + edge + + Returns + ------- + dict + + """ + return self.E.incidence_dict + + @property + def shape(self): + """ + (number of nodes, number of edges) + + Returns + ------- + tuple + + """ + return len(self._nodes.elements), len(self._edges.elements) + + def __str__(self): + """ + String representation of hypergraph + + Returns + ------- + str + + """ + return f"{self.name}, <class 'hypernetx.classes.hypergraph.Hypergraph'>" + + def __repr__(self): + """ + String representation of hypergraph + + Returns + ------- + str + + """ + return f"{self.name}, hypernetx.classes.hypergraph.Hypergraph" + + def __len__(self): + """ + Number of nodes + + Returns + ------- + int + + """ + return len(self._nodes) + + def __iter__(self): + """ + Iterate over the nodes of the hypergraph + + Returns + ------- + dict_keyiterator + + """ + return iter(self.nodes) + + def __contains__(self, item): + """ + Returns boolean indicating if item is in self.nodes + + Parameters + ---------- + item : hashable or Entity + + """ + return item in self.nodes + + def __getitem__(self, node): + """ + Returns the neighbors of node + + Parameters + ---------- + node : Entity or hashable + If hashable, then must be uid of node in hypergraph + + Returns + ------- + neighbors(node) : iterator + + """ + return self.neighbors(node) 
+ +
[docs] def get_cell_properties( + self, edge: str, node: str, prop_name: Optional[str] = None + ) -> Any | dict[str, Any]: + """Get cell properties on a specified edge and node + + Parameters + ---------- + edge : str + edgeid + node : str + nodeid + prop_name : str, optional + name of a cell property; if None, all cell properties will be returned + + Returns + ------- + : int or str or dict of {str: any} + cell property value if `prop_name` is provided, otherwise ``dict`` of all + cell properties and values + """ + if prop_name is None: + return self.edges.get_cell_properties(edge, node) + + return self.edges.get_cell_property(edge, node, prop_name)
+ +
[docs]    def get_properties(self, id, level=None, prop_name=None):
+        """Returns an object's specific property or all properties
+
+        Parameters
+        ----------
+        id : hashable
+            edge or node id
+        level : int | None, optional, default = None
+            if separate edge and node properties then enter 0 for edges
+            and 1 for nodes.
+        prop_name : str | None, optional, default = None
+            if None then all properties associated with the object will be
+            returned.
+
+        Returns
+        -------
+        : str or dict
+            single property or dictionary of properties
+        """
+        if prop_name is None:
+            return self.E.get_properties(id, level=level)
+        else:
+            return self.E.get_property(id, prop_name, level=level)
+ +
[docs]    @warn_nwhy
+    def get_linegraph(self, s=1, edges=True):
+        """
+        Creates an :term:`s-linegraph` for the Hypergraph.
+        If edges=True (default) then the edges will be the vertices of the line
+        graph. Two vertices are connected by an s-line-graph edge if the
+        corresponding hypergraph edges intersect in at least s hypergraph nodes.
+        If edges=False, the hypergraph nodes will be the vertices of the line
+        graph. Two vertices are connected if the nodes they correspond to share
+        at least s incident hyperedges.
+
+        Parameters
+        ----------
+        s : int
+            The width of the connections.
+        edges : bool, optional, default = True
+            Determine if edges or nodes will be the vertices in the linegraph.
+
+        Returns
+        -------
+        nx.Graph
+            A NetworkX graph.
+        """
+        d = self._state_dict
+        key = "sedgelg" if edges else "snodelg"
+        if s in d[key]:
+            return d[key][s]
+
+        if edges:
+            A, Amap = self.edge_adjacency_matrix(s=s, index=True)
+            Amaplst = [(k, self.edge_props.loc[k].to_dict()) for k in Amap]
+        else:
+            A, Amap = self.adjacency_matrix(s=s, index=True)
+            Amaplst = [(k, self.node_props.loc[k].to_dict()) for k in Amap]
+
+        ### TODO: add key function to compute weights lambda x,y : funcval
+
+        A = np.array(np.nonzero(A))
+        e1 = np.array([Amap[idx] for idx in A[0]])
+        e2 = np.array([Amap[idx] for idx in A[1]])
+        A = np.array([e1, e2]).T
+        g = nx.Graph()
+        g.add_edges_from(A)
+        g.add_nodes_from(Amaplst)
+        d[key][s] = g
+        return g
+ +
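As a minimal sketch of the s-line-graph logic above, independent of HyperNetX (the `s_linegraph_edges` helper below is hypothetical, not part of the library's API), the line-graph edges are just the pairs of hyperedges whose node sets intersect in at least `s` nodes:

```python
from itertools import combinations

def s_linegraph_edges(setsystem, s=1):
    """Pairs of hyperedge ids whose node sets intersect in at least s nodes."""
    return sorted(
        (a, b)
        for a, b in combinations(sorted(setsystem), 2)
        if len(set(setsystem[a]) & set(setsystem[b])) >= s
    )

H = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {4, 5}}
s_linegraph_edges(H)        # [('A', 'B'), ('B', 'C')]
s_linegraph_edges(H, s=2)   # [('A', 'B')]  -- only A and B share two nodes
```

The method itself computes the same relation through the cached edge adjacency matrix, which amortizes repeated queries at different values of `s`.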
[docs] def set_state(self, **kwargs): + """ + Allow state_dict updates from outside of class. Use with caution. + + Parameters + ---------- + **kwargs + key=value pairs to save in state dictionary + """ + self._state_dict.update(kwargs)
+ + def _set_default_state(self): + """Populate state_dict with default values""" + self._state_dict = {} + + self._state_dict["dataframe"] = df = self.dataframe + self._state_dict["labels"] = { + "edges": np.array(df[self._edge_col].cat.categories), + "nodes": np.array(df[self._node_col].cat.categories), + } + self._state_dict["data"] = np.array( + [df[self._edge_col].cat.codes, df[self._node_col].cat.codes], dtype=int + ).T + self._state_dict["snodelg"] = dict() ### s: nx.graph + self._state_dict["sedgelg"] = dict() + self._state_dict["neighbors"] = defaultdict(dict) ### s: {node: neighbors} + self._state_dict["edge_neighbors"] = defaultdict( + dict + ) ### s: {edge: edge_neighbors} + self._state_dict["adjacency_matrix"] = dict() ### s: scipy.sparse.csr_matrix + self._state_dict["edge_adjacency_matrix"] = dict() + +
[docs] def edge_size_dist(self): + """ + Returns the size for each edge + + Returns + ------- + np.array + + """ + + if "edge_size_dist" not in self._state_dict: + dist = np.array(np.sum(self.incidence_matrix(), axis=0))[0].tolist() + self.set_state(edge_size_dist=dist) + return dist + else: + return self._state_dict["edge_size_dist"]
+ +
[docs]    def degree(self, node, s=1, max_size=None):
+        """
+        The number of edges of size at least s (and at most max_size, if
+        given) that contain node.
+
+        Parameters
+        ----------
+        node : hashable
+            identifier for the node.
+        s : positive integer, optional, default 1
+            smallest size of edge to consider in degree
+        max_size : positive integer or None, optional, default = None
+            largest size of edge to consider in degree
+
+        Returns
+        -------
+        : int
+
+        """
+        if s == 1 and max_size is None:
+            return len(self.E.memberships[node])
+        else:
+            memberships = set()
+            for edge in self.E.memberships[node]:
+                size = len(self.edges[edge])
+                if size >= s and (max_size is None or size <= max_size):
+                    memberships.add(edge)
+
+            return len(memberships)
+ +
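The size-filtered degree above amounts to the following sketch over a plain dict-of-sets set system (the `degree` helper is hypothetical, shown only to illustrate the filter):

```python
def degree(setsystem, node, s=1, max_size=None):
    """Number of edges containing node whose size lies in [s, max_size]."""
    return sum(
        1
        for members in setsystem.values()
        if node in members
        and len(members) >= s
        and (max_size is None or len(members) <= max_size)
    )

H = {"A": {1, 2, 3}, "B": {2, 3}, "C": {3}}
degree(H, 3)                    # 3: node 3 sits in all three edges
degree(H, 3, s=2)               # 2: only edges of size >= 2 count
degree(H, 3, s=2, max_size=2)   # 1: only edge B has size in [2, 2]
```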
[docs]    def size(self, edge, nodeset=None):
+        """
+        The number of nodes in nodeset that belong to edge.
+        If nodeset is None then returns the size of edge.
+
+        Parameters
+        ----------
+        edge : hashable
+            The uid of an edge in the hypergraph
+        nodeset : iterable, optional, default = None
+            An iterable of node uids to intersect with edge
+
+        Returns
+        -------
+        size : int
+
+        """
+        if nodeset is not None:
+            return len(set(nodeset).intersection(set(self.edges[edge])))
+
+        return len(self.edges[edge])
+ +
[docs]    def number_of_nodes(self, nodeset=None):
+        """
+        The number of nodes in nodeset belonging to hypergraph.
+
+        Parameters
+        ----------
+        nodeset : an iterable of Entities, optional, default = None
+            If None, then return the number of nodes in hypergraph.
+
+        Returns
+        -------
+        number_of_nodes : int
+
+        """
+        if nodeset is not None:
+            return len([n for n in self.nodes if n in nodeset])
+
+        return len(self.nodes)
+ +
[docs] def number_of_edges(self, edgeset=None): + """ + The number of edges in edgeset belonging to hypergraph. + + Parameters + ---------- + edgeset : an iterable of Entities, optional, default = None + If None, then return the number of edges in hypergraph. + + Returns + ------- + number_of_edges : int + """ + if edgeset: + return len([e for e in self.edges if e in edgeset]) + + return len(self.edges)
+ +
[docs] def order(self): + """ + The number of nodes in hypergraph. + + Returns + ------- + order : int + """ + return len(self.nodes)
+ +
[docs] def dim(self, edge): + """ + Same as size(edge)-1. + """ + return self.size(edge) - 1
+ +
[docs] def neighbors(self, node, s=1): + """ + The nodes in hypergraph which share s edge(s) with node. + + Parameters + ---------- + node : hashable or Entity + uid for a node in hypergraph or the node Entity + + s : int, list, optional, default = 1 + Minimum number of edges shared by neighbors with node. + + Returns + ------- + neighbors : list + s-neighbors share at least s edges in the hypergraph + + """ + if node not in self.nodes: + print(f"{node} is not in hypergraph {self.name}.") + return None + if node in self._state_dict["neighbors"][s]: + return self._state_dict["neighbors"][s][node] + else: + M = self.incidence_matrix() + rdx = self._state_dict["labels"]["nodes"] + jdx = np.where(rdx == node) + idx = (M[jdx].dot(M.T) >= s) * 1 + idx = np.nonzero(idx)[1] + neighbors = list(rdx[idx]) + if len(neighbors) > 0: + neighbors.remove(node) + self._state_dict["neighbors"][s][node] = neighbors + else: + self._state_dict["neighbors"][s][node] = [] + return neighbors
+ +
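The s-neighbor computation above (incidence matrix times its transpose, thresholded at `s`) can be sketched directly over an incidence dict; the `s_neighbors` helper is hypothetical, not the library API:

```python
def s_neighbors(setsystem, node, s=1):
    """Nodes sharing at least s hyperedges with `node`."""
    shared = {}
    for members in setsystem.values():
        if node in members:
            for v in members:
                if v != node:
                    shared[v] = shared.get(v, 0) + 1
    return sorted(v for v, count in shared.items() if count >= s)

H = {"A": {1, 2, 3}, "B": {2, 3, 4}}
s_neighbors(H, 2)        # [1, 3, 4]
s_neighbors(H, 2, s=2)   # [3]  -- only node 3 shares two edges with node 2
```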
[docs]    def edge_neighbors(self, edge, s=1):
+        """
+        The edges in hypergraph which share s node(s) with edge.
+
+        Parameters
+        ----------
+        edge : hashable or Entity
+            uid for an edge in hypergraph or the edge Entity
+
+        s : int, optional, default = 1
+            Minimum number of nodes shared by neighbors with edge.
+
+        Returns
+        -------
+        : list
+            List of edge neighbors
+
+        """
+
+        if edge not in self.edges:
+            print(f"Edge is not in hypergraph {self.name}.")
+            return None
+        if edge in self._state_dict["edge_neighbors"][s]:
+            return self._state_dict["edge_neighbors"][s][edge]
+        else:
+            M = self.incidence_matrix()
+            cdx = self._state_dict["labels"]["edges"]
+            jdx = np.where(cdx == edge)
+            idx = (M.T[jdx].dot(M) >= s) * 1
+            idx = np.nonzero(idx)[1]
+            edge_neighbors = list(cdx[idx])
+            if len(edge_neighbors) > 0:
+                edge_neighbors.remove(edge)
+                self._state_dict["edge_neighbors"][s][edge] = edge_neighbors
+            else:
+                self._state_dict["edge_neighbors"][s][edge] = []
+            return edge_neighbors
+ +
[docs] def incidence_matrix(self, weights=False, index=False): + """ + An incidence matrix for the hypergraph indexed by nodes x edges. + + Parameters + ---------- + weights : bool, default =False + If False all nonzero entries are 1. + If True and self.static all nonzero entries are filled by + self.edges.cell_weights dictionary values. + + index : boolean, optional, default = False + If True return will include a dictionary of node uid : row number + and edge uid : column number + + Returns + ------- + incidence_matrix : scipy.sparse.csr.csr_matrix or np.ndarray + + row_index : list + index of node ids for rows + + col_index : list + index of edge ids for columns + + """ + sdkey = "incidence_matrix" + if weights: + sdkey = "weighted_" + sdkey + + if sdkey in self._state_dict: + M = self._state_dict[sdkey] + else: + df = self.dataframe + data_cols = [self._node_col, self._edge_col] + if weights == True: + data = df[self._cell_weight_col].values + M = csr_matrix( + (data, tuple(np.array(df[col].cat.codes) for col in data_cols)) + ) + else: + M = csr_matrix( + ( + [1] * len(df), + tuple(np.array(df[col].cat.codes) for col in data_cols), + ) + ) + self._state_dict[sdkey] = M + + if index == True: + rdx = self.dataframe[self._node_col].cat.categories + cdx = self.dataframe[self._edge_col].cat.categories + + return M, rdx, cdx + else: + return M
+ +
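The node-by-edge incidence structure that `incidence_matrix` returns as a sparse matrix can be sketched with nested lists (a dense stand-in for the library's `scipy.sparse` result; the helper name is hypothetical):

```python
def incidence_matrix(setsystem):
    """Dense 0/1 node-by-edge incidence matrix (nested lists) plus labels."""
    nodes = sorted({v for members in setsystem.values() for v in members})
    edges = sorted(setsystem)
    M = [[1 if v in setsystem[e] else 0 for e in edges] for v in nodes]
    return M, nodes, edges

M, rows, cols = incidence_matrix({"A": {1, 2}, "B": {2, 3}})
# rows = [1, 2, 3], cols = ['A', 'B'], M = [[1, 0], [1, 1], [0, 1]]
```

The library builds the same matrix from the categorical codes of the edge and node columns, which is why the `index=True` return values are the categorical label arrays.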
[docs] def adjacency_matrix(self, s=1, index=False, remove_empty_rows=False): + """ + The :term:`s-adjacency matrix` for the hypergraph. + + Parameters + ---------- + s : int, optional, default = 1 + + index: boolean, optional, default = False + if True, will return the index of ids for rows and columns + + remove_empty_rows: boolean, optional, default = False + + Returns + ------- + adjacency_matrix : scipy.sparse.csr.csr_matrix + + node_index : list + index of ids for rows and columns + + """ + try: + A = self._state_dict["adjacency_matrix"][s] + except: + M = self.incidence_matrix() + A = M @ (M.T) + A.setdiag(0) + A = (A >= s) * 1 + self._state_dict["adjacency_matrix"][s] = A + if index == True: + return A, self._state_dict["labels"]["nodes"] + else: + return A
+ +
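The s-adjacency computation above is `M @ M.T` with the diagonal zeroed and entries thresholded at `s`. A pure-Python sketch of the same relation (hypothetical helper, not the library API):

```python
def s_adjacency(setsystem, s=1):
    """0/1 node-by-node matrix: 1 iff two distinct nodes share >= s edges."""
    nodes = sorted({v for members in setsystem.values() for v in members})

    def shared(u, v):
        return sum(1 for m in setsystem.values() if u in m and v in m)

    return [
        [1 if u != v and shared(u, v) >= s else 0 for v in nodes]
        for u in nodes
    ]

H = {"A": {1, 2, 3}, "B": {2, 3, 4}}
s_adjacency(H, s=2)   # only nodes 2 and 3 co-occur in two edges:
# [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
```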
[docs]    def edge_adjacency_matrix(self, s=1, index=False):
+        """
+        The :term:`s-adjacency matrix` for the dual hypergraph.
+
+        Parameters
+        ----------
+        s : int, optional, default 1
+
+        index: boolean, optional, default = False
+            if True, will return the index of ids for rows and columns
+
+        Returns
+        -------
+        edge_adjacency_matrix : scipy.sparse.csr.csr_matrix
+
+        edge_index : list
+            index of ids for rows and columns
+
+        Notes
+        -----
+        This is also the adjacency matrix for the line graph.
+        Two edges are s-adjacent if they share at least s nodes.
+
+        """
+        try:
+            A = self._state_dict["edge_adjacency_matrix"][s]
+        except KeyError:
+            M = self.incidence_matrix()
+            A = (M.T) @ (M)
+            A.setdiag(0)
+            A = (A >= s) * 1
+            self._state_dict["edge_adjacency_matrix"][s] = A
+        if index == True:
+            return A, self._state_dict["labels"]["edges"]
+        else:
+            return A
+ +
[docs] def auxiliary_matrix(self, s=1, node=True, index=False): + """ + The unweighted :term:`s-edge or node auxiliary matrix` for hypergraph + + Parameters + ---------- + s : int, optional, default = 1 + node : bool, optional, default = True + whether to return based on node or edge adjacencies + + Returns + ------- + auxiliary_matrix : scipy.sparse.csr.csr_matrix + Node/Edge adjacency matrix with empty rows and columns + removed + index : np.array + row and column index of userids + + """ + if node == True: + A, Amap = self.adjacency_matrix(s, index=True) + else: + A, Amap = self.edge_adjacency_matrix(s, index=True) + + idx = np.nonzero(np.sum(A, axis=1))[0] + if len(idx) < A.shape[0]: + B = A[idx][:, idx] + else: + B = A + if index: + return B, Amap[idx] + else: + return B
+ +
[docs] def bipartite(self): + """ + Constructs the networkX bipartite graph associated to hypergraph. + + Returns + ------- + bipartite : nx.Graph() + + Notes + ----- + Creates a bipartite networkx graph from hypergraph. + The nodes and (hyper)edges of hypergraph become the nodes of bipartite + graph. For every (hyper)edge e in the hypergraph and node n in e there + is an edge (n,e) in the graph. + + """ + B = nx.Graph() + nodes = self._state_dict["labels"]["nodes"] + edges = self._state_dict["labels"]["edges"] + B.add_nodes_from(self.edges, bipartite=0) + B.add_nodes_from(self.nodes, bipartite=1) + B.add_edges_from([(v, e) for e in self.edges for v in self.edges[e]]) + return B
+ +
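The bipartite graph above contains one (node, edge) pair per incidence. A minimal sketch of the edge list it adds (hypothetical helper, not the library API):

```python
def bipartite_edges(setsystem):
    """(node, edge) pairs of the bipartite incidence graph."""
    return sorted((v, e) for e, members in setsystem.items() for v in members)

bipartite_edges({"A": {1, 2}, "B": {2}})   # [(1, 'A'), (2, 'A'), (2, 'B')]
```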
[docs] def dual(self, name=None, switch_names=True): + """ + Constructs a new hypergraph with roles of edges and nodes of hypergraph + reversed. + + Parameters + ---------- + name : hashable, optional + + switch_names : bool, optional, default = True + reverses edge_col and node_col names + unless edge_col = 'edges' and node_col = 'nodes' + + Returns + ------- + : hypergraph + + """ + dfp = deepcopy(self.edges.properties) + dfp = dfp.reset_index() + dfp.level = dfp.level.apply(lambda x: 1 * (x == 0)) + dfp = dfp.set_index(["level", "id"]) + + edge, node, wt = self._edge_col, self._node_col, self._cell_weight_col + df = deepcopy(self.dataframe) + cprops = [col for col in df.columns if not col in [edge, node, wt]] + + df[[edge, node]] = df[[node, edge]] + if switch_names == True and not ( + self._edge_col == "edges" and self._node_col == "nodes" + ): + # if switch_names == False or (self._edge_col == 'edges' and self._node_col == 'nodes'): + df = df.rename(columns={edge: self._node_col, node: self._edge_col}) + node = self._edge_col + edge = self._node_col + + return Hypergraph( + df, + edge_col=edge, + node_col=node, + cell_weight_col=wt, + cell_properties=cprops, + properties=dfp, + name=name, + )
+ +
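At the level of incidence dicts, dualizing just swaps the roles of edges and nodes; a sketch of that transposition (hypothetical helper, the method itself also carries over properties and weights):

```python
def dual(setsystem):
    """Incidence dict of the dual hypergraph: swap node and edge roles."""
    out = {}
    for e, members in setsystem.items():
        for v in members:
            out.setdefault(v, set()).add(e)
    return out

dual({"A": {1, 2}, "B": {2, 3}})   # {1: {'A'}, 2: {'A', 'B'}, 3: {'B'}}
```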
[docs]    def collapse_edges(
+        self,
+        name=None,
+        return_equivalence_classes=False,
+        use_reps=None,
+        return_counts=None,
+    ):
+        """
+        Constructs a new hypergraph gotten by identifying edges containing the
+        same nodes
+
+        Parameters
+        ----------
+        name : hashable, optional, default = None
+
+        return_equivalence_classes: boolean, optional, default = False
+            Returns a dictionary of edge equivalence classes keyed by frozen
+            sets of nodes
+
+        Returns
+        -------
+        new hypergraph : Hypergraph
+            Equivalent edges are collapsed to a single edge named by a
+            representative of the equivalent edges followed by a colon and the
+            number of edges it represents.
+
+        equivalence_classes : dict
+            A dictionary keyed by representative edge names with values equal
+            to the edges in its equivalence class
+
+        Notes
+        -----
+        Two edges are identified if their respective elements are the same.
+        Using this as an equivalence relation, the uids of the edges are
+        partitioned into equivalence classes.
+
+        A single edge from the collapsed edges followed by a colon and the
+        number of elements in its equivalence class is used as the uid for the
+        new edge.
+
+        """
+        if use_reps is not None or return_counts is not None:
+            msg = """
+            use_reps and return_counts are no longer supported keyword
+            arguments and will throw an error in the next release.
+            collapsed hypergraph automatically names collapsed objects by a
+            string "rep:count"
+            """
+            warnings.warn(msg, DeprecationWarning)
+
+        temp = self.edges.collapse_identical_elements(
+            return_equivalence_classes=return_equivalence_classes
+        )
+
+        if return_equivalence_classes:
+            return Hypergraph(temp[0].incidence_dict, name), temp[1]
+
+        return Hypergraph(temp.incidence_dict, name)
+ +
[docs] def collapse_nodes( + self, + name=None, + return_equivalence_classes=False, + use_reps=None, + return_counts=None, + ): + """ + Constructs a new hypergraph gotten by identifying nodes contained by + the same edges + + Parameters + ---------- + name: str, optional, default = None + + return_equivalence_classes: boolean, optional, default = False + Returns a dictionary of node equivalence classes keyed by frozen + sets of edges + + use_reps : boolean, optional, default = False - Deprecated, this no + longer works and will be removed. Choose a single element from the + collapsed nodes as uid for the new node, otherwise uses a frozen + set of the uids of nodes in the equivalence class + + return_counts: boolean, - Deprecated, this no longer works and will be + removed if use_reps is True the new nodes have uids given by a + tuple of the rep and the count + + Returns + ------- + new hypergraph : Hypergraph + + Notes + ----- + Two nodes are identified if their respective memberships are the same. + Using this as an equivalence relation, the uids of the nodes are + partitioned into equivalence classes. A single member of the + equivalence class is chosen to represent the class followed by the + number of members of the class. + + Example + ------- + + >>> h = Hypergraph(EntitySet('example',elements=[Entity('E1', / + ['a','b']),Entity('E2',['a','b'])])) + >>> h.incidence_dict + {'E1': {'a', 'b'}, 'E2': {'a', 'b'}} + >>> h.collapse_nodes().incidence_dict + {'E1': {frozenset({'a', 'b'})}, 'E2': {frozenset({'a', 'b'})}} + ### Fix this + >>> h.collapse_nodes(use_reps=True).incidence_dict + {'E1': {('a', 2)}, 'E2': {('a', 2)}} + + """ + if use_reps is not None or return_counts is not None: + msg = """ + use_reps and return_counts are no longer supported keyword arguments and will throw + an error in the next release. 
+ collapsed hypergraph automatically names collapsed objects by a string "rep:count" + """ + warnings.warn(msg, DeprecationWarning) + + temp = self.dual().edges.collapse_identical_elements( + return_equivalence_classes=return_equivalence_classes + ) + + if return_equivalence_classes: + return Hypergraph(temp[0].incidence_dict).dual(), temp[1] + + return Hypergraph(temp.incidence_dict, name).dual()
+ +
[docs] def collapse_nodes_and_edges( + self, + name=None, + return_equivalence_classes=False, + use_reps=None, + return_counts=None, + ): + """ + Returns a new hypergraph by collapsing nodes and edges. + + Parameters + ---------- + + name: str, optional, default = None + + use_reps: boolean, optional, default = False + Choose a single element from the collapsed elements as a + representative + + return_counts: boolean, optional, default = True + if use_reps is True the new elements are keyed by a tuple of the + rep and the count + + return_equivalence_classes: boolean, optional, default = False + Returns a dictionary of edge equivalence classes keyed by frozen + sets of nodes + + Returns + ------- + new hypergraph : Hypergraph + + Notes + ----- + Collapses the Nodes and Edges EntitySets. Two nodes(edges) are + duplicates if their respective memberships(elements) are the same. + Using this as an equivalence relation, the uids of the nodes(edges) + are partitioned into equivalence classes. A single member of the + equivalence class is chosen to represent the class followed by the + number of members of the class. + + Example + ------- + + >>> h = Hypergraph(EntitySet('example',elements=[Entity('E1', / + ['a','b']),Entity('E2',['a','b'])])) + >>> h.incidence_dict + {'E1': {'a', 'b'}, 'E2': {'a', 'b'}} + >>> h.collapse_nodes_and_edges().incidence_dict ### Fix this + {('E1', 2): {('a', 2)}} + + """ + if use_reps is not None or return_counts is not None: + msg = """ + use_reps and return_counts are no longer supported keyword + arguments and will throw an error in the next release. 
+ collapsed hypergraph automatically names collapsed objects by a + string "rep:count" + """ + warnings.warn(msg, DeprecationWarning) + + if return_equivalence_classes: + temp, neq = self.collapse_nodes( + name="temp", return_equivalence_classes=True + ) + ntemp, eeq = temp.collapse_edges(name=name, return_equivalence_classes=True) + return ntemp, neq, eeq + + temp = self.collapse_nodes(name="temp") + return temp.collapse_edges(name=name)
+ +
[docs]    def restrict_to_nodes(self, nodes, name=None):
+        """New hypergraph gotten by restricting to nodes
+
+        Parameters
+        ----------
+        nodes : Iterable
+            nodeids to restrict to
+        name : str, optional
+            Name of new hypergraph
+
+        Returns
+        -------
+        : hnx.Hypergraph
+
+        """
+        keys = set(self._state_dict["labels"]["nodes"]).difference(nodes)
+        return self.remove(keys, level=1, name=name)
+ +
[docs]    def restrict_to_edges(self, edges, name=None):
+        """New hypergraph gotten by restricting to edges
+
+        Parameters
+        ----------
+        edges : Iterable
+            edgeids to restrict to
+        name : str, optional
+            Name of new hypergraph
+
+        Returns
+        -------
+        : hnx.Hypergraph
+
+        """
+        keys = set(self._state_dict["labels"]["edges"]).difference(edges)
+        return self.remove(keys, level=0, name=name)
+ +
[docs] def remove_edges(self, keys, name=None): + return self.remove(keys, level=0, name=name)
+ +
[docs] def remove_nodes(self, keys, name=None): + return self.remove(keys, level=1, name=name)
+ +
[docs] def remove(self, keys, level=None, name=None): + """Creates a new hypergraph with nodes and/or edges indexed by keys + removed. More efficient for creating a restricted hypergraph if the + restricted set is greater than what is being removed. + + Parameters + ---------- + keys : list | tuple | set | Hashable + node and/or edge id(s) to restrict to + level : None, optional + Enter 0 to remove edges with ids in keys. + Enter 1 to remove nodes with ids in keys. + If None then all objects in nodes and edges with the id will + be removed. + name : str, optional + Name of new hypergraph + + Returns + ------- + : hnx.Hypergraph + + """ + rdfprop = self.properties.copy() + rdf = self.dataframe.copy() + if isinstance(keys, (list, tuple, set)): + nkeys = keys + elif isinstance(keys, Hashable): + nkeys = list() + nkeys.append(keys) + else: + raise TypeError("`keys` parameter must be list | tuple | set | Hashable") + if level == 0: + kdx = set(nkeys).intersection(set(self._state_dict["labels"]["edges"])) + for k in kdx: + rdfprop = rdfprop.drop((0, k)) + rdf = rdf.loc[~(rdf[self._edge_col].isin(kdx))] + elif level == 1: + kdx = set(nkeys).intersection(set(self._state_dict["labels"]["nodes"])) + for k in kdx: + rdfprop = rdfprop.drop((1, k)) + rdf = rdf.loc[~(rdf[self._node_col].isin(kdx))] + else: + rdfprop = rdfprop.reset_index() + kdx = set(nkeys).intersection(rdfprop.id.unique()) + rdfprop = rdfprop.set_index("id") + rdfprop = rdfprop.drop(index=kdx) + rdf = rdf.loc[~(rdf[self._edge_col].isin(kdx))] + rdf = rdf.loc[~(rdf[self._node_col].isin(kdx))] + + return Hypergraph( + setsystem=rdf, + edge_col=self._edge_col, + node_col=self._node_col, + cell_weight_col=self._cell_weight_col, + misc_cell_properties_col=self.edges._misc_cell_props_col, + properties=rdfprop, + misc_properties_col=self.edges._misc_props_col, + name=name, + )
+ +
[docs] def toplexes(self, name=None): + """ + Returns a :term:`simple hypergraph` corresponding to self. + + Warning + ------- + Collapsing is no longer supported inside the toplexes method. Instead + generate a new collapsed hypergraph and compute the toplexes of the + new hypergraph. + + Parameters + ---------- + name: str, optional, default = None + """ + + thdict = {} + for e in self.edges: + thdict[e] = self.edges[e] + + tops = [] + for e in self.edges: + flag = True + old_tops = list(tops) + for top in old_tops: + if set(thdict[e]).issubset(thdict[top]): + flag = False + break + + if set(thdict[top]).issubset(thdict[e]): + tops.remove(top) + if flag: + tops += [e] + return self.restrict_to_edges(tops, name=name)
+ +
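The subset test driving `toplexes` can be sketched over an incidence dict: an edge is a toplex when its node set is not strictly contained in any other edge's. (Hypothetical helper; unlike the method, this sketch keeps all copies of duplicated edges rather than collapsing them to one.)

```python
def toplexes(setsystem):
    """Edge ids whose node sets are not strictly contained in another edge."""
    return [
        e
        for e in setsystem
        if not any(
            set(setsystem[e]) < set(setsystem[t]) for t in setsystem if t != e
        )
    ]

toplexes({"A": {1, 2, 3}, "B": {2, 3}, "C": {4}})   # ['A', 'C']
```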
[docs] def is_connected(self, s=1, edges=False): + """ + Determines if hypergraph is :term:`s-connected <s-connected, + s-node-connected>`. + + Parameters + ---------- + s: int, optional, default 1 + + edges: boolean, optional, default = False + If True, will determine if s-edge-connected. + For s=1 s-edge-connected is the same as s-connected. + + Returns + ------- + is_connected : boolean + + Notes + ----- + + A hypergraph is s node connected if for any two nodes v0,vn + there exists a sequence of nodes v0,v1,v2,...,v(n-1),vn + such that every consecutive pair of nodes v(i),v(i+1) + share at least s edges. + + A hypergraph is s edge connected if for any two edges e0,en + there exists a sequence of edges e0,e1,e2,...,e(n-1),en + such that every consecutive pair of edges e(i),e(i+1) + share at least s nodes. + + """ + + g = self.get_linegraph(s=s, edges=edges) + is_connected = None + + try: + is_connected = nx.is_connected(g) + except nx.NetworkXPointlessConcept: + warnings.warn("Graph is null; ") + is_connected = False + + return is_connected
+ +
[docs] def singletons(self): + """ + Returns a list of singleton edges. A singleton edge is an edge of + size 1 with a node of degree 1. + + Returns + ------- + singles : list + A list of edge uids. + """ + + M, _, cdict = self.incidence_matrix(index=True) + # which axis has fewest members? if 1 then columns + idx = np.argmax(M.shape).tolist() + # we add down the row index if there are fewer columns + cols = M.sum(idx) + singles = [] + # index along opposite axis with one entry each + for c in np.nonzero((cols - 1 == 0))[(idx + 1) % 2]: + # if the singleton entry in that column is also + # singleton in its row find the entry + if idx == 0: + r = np.argmax(M.getcol(c)) + # and get its sum + s = np.sum(M.getrow(r)) + # if this is also 1 then the entry in r,c represents a + # singleton so we want to change that entry to 0 and + # remove the row. this means we want to remove the + # edge corresponding to c + if s == 1: + singles.append(cdict[c]) + else: # switch the role of r and c + r = np.argmax(M.getrow(c)) + s = np.sum(M.getcol(r)) + if s == 1: + singles.append(cdict[r]) + return singles
+ +
[docs] def remove_singletons(self, name=None): + """ + Constructs clone of hypergraph with singleton edges removed. + + Returns + ------- + new hypergraph : Hypergraph + + """ + singletons = self.singletons() + if len(singletons) > len(self.edges): + E = [e for e in self.edges if e not in singletons] + return self.restrict_to_edges(E, name=name) + else: + return self.remove(singletons, level=0, name=name)
+ +
[docs] def s_connected_components(self, s=1, edges=True, return_singletons=False): + """ + Returns a generator for the :term:`s-edge-connected components + <s-edge-connected component>` + or the :term:`s-node-connected components <s-connected component, + s-node-connected component>` of the hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + edges : boolean, optional, default = True + If True will return edge components, if False will return node + components + return_singletons : bool, optional, default = False + + Notes + ----- + If edges=True, this method returns the s-edge-connected components as + lists of lists of edge uids. + An s-edge-component has the property that for any two edges e1 and e2 + there is a sequence of edges starting with e1 and ending with e2 + such that pairwise adjacent edges in the sequence intersect in at least + s nodes. If s=1 these are the path components of the hypergraph. + + If edges=False this method returns s-node-connected components. + A list of sets of uids of the nodes which are s-walk connected. + Two nodes v1 and v2 are s-walk-connected if there is a + sequence of nodes starting with v1 and ending with v2 such that + pairwise adjacent nodes in the sequence share s edges. If s=1 these + are the path components of the hypergraph. + + Example + ------- + >>> S = {'A':{1,2,3},'B':{2,3,4},'C':{5,6},'D':{6}} + >>> H = Hypergraph(S) + + >>> list(H.s_components(edges=True)) + [{'C', 'D'}, {'A', 'B'}] + >>> list(H.s_components(edges=False)) + [{1, 2, 3, 4}, {5, 6}] + + Yields + ------ + s_connected_components : iterator + Iterator returns sets of uids of the edges (or nodes) in the + s-edge(node) components of hypergraph. + + """ + g = self.get_linegraph(s, edges=edges) + for c in nx.connected_components(g): + if not return_singletons and len(c) == 1: + continue + yield c
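The s-edge-component partition described above can also be computed directly with a union-find over edge uids, merging any two edges that intersect in at least `s` nodes. A stdlib-only sketch (the real method goes through the s-linegraph and `nx.connected_components`); `s_edge_components` is an illustrative name:

```python
from itertools import combinations

def s_edge_components(edges, s=1):
    """Partition edge uids into s-edge-connected components ({uid: set_of_nodes})."""
    parent = {e: e for e in edges}

    def find(x):
        # find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # union edges that share at least s nodes
    for e1, e2 in combinations(edges, 2):
        if len(edges[e1] & edges[e2]) >= s:
            parent[find(e1)] = find(e2)

    comps = {}
    for e in edges:
        comps.setdefault(find(e), set()).add(e)
    return list(comps.values())

S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}
print(s_edge_components(S, s=1))  # two components: {'A', 'B'} and {'C', 'D'}
print(s_edge_components(S, s=2))  # {'A', 'B'} survives; 'C' and 'D' split apart
```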
+ +
[docs] def s_component_subgraphs( + self, s=1, edges=True, return_singletons=False, name=None + ): + """ + + Returns a generator for the induced subgraphs of s_connected + components. Removes singletons unless return_singletons is set to True. + Computed using s-linegraph generated either by the hypergraph + (edges=True) or its dual (edges = False) + + Parameters + ---------- + s : int, optional, default 1 + + edges : boolean, optional, default = True + Determines if edge or node components are desired. Returns + subgraphs equal to the hypergraph restricted to each set of + nodes(edges) in the s-connected components or s-edge-connected + components + return_singletons : bool, optional + + Yields + ------ + s_component_subgraphs : iterator + Iterator returns subgraphs generated by the edges (or nodes) in the + s-edge(node) components of hypergraph. + + """ + for idx, c in enumerate( + self.s_components(s=s, edges=edges, return_singletons=return_singletons) + ): + if edges: + yield self.restrict_to_edges(c, name=f"{name or self.name}:{idx}") + else: + yield self.restrict_to_nodes(c, name=f"{name or self.name}:{idx}")
+ +
[docs] def s_components(self, s=1, edges=True, return_singletons=True): + """ + Same as s_connected_components + + See Also + -------- + s_connected_components + """ + return self.s_connected_components( + s=s, edges=edges, return_singletons=return_singletons + )
+ +
[docs] def connected_components(self, edges=False): + """ + Same as :meth:`s_connected_components` with s=1, but nodes are returned + by default. Return iterator. + + See Also + -------- + s_connected_components + """ + return self.s_connected_components(edges=edges, return_singletons=True)
+ +
[docs] def connected_component_subgraphs(self, return_singletons=True, name=None): + """ + Same as :meth:`s_component_subgraphs` with s=1. Returns iterator + + See Also + -------- + s_component_subgraphs + """ + return self.s_component_subgraphs( + return_singletons=return_singletons, name=name + )
+ +
[docs] def components(self, edges=False): + """ + Same as :meth:`s_connected_components` with s=1, but nodes are returned + by default. Return iterator. + + See Also + -------- + s_connected_components + """ + return self.s_connected_components(s=1, edges=edges)
+ +
[docs] def component_subgraphs(self, return_singletons=False, name=None): + """ + Same as :meth:`s_component_subgraphs` with s=1. Returns an iterator. + + See Also + -------- + s_component_subgraphs + """ + return self.s_component_subgraphs( + return_singletons=return_singletons, name=name + )
+ +
[docs] def node_diameters(self, s=1): + """ + Returns the node diameters of the connected components in hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + maximum diameter : int + + list of diameters : list + List of the diameters of the s-components + + list of components : list + List of the s-component nodes + """ + A, coldict = self.adjacency_matrix(s=s, index=True) + G = nx.from_scipy_sparse_matrix(A) + diams = [] + comps = [] + for c in nx.connected_components(G): + diamc = nx.diameter(G.subgraph(c)) + temp = set() + for e in c: + temp.add(coldict[e]) + comps.append(temp) + diams.append(diamc) + loc = np.argmax(diams).tolist() + return diams[loc], diams, comps
+ +
[docs] def edge_diameters(self, s=1): + """ + Returns the edge diameters of the s_edge_connected component subgraphs + in hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + maximum diameter : int + + list of diameters : list + List of edge_diameters for s-edge component subgraphs in hypergraph + + list of component : list + List of the edge uids in the s-edge component subgraphs. + + """ + A, coldict = self.edge_adjacency_matrix(s=s, index=True) + G = nx.from_scipy_sparse_matrix(A) + diams = [] + comps = [] + for c in nx.connected_components(G): + diamc = nx.diameter(G.subgraph(c)) + temp = set() + for e in c: + temp.add(coldict[e]) + comps.append(temp) + diams.append(diamc) + loc = np.argmax(diams).tolist() + return diams[loc], diams, comps
+ +
[docs] def diameter(self, s=1): + """ + Returns the length of the longest shortest s-walk between nodes in + hypergraph + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + diameter : int + + Raises + ------ + HyperNetXError + If hypergraph is not s-connected + + Notes + ----- + Two nodes are s-adjacent if they share s edges. + Two nodes v_start and v_end are s-walk connected if there is a + sequence of nodes v_start, v_1, v_2, ... v_n-1, v_end such that + consecutive nodes are s-adjacent. If the graph is not connected, + an error will be raised. + + """ + A = self.adjacency_matrix(s=s) + G = nx.from_scipy_sparse_matrix(A) + if nx.is_connected(G): + return nx.diameter(G) + + raise HyperNetXError(f"Hypergraph is not s-connected. s={s}")
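Once the s-adjacency graph is in hand, the diameter is just the longest shortest path, computable by BFS from every node. A stdlib-only sketch of that step (the method itself delegates to `nx.diameter`); `graph_diameter` and the adjacency-dict encoding are assumptions for the example:

```python
from collections import deque

def graph_diameter(adj):
    """Longest shortest path in a connected graph given as {node: set_of_neighbors}."""
    def bfs_depths(src):
        depth = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in depth:
                    depth[w] = depth[u] + 1
                    q.append(w)
        return depth

    diam = 0
    for v in adj:
        depth = bfs_depths(v)
        if len(depth) < len(adj):
            raise ValueError("graph is not connected")
        diam = max(diam, max(depth.values()))
    return diam

# the path 1-2-3-4 has diameter 3; a triangle has diameter 1
print(graph_diameter({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # 3
```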
+ +
[docs] def edge_diameter(self, s=1): + """ + Returns the length of the longest shortest s-walk between edges in + hypergraph + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + edge_diameter : int + + Raises + ------ + HyperNetXError + If hypergraph is not s-edge-connected + + Notes + ----- + Two edges are s-adjacent if they share s nodes. + Two edges e_start and e_end are s-walk connected if there is a + sequence of edges e_start, e_1, e_2, ... e_n-1, e_end such that + consecutive edges are s-adjacent. If the graph is not connected, an + error will be raised. + + """ + A = self.edge_adjacency_matrix(s=s) + G = nx.from_scipy_sparse_matrix(A) + if nx.is_connected(G): + return nx.diameter(G) + + raise HyperNetXError(f"Hypergraph is not s-edge-connected. s={s}")
+ +
[docs] def distance(self, source, target, s=1): + """ + Returns the shortest s-walk distance between two nodes in the + hypergraph. + + Parameters + ---------- + source : node.uid or node + a node in the hypergraph + + target : node.uid or node + a node in the hypergraph + + s : positive integer + the number of edges + + Returns + ------- + s-walk distance : int + + See Also + -------- + edge_distance + + Notes + ----- + The s-distance is the shortest s-walk length between the nodes. + An s-walk between nodes is a sequence of nodes that pairwise share + at least s edges. The length of the shortest s-walk is 1 less than + the number of nodes in the path sequence. + + Uses the networkx shortest_path_length method on the graph + generated by the s-adjacency matrix. + + """ + g = self.get_linegraph(s=s, edges=False) + try: + dist = nx.shortest_path_length(g, source, target) + except (nx.NetworkXNoPath, nx.NodeNotFound): + warnings.warn(f"No {s}-path between {source} and {target}") + dist = np.inf + + return dist
+ +
[docs] def edge_distance(self, source, target, s=1): + """ + Returns the shortest s-walk distance between two edges in the + hypergraph. + + TODO: still need to return path and translate into user defined + nodes and edges. + + Parameters + ---------- + source : edge.uid or edge + an edge in the hypergraph + + target : edge.uid or edge + an edge in the hypergraph + + s : positive integer + the number of intersections between pairwise consecutive edges + + TODO: add edge weights + weight : None or string, optional, default = None + if None then all edges have weight 1. If string then edge attribute + string is used if available. + + + Returns + ------- + s-walk distance : the shortest s-walk edge distance + A shortest s-walk is computed as a sequence of edges, + the s-walk distance is the number of edges in the sequence + minus 1. If no such path exists returns np.inf. + + See Also + -------- + distance + + Notes + ----- + The s-distance is the shortest s-walk length between the edges. + An s-walk between edges is a sequence of edges such that + consecutive pairwise edges intersect in at least s nodes. The + length of the shortest s-walk is 1 less than the number of edges + in the path sequence. + + Uses the networkx shortest_path_length method on the graph + generated by the s-edge_adjacency matrix. + + """ + g = self.get_linegraph(s=s, edges=True) + try: + edge_dist = nx.shortest_path_length(g, source, target) + except (nx.NetworkXNoPath, nx.NodeNotFound): + warnings.warn(f"No {s}-path between {source} and {target}") + edge_dist = np.inf + + return edge_dist
+ +
[docs] def incidence_dataframe( + self, sort_rows=False, sort_columns=False, cell_weights=True + ): + """ + Returns a pandas dataframe for hypergraph indexed by the nodes and + with column headers given by the edge names. + + Parameters + ---------- + sort_rows : bool, optional, default = False + sort rows based on hashable node names + sort_columns : bool, optional, default = False + sort columns based on hashable edge names + cell_weights : bool, optional, default = True + + """ + + ## An entity dataframe is already an incidence dataframe. + df = self.E.dataframe.pivot( + index=self.E._data_cols[1], + columns=self.E._data_cols[0], + values=self.E._cell_weight_col, + ).fillna(0) + + if sort_rows: + df = df.sort_index(axis="index") + if sort_columns: + df = df.sort_index(axis="columns") + if not cell_weights: + df[df > 0] = 1 + + return df
+ +
[docs] @classmethod + @warn_nwhy + def from_bipartite(cls, B, set_names=("edges", "nodes"), name=None, **kwargs): + """ + Class method that creates a Hypergraph from a bipartite graph. + + Parameters + ---------- + + B: nx.Graph() + A networkx bipartite graph. Each node in the graph has a property + 'bipartite' taking the value of 0 or 1 indicating a 2-coloring of + the graph. + + set_names: iterable of length 2, optional, default = ['edges','nodes'] + Category names assigned to the graph nodes associated to each + bipartite set + + name: hashable, optional + + Returns + ------- + : Hypergraph + + Notes + ----- + A partition for the nodes in a bipartite graph generates a hypergraph. + + >>> import networkx as nx + >>> B = nx.Graph() + >>> B.add_nodes_from([1, 2, 3, 4], bipartite=0) + >>> B.add_nodes_from(['a', 'b', 'c'], bipartite=1) + >>> B.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), + ... (3, 'c'), (4, 'a')]) + >>> H = Hypergraph.from_bipartite(B) + >>> H.nodes, H.edges + # output: (EntitySet(_:Nodes,[1, 2, 3, 4],{}), + # EntitySet(_:Edges,['b', 'c', 'a'],{})) + + """ + + edges = [] + nodes = [] + for n, d in B.nodes(data=True): + if d["bipartite"] == 1: + nodes.append(n) + else: + edges.append(n) + + if not bipartite.is_bipartite_node_set(B, nodes): + raise HyperNetXError( + "Error: Method requires a 2-coloring of a bipartite graph." + ) + + elist = [] + for e in list(B.edges): + if e[0] in edges: + elist.append([e[0], e[1]]) + else: + elist.append([e[1], e[0]]) + df = pd.DataFrame(elist, columns=set_names) + return Hypergraph(df, name=name, **kwargs)
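The core of the bipartite conversion is small: orient each bipartite edge so the vertex from the "edge" color class becomes a hyperedge uid collecting its neighbors. A stdlib-only sketch of that idea (no networkx, no 2-coloring validation); `hypergraph_from_bipartite` is an illustrative name, not the classmethod above:

```python
def hypergraph_from_bipartite(bip_edges, edge_side):
    """Build {hyperedge_uid: set_of_nodes} from bipartite edges (u, v).

    edge_side is the set of bipartite vertices that act as hyperedges.
    """
    H = {}
    for u, v in bip_edges:
        # orient the pair so e is the hyperedge and n the node
        e, n = (u, v) if u in edge_side else (v, u)
        H.setdefault(e, set()).add(n)
    return H

B = [(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), (3, 'c'), (4, 'a')]
print(hypergraph_from_bipartite(B, edge_side={'a', 'b', 'c'}))
# {'a': {1, 4}, 'b': {1, 2}, 'c': {2, 3}}
```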
+ +
[docs] @classmethod + def from_incidence_matrix( + cls, + M, + node_names=None, + edge_names=None, + node_label="nodes", + edge_label="edges", + name=None, + key=None, + **kwargs, + ): + """ + Same as from_numpy_array. + """ + return Hypergraph.from_numpy_array( + M, + node_names=node_names, + edge_names=edge_names, + node_label=node_label, + edge_label=edge_label, + name=name, + key=key, + )
+ +
[docs] @classmethod + @warn_nwhy + def from_numpy_array( + cls, + M, + node_names=None, + edge_names=None, + node_label="nodes", + edge_label="edges", + name=None, + key=None, + **kwargs, + ): + """ + Create a hypergraph from a real valued matrix represented as a 2-dimensional numpy array. + The matrix is converted to a matrix of 0's and 1's so that any truthy cells are converted to 1's and + all others to 0's. + + Parameters + ---------- + M : real valued array-like object, 2 dimensions + representing a real valued matrix with rows corresponding to nodes and columns to edges + + node_names : object, array-like, default=None + List of node names must be the same length as M.shape[0]. + If None then the node names correspond to row indices with 'v' prepended. + + edge_names : object, array-like, default=None + List of edge names must have the same length as M.shape[1]. + If None then the edge names correspond to column indices with 'e' prepended. + + name : hashable + + key : (optional) function + boolean function to be evaluated on each cell of the array, + must be applicable to numpy.array + + Returns + ------- + : Hypergraph + + Note + ---- + The constructor does not generate empty edges. + All zero columns in M are removed and the names corresponding to these + edges are discarded. + + + """ + # Create names for nodes and edges + # Validate the size of the node and edge arrays + + M = np.array(M) + if len(M.shape) != 2: + raise HyperNetXError("Input requires a 2 dimensional numpy array") + # apply boolean key if available + if key is not None: + M = key(M) + + if node_names is not None: + nodenames = np.array(node_names) + if len(nodenames) != M.shape[0]: + raise HyperNetXError( + "Number of node names does not match number of rows."
+ ) + else: + nodenames = np.array([f"v{idx}" for idx in range(M.shape[0])]) + + if edge_names is not None: + edgenames = np.array(edge_names) + if len(edgenames) != M.shape[1]: + raise HyperNetXError( + "Number of edge_names does not match number of columns." + ) + else: + edgenames = np.array([f"e{jdx}" for jdx in range(M.shape[1])]) + + df = pd.DataFrame(M, columns=edgenames, index=nodenames) + return Hypergraph.from_incidence_dataframe(df, name=name)
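The matrix-to-hypergraph rule above (truthy cells become incidences, all-zero columns produce no edge, default names `v0, v1, ...` and `e0, e1, ...`) can be sketched without numpy or pandas. An illustrative stdlib-only version, not the classmethod itself:

```python
def hypergraph_from_matrix(M, node_names=None, edge_names=None):
    """Build {edge: set_of_nodes} from a list-of-lists incidence matrix."""
    n_rows = len(M)
    n_cols = len(M[0]) if M else 0
    nodes = node_names or [f"v{i}" for i in range(n_rows)]
    names = edge_names or [f"e{j}" for j in range(n_cols)]
    H = {}
    for j, e in enumerate(names):
        # truthy cells in column j are the members of edge e
        members = {nodes[i] for i in range(n_rows) if M[i][j]}
        if members:  # all-zero columns do not generate an edge
            H[e] = members
    return H

M = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 0]]
H = hypergraph_from_matrix(M)
# e2 (the all-zero column) is dropped; e0 -> {v0, v1}, e1 -> {v1, v2}
```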
+ +
[docs] @classmethod + @warn_nwhy + def from_incidence_dataframe( + cls, + df, + columns=None, + rows=None, + edge_col: str = "edges", + node_col: str = "nodes", + name=None, + fillna=0, + transpose=False, + transforms=[], + key=None, + return_only_dataframe=False, + **kwargs, + ): + """ + Create a hypergraph from a Pandas Dataframe object, which has values equal + to the incidence matrix of a hypergraph. Its index will identify the nodes + and its columns will identify its edges. + + Parameters + ---------- + df : Pandas.Dataframe + a real valued dataframe with a single index + + columns : (optional) list, default = None + restricts df to the columns with headers in this list. + + rows : (optional) list, default = None + restricts df to the rows indexed by the elements in this list. + + name : (optional) string, default = None + + fillna : float, default = 0 + a real value to place in empty cell, all-zero columns will not + generate an edge. + + transpose : (optional) bool, default = False + option to transpose the dataframe, in this case df.Index will + identify the edges and df.columns will identify the nodes, transpose is + applied before transforms and key + + transforms : (optional) list, default = [] + optional list of transformations to apply to each column, + of the dataframe using pd.DataFrame.apply(). + Transformations are applied in the order they are + given (ex. abs). To apply transforms to rows or for additional + functionality, consider transforming df using pandas.DataFrame + methods prior to generating the hypergraph. + + key : (optional) function, default = None + boolean function to be applied to dataframe. will be applied to + entire dataframe. + + return_only_dataframe : (optional) bool, default = False + to use the incidence_dataframe with cell_properties or properties, set this + to true and use it as the setsystem in the Hypergraph constructor. 
+ + See also + -------- + from_numpy_array + + + Returns + ------- + : Hypergraph + + """ + + if not isinstance(df, pd.DataFrame): + raise HyperNetXError("Error: Input object must be a pandas dataframe.") + + if columns: + df = df[columns] + if rows: + df = df.loc[rows] + + df = df.fillna(fillna) + if transpose: + df = df.transpose() + + for t in transforms: + df = df.apply(t) + if key: + mat = key(df.values) * 1 + else: + mat = df.values * 1 + + cols = df.columns + rows = df.index + CM = coo_matrix(mat) + c1 = CM.row + c1 = [rows[c1[idx]] for idx in range(len(c1))] + c2 = CM.col + c2 = [cols[c2[idx]] for idx in range(len(c2))] + c3 = CM.data + + dfnew = pd.DataFrame({edge_col: c2, node_col: c1, "cell_weights": c3}) + if return_only_dataframe: + return dfnew + else: + return Hypergraph( + dfnew, + edge_col=edge_col, + node_col=node_col, + weights="cell_weights", + name=name, + )
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/drawing/rubber_band.html b/_modules/drawing/rubber_band.html new file mode 100644 index 00000000..8b91b280 --- /dev/null +++ b/_modules/drawing/rubber_band.html @@ -0,0 +1,613 @@ + + + + + + drawing.rubber_band — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for drawing.rubber_band

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+from hypernetx import Hypergraph
+from hypernetx.drawing.util import (
+    get_frozenset_label,
+    get_collapsed_size,
+    get_set_layering,
+    inflate_kwargs,
+    transpose_inflated_kwargs,
+)
+
+import matplotlib.pyplot as plt
+from matplotlib.collections import PolyCollection
+
+import networkx as nx
+
+
+import numpy as np
+from scipy.spatial.distance import pdist
+from scipy.spatial import ConvexHull
+
+# increases the default figure size to 8in square.
+plt.rcParams["figure.figsize"] = (8, 8)
+
+N_CONTROL_POINTS = 24
+
+theta = np.linspace(0, 2 * np.pi, N_CONTROL_POINTS + 1)[:-1]
+
+cp = np.vstack((np.cos(theta), np.sin(theta))).T
+
+
+
+
+
+
[docs]def get_default_radius(H, pos): + """ + Calculate a reasonable default node radius + + This function iterates over the hyper edges and finds the most distant + pair of points given the positions provided. Then, the node radius is a fraction + of the median of this distance taken across all hyper-edges. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + + Returns + ------- + float + the recommended radius + + """ + if len(H) > 1: + return 0.0125 * np.median( + [pdist(np.vstack(list(map(pos.get, H.nodes)))).max() for nodes in H.edges()] + ) + return 1
+ + +
[docs]def draw_hyper_edge_labels(H, polys, labels={}, ax=None, **kwargs): + """ + Draws a label on the hyper edge boundary. + + Should be passed Matplotlib PolyCollection representing the hyper-edges, see + the return value of draw_hyper_edges. + + The label will be drawn on the least curvy part of the polygon, and will be + aligned parallel to the orientation of the polygon where it is drawn. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + polys: PolyCollection + collection of polygons returned by draw_hyper_edges + labels: dict + mapping of node id to string label + ax: Axis + matplotlib axis on which the plot is rendered + kwargs: dict + Keyword arguments are passed through to Matplotlib's annotate function. + + """ + ax = ax or plt.gca() + + params = transpose_inflated_kwargs(inflate_kwargs(H.edges, kwargs)) + + for edge, path, params in zip(H.edges, polys.get_paths(), params): + s = labels.get(edge, edge) + + # calculate the xy location of the annotation + # this is the midpoint of the pair of adjacent points the most distant + d = ((path.vertices[:-1] - path.vertices[1:]) ** 2).sum(axis=1) + i = d.argmax() + + x1, x2 = path.vertices[i : i + 2] + x, y = x2 - x1 + theta = 360 * np.arctan2(y, x) / (2 * np.pi) + theta = (theta + 360) % 360 + + while theta > 90: + theta -= 180 + + # the string is a comma separated list of the edge uid + ax.annotate( + s, (x1 + x2) / 2, rotation=theta, ha="center", va="center", **params + )
+ + +
[docs]def layout_hyper_edges(H, pos, node_radius={}, dr=None): + """ + Draws a convex hull for each edge in H. + + Position of the nodes in the graph is specified by the position dictionary, + pos. Convex hulls are spaced out such that if one set contains another, the + convex hull will surround the contained set. The amount of spacing added + between hulls is specified by the parameter, dr. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + node_radius: dict + mapping of node to R^1 (radius of each node) + dr: float + the spacing between concentric rings + ax: Axis + matplotlib axis on which the plot is rendered + + Returns + ------- + dict + A mapping from hyper edge ids to paths (Nx2 numpy matrices) + """ + + if len(node_radius): + r0 = min(node_radius.values()) + else: + r0 = get_default_radius(H, pos) + + dr = dr or r0 + + levels = get_set_layering(H) + + radii = { + v: {v: i for i, v in enumerate(sorted(e, key=levels.get))} + for v, e in H.dual().edges.elements.items() + } + + def get_padded_hull(uid, edge): + # make sure the edge contains at least one node + if len(edge): + points = np.vstack( + [ + cp * (node_radius.get(v, r0) + dr * (2 + radii[v][uid])) + pos[v] + for v in edge + ] + ) + # if not, draw an empty edge centered around the location of the edge node (in the bipartite graph) + else: + points = 4 * r0 * cp + pos[uid] + + hull = ConvexHull(points) + + return hull.points[hull.vertices] + + return [get_padded_hull(uid, list(H.edges[uid])) for uid in H.edges]
+ + +
[docs]def draw_hyper_edges(H, pos, ax=None, node_radius={}, dr=None, **kwargs): + """ + Draws a convex hull around the nodes contained within each edge in H + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + node_radius: dict + mapping of node to R^1 (radius of each node) + dr: float + the spacing between concentric rings + ax: Axis + matplotlib axis on which the plot is rendered + kwargs: dict + keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor + + Returns + ------- + PolyCollection + a Matplotlib PolyCollection that can be further styled + """ + points = layout_hyper_edges(H, pos, node_radius=node_radius, dr=dr) + + polys = PolyCollection(points, **inflate_kwargs(H.edges, kwargs)) + + (ax or plt.gca()).add_collection(polys) + + return polys
+ + +
[docs]def draw_hyper_nodes(H, pos, node_radius={}, r0=None, ax=None, **kwargs): + """ + Draws a circle for each node in H. + + The position of each node is specified by a dictionary/list-like, pos, + where pos[v] is the xy-coordinate for the vertex. The radius of each node + can be specified as a dictionary where node_radius[v] is the radius. If a + node is missing from this dictionary, or the node_radius is not specified at + all, a sensible default radius is chosen based on distances between nodes + given by pos. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + node_radius: dict + mapping of node to R^1 (radius of each node) + r0: float + minimum distance that concentric rings start from the node position + ax: Axis + matplotlib axis on which the plot is rendered + kwargs: dict + keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor + + Returns + ------- + PolyCollection + a Matplotlib PolyCollection that can be further styled + """ + + ax = ax or plt.gca() + + r0 = r0 or get_default_radius(H, pos) + + points = [node_radius.get(v, r0) * cp + pos[v] for v in H.nodes] + + kwargs.setdefault("facecolors", "black") + + circles = PolyCollection(points, **inflate_kwargs(H, kwargs)) + + ax.add_collection(circles) + + return circles
+ + +
[docs]def draw_hyper_labels(H, pos, node_radius={}, ax=None, labels={}, **kwargs): + """ + Draws text labels for the hypergraph nodes. + + The label is drawn to the right of the node. The node radius is needed (see + draw_hyper_nodes) so the text can be offset appropriately as the node size + changes. + + The text label can be customized by passing in a dictionary, labels, mapping + a node to its custom label. By default, the label is the string + representation of the node. + + Keyword arguments are passed through to Matplotlib's annotate function. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + node_radius: dict + mapping of node to R^1 (radius of each node) + ax: Axis + matplotlib axis on which the plot is rendered + labels: dict + mapping of node to text label + kwargs: dict + keyword arguments passed to matplotlib.annotate + + """ + ax = ax or plt.gca() + + params = transpose_inflated_kwargs(inflate_kwargs(H.nodes, kwargs)) + + for v, v_kwargs in zip(H.nodes, params): + xy = np.array([node_radius.get(v, 0), 0]) + pos[v] + ax.annotate( + labels.get(v, v), + xy, + **{ + k: ( + d[v] + if hasattr(d, "__getitem__") and type(d) not in {str, tuple} + else d + ) + for k, d in kwargs.items() + } + )
+ + +
[docs]def draw( + H, + pos=None, + with_color=True, + with_node_counts=False, + with_edge_counts=False, + layout=nx.spring_layout, + layout_kwargs={}, + ax=None, + node_radius=None, + edges_kwargs={}, + nodes_kwargs={}, + edge_labels={}, + edge_labels_kwargs={}, + node_labels={}, + node_labels_kwargs={}, + with_edge_labels=True, + with_node_labels=True, + label_alpha=0.35, + return_pos=False, +): + """ + Draw a hypergraph as a Matplotlib figure + + By default this will draw a colorful "rubber band" like hypergraph, where + convex hulls represent edges and are drawn around the nodes they contain. + + This is a convenience function that wraps calls with sensible parameters to + the following lower-level drawing functions: + + * draw_hyper_edges, + * draw_hyper_edge_labels, + * draw_hyper_labels, and + * draw_hyper_nodes + + The default layout algorithm is nx.spring_layout, but other layouts can be + passed in. The Hypergraph is converted to a bipartite graph, and the layout + algorithm is passed the bipartite graph. + + If you have a pre-determined layout, you can pass in a "pos" dictionary. + This is a dictionary mapping from node id's to x-y coordinates. For example: + + >>> pos = { + >>> 'A': (0, 0), + >>> 'B': (1, 2), + >>> 'C': (5, -3) + >>> } + + will position the nodes {A, B, C} manually at the locations specified. The + coordinate system is in Matplotlib "data coordinates", and the figure will + be centered within the figure. + + By default, this will draw in a new figure, but the axis to render in can be + specified using :code:`ax`. + + This approach works well for small hypergraphs, and does not guarantee + a rigorously "correct" drawing. Overlapping of sets in the drawing generally + implies that the sets intersect, but sometimes sets overlap if there is no + intersection. It is not possible, in general, to draw a "correct" hypergraph + this way for an arbitrary hypergraph, in the same way that not all graphs + have planar drawings. 
+ + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + with_color: bool + set to False to disable color cycling of edges + with_node_counts: bool + set to True to replace the label for collapsed nodes with the number of elements + with_edge_counts: bool + set to True to label collapsed edges with number of elements + layout: function + layout algorithm to compute + layout_kwargs: dict + keyword arguments passed to layout function + ax: Axis + matplotlib axis on which the plot is rendered + edges_kwargs: dict + keyword arguments passed to matplotlib.collections.PolyCollection for edges + node_radius: None, int, float, or dict + radius of all nodes, or dictionary of node:value; the default (None) calculates radius based on number of collapsed nodes; reasonable values range between 1 and 3 + nodes_kwargs: dict + keyword arguments passed to matplotlib.collections.PolyCollection for nodes + edge_labels_kwargs: dict + keyword arguments passed to matplotlib.annotate for edge labels + node_labels_kwargs: dict + keyword arguments passed to matplotlib.annotate for node labels + with_edge_labels: bool + set to False to make edge labels invisible + with_node_labels: bool + set to False to make node labels invisible + label_alpha: float + the transparency (alpha) of the box behind text drawn in the figure + """ + + ax = ax or plt.gca() + + if pos is None: + pos = layout_node_link(H, layout=layout, **layout_kwargs) + + r0 = get_default_radius(H, pos) + a0 = np.pi * r0**2 + + def get_node_radius(v): + if node_radius is None: + return np.sqrt(a0 * get_collapsed_size(v) / np.pi) + elif hasattr(node_radius, "get"): + return node_radius.get(v, 1) * r0 + return node_radius * r0 + + # guarantee that node radius is a dictionary mapping nodes to values + node_radius = {v: get_node_radius(v) for v in H.nodes} + + # for convenience, we are using setdefault to mutate the argument + # however, we need to copy this to
prevent side-effects + edges_kwargs = edges_kwargs.copy() + edges_kwargs.setdefault("edgecolors", plt.cm.tab10(np.arange(len(H.edges)) % 10)) + edges_kwargs.setdefault("facecolors", "none") + + polys = draw_hyper_edges(H, pos, node_radius=node_radius, ax=ax, **edges_kwargs) + + if with_edge_labels: + labels = get_frozenset_label( + H.edges, count=with_edge_counts, override=edge_labels + ) + + draw_hyper_edge_labels( + H, + polys, + color=edges_kwargs["edgecolors"], + backgroundcolor=(1, 1, 1, label_alpha), + labels=labels, + ax=ax, + **edge_labels_kwargs + ) + + if with_node_labels: + labels = get_frozenset_label( + H.nodes, count=with_node_counts, override=node_labels + ) + + draw_hyper_labels( + H, + pos, + node_radius=node_radius, + labels=labels, + ax=ax, + va="center", + xytext=(5, 0), + textcoords="offset points", + backgroundcolor=(1, 1, 1, label_alpha), + **node_labels_kwargs + ) + + draw_hyper_nodes(H, pos, node_radius=node_radius, ax=ax, **nodes_kwargs) + + if len(H.nodes) == 1: + x, y = pos[list(H.nodes)[0]] + s = 20 + + ax.axis([x - s, x + s, y - s, y + s]) + else: + ax.axis("equal") + + ax.axis("off") + if return_pos: + return pos
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/drawing/two_column.html b/_modules/drawing/two_column.html new file mode 100644 index 00000000..42142a42 --- /dev/null +++ b/_modules/drawing/two_column.html @@ -0,0 +1,324 @@ + + + + + + drawing.two_column — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for drawing.two_column

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+import matplotlib.pyplot as plt
+from matplotlib.collections import LineCollection
+
+import networkx as nx
+
+from hypernetx.drawing.util import get_frozenset_label
+
+
+
[docs]def layout_two_column(H, spacing=2): + """ + Two column (bipartite) layout algorithm. + + This algorithm first converts the hypergraph into a bipartite graph and + then computes connected components. Disconnected components are handled + independently and then stacked together. + + Within a connected component, the spectral ordering of the bipartite graph + provides a quick and dirty ordering that minimizes edge crossings in the + diagram. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + spacing: float + amount of whitespace between disconnected components + """ + offset = 0 + pos = {} + + def stack(vertices, x, height): + for i, v in enumerate(vertices): + pos[v] = (x, i + offset + (height - len(vertices)) / 2) + + G = H.bipartite() + for ci in nx.connected_components(G): + Gi = G.subgraph(ci) + key = {v: i for i, v in enumerate(nx.spectral_ordering(Gi))}.get + ci_vertices, ci_edges = [ + sorted([v for v, d in Gi.nodes(data=True) if d["bipartite"] == j], key=key) + for j in [0, 1] + ] + + height = max(len(ci_vertices), len(ci_edges)) + + stack(ci_vertices, 0, height) + stack(ci_edges, 1, height) + + offset += height + spacing + + return pos
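The component-stacking step above can be sketched independently of HyperNetX and networkx. In this illustrative sketch (`stack_columns` is an assumed name, not part of the library), each connected component is given as a `(vertices, edges)` pair already in drawing order; vertices go in column `x=0`, edges in column `x=1`, each component is vertically centered within its own band, and bands are stacked with `spacing` between them:

```python
def stack_columns(components, spacing=2):
    """Assign (x, y) positions for a two-column layout.

    components: list of (vertices, edges) pairs, one per connected component.
    """
    pos = {}
    offset = 0
    for vertices, edges in components:
        # band height is set by the taller of the two columns
        height = max(len(vertices), len(edges))
        for x, column in ((0, vertices), (1, edges)):
            for i, v in enumerate(column):
                # center the shorter column within the band
                pos[v] = (x, i + offset + (height - len(column)) / 2)
        offset += height + spacing
    return pos

pos = stack_columns([(["a", "b", "c"], ["e1"]), (["d"], ["e2"])])
```

Here `e1` sits at `(1, 1.0)`, centered beside the three nodes of its component, and the second component starts at `y = 5` after the spacing gap.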
+ + +
[docs]def draw_hyper_edges(H, pos, ax=None, **kwargs): + """ + Renders hyper edges for the two column layout. + + Each node-hyper edge membership is rendered as a line connecting the node + in the left column to the edge in the right column. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + ax: Axis + matplotlib axis on which the plot is rendered + kwargs: dict + keyword arguments passed to matplotlib.LineCollection + + Returns + ------- + LineCollection + the hyper edges + """ + ax = ax or plt.gca() + + pairs = [(v, e) for e in H.edges() for v in H.edges[e]] + + kwargs = { + k: v if type(v) != dict else [v.get(e) for _, e in pairs] + for k, v in kwargs.items() + } + + lines = LineCollection([(pos[u], pos[v]) for u, v in pairs], **kwargs) + + ax.add_collection(lines) + + return lines
+ + +
[docs]def draw_hyper_labels( + H, pos, labels={}, with_node_labels=True, with_edge_labels=True, ax=None +): + """ + Renders hyper labels (nodes and edges) for the two column layout. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + labels: dict + custom labels for nodes and edges can be supplied + with_node_labels: bool + False to disable node labels + with_edge_labels: bool + False to disable edge labels + ax: Axis + matplotlib axis on which the plot is rendered + + """ + + ax = ax or plt.gca() + + to_draw = [] + if with_node_labels: + to_draw.append((list(H.nodes()), "right")) + + if with_edge_labels: + to_draw.append((list(H.edges()), "left")) + + for points, ha in to_draw: + for p in points: + ax.annotate(labels.get(p, p), pos[p], ha=ha, va="center")
+ + +
[docs]def draw( + H, + with_node_labels=True, + with_edge_labels=True, + with_node_counts=False, + with_edge_counts=False, + with_color=True, + edge_kwargs=None, + ax=None, +): + """ + Draw a hypergraph using a two-column layout. + + This is intended to reproduce an illustrative technique for bipartite graphs + and hypergraphs that is typically used in papers and textbooks. + + The left column is reserved for nodes and the right column is reserved for + edges. A line is drawn between a node and an edge. + + The order of nodes and edges is optimized to reduce line crossings between + the two columns. Spacing between disconnected components is adjusted to make + the diagram easier to read, by reducing the angle of the lines. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + with_node_labels: bool + False to disable node labels + with_edge_labels: bool + False to disable edge labels + with_node_counts: bool + set to True to label collapsed nodes with number of elements + with_edge_counts: bool + set to True to label collapsed edges with number of elements + with_color: bool + set to False to disable color cycling of hyper edges + edge_kwargs: dict + keyword arguments to pass to matplotlib.LineCollection + ax: Axis + matplotlib axis on which the plot is rendered + """ + + edge_kwargs = edge_kwargs or {} + + ax = ax or plt.gca() + + pos = layout_two_column(H) + + V = [v for v in H.nodes()] + E = [e for e in H.edges()] + + labels = {} + labels.update(get_frozenset_label(V, count=with_node_counts)) + labels.update(get_frozenset_label(E, count=with_edge_counts)) + + if with_color: + edge_kwargs["color"] = { + e: plt.cm.tab10(i % 10) for i, e in enumerate(H.edges()) + } + + draw_hyper_edges(H, pos, ax=ax, **edge_kwargs) + draw_hyper_labels( + H, + pos, + labels, + ax=ax, + with_node_labels=with_node_labels, + with_edge_labels=with_edge_labels, + ) + ax.autoscale_view() + + ax.axis("off")
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/drawing/util.html b/_modules/drawing/util.html new file mode 100644 index 00000000..4ef69977 --- /dev/null +++ b/_modules/drawing/util.html @@ -0,0 +1,262 @@ + + + + + + drawing.util — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for drawing.util

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+from itertools import combinations
+
+import numpy as np
+
+import networkx as nx
+
+
+
[docs]def inflate(items, v): + if type(v) in {str, tuple, int, float}: + return [v] * len(items) + elif callable(v): + return [v(i) for i in items] + elif type(v) not in {list, np.ndarray} and hasattr(v, "__getitem__"): + return [v[i] for i in items] + return v
+ + +
[docs]def inflate_kwargs(items, kwargs): + """ + Helper function to expand keyword arguments. + + Parameters + ---------- + items: iterable + the items being styled; each argument is expanded to one value per item + kwargs: dict + keyword arguments to be expanded + + Returns + ------- + dict + dictionary with same keys as kwargs and whose values are lists of length len(items) + """ + + return {k: inflate(items, v) for k, v in kwargs.items()}
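The scalar-vs-callable-vs-mapping expansion that `inflate` and `inflate_kwargs` perform can be sketched in pure Python (this sketch drops the numpy-array case handled by the real helper):

```python
def inflate(items, v):
    """Expand a style argument into one value per item.

    Scalars are repeated, callables are applied per item, and mappings
    (anything indexable that is not a plain list) are looked up per item.
    """
    if type(v) in {str, tuple, int, float}:
        return [v] * len(items)          # scalar: repeat for every item
    if callable(v):
        return [v(i) for i in items]     # callable: evaluate per item
    if not isinstance(v, list) and hasattr(v, "__getitem__"):
        return [v[i] for i in items]     # mapping: look up per item
    return v                             # assume already a per-item sequence

def inflate_kwargs(items, kwargs):
    # expand every keyword argument to a list aligned with items
    return {k: inflate(items, v) for k, v in kwargs.items()}

styles = inflate_kwargs(["a", "b"], {"lw": 2, "color": {"a": "red", "b": "blue"}})
# styles == {"lw": [2, 2], "color": ["red", "blue"]}
```

This is why callers can pass either a single value or a per-entity dictionary for any style argument.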
+ + +
[docs]def transpose_inflated_kwargs(inflated): + return [dict(zip(inflated, v)) for v in zip(*inflated.values())]
+ + +
[docs]def get_collapsed_size(v): + try: + if type(v) == str and ":" in v: + return int(v.split(":")[-1]) + except: + pass + + return 1
+ + +
[docs]def get_frozenset_label(S, count=False, override={}): + """ + Helper function for rendering the labels of possibly collapsed nodes and edges + + Parameters + ---------- + S: iterable + list of entities to be labeled + count: bool + True if labels should be counts of entities instead of list + + Returns + ------- + dict + mapping of entity to its string representation + """ + + def helper(v): + if type(v) == str: + n = get_collapsed_size(v) + if count and n > 1: + return f"x {n}" + elif count: + return "" + return str(v) + + return {v: override.get(v, helper(v)) for v in S}
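`get_collapsed_size` and `get_frozenset_label` above rely on HyperNetX's convention of naming collapsed entities like `'uid:count'`. The parsing and labeling logic can be sketched standalone (illustrative names, and a simplified single-entity version of the label helper):

```python
def collapsed_size(v):
    """Return the element count encoded in a collapsed label like 'a:3'."""
    if isinstance(v, str) and ":" in v:
        try:
            return int(v.split(":")[-1])
        except ValueError:
            pass  # suffix after ':' was not a count
    return 1

def frozenset_label(v, count=False):
    """Render a label, optionally replacing collapsed labels with 'x N'."""
    n = collapsed_size(v)
    if count:
        return f"x {n}" if n > 1 else ""
    return str(v)
```

So a node collapsed from three duplicates, labeled `'a:3'`, is rendered as `x 3` when counts are requested and as its raw label otherwise.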
+ + +
[docs]def get_line_graph(H, collapse=True): + """ + Computes the line graph, a directed graph, where a directed edge (u, v) + exists if the edge u is a subset of the edge v in the hypergraph. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + collapse: bool + True if edges should be added if hyper edges are identical + + Returns + ------- + networkx.DiGraph + A directed graph + """ + D = nx.DiGraph() + + V = {edge: set(nodes) for edge, nodes in H.edges.elements.items()} + + D.add_nodes_from(V) + + for u, v in combinations(V, 2): + if V[u] != V[v] or not collapse: + if V[u].issubset(V[v]): + D.add_edge(u, v) + elif V[v].issubset(V[u]): + D.add_edge(v, u) + + return D
+ + +
[docs]def get_set_layering(H, collapse=True): + """ + Computes a layering of the edges in the hyper graph. + + In this layering, each edge is assigned a level. An edge u will be above + (e.g., have a smaller level value) another edge v if v is a subset of u. + + Parameters + ---------- + H: Hypergraph + the entity to be drawn + collapse: bool + True if edges should be added if hyper edges are identical + + Returns + ------- + dict + a mapping of vertices in H to integer levels + """ + + D = get_line_graph(H, collapse=collapse) + + levels = {} + + for v in nx.topological_sort(D): + parent_levels = [levels[u] for u, _ in D.in_edges(v)] + levels[v] = max(parent_levels) + 1 if len(parent_levels) else 0 + + return levels
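The level computation in `get_set_layering` is a longest-chain calculation over the strict-subset relation (the `collapse=True` case, where identical edges get no link). A standalone sketch using the standard library's `graphlib` in place of networkx (`set_layering` is an illustrative name):

```python
from graphlib import TopologicalSorter

def set_layering(sets):
    """Level of each set = length of the longest chain of strict subsets
    below it; sets containing no other set get level 0."""
    # predecessors of k are the sets strictly contained in it
    preds = {k: [u for u in sets if sets[u] < sets[k]] for k in sets}
    levels = {}
    # static_order() yields every set after all of its subsets
    for k in TopologicalSorter(preds).static_order():
        levels[k] = max((levels[u] + 1 for u in preds[k]), default=0)
    return levels

levels = set_layering({"e1": {1, 2}, "e2": {1, 2, 3}, "e3": {4}})
# levels == {"e1": 0, "e2": 1, "e3": 0}
```

As in the loop above, a set's level is one more than the maximum level among the sets it strictly contains, which is what lets larger hyper edges be drawn around the smaller ones they contain.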
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/contagion.html b/_modules/hypernetx/algorithms/contagion.html new file mode 100644 index 00000000..9d55186d --- /dev/null +++ b/_modules/hypernetx/algorithms/contagion.html @@ -0,0 +1,1274 @@ + + + + + + hypernetx.algorithms.contagion — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for hypernetx.algorithms.contagion

+import random
+import numpy as np
+from collections import defaultdict
+from collections import Counter
+import hypernetx as hnx
+
+
+
[docs]def contagion_animation( + fig, + H, + transition_events, + node_state_color_dict, + edge_state_color_dict, + node_radius=1, + fps=1, +): + """ + A function to animate discrete-time contagion models for hypergraphs. Currently only supports a circular layout. + + Parameters + ---------- + fig : matplotlib Figure object + H : HyperNetX Hypergraph object + transition_events : dictionary + The dictionary that is output from the discrete_SIS and discrete_SIR functions with return_full_data=True + node_state_color_dict : dictionary + Dictionary which specifies the colors of each node state. All node states must be specified. + edge_state_color_dict : dictionary + Dictionary with keys that are edge states and values which specify the colors of each edge state + (can specify an alpha parameter). All edge-dependent transition states must be specified + (most common is "I") and there must be a default "OFF" setting. + node_radius : float, default: 1 + The radius of the nodes to draw + fps : int > 0, default: 1 + Frames per second of the animation + + Returns + ------- + matplotlib Animation object + + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> import matplotlib.pyplot as plt + >>> from IPython.display import HTML + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> dt = 0.1 + >>> transition_events = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt, return_full_data=True) + >>> node_state_color_dict = {"S":"green", "I":"red", "R":"blue"} + >>> edge_state_color_dict = {"S":(0, 1, 0, 0.3), "I":(1, 0, 0, 0.3), "R":(0, 0, 1, 0.3), "OFF": (1, 1, 1, 0)} + >>> fps = 1 + >>> fig = plt.figure() + >>> animation = contagion.contagion_animation(fig, H, transition_events,
node_state_color_dict, edge_state_color_dict, node_radius=1, fps=fps) + >>> HTML(animation.to_jshtml()) + """ + + try: + from celluloid import Camera + except ModuleNotFoundError as e: + print( + f" {e}. If you need to use {__name__}, please install additional packages by running the following command: pip install .['all']" + ) + raise + + nodeState = defaultdict(lambda: "S") + + camera = Camera(fig) + + for t in sorted(list(transition_events.keys())): + edgeState = defaultdict(lambda: "OFF") + + # update edge and node states + for event in transition_events[t]: + status = event[0] + node = event[1] + + # update node states + nodeState[node] = status + + try: + # update the edge transmitters list if they are neighbor-dependent transitions + edgeID = event[2] + if edgeID is not None: + edgeState[edgeID] = status + except: + pass + + kwargs = {"layout_kwargs": {"seed": 39}} + + nodeStyle = { + "facecolors": [node_state_color_dict[nodeState[node]] for node in H.nodes] + } + edgeStyle = { + "facecolors": [edge_state_color_dict[edgeState[edge]] for edge in H.edges], + "edgecolors": "black", + } + + # draw hypergraph + hnx.draw( + H, + node_radius=node_radius, + nodes_kwargs=nodeStyle, + edges_kwargs=edgeStyle, + with_edge_labels=False, + with_node_labels=False, + **kwargs, + ) + camera.snap() + + return camera.animate(interval=1000 / fps)
+ + +# Canned Contagion Functions +
[docs]def collective_contagion(node, status, edge): + """ + The collective contagion mechanism described in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 + + Parameters + ---------- + node : hashable + the node uid to infect (If it doesn't have status "S", it will automatically return False) + status : dictionary + The nodes are keys and the values are statuses (The infected state denoted with "I") + edge : iterable + Iterable of node ids (node must be in the edge or it will automatically return False) + + Returns + ------- + bool + False if there is no potential to infect and True if there is. + + Notes + ----- + + Example:: + + >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"} + >>> collective_contagion(0, status, (0, 1, 2)) + True + >>> collective_contagion(1, status, (0, 1, 2)) + False + >>> collective_contagion(3, status, (0, 1, 2)) + False + """ + if status[node] != "S" or node not in edge: + return False + + neighbors = set(edge).difference({node}) + for i in neighbors: + if status[i] != "I": + return False + return True
+ + +
[docs]def individual_contagion(node, status, edge): + """ + The individual contagion mechanism described in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 + + Parameters + ---------- + node : hashable + The node uid to infect (If it doesn't have status "S", it will automatically return False) + status : dictionary + The nodes are keys and the values are statuses (The infected state denoted with "I") + edge : iterable + Iterable of node ids (node must be in the edge or it will automatically return False) + + Returns + ------- + bool + False if there is no potential to infect and True if there is. + + Notes + ----- + + Example:: + + >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"} + >>> individual_contagion(0, status, (0, 1, 3)) + True + >>> individual_contagion(1, status, (0, 1, 2)) + False + >>> individual_contagion(3, status, (0, 3, 4)) + False + """ + if status[node] != "S" or node not in edge: + return False + + neighbors = set(edge).difference({node}) + for i in neighbors: + if status[i] == "I": + return True + return False
+ + +
[docs]def threshold(node, status, edge, tau=0.1): + """ + The threshold contagion mechanism + + Parameters + ---------- + node : hashable + The node uid to infect (If it doesn't have status "S", it will automatically return False) + status : dictionary + The nodes are keys and the values are statuses (The infected state denoted with "I") + edge : iterable + Iterable of node ids (node must be in the edge or it will automatically return False) + tau : float between 0 and 1, default: 0.1 + The fraction of nodes in an edge that must be infected for the edge to be able to transmit to the node + + Returns + ------- + bool + False if there is no potential to infect and True if there is. + + Notes + ----- + + Example:: + + >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"} + >>> threshold(0, status, (0, 2, 3, 4), tau=0.2) + True + >>> threshold(0, status, (0, 2, 3, 4), tau=0.5) + False + >>> threshold(3, status, (1, 2, 3), tau=1) + False + """ + if status[node] != "S" or node not in edge: + return False + + neighbors = set(edge).difference({node}) + if len(neighbors) > 0: + fraction_infected = sum([status[i] == "I" for i in neighbors]) / len(neighbors) + # The isolated node case + else: + fraction_infected = 0 + return fraction_infected >= tau
+ + +
[docs]def majority_vote(node, status, edge): + """ + The majority vote contagion mechanism. If a majority of neighbors are contagious, + it is possible for an individual to change their opinion. If opinions are divided equally, + choose randomly. + + + Parameters + ---------- + node : hashable + The node uid to infect (If it doesn't have status "S", it will automatically return False) + status : dictionary + The nodes are keys and the values are statuses (The infected state denoted with "I") + edge : iterable + Iterable of node ids (node must be in the edge or it will automatically return False) + + Returns + ------- + bool + False if there is no potential to infect and True if there is. + + Notes + ----- + + Example:: + + >>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"} + >>> majority_vote(0, status, (0, 1, 2)) + True + >>> majority_vote(0, status, (0, 1, 2, 3)) + True + >>> majority_vote(1, status, (0, 1, 2)) + False + >>> majority_vote(3, status, (0, 1, 2)) + False + """ + + if status[node] != "S" or node not in edge: + return False + + neighbors = set(edge).difference({node}) + if len(neighbors) > 0: + fraction_infected = sum([status[i] == "I" for i in neighbors]) / len(neighbors) + else: + fraction_infected = 0 + + if fraction_infected < 0.5: + return False + elif fraction_infected > 0.5: + return True + else: + return random.choice([False, True])
+ + +# Auxiliary functions + +# The ListDict class is copied from Joel Miller's Github repository Mathematics-of-Epidemics-on-Networks +class _ListDict_(object): + r""" + The Gillespie algorithm will involve a step that samples a random element + from a set based on its weight. This is awkward in Python. + + So I'm introducing a new class based on a stack overflow answer by + Amber (http://stackoverflow.com/users/148870/amber) + for a question by + tba (http://stackoverflow.com/users/46521/tba) + found at + http://stackoverflow.com/a/15993515/2966723 + + This will allow me to select a random element uniformly, and then use + rejection sampling to make sure it's been selected with the appropriate + weight. + """ + + def __init__(self, weighted=False): + self.item_to_position = {} + self.items = [] + + self.weighted = weighted + if self.weighted: + self.weight = defaultdict(int) # presume all weights positive + self.max_weight = 0 + self._total_weight = 0 + self.max_weight_count = 0 + + def __len__(self): + return len(self.items) + + def __contains__(self, item): + return item in self.item_to_position + + def _update_max_weight(self): + C = Counter( + self.weight.values() + ) # may be a faster way to do this, we only need to count the max. + self.max_weight = max(C.keys()) + self.max_weight_count = C[self.max_weight] + + def insert(self, item, weight=None): + r""" + If not present, then inserts the thing (with weight if appropriate) + if already there, replaces the weight unless weight is 0 + + If weight is 0, then it removes the item and doesn't replace. + + WARNING: + replaces weight if already present, does not increment weight. 
+ + + """ + if self.__contains__(item): + self.remove(item) + if weight != 0: + self.update(item, weight_increment=weight) + + def update(self, item, weight_increment=None): + r""" + If not present, then inserts the thing (with weight if appropriate) + if already there, increments weight + + WARNING: + increments weight if already present, cannot overwrite weight. + """ + if ( + weight_increment is not None + ): # will break if passing a weight to unweighted case + if weight_increment > 0 or self.weight[item] != self.max_weight: + self.weight[item] = self.weight[item] + weight_increment + self._total_weight += weight_increment + if self.weight[item] > self.max_weight: + self.max_weight_count = 1 + self.max_weight = self.weight[item] + elif self.weight[item] == self.max_weight: + self.max_weight_count += 1 + else: # it's a negative increment and was at max + self.max_weight_count -= 1 + self.weight[item] = self.weight[item] + weight_increment + self._total_weight += weight_increment + self.max_weight_count -= 1 + if self.max_weight_count == 0: + self._update_max_weight + elif self.weighted: + raise Exception("if weighted, must assign weight_increment") + + if item in self: # we've already got it, do nothing else + return + self.items.append(item) + self.item_to_position[item] = len(self.items) - 1 + + def remove(self, choice): + position = self.item_to_position.pop(choice) + last_item = self.items.pop() + if position != len(self.items): + self.items[position] = last_item + self.item_to_position[last_item] = position + + if self.weighted: + weight = self.weight.pop(choice) + self._total_weight -= weight + if weight == self.max_weight: + # if we find ourselves in this case often + # it may be better just to let max_weight be the + # largest weight *ever* encountered, even if all remaining weights are less + # + self.max_weight_count -= 1 + if self.max_weight_count == 0 and len(self) > 0: + self._update_max_weight() + + def choose_random(self): + # r'''chooses a random 
node. If there is a weight, it will use rejection + # sampling to choose a random node until it succeeds''' + if self.weighted: + while True: + choice = random.choice(self.items) + if random.random() < self.weight[choice] / self.max_weight: + break + # r = random.random()*self.total_weight + # for item in self.items: + # r-= self.weight[item] + # if r<0: + # break + return choice + + else: + return random.choice(self.items) + + def random_removal(self): + r"""uses other class methods to choose and then remove a random node""" + choice = self.choose_random() + self.remove(choice) + return choice + + def total_weight(self): + if self.weighted: + return self._total_weight + else: + return len(self) + + def update_total_weight(self): + self._total_weight = sum(self.weight[item] for item in self.items) + + +# Contagion Functions +
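The rejection-sampling trick used by `_ListDict_.choose_random` above can be isolated into a few lines (`weighted_choice` is an illustrative name): propose an item uniformly, then accept it with probability `weight / max_weight`, which yields a weight-proportional distribution while keeping each proposal O(1).

```python
import random

def weighted_choice(weights):
    """Sample a key with probability proportional to its weight,
    by rejection sampling against the maximum weight."""
    items = list(weights)
    max_w = max(weights.values())
    while True:
        choice = random.choice(items)              # uniform proposal
        if random.random() < weights[choice] / max_w:
            return choice                          # accept w.p. weight/max_w

random.seed(0)
counts = {"a": 0, "b": 0}
for _ in range(2000):
    counts[weighted_choice({"a": 3.0, "b": 1.0})] += 1
# with weights 3:1, "a" is drawn roughly three times as often as "b"
```

The expected number of proposals per draw is `max_weight / mean_weight`, so this stays fast as long as the weights are not wildly skewed, which is the trade-off the class comment alludes to.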
[docs]def discrete_SIR( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + initial_recovereds=None, + rho=None, + tmin=0, + tmax=float("Inf"), + dt=1.0, + return_full_data=False, + **args, +): + """ + A discrete-time SIR model for hypergraphs similar to the construction described in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + "Simplicial models of social contagion" by Iacopini et al. + https://doi.org/10.1038/s41467-019-10431-6 + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + initial_recovereds : list or numpy array, default: None + An iterable of initially recovered node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. Both rho and initially infected cannot be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: float('Inf') + Time at which the simulation should be terminated if it hasn't already. + dt : float > 0, default: 1.0 + Step forward in time that the simulation takes at each step. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + if return_full_data + dictionary + Time as the keys and events that happen as the values. 
+ else + t, S, I, R : numpy arrays + time (t), number of susceptible (S), infected (I), and recovered (R) at each time. + + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> dt = 0.1 + >>> t, S, I, R = contagion.discrete_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt) + """ + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + if initial_recovereds is None: + initial_recovereds = [] + else: + # check to make sure that the initially recovered nodes are in the hypergraph + initial_recovereds = list(set(H.nodes).intersection(set(initial_recovereds))) + + status = defaultdict(lambda: "S") + + if return_full_data: + transition_events = dict() + transition_events[tmin] = list() + + for node in initial_infecteds: + status[node] = "I" + if return_full_data: + transition_events[tmin].append(("I", node, None)) + + for node in initial_recovereds: + status[node] = "R" + if return_full_data: + transition_events[tmin].append(("R", node)) + + I = [len(initial_infecteds)] + R = [len(initial_recovereds)] + S = [H.number_of_nodes() - I[-1] - R[-1]] + + t = tmin + times = [t] + newStatus = status.copy() + + edge_neighbors = lambda node: H.edges.memberships[node] + + while t < tmax and I[-1] != 0: + # Initialize the next step with 
the same numbers of S, I, and R as the last step before computing the changes + S.append(S[-1]) + I.append(I[-1]) + R.append(R[-1]) + + if return_full_data: + transition_events[t + dt] = list() + + for node in H.nodes: + if status[node] == "I": + # recover the node. If it is not healed, it stays infected. + if random.random() <= gamma * dt: + newStatus[node] = "R" + I[-1] += -1 + R[-1] += 1 + if return_full_data: + transition_events[t + dt].append(("R", node)) + elif status[node] == "S": + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + if ( + random.random() + <= tau[len(members)] + * transmission_function(node, status, members, **args) + * dt + ): + newStatus[node] = "I" + S[-1] += -1 + I[-1] += 1 + if return_full_data: + transition_events[t + dt].append(("I", node, edge_id)) + break + # This executes after the loop has executed normally without hitting the break statement which indicates infection + else: + newStatus[node] = "S" + status = newStatus.copy() + t += dt + times.append(t) + if return_full_data: + return transition_events + else: + return np.array(times), np.array(S), np.array(I), np.array(R)
+ + +
[docs]def discrete_SIS( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + rho=None, + tmin=0, + tmax=100, + dt=1.0, + return_full_data=False, + **args, +): + """ + A discrete-time SIS model for hypergraphs as implemented in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + "Simplicial models of social contagion" by Iacopini et al. + https://doi.org/10.1038/s41467-019-10431-6 + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. Both rho and initially infected cannot be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: 100 + Time at which the simulation should be terminated if it hasn't already. + dt : float > 0, default: 1.0 + Step forward in time that the simulation takes at each step. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + if return_full_data + dictionary + Time as the keys and events that happen as the values. + else + t, S, I : numpy arrays + time (t), number of susceptible (S), and infected (I) at each time. 
+ + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> dt = 0.1 + >>> t, S, I = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt) + """ + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + status = defaultdict(lambda: "S") + + if return_full_data: + transition_events = dict() + transition_events[tmin] = list() + + for node in initial_infecteds: + status[node] = "I" + if return_full_data: + transition_events[tmin].append(("I", node, None)) + + I = [len(initial_infecteds)] + S = [H.number_of_nodes() - I[-1]] + + t = tmin + times = [t] + newStatus = status.copy() + + edge_neighbors = lambda node: H.edges.memberships[node] + + while t < tmax and I[-1] != 0: + # Initialize the next step with the same numbers of S, I, and R as the last step before computing the changes + S.append(S[-1]) + I.append(I[-1]) + if return_full_data: + transition_events[t + dt] = list() + + for node in H.nodes: + if status[node] == "I": + # recover the node. If it is not healed, it stays infected. 
+ if random.random() <= gamma * dt: + newStatus[node] = "S" + I[-1] += -1 + S[-1] += 1 + if return_full_data: + transition_events[t + dt].append(("S", node)) + elif status[node] == "S": + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + + if ( + random.random() + <= tau[len(members)] + * transmission_function(node, status, members, **args) + * dt + ): + newStatus[node] = "I" + S[-1] += -1 + I[-1] += 1 + if return_full_data: + transition_events[t + dt].append(("I", node, edge_id)) + break + # This executes after the loop has executed normally without hitting the break statement which indicates infection, though I'm not sure we even need it + else: + newStatus[node] = "S" + status = newStatus.copy() + t += dt + times.append(t) + if return_full_data: + return transition_events + else: + return np.array(times), np.array(S), np.array(I)
+ + +
[docs]def Gillespie_SIR( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + initial_recovereds=None, + rho=None, + tmin=0, + tmax=float("Inf"), + **args, +): + """ + A continuous-time SIR model for hypergraphs similar to the model in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + implemented for networks in the EoN package by Joel C. Miller + https://epidemicsonnetworks.readthedocs.io/en/latest/ + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + initial_recovereds : list or numpy array, default: None + An iterable of initially recovered node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. Both rho and initially infected cannot be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: float('Inf') + Time at which the simulation should be terminated if it hasn't already. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + t, S, I, R : numpy arrays + time (t), number of susceptible (S), infected (I), and recovered (R) at each time. 
+ + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> t, S, I, R = contagion.Gillespie_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax) + """ + # Initial infecteds and recovereds should be lists or None. Add a check here. + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + if initial_recovereds is None: + initial_recovereds = [] + else: + # check to make sure that the initially recovered nodes are in the hypergraph + initial_recovereds = list(set(H.nodes).intersection(set(initial_recovereds))) + + status = defaultdict(lambda: "S") + + size_dist = np.unique(H.edge_size_dist()) + + for node in initial_infecteds: + status[node] = "I" + + for node in initial_recovereds: + status[node] = "R" + + I = [len(initial_infecteds)] + R = [len(initial_recovereds)] + S = [H.number_of_nodes() - I[-1] - R[-1]] + + edge_neighbors = lambda node: H.edges.memberships[node] + + t = tmin + times = [t] + + infecteds = _ListDict_() + + infectious_edges = dict() + for size in size_dist: + infectious_edges[size] = _ListDict_() + + for node in initial_infecteds: + infecteds.update(node) + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + 
if is_infectious: + infectious_edges[len(members)].update((edge_id, node)) + + total_rates = dict() + total_rates[1] = gamma * infecteds.total_weight() + + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + dt = random.expovariate(total_rate) + t += dt + + while t < tmax and I[-1] != 0: + # choose type of event that happens + while True: + choice = random.choice(list(total_rates.keys())) + if random.random() <= total_rates[choice] / total_rate: + break + + if choice == 1: # recover + recovering_node = infecteds.random_removal() + status[recovering_node] = "R" + + # remove edges that are no longer infectious because of this node recovering + for edge_id in edge_neighbors(recovering_node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].remove((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1]) + I.append(I[-1] - 1) + R.append(R[-1] + 1) + + else: + _, recipient = infectious_edges[choice].choose_random() + status[recipient] = "I" + + infecteds.update(recipient) + + # remove the infectious links, because they can't infect an infected node. 
+ for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + try: + infectious_edges[len(members)].remove((edge_id, recipient)) + except: + pass + + # add edges that are infectious because of this node being infected + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].update((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] - 1) + I.append(I[-1] + 1) + R.append(R[-1]) + + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + if total_rate > 0: + dt = random.expovariate(total_rate) + else: + dt = float("Inf") + t += dt + return np.array(times), np.array(S), np.array(I), np.array(R)
+ + +
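Each iteration of the Gillespie loop above draws an exponential waiting time at the current total rate and then selects an event class with probability proportional to its rate; the rejection loop (uniform choice of a class, accepted with probability `rate/total_rate`) is one valid way to do this. The same step can be sketched with the standard library's `random.choices` (the function name `gillespie_step` is an illustrative assumption):

```python
import random

def gillespie_step(total_rates, rng=random):
    """One Gillespie step: an exponential waiting time at the total rate,
    then an event class drawn proportionally to its rate.

    Illustrative sketch; the module above uses an equivalent
    rejection loop instead of random.choices."""
    total_rate = sum(total_rates.values())
    if total_rate <= 0:
        return float("inf"), None  # nothing can happen; the simulation stalls
    dt = rng.expovariate(total_rate)
    events = list(total_rates)
    event = rng.choices(events, weights=[total_rates[e] for e in events], k=1)[0]
    return dt, event
```

In the functions above the event classes are keyed the same way: key 1 carries the recovery rate `gamma * n_infected`, and each edge size `s` carries `tau[s] * n_infectious_edges_of_size_s`.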
[docs]def Gillespie_SIS( + H, + tau, + gamma, + transmission_function=threshold, + initial_infecteds=None, + rho=None, + tmin=0, + tmax=float("Inf"), + return_full_data=False, + sim_kwargs=None, + **args, +): + """ + A continuous-time SIS model for hypergraphs similar to the model in + "The effect of heterogeneity on hypergraph contagion models" by Landry and Restrepo + https://doi.org/10.1063/5.0020034 and + implemented for networks in the EoN package by Joel C. Miller + https://epidemicsonnetworks.readthedocs.io/en/latest/ + + Parameters + ---------- + H : HyperNetX Hypergraph object + tau : dictionary + Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float) + gamma : float + The healing rate + transmission_function : lambda function, default: threshold + A lambda function that has required arguments (node, status, edge) and optional arguments + initial_infecteds : list or numpy array, default: None + Iterable of initially infected node uids + rho : float from 0 to 1, default: None + The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified. + tmin : float, default: 0 + Time at the start of the simulation + tmax : float, default: float('Inf') + Time at which the simulation should be terminated if it hasn't already. + return_full_data : bool, default: False + This returns all the infection and recovery events at each time if True. + **args : Optional arguments to transmission function + This allows user-defined transmission functions with extra parameters. + + Returns + ------- + t, S, I : numpy arrays + time (t), number of susceptible (S), and infected (I) at each time.
+ + Notes + ----- + + Example:: + + >>> import hypernetx.algorithms.contagion as contagion + >>> import random + >>> import hypernetx as hnx + >>> n = 1000 + >>> m = 10000 + >>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)] + >>> H = hnx.Hypergraph(hyperedgeList) + >>> tau = {2:0.1, 3:0.1} + >>> gamma = 0.1 + >>> tmax = 100 + >>> t, S, I = contagion.Gillespie_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax) + """ + # Initial infecteds and recovereds should be lists or None. Add a check here. + + if rho is not None and initial_infecteds is not None: + raise Exception("Cannot define both initial_infecteds and rho") + + if initial_infecteds is None: + if rho is None: + initial_number = 1 + else: + initial_number = int(round(H.number_of_nodes() * rho)) + initial_infecteds = random.sample(list(H.nodes), initial_number) + else: + # check to make sure that the initially infected nodes are in the hypergraph + initial_infecteds = list(set(H.nodes).intersection(set(initial_infecteds))) + + status = defaultdict(lambda: "S") + + size_dist = np.unique(H.edge_size_dist()) + + for node in initial_infecteds: + status[node] = "I" + + I = [len(initial_infecteds)] + S = [H.number_of_nodes() - I[-1]] + + edge_neighbors = lambda node: H.edges.memberships[node] + + t = tmin + times = [t] + + infecteds = _ListDict_() + + infectious_edges = dict() + for size in size_dist: + infectious_edges[size] = _ListDict_() + + for node in initial_infecteds: + infecteds.update(node) + for edge_id in edge_neighbors(node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + infectious_edges[len(members)].update((edge_id, node)) + + total_rates = dict() + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + dt = 
random.expovariate(total_rate) + t += dt + + while t < tmax and I[-1] != 0: + # choose type of event that happens + # this can be improved + while True: + choice = random.choice(list(total_rates.keys())) + if random.random() <= total_rates[choice] / total_rate: + break + + if choice == 1: # recover + recovering_node = infecteds.random_removal() + status[recovering_node] = "S" + + # remove edges that are no longer infectious because of this node recovering + for edge_id in edge_neighbors(recovering_node): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].remove((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] + 1) + I.append(I[-1] - 1) + + else: + _, recipient = infectious_edges[choice].choose_random() + status[recipient] = "I" + + infecteds.update(recipient) + + # remove the infectious links, because they can't infect an infected node. + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + try: + infectious_edges[len(members)].remove((edge_id, recipient)) + except: + pass + + # add edges that are infectious because of this node being infected + for edge_id in edge_neighbors(recipient): + members = H.edges[edge_id] + for node in members: + is_infectious = transmission_function(node, status, members, **args) + if is_infectious: + try: + infectious_edges[len(members)].update((edge_id, node)) + except: + pass + times.append(t) + S.append(S[-1] - 1) + I.append(I[-1] + 1) + + total_rates[1] = gamma * infecteds.total_weight() + for size in size_dist: + total_rates[size] = tau[size] * infectious_edges[size].total_weight() + + total_rate = sum(total_rates.values()) + + if total_rate > 0: + dt = random.expovariate(total_rate) + else: + dt = float("Inf") + t += dt + + return np.array(times), np.array(S), np.array(I)
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/generative_models.html b/_modules/hypernetx/algorithms/generative_models.html new file mode 100644 index 00000000..4f5c4fb0 --- /dev/null +++ b/_modules/hypernetx/algorithms/generative_models.html @@ -0,0 +1,367 @@ + + + + + + hypernetx.algorithms.generative_models — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for hypernetx.algorithms.generative_models

+import random
+import math
+import warnings
+from collections import defaultdict
+import numpy as np
+import pandas as pd
+from hypernetx import Hypergraph
+
+
+
[docs]def erdos_renyi_hypergraph(n, m, p, node_labels=None, edge_labels=None): + """ + A function to generate an Erdos-Renyi hypergraph as implemented by Mirah Shi and described for + bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001 + + Parameters + ---------- + n: int + Number of nodes + m: int + Number of edges + p: float + The probability that a bipartite edge is created + node_labels: list, default=None + Vertex labels + edge_labels: list, default=None + Hyperedge labels + + Returns + ------- + HyperNetX Hypergraph object + + + Example:: + + >>> import hypernetx.algorithms.generative_models as gm + >>> n = 1000 + >>> m = n + >>> p = 0.01 + >>> H = gm.erdos_renyi_hypergraph(n, m, p) + + """ + + if node_labels is not None and edge_labels is not None: + get_node_label = lambda index: node_labels[index] + get_edge_label = lambda index: edge_labels[index] + else: + get_node_label = lambda index: index + get_edge_label = lambda index: index + + bipartite_edges = [] + for u in range(n): + v = 0 + while v < m: + # identify next pair + r = random.random() + v = v + math.floor(math.log(r) / math.log(1 - p)) + if v < m: + # add vertex hyperedge pair + bipartite_edges.append((get_edge_label(v), get_node_label(u))) + v = v + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
+ + +
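The inner loop of `erdos_renyi_hypergraph` never flips a coin per (node, edge) pair: it jumps directly to the next success by sampling a geometric gap, `floor(log(r) / log(1 - p))`, so the running time is proportional to the number of incidences created rather than to `n * m`. A stand-alone sketch of the trick for `0 < p < 1` (the function name is an illustrative assumption):

```python
import math
import random

def skip_sample_indices(m, p, rng=random):
    """Indices in range(m) kept independently with probability p,
    found by sampling geometric gaps instead of m Bernoulli draws.

    Requires 0 < p < 1; sketch of the gap-skipping used above."""
    selected = []
    # number of failures before the first success
    v = math.floor(math.log(rng.random()) / math.log(1 - p))
    while v < m:
        selected.append(v)
        # skip past the failures to the next success
        v += 1 + math.floor(math.log(rng.random()) / math.log(1 - p))
    return selected
```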
[docs]def chung_lu_hypergraph(k1, k2): + """ + A function to generate an extension of Chung-Lu hypergraph as implemented by Mirah Shi and described for + bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001 + + Parameters + ---------- + k1 : dictionary + This is a dictionary where the keys are node ids and the values are node degrees. + k2 : dictionary + This is a dictionary where the keys are edge ids and the values are edge degrees also known as edge sizes. + Returns + ------- + HyperNetX Hypergraph object + + Notes + ----- + The sums of k1 and k2 should be roughly the same. If they are not the same, this function returns a warning but still runs. + The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported. + + Example:: + + >>> import hypernetx.algorithms.generative_models as gm + >>> import random + >>> n = 100 + >>> k1 = {i : random.randint(1, 100) for i in range(n)} + >>> k2 = {i : sorted(k1.values())[i] for i in range(n)} + >>> H = gm.chung_lu_hypergraph(k1, k2) + """ + + # sort dictionary by degree in decreasing order + Nlabels = [n for n, _ in sorted(k1.items(), key=lambda d: d[1], reverse=True)] + Mlabels = [m for m, _ in sorted(k2.items(), key=lambda d: d[1], reverse=True)] + + m = len(k2) + + if sum(k1.values()) != sum(k2.values()): + warnings.warn( + "The sum of the degree sequence does not match the sum of the size sequence" + ) + + S = sum(k1.values()) + + bipartite_edges = [] + for u in Nlabels: + j = 0 + v = Mlabels[j] # start from beginning every time + p = min((k1[u] * k2[v]) / S, 1) + + while j < m: + if p != 1: + r = random.random() + j = j + math.floor(math.log(r) / math.log(1 - p)) + if j < m: + v = Mlabels[j] + q = min((k1[u] * k2[v]) / S, 1) + r = random.random() + if r < q / p: + # no duplicates + bipartite_edges.append((v, u)) + + p = q + j = j + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
+ + +
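`chung_lu_hypergraph` extends the gap-skipping idea to non-uniform probabilities: candidates are proposed by skipping at the current bound `p`, then thinned with probability `q / p`, where `q` is the true probability for the proposed pair. This requires the probabilities to be visited in non-increasing order, which is why the labels are degree-sorted. A self-contained sketch (the function name and the guard against zero probabilities are additions for illustration):

```python
import math
import random

def varying_prob_sample(probs, rng=random):
    """Indices i kept independently with probability probs[i], for a
    non-increasing probability sequence: propose at the current bound p
    via geometric skipping, then thin with probability probs[i] / p."""
    m = len(probs)
    selected = []
    j = 0
    p = probs[0] if probs else 0.0
    while j < m and p > 0:
        if p < 1:
            # skip ahead as if every index succeeded with probability p
            j += math.floor(math.log(rng.random()) / math.log(1 - p))
        if j < m:
            q = probs[j]
            # accept the proposal with the corrected probability q / p
            if rng.random() < q / p:
                selected.append(j)
            p = q
            j += 1
    return selected
```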
[docs]def dcsbm_hypergraph(k1, k2, g1, g2, omega): + """ + A function to generate an extension of DCSBM hypergraph as implemented by Mirah Shi and described for + bipartite networks by Larremore et al. in https://doi.org/10.1103/PhysRevE.90.012805 + + Parameters + ---------- + k1 : dictionary + This is a dictionary where the keys are node ids and the values are node degrees. + k2 : dictionary + This is a dictionary where the keys are edge ids and the values are edge degrees also known as edge sizes. + g1 : dictionary + This is a dictionary where the keys are node ids and the values are the group ids to which the node belongs. + The keys must match the keys of k1. + g2 : dictionary + This is a dictionary where the keys are edge ids and the values are the group ids to which the edge belongs. + The keys must match the keys of k2. + omega : 2D numpy array + This is a matrix with entries which specify the number of edges between a given node community and edge community. + The number of rows must match the number of node communities and the number of columns + must match the number of edge communities. + + + Returns + ------- + HyperNetX Hypergraph object + + Notes + ----- + The sums of k1 and k2 should be the same. If they are not the same, this function returns a warning but still runs. + The sum of k1 (and k2) and omega should be the same. If they are not the same, this function returns a warning + but still runs and the number of entries in the incidence matrix is determined by the omega matrix. + + The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported.
+ + Example:: + + >>> n = 100 + >>> k1 = {i : random.randint(1, 100) for i in range(n)} + >>> k2 = {i : sorted(k1.values())[i] for i in range(n)} + >>> g1 = {i : random.choice([0, 1]) for i in range(n)} + >>> g2 = {i : random.choice([0, 1]) for i in range(n)} + >>> omega = np.array([[100, 10], [10, 100]]) + >>> H = gm.dcsbm_hypergraph(k1, k2, g1, g2, omega) + """ + + # sort dictionary by degree in decreasing order + Nlabels = [n for n, _ in sorted(k1.items(), key=lambda d: d[1], reverse=True)] + Mlabels = [m for m, _ in sorted(k2.items(), key=lambda d: d[1], reverse=True)] + + # these checks verify that the sum of node and edge degrees and the sum of node degrees + # and the sum of community connection matrix differ by less than a single edge. + if abs(sum(k1.values()) - sum(k2.values())) > 1: + warnings.warn( + "The sum of the degree sequence does not match the sum of the size sequence" + ) + + if abs(sum(k1.values()) - np.sum(omega)) > 1: + warnings.warn( + "The sum of the degree sequence does not match the entries in the omega matrix" + ) + + # get indices for each community + community1Indices = defaultdict(list) + for label in Nlabels: + group = g1[label] + community1Indices[group].append(label) + + community2Indices = defaultdict(list) + for label in Mlabels: + group = g2[label] + community2Indices[group].append(label) + + bipartite_edges = list() + + kappa1 = defaultdict(lambda: 0) + kappa2 = defaultdict(lambda: 0) + for id, g in g1.items(): + kappa1[g] += k1[id] + for id, g in g2.items(): + kappa2[g] += k2[id] + + for group1 in community1Indices.keys(): + for group2 in community2Indices.keys(): + # for each constant probability patch + try: + groupConstant = omega[group1, group2] / ( + kappa1[group1] * kappa2[group2] + ) + except: + groupConstant = 0 + + for u in community1Indices[group1]: + j = 0 + v = community2Indices[group2][j] # start from beginning every time + # max probability + p = min(k1[u] * k2[v] * groupConstant, 1) + while j < 
len(community2Indices[group2]): + if p != 1: + r = random.random() + try: + j = j + math.floor(math.log(r) / math.log(1 - p)) + except: + j = np.inf + if j < len(community2Indices[group2]): + v = community2Indices[group2][j] + q = min((k1[u] * k2[v]) * groupConstant, 1) + r = random.random() + if r < q / p: + # no duplicates + bipartite_edges.append((v, u)) + + p = q + j = j + 1 + + df = pd.DataFrame(bipartite_edges) + return Hypergraph(df, static=True)
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/homology_mod2.html b/_modules/hypernetx/algorithms/homology_mod2.html new file mode 100644 index 00000000..39454989 --- /dev/null +++ b/_modules/hypernetx/algorithms/homology_mod2.html @@ -0,0 +1,1012 @@ + + + + + + hypernetx.algorithms.homology_mod2 — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+ +

Source code for hypernetx.algorithms.homology_mod2

+"""
+Homology and Smith Normal Form
+==============================
+The purpose of computing the Homology groups for data-generated
+hypergraphs is to identify data sources that correspond to interesting
+features in the topology of the hypergraph.
+
+The elements of one of these Homology groups are generated by $k$
+dimensional cycles of relationships in the original data that are not
+bound together by higher order relationships. Ideally, we want the
+briefest description of these cycles; we want a minimal set of
+relationships exhibiting interesting cyclic behavior. This minimal set
+will be a basis for the Homology group.
+
+The cyclic relationships in the data are discovered using a **boundary
+map** represented as a matrix. To discover the bases we compute the
+**Smith Normal Form** of the boundary map.
+
+Homology Mod2
+-------------
+This module computes the homology groups for data represented as an
+abstract simplicial complex with chain groups $\{C_k\}$ and $Z_2$ addition.
+The boundary matrices are represented as rectangular matrices over $Z_2$.
+These matrices are diagonalized and represented in Smith
+Normal Form. The kernel and image bases are computed and the Betti
+numbers and homology bases are returned.
+
+Methods for obtaining SNF for Z/2Z are based on Ferrario's work:
+http://www.dlfer.xyz/post/2016-10-27-smith-normal-form/
+
+"""
+
+import numpy as np
+import hypernetx as hnx
+import warnings
+import copy
+from hypernetx import HyperNetXError
+from collections import defaultdict
+import itertools as it
+from scipy.sparse import csr_matrix
+
+
+
[docs]def kchainbasis(h, k): + """ + Compute the set of k dimensional cells in the abstract simplicial + complex associated with the hypergraph. + + Parameters + ---------- + h : hnx.Hypergraph + k : int + dimension of cell + + Returns + ------- + : list + an ordered list of kchains represented as tuples of length k+1 + + See also + -------- + hnx.hypergraph.toplexes + + Notes + ----- + - Method works best if h is simple [Berge], i.e. no edge contains another and there are no duplicate edges (toplexes). + - Hypergraph node uids must be sortable. + + """ + + import itertools as it + + kchains = set() + for e in h.edges(): + en = sorted(h.edges[e]) + if len(en) == k + 1: + kchains.add(tuple(en)) + elif len(en) > k + 1: + kchains.update(set(it.combinations(en, k + 1))) + return sorted(list(kchains))
+ + +
[docs]def bkMatrix(km1basis, kbasis): + """ + Compute the boundary map from $C_{k-1}$-basis to $C_k$ basis with + respect to $Z_2$ + + Parameters + ---------- + km1basis : indexable iterable + Ordered list of $k-1$ dimensional cell + kbasis : indexable iterable + Ordered list of $k$ dimensional cells + + Returns + ------- + bk : np.array + boundary matrix in $Z_2$ stored as boolean + + """ + bk = np.zeros((len(km1basis), len(kbasis)), dtype=int) + for cell in kbasis: + for idx in range(len(cell)): + face = cell[:idx] + cell[idx + 1 :] + row = km1basis.index(face) + col = kbasis.index(cell) + bk[row, col] = 1 + return bk
+ + +def _rswap(i, j, S): + """ + Swaps ith and jth row of copy of S + + Parameters + ---------- + i : int + j : int + S : np.array + + Returns + ------- + N : np.array + """ + N = copy.deepcopy(S) + row = copy.deepcopy(N[i]) + N[i] = copy.deepcopy(N[j]) + N[j] = row + return N + + +def _cswap(i, j, S): + """ + Swaps ith and jth column of copy of S + + Parameters + ---------- + i : int + j : int + S : np.array + matrix + + Returns + ------- + N : np.array + """ + N = _rswap(i, j, S.transpose()).transpose() + return N + + +
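`kchainbasis` and `bkMatrix` together build the mod-2 chain complex: the k-cells are the (k+1)-element subsets of the hyperedges, and the boundary matrix records which (k-1)-faces bound each k-cell. A compact stand-alone version using plain lists, together with the classic sanity check that composing two successive boundary maps vanishes mod 2 (function names are illustrative, not the library API):

```python
import itertools as it

def kcells(edges, k):
    """All k-dimensional cells (sorted (k+1)-tuples) of the abstract
    simplicial complex generated by the given hyperedges."""
    cells = set()
    for e in edges:
        e = sorted(e)
        if len(e) >= k + 1:
            cells.update(it.combinations(e, k + 1))
    return sorted(cells)

def boundary_matrix(km1, kb):
    """Mod-2 boundary matrix (rows: (k-1)-cells, columns: k-cells)."""
    bk = [[0] * len(kb) for _ in km1]
    for col, cell in enumerate(kb):
        for idx in range(len(cell)):
            face = cell[:idx] + cell[idx + 1:]  # drop one vertex
            bk[km1.index(face)][col] = 1
    return bk
```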
[docs]def swap_rows(i, j, *args): + """ + Swaps ith and jth row of each matrix in args + Returns a list of new matrices + + Parameters + ---------- + i : int + j : int + args : np.arrays + + Returns + ------- + list + list of copies of args with ith and jth row swapped + """ + output = list() + for M in args: + output.append(_rswap(i, j, M)) + return output
+ + +
[docs]def swap_columns(i, j, *args): + """ + Swaps ith and jth column of each matrix in args + Returns a list of new matrices + + Parameters + ---------- + i : int + j : int + args : np.arrays + + Returns + ------- + list + list of copies of args with ith and jth row swapped + """ + output = list() + for M in args: + output.append(_cswap(i, j, M)) + return output
+ + +
[docs]def add_to_row(M, i, j): + """ + Replaces row i with logical xor between row i and j + + Parameters + ---------- + M : np.array + i : int + index of row being altered + j : int + index of row being added to altered + + Returns + ------- + N : np.array + """ + N = copy.deepcopy(M) + N[i] = 1 * np.logical_xor(N[i], N[j]) + return N
+ + +
[docs]def add_to_column(M, i, j): + """ + Replaces column i (of M) with logical xor between column i and j + + Parameters + ---------- + M : np.array + matrix + i : int + index of column being altered + j : int + index of column being added to altered + + Returns + ------- + N : np.array + """ + N = M.transpose() + return add_to_row(N, i, j).transpose()
+ + +
[docs]def logical_dot(ar1, ar2): + """ + Returns the boolean equivalent of the dot product mod 2 on two 1-d arrays of + the same length. + + Parameters + ---------- + ar1 : numpy.ndarray + 1-d array + ar2 : numpy.ndarray + 1-d array + + Returns + ------- + : bool + boolean value associated with dot product mod 2 + + Raises + ------ + HyperNetXError + If arrays are not of the same length an error will be raised. + """ + if len(ar1) != len(ar2): + raise HyperNetXError("logical_dot requires two 1-d arrays of the same length") + else: + return 1 * np.logical_xor.reduce(np.logical_and(ar1, ar2))
+ + +
[docs]def logical_matmul(mat1, mat2): + """ + Returns the boolean equivalent of matrix multiplication mod 2 on two + binary arrays stored as type boolean + + Parameters + ---------- + mat1 : np.ndarray + 2-d array of boolean values + mat2 : np.ndarray + 2-d array of boolean values + + Returns + ------- + mat : np.ndarray + boolean matrix equivalent to the mod 2 matrix multiplication of the + matrices as matrices over Z/2Z + + Raises + ------ + HyperNetXError + If inner dimensions are not equal an error will be raised. + + """ + L1, R1 = mat1.shape + L2, R2 = mat2.shape + if R1 != L2: + raise HyperNetXError( + "logical_matmul called for matrices with inner dimensions mismatched" + ) + + mat = np.zeros((L1, R2), dtype=int) + mat2T = mat2.transpose() + for i in range(L1): + if np.any(mat1[i]): + for j in range(R2): + mat[i, j] = logical_dot(mat1[i], mat2T[j]) + else: + mat[i] = np.zeros((1, R2), dtype=int) + return mat
+ + +
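For dense 0/1 integer arrays, the element-wise definitions in `logical_dot` and `logical_matmul` agree with the ordinary integer matrix product reduced mod 2, so `(A @ B) % 2` computes the same result and is typically faster. The sketch below is an equivalence check, not a replacement offered by the library:

```python
import numpy as np

def matmul_mod2(A, B):
    """Matrix product over Z/2Z for 0/1 integer arrays."""
    return (A @ B) % 2
```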
[docs]def matmulreduce(arr, reverse=False): + """ + Recursively applies a 'logical multiplication' to a list of boolean arrays. + + For arr = [arr[0],arr[1],arr[2]...arr[n]] returns product arr[0]arr[1]...arr[n] + If reverse = True, returns product arr[n]arr[n-1]...arr[0] + + Parameters + ---------- + arr : list of np.array + list of nxm matrices represented as np.array + reverse : bool, optional + order to multiply the matrices + + Returns + ------- + P : np.array + Product of matrices in the list + """ + if reverse: + items = range(len(arr) - 1, -1, -1) + else: + items = range(len(arr)) + P = arr[items[0]] + for i in items[1:]: + P = logical_matmul(P, arr[i]) * 1 + return P
+ + +
[docs]def logical_matadd(mat1, mat2): + """ + Returns the boolean equivalent of matrix addition mod 2 on two + binary arrays stored as type boolean + + Parameters + ---------- + mat1 : np.ndarray + 2-d array of boolean values + mat2 : np.ndarray + 2-d array of boolean values + + Returns + ------- + mat : np.ndarray + boolean matrix equivalent to the mod 2 matrix addition of the + matrices as matrices over Z/2Z + + Raises + ------ + HyperNetXError + If dimensions are not equal an error will be raised. + + """ + S1 = mat1.shape + S2 = mat2.shape + mat = np.zeros(S1, dtype=int) + if S1 != S2: + raise HyperNetXError( + "logical_matadd called for matrices with different dimensions" + ) + if len(S1) == 1: + for idx in range(S1[0]): + mat[idx] = 1 * np.logical_xor(mat1[idx], mat2[idx]) + else: + for idx in range(S1[0]): + for jdx in range(S1[1]): + mat[idx, jdx] = 1 * np.logical_xor(mat1[idx, jdx], mat2[idx, jdx]) + return mat
+ + +# Convenience methods for computing Smith Normal Form +# All of these operations have themselves as inverses + + +def _sr(i, j, M, L): + return swap_rows(i, j, M, L) + + +def _sc(i, j, M, R): + return swap_columns(i, j, M, R) + + +def _ar(i, j, M, L): + return add_to_row(M, i, j), add_to_row(L, i, j) + + +def _ac(i, j, M, R): + return add_to_column(M, i, j), add_to_column(R, i, j) + + +def _get_next_pivot(M, s1, s2=None): + """ + Determines the first r,c indices in the submatrix of M starting + with row s1 and column s2 index (row,col) that is nonzero, + if it exists. + + Search starts with the s2th column and looks for the first nonzero + s1 row. If none is found, search continues to the next column and so + on. + + Parameters + ---------- + M : np.array + matrix represented as np.array + s1 : int + index of row position to start submatrix of M + s2 : int, optional, default = s1 + index of column position to start submatrix of M + + Returns + ------- + (r,c) : tuple of int or None + + """ + # find the next nonzero pivot to put in s,s spot for Smith Normal Form + m, n = M.shape + if not s2: + s2 = s1 + for c in range(s2, n): + for r in range(s1, m): + if M[r, c]: + return (r, c) + return None + + +
[docs]def smith_normal_form_mod2(M): + """ + Computes the invertible transformation matrices needed to compute the + Smith Normal Form of M modulo 2 + + Parameters + ---------- + M : np.array + a rectangular matrix with data type bool + + Returns + ------- + L, R, S, Linv : np.arrays + LMR = S is the Smith Normal Form of the matrix M. + + Note + ---- + Given a mxn matrix $M$ with + entries in $Z_2$ we start with the equation: $L M R = S$, where + $L = I_m$, and $R=I_n$ are identity matrices and $S = M$. We + repeatedly apply actions to the left and right side of the equation + to transform S into a diagonal matrix. + For each action applied to the left side we apply its inverse + action to the right side of I_m to generate $L^{-1}$. + Finally we verify: + $L M R = S$ and $LLinv = I_m$. + """ + + S = copy.copy(M) + dimL, dimR = M.shape + + # initialize left and right transformations with identity matrices + L = np.eye(dimL, dtype=int) + R = np.eye(dimR, dtype=int) + Linv = np.eye(dimL, dtype=int) + for s in range(min(dimL, dimR)): + # Find index pair (rdx,cdx) with value 1 in submatrix M[s:,s:] + pivot = _get_next_pivot(S, s) + if not pivot: + break + else: + rdx, cdx = pivot + # Swap rows and columns as needed so that 1 is in the s,s position + if rdx > s: + S, L = _sr(s, rdx, S, L) + Linv = swap_columns(rdx, s, Linv)[0] + if cdx > s: + S, R = _sc(s, cdx, S, R) + # add sth row to every row with 1 in sth column & sth column to every column with 1 in sth row + row_indices = [idx for idx in range(s + 1, dimL) if S[idx, s] == 1] + for rdx in row_indices: + S, L = _ar(rdx, s, S, L) + Linv = add_to_column(Linv, s, rdx) + column_indices = [jdx for jdx in range(s + 1, dimR) if S[s, jdx] == 1] + for cdx in column_indices: + S, R = _ac(cdx, s, S, R) + return L, R, S, Linv
+ + +
[docs]def reduced_row_echelon_form_mod2(M): + """ + Computes the invertible transformation matrices needed to compute + the reduced row echelon form of M modulo 2 + + Parameters + ---------- + M : np.array + a rectangular matrix with elements in $Z_2$ + + Returns + ------- + L, S, Linv : np.arrays + LM = S where S is the reduced echelon form of M + and M = LinvS + """ + S = copy.deepcopy(M) + dimL, dimR = M.shape + + # method with numpy + Linv = np.eye(dimL, dtype=int) + L = np.eye(dimL, dtype=int) + + s2 = 0 + s1 = 0 + while s2 <= dimR and s1 <= dimL: + # Find index pair (rdx,cdx) with value 1 in submatrix M[s1:,s2:] + # look for the first 1 in the s2 column + pivot = _get_next_pivot(S, s1, s2) + + if not pivot: + return L, S, Linv + else: + rdx, cdx = pivot + if rdx > s1: + # Swap rows as needed so that 1 leads the row + S, L = _sr(s1, rdx, S, L) + Linv = swap_columns(rdx, s1, Linv)[0] + # add s1th row to every nonzero row + row_indices = [ + idx for idx in range(0, dimL) if idx != s1 and S[idx, cdx] == 1 + ] + for idx in row_indices: + S, L = _ar(idx, s1, S, L) + Linv = add_to_column(Linv, s1, idx) + s1, s2 = s1 + 1, cdx + 1 + + return L, S, Linv
+ + +
[docs]def boundary_group(image_basis): + """ + Returns a csr_matrix with rows corresponding to the elements of the + group generated by image basis over $\mathbb{Z}_2$ + + Parameters + ---------- + image_basis : numpy.ndarray or scipy.sparse.csr_matrix + 2d-array of basis elements + + Returns + ------- + : scipy.sparse.csr_matrix + """ + if len(image_basis) > 10: + msg = """ + This method is inefficient for large image bases. + """ + warnings.warn(msg, stacklevel=2) + if np.sum(image_basis) == 0: + return None + dim = image_basis.shape[0] + itm = csr_matrix(list(it.product([0, 1], repeat=dim))) + return csr_matrix(np.mod(itm * image_basis, 2))
+ + +def _compute_matrices_for_snf(bd): + """ + Helper method for smith normal form decomposition for boundary maps + associated to chain complex + + Parameters + ---------- + bd : dict + dict of k-boundary matrices keyed on dimension of domain + + Returns + ------- + L,R,S,Linv : dict + dict of matrices ranging over krange + + """ + L, R, S, Linv = [dict() for i in range(4)] + + for kdx in bd: + L[kdx], R[kdx], S[kdx], Linv[kdx] = smith_normal_form_mod2(bd[kdx]) + return L, R, S, Linv + + +def _get_krange(max_dim, k=None): + """ + Helper method to compute range of appropriate k dimensions for homology + computations given k and the max dimension of a simplicial complex + """ + if k is None: + krange = [1, max_dim] + elif isinstance(k, int): + if k == 0: + msg = ( + "Only kth simplicial homology groups for k>0 may be computed." + "If you are interested in k=0, compute the number connected components." + ) + print(msg) + return + if k > max_dim: + msg = f"No simplices of dim {k} exist. k adjusted to max dim." + print(msg) + krange = [min([k, max_dim])] * 2 + elif not len(k) == 2: + msg = f"Please enter krange as a positive integer or list of integers: [<min k>,<max k>] inclusive." + print(msg) + return None + elif not k[0] <= k[1]: + msg = f"k must be an integer or a list of two integers [min,max] with min <=max" + print(msg) + return None + else: + krange = k + + if krange[1] > max_dim: + msg = f"No simplices of dim {krange[1]} exist. Range adjusted to max dim." + print(msg) + krange[1] = max_dim + if krange[0] < 1: + msg = ( + "Only kth simplicial homology groups for k>0 may be computed." + "If you are interested in k=0, compute the number of connected components." + ) + print(msg) + krange[0] = 1 + return krange + + +
[docs]def chain_complex(h, k=None): + """ + Compute the k-chains and k-boundary maps required to compute homology + for all values in k + + Parameters + ---------- + h : hnx.Hypergraph + k : int or list of length 2, optional, default=None + k must be an integer greater than 0 or a list of + length 2 indicating min and max dimensions to be + computed. e.g. if k = [1,2] then 0,1,2,3-chains + and boundary maps for k=1,2,3 will be returned, + if None then k = [1,max dimension of edge in h] + + Returns + ------- + C, bd : dict + C is a dictionary of lists + bd is a dictionary of numpy arrays + """ + max_dim = np.max([len(h.edges[e]) for e in h.edges()]) - 1 + krange = _get_krange(max_dim, k) + if not krange: + return + # Compute chain complex + + C = defaultdict(list) + C[krange[0] - 1] = kchainbasis(h, krange[0] - 1) + bd = dict() + for kdx in range(krange[0], krange[1] + 2): + C[kdx] = kchainbasis(h, kdx) + bd[kdx] = bkMatrix(C[kdx - 1], C[kdx]) + return C, bd
+ + +
[docs]def betti(bd, k=None): + """ + Generate the kth-betti numbers for a chain complex with boundary + matrices given by bd + + Parameters + ---------- + bd : dict + dict of k-boundary matrices keyed on dimension of domain + k : int, list or tuple, optional, default=None + list must be min value and max value of k values inclusive + if None, then all betti numbers for dimensions of existing cells will be + computed. + + Returns + ------- + betti : dict + Dictionary of betti numbers keyed by dimension + """ + rank = defaultdict(int) + if k: + max_dim = max(bd.keys()) + krange = _get_krange(max_dim, k) + if not krange: + return + kvals = sorted(set(range(krange[0], krange[1] + 2)).intersection(bd.keys())) + else: + kvals = bd.keys() + for kdx in kvals: + _, S, _ = hnx.reduced_row_echelon_form_mod2(bd[kdx]) + rank[kdx] = np.sum(np.sum(S, axis=1).astype(bool)) + + betti = dict() + for kdx in kvals: + if kdx + 1 in kvals: + betti[kdx] = bd[kdx].shape[1] - rank[kdx] - rank[kdx + 1] + else: + continue + + return betti
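The rank computation that betti relies on can be illustrated with a standalone mod-2 Gaussian elimination (a sketch; reduced_row_echelon_form_mod2 is the library's own routine). For the hollow triangle, the 1-boundary matrix has mod-2 rank 2 and there are no 2-cells, so betti_1 = 3 - 2 - 0 = 1, detecting the single loop:

```python
import numpy as np

def rank_mod2(M):
    """Rank of a 0-1 matrix over GF(2) via Gaussian elimination."""
    M = np.array(M, dtype=int) % 2
    rank = 0
    for c in range(M.shape[1]):
        pivots = [r for r in range(rank, M.shape[0]) if M[r, c]]
        if not pivots:
            continue
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]   # move pivot row up
        for r in range(M.shape[0]):                   # clear the column mod 2
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

# boundary matrix bd[1] of a hollow triangle: columns are edges 01, 02, 12
bd1 = np.array([[1, 1, 0],
                [1, 0, 1],
                [0, 1, 1]])
betti_1 = bd1.shape[1] - rank_mod2(bd1) - 0   # no 2-cells, so rank(bd[2]) = 0
```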
+ + +
[docs]def betti_numbers(h, k=None): + """ + Return the kth betti numbers for the simplicial homology of the ASC + associated to h + + Parameters + ---------- + h : hnx.Hypergraph + Hypergraph to compute the betti numbers from + k : int or list, optional, default=None + list must be min value and max value of k values inclusive + if None, then all betti numbers for dimensions of existing cells will be + computed. + + Returns + ------- + betti : dict + A dictionary of betti numbers keyed by dimension + """ + _, bd = chain_complex(h, k) + return betti(bd)
+ + +
[docs]def homology_basis(bd, k=None, boundary=False, **kwargs): + """ + Compute a basis for the kth-simplicial homology group, $H_k$, defined by a + chain complex $C$ with boundary maps given by bd $= \{k:\partial_k \}$ + + Parameters + ---------- + bd : dict + dict of boundary matrices on k-chains to k-1 chains keyed on k + if krange is a tuple then all boundary matrices k \in [krange[0],..,krange[1]] + inclusive must be in the dictionary + k : int or list of ints, optional, default=None + k must be a positive integer or a list of + 2 integers indicating min and max dimensions to be + computed, if none given all homology groups will be computed from + available boundary matrices in bd + boundary : bool + option to return a basis for the boundary group from each dimension. + Needed to compute the shortest generators in the homology group. + + Returns + ------- + basis : dict + dict of generators as 0-1 tuples keyed by dim + basis for dimension k will be returned only if bd[k] and bd[k+1] have + been provided. 
+ im : dict + dict of boundary group generators keyed by dim + """ + max_dim = max(bd.keys()) + if k: + krange = _get_krange(max_dim, k) + kvals = sorted( + set(bd.keys()).intersection(range(krange[0], krange[1] + 2)) + ) # to get kth dim need k+1 bdry matrix + else: + kvals = bd.keys() + + L, R, S, Linv = _compute_matrices_for_snf( + {k: v for k, v in bd.items() if k in kvals} + ) + + rank = dict() + for kdx in kvals: + rank[kdx] = np.sum(S[kdx]) + + basis = dict() + im = dict() + for kdx in kvals: + if kdx + 1 not in kvals: + continue + rank1 = rank[kdx] + rank2 = rank[kdx + 1] + ker1 = R[kdx][:, rank1:] + im2 = Linv[kdx + 1][:, :rank2] + cokernel2 = Linv[kdx + 1][:, rank2:] + cokproj2 = L[kdx + 1][rank2:, :] + + proj = matmulreduce([cokernel2, cokproj2, ker1]).transpose() + _, proj, _ = reduced_row_echelon_form_mod2(proj) + # proj = np.array(proj) + basis[kdx] = np.array([row for row in proj if np.any(row)]) + if boundary: + im[kdx] = im2.transpose() + if boundary: + return basis, im + else: + return basis
+ + +
[docs]def hypergraph_homology_basis(h, k=None, shortest=False, interpreted=True): + """ + Computes the kth-homology groups mod 2 for the ASC + associated with the hypergraph h for k in krange inclusive + + Parameters + ---------- + h : hnx.Hypergraph + k : int or list of length 2, optional, default = None + k must be an integer greater than 0 or a list of + length 2 indicating min and max dimensions to be + computed + shortest : bool, optional, default=False + option to look for shortest representative for each coset in the + homology group, only good for relatively small examples + interpreted : bool, optional, default = True + if True will return an explicit basis in terms of the k-chains + + Returns + ------- + basis : list + list of generators as k-chains as boolean vectors + interpreted_basis : + lists of kchains in basis + + """ + C, bd = chain_complex(h, k) + if shortest: + basis = defaultdict(list) + tbasis, im = homology_basis(bd, boundary=True) + for kdx in tbasis: + imgrp = boundary_group(im[kdx]) + if imgrp is None: + basis[kdx] = tbasis[kdx] + else: + for b in tbasis[kdx]: + coset = np.array( + np.mod(imgrp + b, 2) + ) # dimensions appear to be wrong. See tests2 cell 5 + idx = np.argmin(np.sum(coset, axis=1)) + basis[kdx].append(coset[idx]) + basis = dict(basis) + + else: + basis = homology_basis(bd) + + if interpreted: + interpreted_basis = dict() + for kdx in basis: + interpreted_basis[kdx] = interpret(C[kdx], basis[kdx], labels=None) + return basis, interpreted_basis + else: + return basis
+ + +
[docs]def interpret(Ck, arr, labels=None): + """ + Returns the k-cells in Ck referenced by the 0-1 vectors in arr + + Parameters + ---------- + Ck : list + a list of k-cells being referenced by arr + arr : np.array + array of 0-1 vectors + labels : dict, optional + dictionary of labels to associate to the nodes in the cells + + Returns + ------- + : list + list of k-cells referenced by the rows of arr + + """ + + def translate(cell, labels=labels): + if not labels: + return cell + else: + temp = list() + for node in cell: + temp.append(labels[node]) + return tuple(temp) + + output = list() + for vec in arr: + if len(Ck) != len(vec): + raise HyperNetXError("elements of arr must have the same length as Ck") + output.append([translate(Ck[idx]) for idx in range(len(vec)) if vec[idx] == 1]) + return output
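Absent labels, the translation interpret performs is essentially indexing: each 0-1 vector selects cells from Ck. A minimal sketch with hypothetical 1-cells:

```python
# hypothetical basis of 1-cells and one generator as a 0-1 vector
Ck = [(0, 1), (0, 2), (1, 2)]
arr = [[1, 0, 1]]

# select the cells flagged by each vector, as interpret does
cells = [[Ck[i] for i, bit in enumerate(vec) if bit] for vec in arr]
```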
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/hypergraph_modularity.html b/_modules/hypernetx/algorithms/hypergraph_modularity.html new file mode 100644 index 00000000..0f605d1b --- /dev/null +++ b/_modules/hypernetx/algorithms/hypergraph_modularity.html @@ -0,0 +1,710 @@ + + + + + + hypernetx.algorithms.hypergraph_modularity — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for hypernetx.algorithms.hypergraph_modularity

+"""
+Hypergraph_Modularity
+---------------------
+Modularity and clustering for hypergraphs using HyperNetX.
+Adapted from F. Théberge's GitHub repository: `Hypergraph Clustering <https://github.com/ftheberge/Hypergraph_Clustering>`_
+See Tutorial 13 in the tutorials folder for library usage.
+
+References
+----------
+.. [1] Kumar T., Vaidyanathan S., Ananthapadmanabhan H., Parthasarathy S. and Ravindran B. "A New Measure of Modularity in Hypergraphs: Theoretical Insights and Implications for Effective Clustering". In: Cherifi H., Gaito S., Mendes J., Moro E., Rocha L. (eds) Complex Networks and Their Applications VIII. COMPLEX NETWORKS 2019. Studies in Computational Intelligence, vol 881. Springer, Cham. https://doi.org/10.1007/978-3-030-36687-2_24
+.. [2] Kamiński B., Prałat P. and Théberge F. "Community Detection Algorithm Using Hypergraph Modularity". In: Benito R.M., Cherifi C., Cherifi H., Moro E., Rocha L.M., Sales-Pardo M. (eds) Complex Networks & Their Applications IX. COMPLEX NETWORKS 2020. Studies in Computational Intelligence, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-65347-7_13
+.. [3] Kamiński B., Poulin V., Prałat P., Szufel P. and Théberge F. "Clustering via hypergraph modularity", PLoS ONE 2019, https://doi.org/10.1371/journal.pone.0224307
+"""
+
+from collections import Counter
+import numpy as np
+import itertools
+from scipy.special import comb
+
+try:
+    import igraph as ig
+except ModuleNotFoundError as e:
+    print(
+        f" {e}. If you need to use {__name__}, please install additional packages by running the following command: pip install .['all']"
+    )
+################################################################################
+
+# we use 2 representations for partitions (0-based part ids):
+# (1) dictionary or (2) list of sets
+
+
+
[docs]def dict2part(D): + """ + Given a dictionary mapping each vertex to its part, return a partition as a list of sets; inverse function to part2dict + + Parameters + ---------- + D : dict + Dictionary keyed by vertices with values equal to the integer + index of the part the vertex belongs to + + Returns + ------- + : list + List of sets; one set for each part in the partition + """ + P = [] + k = list(D.keys()) + v = list(D.values()) + for x in range(max(D.values()) + 1): + P.append(set([k[i] for i in range(len(k)) if v[i] == x])) + return P
+ + +
[docs]def part2dict(A): + """ + Given a partition (list of sets), return a dictionary mapping each vertex to its part; inverse function + to dict2part + + Parameters + ---------- + A : list of sets + a partition of the vertices + + Returns + ------- + : dict + a dictionary with {vertex: partition index} + """ + x = [] + for i in range(len(A)): + x.extend([(a, i) for a in A[i]]) + return {k: v for k, v in x}
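Since dict2part and part2dict are pure-Python inverses, the round trip can be checked directly. A compact sketch reusing the same logic as the two functions above:

```python
def dict2part(D):
    # one set per part index, as in the function above
    return [{v for v, p in D.items() if p == x} for x in range(max(D.values()) + 1)]

def part2dict(A):
    # flatten the list of sets back into a vertex -> part-index mapping
    return {v: i for i, part in enumerate(A) for v in part}

D = {"a": 0, "b": 0, "c": 1}
A = dict2part(D)
```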
+ + +################################################################################ + + +
[docs]def precompute_attributes(H): + """ + Precompute some values on hypergraph H for faster computing of hypergraph modularity. + This needs to be run before calling either modularity() or last_step(). + + Note + ---- + + If H is unweighted, e.weight is set to 1 for each edge e in H. + The weighted degree for each vertex v is stored in v.strength. + The total edge weights for each edge cardinality is stored in H.d_weights. + Binomial coefficients to speed-up modularity computation are stored in H.bin_coef. + Isolated vertices found only in edge(s) of size 1 are dropped. + + Parameters + ---------- + H : Hypergraph + + Returns + ------- + H : Hypergraph + The hypergraph with added attributes + + """ + # 1. compute node strengths (weighted degrees) + for v in H.nodes: + H.nodes[v].strength = 0 + for e in H.edges: + try: + w = H.edges[e].weight + except: + w = 1 + # add unit weight if none to simplify other functions + H.edges[e].weight = 1 + for v in list(H.edges[e]): + H.nodes[v].strength += w + # 2. compute d-weights + ctr = Counter([len(H.edges[e]) for e in H.edges]) + for k in ctr.keys(): + ctr[k] = 0 + for e in H.edges: + ctr[len(H.edges[e])] += H.edges[e].weight + H.d_weights = ctr + H.total_weight = sum(ctr.values()) + # 3. compute binomial coefficients (modularity speed-up) + bin_coef = {} + for n in H.d_weights.keys(): + for k in np.arange(n // 2 + 1, n + 1): + bin_coef[(n, k)] = comb(n, k, exact=True) + H.bin_coef = bin_coef + return H
+ + +################################################################################ + + +
[docs]def linear(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This is the default choice for modularity() and last_step() functions. + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : float + c/d if c>d/2 else 0 + """ + return c / d if c > d / 2 else 0
+ + +# majority + + +
[docs]def majority(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This corresponds to the majority rule [3]_ + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : int + 1 if c>d/2 else 0 + + """ + return 1 if c > d / 2 else 0
+ + +# strict + + +
[docs]def strict(d, c): + """ + Hyperparameter for hypergraph modularity [2]_ for d-edge with c vertices in the majority class. + This corresponds to the strict rule [3]_ + + Parameters + ---------- + d : int + Number of vertices in an edge + c : int + Number of vertices in the majority class + + Returns + ------- + : int + 1 if c==d else 0 + """ + return 1 if c == d else 0
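The three supplied hyperparameters differ only in how they reward a majority class of size c inside a d-edge. Restating the one-liners side by side for d = 4 makes the gradient visible: a tie earns nothing under all three rules, a strict majority earns partial credit under linear, full credit under majority, and nothing under strict until the edge is pure.

```python
def linear(d, c):
    return c / d if c > d / 2 else 0

def majority(d, c):
    return 1 if c > d / 2 else 0

def strict(d, c):
    return 1 if c == d else 0

# reward for a 4-edge as the majority class grows: c = 2 (tie), 3, 4
rewards = {c: (linear(4, c), majority(4, c), strict(4, c)) for c in (2, 3, 4)}
```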
+ + +######################################### + + +def _compute_partition_probas(HG, A): + """ + Compute vol(A_i)/vol(V) for each part A_i in A (list of sets) + + Parameters + ---------- + HG : Hypergraph + A : list of sets + + Returns + ------- + : list + normalized distribution of strengths in partition elements + """ + p = [] + for part in A: + vol = 0 + for v in part: + vol += HG.nodes[v].strength + p.append(vol) + s = sum(p) + return [i / s for i in p] + + +def _degree_tax(HG, Pr, wdc): + """ + Computes the expected fraction of edges falling in + the partition as per [2]_ + + Parameters + ---------- + HG : Hypergraph + + Pr : list + Probability distribution + wdc : func + weight function for edge contribution (ex: strict, majority, linear) + + Returns + ------- + float + + """ + DT = 0 + for d in HG.d_weights.keys(): + tax = 0 + for c in np.arange(d // 2 + 1, d + 1): + for p in Pr: + tax += p**c * (1 - p) ** (d - c) * HG.bin_coef[(d, c)] * wdc(d, c) + tax *= HG.d_weights[d] + DT += tax + DT /= HG.total_weight + return DT + + +def _edge_contribution(HG, A, wdc): + """ + Edge contribution from hypergraph with respect + to partition A. + + Parameters + ---------- + HG : Hypergraph + + A : list of sets + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + + """ + EC = 0 + for e in HG.edges: + d = HG.size(e) + for part in A: + hgs = HG.size(e, part) + if hgs > d / 2: + EC += wdc(d, hgs) * HG.edges[e].weight + break + EC /= HG.total_weight + return EC + + +# HG: HNX hypergraph +# A: partition (list of sets) +# wdc: weight function (ex: strict, majority, linear) + + +
[docs]def modularity(HG, A, wdc=linear): + """ + Computes modularity of hypergraph HG with respect to partition A. + + Parameters + ---------- + HG : Hypergraph + The hypergraph with some precomputed attributes via: precompute_attributes(HG) + A : list of sets + Partition of the vertices in HG + wdc : func, optional + Hyperparameter for hypergraph modularity [2]_ + + Note + ---- + For 'wdc', any function of the format w(d,c) that returns 0 when c <= d/2 and value in [0,1] otherwise can be used. + Default is 'linear'; other supplied choices are 'majority' and 'strict'. + + Returns + ------- + : float + The modularity function for partition A on HG + """ + Pr = _compute_partition_probas(HG, A) + return _edge_contribution(HG, A, wdc) - _degree_tax(HG, Pr, wdc)
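For a hypergraph small enough to enumerate by hand, both terms of the modularity can be computed without HyperNetX. This sketch mirrors _edge_contribution and _degree_tax for a hypothetical three-edge hypergraph with unit weights and the linear hyperparameter (the vertex names and partition are made up for illustration):

```python
from collections import Counter
from math import comb

def linear(d, c):
    return c / d if c > d / 2 else 0

# toy hypergraph: three unit-weight hyperedges, candidate 2-part partition
edges = [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}]
A = [{"a", "b"}, {"c"}]
total_weight = len(edges)

# vertex strengths -> partition volume distribution Pr (as _compute_partition_probas)
strength = Counter(v for e in edges for v in e)
vol = sum(strength.values())
Pr = [sum(strength[v] for v in part) / vol for part in A]

# edge contribution: reward edges with a strict-majority part
EC = sum(linear(len(e), len(e & part))
         for e in edges for part in A
         if len(e & part) > len(e) / 2) / total_weight

# degree tax: expected reward under the random null model
d_weights = Counter(len(e) for e in edges)
DT = sum(w * sum(comb(d, c) * linear(d, c) * p**c * (1 - p)**(d - c)
                 for c in range(d // 2 + 1, d + 1) for p in Pr)
         for d, w in d_weights.items()) / total_weight

Q = EC - DT   # hypergraph modularity of partition A
```

Here the modularity comes out slightly negative, i.e. this partition captures no more majority edges than the null model expects.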
+ + +################################################################################ + + +
[docs]def two_section(HG): + """ + Creates a random walk based [1]_ 2-section igraph Graph with transition weights defined by the + weights of the hyperedges. + + Parameters + ---------- + HG : Hypergraph + + Returns + ------- + : igraph.Graph + The 2-section graph built from HG + """ + s = [] + for e in HG.edges: + E = HG.edges[e] + # random-walk 2-section (preserve nodes' weighted degrees) + if len(E) > 1: + try: + w = HG.edges[e].weight / (len(E) - 1) + except: + w = 1 / (len(E) - 1) + s.extend([(k[0], k[1], w) for k in itertools.combinations(E, 2)]) + G = ig.Graph.TupleList(s, weights=True).simplify(combine_edges="sum") + return G
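The 2-section weighting can be checked by hand: each hyperedge spreads its weight over its vertex pairs as w/(|e|-1), and parallel pairs are summed, which is what igraph's simplify(combine_edges="sum") does. A Counter-based sketch, assuming unit edge weights and hypothetical edges:

```python
import itertools
from collections import Counter

edges = {"e1": {"a", "b", "c"}, "e2": {"a", "b"}}   # hypothetical hyperedges
pair_weight = Counter()
for members in edges.values():
    if len(members) > 1:
        w = 1 / (len(members) - 1)        # unit edge weight spread over pairs
        for u, v in itertools.combinations(sorted(members), 2):
            pair_weight[(u, v)] += w      # parallel pairs accumulate
```

The pair (a, b) collects 0.5 from the triangle edge plus 1 from the pair edge, preserving each vertex's weighted degree in the resulting graph.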
+ + +################################################################################ + + +
[docs]def kumar(HG, delta=0.01): + """ + Compute a partition of the vertices in hypergraph HG as per Kumar's algorithm [1]_ + + Parameters + ---------- + HG : Hypergraph + + delta : float, optional + convergence stopping criterion + + Returns + ------- + : list of sets + A partition of the vertices in HG + + """ + # weights will be modified -- store initial weights + W = { + e: HG.edges[e].weight for e in HG.edges + } # uses edge id for reference instead of int + # build graph + G = two_section(HG) + # apply clustering + CG = G.community_multilevel(weights="weight") + CH = [] + for comm in CG.as_cover(): + CH.append(set([G.vs[x]["name"] for x in comm])) + + # LOOP + diff = 1 + ctr = 0 + while diff > delta: + # re-weight + diff = 0 + for e in HG.edges: + edge = HG.edges[e] + reweight = ( + sum([1 / (1 + HG.size(e, c)) for c in CH]) + * (HG.size(e) + len(CH)) + / HG.number_of_edges() + ) + diff = max(diff, 0.5 * abs(edge.weight - reweight)) + edge.weight = 0.5 * edge.weight + 0.5 * reweight + # re-run louvain + # build graph + G = two_section(HG) + # apply clustering + CG = G.community_multilevel(weights="weight") + CH = [] + for comm in CG.as_cover(): + CH.append(set([G.vs[x]["name"] for x in comm])) + ctr += 1 + if ctr > 50: # this process sometimes gets stuck -- set limit + break + G.vs["part"] = CG.membership + for e in HG.edges: + HG.edges[e].weight = W[e] + return dict2part({v["name"]: v["part"] for v in G.vs})
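The re-weighting step inside kumar's loop is plain arithmetic; for one hypothetical size-3 edge overlapping a 2-part clustering in a 3-edge hypergraph, the update looks like this (all values made up for illustration):

```python
# |e ∩ c| for each cluster c in CH, for an edge e of size 3
overlaps = [2, 1]
edge_size, n_edges = 3, 3

reweight = (sum(1 / (1 + s) for s in overlaps)
            * (edge_size + len(overlaps)) / n_edges)
# the edge weight then moves halfway toward this target:
new_weight = 0.5 * 1.0 + 0.5 * reweight
```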
+ + +################################################################################ + + +def _delta_ec(HG, P, v, a, b, wdc): + """ + Computes change in edge contribution -- + partition P, node v going from P[a] to P[b] + + Parameters + ---------- + HG : Hypergraph + + P : list of sets + + v : int or str + node identifier + a : int + + b : int + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + """ + Pm = P[a] - {v} + Pn = P[b].union({v}) + ec = 0 + + # TODO: Verify the data shape of `memberships` (ie. what are the keys and values) + for e in list(HG.nodes.memberships[v]): + d = HG.size(e) + w = HG.edges[e].weight + ec += w * ( + wdc(d, HG.size(e, Pm)) + + wdc(d, HG.size(e, Pn)) + - wdc(d, HG.size(e, P[a])) + - wdc(d, HG.size(e, P[b])) + ) + return ec / HG.total_weight + + +def _bin_ppmf(d, c, p): + """ + exponential part of the binomial pmf + + Parameters + ---------- + d : int + + c : int + + p : float + + + Returns + ------- + : float + + """ + return p**c * (1 - p) ** (d - c) + + +def _delta_dt(HG, P, v, a, b, wdc): + """ + Compute change in degree tax -- + partition P (list), node v going from P[a] to P[b] + + Parameters + ---------- + HG : Hypergraph + + P : list of sets + + v : int or str + node identifier + a : int + + b : int + + wdc : func + weight function (ex: strict, majority, linear) + + Returns + ------- + : float + + """ + s = HG.nodes[v].strength + vol = sum([HG.nodes[v].strength for v in HG.nodes]) + vola = sum([HG.nodes[v].strength for v in P[a]]) + volb = sum([HG.nodes[v].strength for v in P[b]]) + volm = (vola - s) / vol + voln = (volb + s) / vol + vola /= vol + volb /= vol + DT = 0 + + for d in HG.d_weights.keys(): + x = 0 + for c in np.arange(int(np.floor(d / 2)) + 1, d + 1): + x += ( + HG.bin_coef[(d, c)] + * wdc(d, c) + * ( + _bin_ppmf(d, c, voln) + + _bin_ppmf(d, c, volm) + - _bin_ppmf(d, c, vola) + - _bin_ppmf(d, c, volb) + ) + ) + DT += x * HG.d_weights[d] + return DT / HG.total_weight + + +
[docs]def last_step(HG, L, wdc=linear, delta=0.01): + """ + Given some initial partition L, compute a new partition of the vertices in HG as per Last-Step algorithm [2]_ + + Note + ---- + This is a very simple algorithm that tries moving nodes between communities to improve hypergraph modularity. + It requires an initial non-trivial partition which can be obtained for example via graph clustering on the 2-section of HG, + or via Kumar's algorithm. + + Parameters + ---------- + HG : Hypergraph + + L : list of sets + some initial partition of the vertices in HG + + wdc : func, optional + Hyperparameter for hypergraph modularity [2]_ + + delta : float, optional + convergence stopping criterion + + Returns + ------- + : list of sets + A new partition for the vertices in HG + """ + A = L[:] # we will modify this, copy + D = part2dict(A) + qH = 0 + while True: + for v in list(np.random.permutation(list(HG.nodes))): + c = D[v] + s = list(set([c] + [D[i] for i in HG.neighbors(v)])) + M = [] + if len(s) > 0: + for i in s: + if c == i: + M.append(0) + else: + M.append( + _delta_ec(HG, A, v, c, i, wdc) + - _delta_dt(HG, A, v, c, i, wdc) + ) + i = s[np.argmax(M)] + if c != i: + A[c] = A[c] - {v} + A[i] = A[i].union({v}) + D[v] = i + Pr = _compute_partition_probas(HG, A) + q2 = _edge_contribution(HG, A, wdc) - _degree_tax(HG, Pr, wdc) + if (q2 - qH) < delta: + break + qH = q2 + return [a for a in A if len(a) > 0]
+ + +################################################################################ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/laplacians_clustering.html b/_modules/hypernetx/algorithms/laplacians_clustering.html new file mode 100644 index 00000000..d32964d7 --- /dev/null +++ b/_modules/hypernetx/algorithms/laplacians_clustering.html @@ -0,0 +1,348 @@ + + + + + + hypernetx.algorithms.laplacians_clustering — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
+
+
+
+
+ +

Source code for hypernetx.algorithms.laplacians_clustering

+# Copyright © 2021 Battelle Memorial Institute
+# All rights reserved.
+
+"""
+
+Hypergraph Probability Transition Matrices, Laplacians, and Clustering
+======================================================================
+We construct hypergraph random walks utilizing optional "edge-dependent vertex weights", which are
+weights associated with each vertex-hyperedge pair (i.e. cell weights on the incidence matrix).
+The probability transition matrix of this random walk is used to construct a normalized Laplacian
+matrix for the hypergraph. That normalized Laplacian then serves as the input for a spectral clustering
+algorithm. This spectral clustering algorithm, as well as the normalized Laplacian and other details of
+this methodology are described in
+
+K. Hayashi, S. Aksoy, C. Park, H. Park, "Hypergraph random walks, Laplacians, and clustering",
+Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020.
+https://doi.org/10.1145/3340531.3412034
+
+Please direct any inquiries concerning the clustering module to Sinan Aksoy, sinan.aksoy@pnnl.gov
+
+"""
+
+import numpy as np
+import sys
+from scipy.sparse import csr_matrix, diags, identity
+from scipy.sparse.linalg import eigs
+from sklearn.cluster import KMeans
+from sklearn import preprocessing
+from hypernetx import HyperNetXError
+
+try:
+    import nwhy
+
+    nwhy_available = True
+except:
+    nwhy_available = False
+
+sys.setrecursionlimit(10000)
+
+
+
[docs]def prob_trans(H, weights=False, index=True, check_connected=True): + """ + The probability transition matrix of a random walk on the vertices of a hypergraph. + At each step in the walk, the next vertex is chosen by: + + 1. Selecting a hyperedge e containing the vertex with probability proportional to w(e) + 2. Selecting a vertex v within e with probability proportional to \gamma(v,e) + + If weights are not specified, then all weights are uniform and the walk is equivalent + to a simple random walk. + If weights are specified, the hyperedge weights w(e) are determined from the weights + \gamma(v,e). + + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + weights : bool, optional, default : False + Use the cell_weights associated with the hypergraph + If False, uniform weights are utilized. + index : bool, optional + Whether to return matrix index to vertex label mapping + + Returns + ------- + P : scipy.sparse.csr.csr_matrix + Probability transition matrix of the random walk on the hypergraph + index : list + List of node ids corresponding to the rows of P + """ + # hypergraph must be connected + if check_connected: + if not H.is_connected(): + raise HyperNetXError("hypergraph must be connected") + + R = H.incidence_matrix(index=index, weights=weights) + if index: + R, rdx, _ = R + + # transpose incidence matrix for notational convenience + R = R.transpose() + + # generates hyperedge weight matrix, has same nonzero pattern as incidence matrix, + # with values determined by the edge-dependent vertex weight standard deviation + edgeScore = { + i: np.std(R.getrow(i).data) + 1 for i in range(R.shape[0]) + } # hyperedge weights + vals = [edgeScore[i] for i in R.nonzero()[0]] + W = csr_matrix( + (vals, (R.nonzero()[1], R.nonzero()[0])), shape=(R.shape[1], R.shape[0]) + ) + + # generate diagonal degree matrices used to normalize probability transition matrix + [rowSums] = 
R.sum(axis=1).flatten().tolist() + D_E = diags([1 / x for x in rowSums]) + + [rowSums] = W.sum(axis=1).flatten().tolist() + D_V = diags([1 / x for x in rowSums]) + + # probability transition matrix P + P = D_V * W * D_E * R + + if index: + return P, rdx + return P
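In dense form the pipeline above is P = D_V W D_E R: D_E normalizes within each edge and D_V normalizes each vertex's strength, so every row of P sums to 1. A numpy sketch with a hypothetical 3-vertex, 2-edge incidence matrix and uniform weights (so every edge score reduces to 1):

```python
import numpy as np

# toy incidence, already transposed to edges x vertices
R = np.array([[1, 1, 0],    # edge 0 = {v0, v1}
              [0, 1, 1]],   # edge 1 = {v1, v2}
             dtype=float)
W = R.T.copy()                        # vertex-edge weight matrix, all ones here
D_E = np.diag(1 / R.sum(axis=1))      # normalize within each edge
D_V = np.diag(1 / W.sum(axis=1))      # normalize each vertex's strength
P = D_V @ W @ D_E @ R                 # random-walk transition matrix
```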
+ + +
[docs]def get_pi(P): + """ + Returns the eigenvector corresponding to the largest eigenvalue (in magnitude), + normalized so its entries sum to 1. Intended for the probability transition matrix + of a random walk on a (connected) hypergraph, in which case the output can + be interpreted as the stationary distribution. + + Parameters + ---------- + P : csr matrix + Probability transition matrix + + Returns + ------- + pi : numpy.ndarray + Stationary distribution of random walk defined by P + """ + rho, pi = eigs( + np.transpose(P), k=1, return_eigenvectors=True + ) # dominant eigenvector + pi = np.real(pi / np.sum(pi)).flatten() # normalize as prob distribution + return pi
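Away from the sparse setting, the same stationary distribution falls out of numpy's dense eigensolver applied to P.T (scipy's eigs is only needed at scale). A sketch on a hypothetical 3-state reversible chain:

```python
import numpy as np

# row-stochastic transition matrix of a small birth-death chain
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])   # eigenvector for eigenvalue 1
pi = pi / pi.sum()                                # normalize to a distribution
```

Normalizing by the (signed) sum also fixes the arbitrary sign of the eigenvector, exactly as in get_pi.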
+ + +
[docs]def norm_lap(H, weights=False, index=True): + """ + Normalized Laplacian matrix of the hypergraph. Symmetrizes the probability transition + matrix of a hypergraph random walk using the stationary distribution, using the digraph + Laplacian defined in: + + Chung, Fan. "Laplacians and the Cheeger inequality for directed graphs." + Annals of Combinatorics 9.1 (2005): 1-19. + + and studied in the context of hypergraphs in: + + Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. + Hypergraph random walks, laplacians, and clustering. + In Proceedings of CIKM 2020, (2020): 495-504. + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + weights : bool, optional, default : False + Uses cell_weights, if False, uniform weights are utilized. + index : bool, optional + Whether to return matrix-index to vertex-label mapping + + Returns + ------- + L : scipy.sparse.csr.csr_matrix + Normalized Laplacian matrix of the hypergraph + idx : list + List of node ids corresponding to the rows of L + """ + P = prob_trans(H, weights=weights, index=index) + if index: + P, idx = P + + pi = get_pi(P) + gamma = diags(np.power(pi, 1 / 2)) * P * diags(np.power(pi, -1 / 2)) + L = identity(gamma.shape[0]) - (1 / 2) * (gamma + gamma.transpose()) + + if index: + return L, idx + return L
+ + +
[docs]def spec_clus(H, k, existing_lap=None, weights=False): + """ + Hypergraph spectral clustering of the vertex set into k disjoint clusters + using the normalized hypergraph Laplacian. Equivalent to the "RDC-Spec" + Algorithm 1 in: + + Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. + Hypergraph random walks, laplacians, and clustering. + In Proceedings of CIKM 2020, (2020): 495-504. + + + Parameters + ---------- + H : hnx.Hypergraph + The hypergraph must be connected, meaning there is a path linking any two + vertices + k : int + Number of clusters + existing_lap : csr matrix, optional + An existing Laplacian to use; otherwise, the normalized hypergraph Laplacian + will be utilized + weights : bool, optional + Use the cell_weights of the hypergraph. If False uniform weights are used. + + Returns + ------- + clusters : dict + Vertex cluster dictionary, keyed by integers 0,...,k-1, with lists of + vertices as values. + """ + if existing_lap is None: + if weights is None: + L, index = norm_lap(H) + else: + L, index = norm_lap(H, weights=weights) + else: + L = existing_lap + + # compute top eigenvectors + e, v = eigs(identity(L.shape[0]) - L, k=k, which="LM", return_eigenvectors=True) + v = np.real(v) # ignore zero complex parts + v = preprocessing.normalize(v, norm="l2", axis=1) # normalize + U = np.array(v) + km = KMeans(init="k-means++", n_clusters=k, random_state=0) # k-means + km.fit(U) + d = km.labels_ + + # organize cluster assignments in dictionary of form cluster index: vertex ids + clusters = {i: [] for i in range(k)} + for i in range(len(index)): + clusters[d[i]].append(index[i]) + + return clusters
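The spectral step can be seen in miniature with numpy alone: for a graph whose normalized Laplacian has two zero eigenvalues, rows of the 2-dimensional eigenvector embedding coincide exactly for vertices in the same component, so any k-means run separates them cleanly. A sketch on a plain graph (sklearn's KMeans is dropped for brevity; this is not the hypergraph Laplacian, just the analogous graph case):

```python
import numpy as np

# adjacency of a graph with two components: edge (0,1) and edge (2,3)
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = A[2, 3] = A[3, 2] = 1.0
deg = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(deg, deg))   # symmetric normalized Laplacian

vals, vecs = np.linalg.eigh(L)    # eigenvalues ascending
U = vecs[:, :2]                   # embedding from the two smallest eigenvalues
```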
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/algorithms/s_centrality_measures.html b/_modules/hypernetx/algorithms/s_centrality_measures.html new file mode 100644 index 00000000..3bea0eae --- /dev/null +++ b/_modules/hypernetx/algorithms/s_centrality_measures.html @@ -0,0 +1,455 @@ + + + + + + hypernetx.algorithms.s_centrality_measures — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
+
+
+
+
+ +

Source code for hypernetx.algorithms.s_centrality_measures

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+"""
+
+S-Centrality Measures
+=====================
+We generalize graph metrics to s-metrics for a hypergraph by using its s-connected
+components. This is accomplished by computing the s edge-adjacency matrix and
+constructing the corresponding graph of the matrix. We then use existing graph metrics
+on this representation of the hypergraph. In essence we construct an *s*-line graph
+corresponding to the hypergraph on which to apply our methods.
+
+S-Metrics for hypergraphs are discussed in depth in:
+*Aksoy, S.G., Joslyn, C., Ortiz Marrero, C. et al. Hypernetwork science via high-order hypergraph walks.
+EPJ Data Sci. 9, 16 (2020). https://doi.org/10.1140/epjds/s13688-020-00231-0*
+
+"""
+
+import networkx as nx
+import warnings
+import sys
+from functools import partial
+
+try:
+    import nwhy
+
+    nwhy_available = True
+except:
+    nwhy_available = False
+
+sys.setrecursionlimit(10000)
+
+
+def _s_centrality(func, H, s=1, edges=True, f=None, return_singletons=True, **kwargs):
+    """
+    Wrapper for computing s-centrality either in NetworkX or in NWHy
+
+    Parameters
+    ----------
+    func : function
+        Function or partial function from NetworkX or NWHy
+    H : hnx.Hypergraph
+
+    s : int, optional
+        s-width for computation
+    edges : bool, optional
+        If True, an edge linegraph will be used, otherwise a node linegraph will be used
+    f : str, optional
+        Identifier of node or edge of interest for computing centrality
+    return_singletons : bool, optional
+        If True will return 0 value for each singleton in the s-linegraph
+    **kwargs
+        Centrality metric specific keyword arguments to be passed to func
+
+    Returns
+    -------
+    dict
+        dictionary of centrality scores keyed by names
+    """
+    comps = H.s_component_subgraphs(
+        s=s, edges=edges, return_singletons=return_singletons
+    )
+    if f is not None:
+        for cps in comps:
+            if (edges and f in cps.edges) or (not edges and f in cps.nodes):
+                comps = [cps]
+                break
+        else:
+            return {f: 0}
+
+    stats = dict()
+    for h in comps:
+        if edges:
+            vertices = h.edges
+        else:
+            vertices = h.nodes
+
+        if h.shape[edges * 1] == 1:
+            # update rather than overwrite, so scores accumulated from
+            # earlier components are preserved
+            stats.update({v: 0 for v in vertices})
+        else:
+            g = h.get_linegraph(s=s, edges=edges)
+            stats.update(func(g, **kwargs))
+        if f:
+            return {f: stats[f]}
+
+    return stats
+
+
+
+def s_betweenness_centrality(
+    H, s=1, edges=True, normalized=True, return_singletons=True
+):
+    r"""
+    A centrality measure for an s-edge(node) subgraph of H based on shortest paths.
+    Equals the betweenness centrality of vertices in the edge(node) s-linegraph.
+
+    In a graph (2-uniform hypergraph) the betweenness centrality of a vertex $v$
+    is the ratio of the number of non-trivial shortest paths between any pair of
+    vertices in the graph that pass through $v$ to the total number of
+    non-trivial shortest paths in the graph.
+
+    The same measure applied to the s-linegraph gives the centrality of an
+    edge (node) with respect to all shortest s-edge (s-node) paths. Let
+    $V$ = the set of vertices in the linegraph,
+    $\sigma(s,t)$ = the number of shortest paths between vertices $s$ and $t$, and
+    $\sigma(s,t|v)$ = the number of those paths that pass through vertex $v$. Then
+
+    .. math::
+
+        c_B(v) = \sum_{s \neq t \in V} \frac{\sigma(s, t|v)}{\sigma(s,t)}
+
+    Parameters
+    ----------
+    H : hnx.Hypergraph
+    s : int
+        s connectedness requirement
+    edges : bool, optional
+        determines if edge or node linegraph
+    normalized : bool, default=True
+        If True, the betweenness values are normalized by `2/((n-1)(n-2))`,
+        where n is the number of edges (nodes) in H
+    return_singletons : bool, optional
+        if False will ignore singleton components of linegraph
+
+    Returns
+    -------
+    dict
+        A dictionary of s-betweenness centrality values keyed by edge (node).
+
+    """
+    func = partial(nx.betweenness_centrality, normalized=False)
+    result = _s_centrality(
+        func,
+        H,
+        s=s,
+        edges=edges,
+        return_singletons=return_singletons,
+    )
+
+    if normalized and H.shape[edges * 1] > 2:
+        n = H.shape[edges * 1]
+        return {k: v * 2 / ((n - 1) * (n - 2)) for k, v in result.items()}
+    else:
+        return result
+ + +
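
The formula for $c_B(v)$ can be checked by brute force on a tiny graph. The sketch below is illustrative only (it is not the NetworkX implementation used by this module); it enumerates all shortest paths between each unordered pair with a breadth-first search:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(adj, src, dst):
    """Collect every shortest path from src to dst in an unweighted graph
    (adjacency dict) via breadth-first path enumeration."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # all remaining queued paths are at least this long
        if path[-1] == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

def betweenness(adj):
    """c_B(v) = sum over unordered pairs s != t of sigma(s,t|v)/sigma(s,t)."""
    scores = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        paths = all_shortest_paths(adj, s, t)
        if not paths:
            continue
        for v in adj:
            if v not in (s, t):
                scores[v] += sum(1 for p in paths if v in p) / len(paths)
    return scores

# Path graph a - b - c: every shortest a-c path passes through b
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
scores = betweenness(adj)
```

For this 3-vertex path, n = 3 gives a normalization factor of 2/((3-1)(3-2)) = 1, so the normalized and raw values coincide.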
+def s_closeness_centrality(H, s=1, edges=True, return_singletons=True, source=None):
+    r"""
+    In a connected component, the number of edges (nodes) in the component minus 1,
+    divided by the sum of the distances between an edge (node) and all other
+    edges (nodes) in the component.
+
+    $V$ = the set of vertices in the linegraph.
+    $n = |V|$
+    $d$ = shortest path distance
+
+    .. math::
+
+        C(u) = \frac{n - 1}{\sum_{v \neq u \in V} d(v, u)}
+
+
+    Parameters
+    ----------
+    H : hnx.Hypergraph
+
+    s : int, optional
+
+    edges : bool, optional
+        Indicates if method should compute edge linegraph (default) or node linegraph.
+    return_singletons : bool, optional
+        Indicates if method should return values for singleton components.
+    source : str, optional
+        Identifier of node or edge of interest for computing centrality
+
+    Returns
+    -------
+    dict or float
+        returns the s-closeness centrality value of the edges (nodes).
+        If source=None, a dictionary of values for each s-edge in H is returned.
+        If source is given, a single value is returned.
+    """
+    func = partial(nx.closeness_centrality)
+    return _s_centrality(
+        func,
+        H,
+        s=s,
+        edges=edges,
+        return_singletons=return_singletons,
+        f=source,
+    )
+ + +
+def s_harmonic_closeness_centrality(H, s=1, edge=None):
+    msg = """
+    s_harmonic_closeness_centrality is being replaced with s_harmonic_centrality
+    and will not be available in future releases.
+    """
+    warnings.warn(msg)
+    return s_harmonic_centrality(H, s=s, edges=True, normalized=True, source=edge)
+ + +
+def s_harmonic_centrality(
+    H,
+    s=1,
+    edges=True,
+    source=None,
+    normalized=False,
+    return_singletons=True,
+):
+    r"""
+    A centrality measure for an s-edge subgraph of H. A normalized value equal to 1
+    means the s-edge intersects every other s-edge in H; all normalized values range
+    between 0 and 1. Edges of size less than s return 0. If H contains only one
+    s-edge, 0 is returned.
+
+    The denormalized value is the sum of the reciprocals of all distances from $u$
+    to every other vertex.
+    $V$ = the set of vertices in the linegraph.
+    $d$ = shortest path distance
+
+    .. math::
+
+        C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)}
+
+    Normalized this becomes:
+    $$C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)}\cdot\frac{2}{(n-1)(n-2)}$$
+    where $n$ is the number of vertices.
+
+    Parameters
+    ----------
+    H : hnx.Hypergraph
+
+    s : int, optional
+
+    edges : bool, optional
+        Indicates if method should compute edge linegraph (default) or node linegraph.
+    normalized : bool, optional
+        If True, rescale values by `2/((n-1)(n-2))`.
+    return_singletons : bool, optional
+        Indicates if method should return values for singleton components.
+    source : str, optional
+        Identifier of node or edge of interest for computing centrality
+
+    Returns
+    -------
+    dict or float
+        returns the s-harmonic centrality value of the edges (nodes).
+        If source=None, a dictionary of values for each s-edge in H is returned.
+        If source is given, a single value is returned.
+
+    """
+    g = H.get_linegraph(s=s, edges=edges)
+    result = nx.harmonic_centrality(g)
+
+    if normalized and H.shape[edges * 1] > 2:
+        n = H.shape[edges * 1]
+        factor = 2 / ((n - 1) * (n - 2))
+    else:
+        factor = 1
+
+    if source:
+        return result[source] * factor
+    else:
+        return {k: v * factor for k, v in result.items()}
+
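
The harmonic sum above reduces to one BFS per vertex on an unweighted graph. A minimal pure-Python sketch (illustrative names, not the NetworkX call used by this module):

```python
from collections import deque

def bfs_distances(adj, u):
    """Unweighted shortest-path distances from u via breadth-first search."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def harmonic_centrality(adj, u):
    """C(u) = sum over v != u of 1 / d(v, u), per the formula above."""
    return sum(1 / d for v, d in bfs_distances(adj, u).items() if v != u)

# Path graph a - b - c: C(b) = 1/1 + 1/1, C(a) = 1/1 + 1/2
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```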
+def s_eccentricity(H, s=1, edges=True, source=None, return_singletons=True):
+    r"""
+    The length of the longest shortest path from a vertex $u$ to every other vertex in
+    the s-linegraph.
+
+    $V$ = set of vertices in the s-linegraph
+    $d$ = shortest path distance
+
+    .. math::
+
+        \text{s-ecc}(u) = \text{max}\{d(u,v): v \in V\}
+
+    Parameters
+    ----------
+    H : hnx.Hypergraph
+
+    s : int, optional
+
+    edges : bool, optional
+        Indicates if method should compute edge linegraph (default) or node linegraph.
+    return_singletons : bool, optional
+        Indicates if method should return values for singleton components.
+    source : str, optional
+        Identifier of node or edge of interest for computing centrality
+
+    Returns
+    -------
+    dict or float
+        returns the s-eccentricity value of the edges (nodes).
+        If source=None, a dictionary of values for each s-edge in H is returned.
+        If source is given, a single value is returned.
+        If the s-linegraph is disconnected, np.inf is returned.
+
+    """
+    g = H.get_linegraph(s=s, edges=edges)
+    result = nx.eccentricity(g)
+    if source:
+        return result[source]
+    else:
+        return result
+
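
Eccentricity is simply the maximum BFS distance from a vertex. A self-contained sketch (illustrative only; the module delegates to NetworkX):

```python
from collections import deque

def eccentricity(adj, u):
    """max { d(u, v) : v in V } for a connected unweighted graph, via BFS."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return max(dist.values())

# Path graph a - b - c - d: endpoints have the largest eccentricity
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
```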
\ No newline at end of file
diff --git a/_modules/hypernetx/classes/entity.html b/_modules/hypernetx/classes/entity.html
new file mode 100644
index 00000000..8593b59e
--- /dev/null
+++ b/_modules/hypernetx/classes/entity.html
@@ -0,0 +1,1739 @@
+hypernetx.classes.entity — HyperNetX 2.0.4 documentation

Source code for hypernetx.classes.entity

+from __future__ import annotations
+
+import warnings
+from ast import literal_eval
+from collections import OrderedDict, defaultdict
+from collections.abc import Hashable, Mapping, Sequence, Iterable
+from typing import Union, TypeVar, Optional, Any
+
+import numpy as np
+import pandas as pd
+from scipy.sparse import csr_matrix
+
+from hypernetx.classes.helpers import (
+    AttrList,
+    assign_weights,
+    remove_row_duplicates,
+    dict_depth,
+)
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class Entity: + """Base class for handling N-dimensional data when building network-like models, + i.e., :class:`Hypergraph` + + Parameters + ---------- + entity : pandas.DataFrame, dict of lists or sets, list of lists or sets, optional + If a ``DataFrame`` with N columns, + represents N-dimensional entity data (data table). + Otherwise, represents 2-dimensional entity data (system of sets). + TODO: Test for compatibility with list of Entities and update docs + data : numpy.ndarray, optional + 2D M x N ``ndarray`` of ``ints`` (data table); + sparse representation of an N-dimensional incidence tensor with M nonzero cells. + Ignored if `entity` is provided. + static : bool, default=True + If ``True``, entity data may not be altered, + and the :attr:`state_dict <_state_dict>` will never be cleared. + Otherwise, rows may be added to and removed from the data table, + and updates will clear the :attr:`state_dict <_state_dict>`. + labels : collections.OrderedDict of lists, optional + User-specified labels in corresponding order to ``ints`` in `data`. + Ignored if `entity` is provided or `data` is not provided. + uid : hashable, optional + A unique identifier for the object + weights : str or sequence of float, optional + User-specified cell weights corresponding to entity data. + If sequence of ``floats`` and `entity` or `data` defines a data table, + length must equal the number of rows. + If sequence of ``floats`` and `entity` defines a system of sets, + length must equal the total sum of the sizes of all sets. + If ``str`` and `entity` is a ``DataFrame``, + must be the name of a column in `entity`. + Otherwise, weight for all cells is assumed to be 1. + aggregateby : {'sum', 'last', count', 'mean','median', max', 'min', 'first', None} + Name of function to use for aggregating cell weights of duplicate rows when + `entity` or `data` defines a data table, default is "sum". + If None, duplicate rows will be dropped without aggregating cell weights. 
+ Effectively ignored if `entity` defines a system of sets. + properties : pandas.DataFrame or doubly-nested dict, optional + User-specified properties to be assigned to individual items in the data, i.e., + cell entries in a data table; sets or set elements in a system of sets. + See Notes for detailed explanation. + If ``DataFrame``, each row gives + ``[optional item level, item label, optional named properties, + {property name: property value}]`` + (order of columns does not matter; see note for an example). + If doubly-nested dict, + ``{item level: {item label: {property name: property value}}}``. + misc_props_col, level_col, id_col : str, default="properties", "level, "id" + Column names for miscellaneous properties, level index, and item name in + :attr:`properties`; see Notes for explanation. + + Notes + ----- + A property is a named attribute assigned to a single item in the data. + + You can pass a **table of properties** to `properties` as a ``DataFrame``: + + +------------+---------+----------------+-------+------------------+ + | Level | ID | [explicit | [...] | misc. properties | + | (optional) | | property type] | | | + +============+=========+================+=======+==================+ + | 0 | level 0 | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + | 1 | level 1 | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + | ... | ... | ... | ... | ... | + +------------+---------+----------------+-------+------------------+ + | N | level N | property value | ... | {property name: | + | | item | | | property value} | + +------------+---------+----------------+-------+------------------+ + + The Level column is optional. 
If not provided, properties will be assigned by ID + (i.e., if an ID appears at multiple levels, the same properties will be assigned to + all occurrences). + + The names of the Level (if provided) and ID columns must be specified by `level_col` + and `id_col`. `misc_props_col` can be used to specify the name of the column to be used + for miscellaneous properties; if no column by that name is found, + a new column will be created and populated with empty ``dicts``. + All other columns will be considered explicit property types. + The order of the columns does not matter. + + This method assumes that there are no rows with the same (Level, ID); + if duplicates are found, all but the first occurrence will be dropped. + + """ + + def __init__( + self, + entity: Optional[ + pd.DataFrame | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] + ] = None, + data_cols: Sequence[T] = [0, 1], + data: Optional[np.ndarray] = None, + static: bool = False, + labels: Optional[OrderedDict[T, Sequence[T]]] = None, + uid: Optional[Hashable] = None, + weight_col: Optional[str | int] = "cell_weights", + weights: Optional[Sequence[float] | float | int | str] = 1, + aggregateby: Optional[str | dict] = "sum", + properties: Optional[pd.DataFrame | dict[int, dict[T, dict[Any, Any]]]] = None, + misc_props_col: str = "properties", + level_col: str = "level", + id_col: str = "id", + ): + # set unique identifier + self._uid = uid or None + + # if static, the original data cannot be altered + # the state dict stores all computed values that may need to be updated + # if the data is altered - the dict will be cleared when data is added + # or removed + self._static = static + self._state_dict = {} + + # entity data is stored in a DataFrame for basic access without the + # need for any label encoding lookups + if isinstance(entity, pd.DataFrame): + self._dataframe = entity.copy() + + # if the entity data is passed as a dict of lists or a list of lists, + # we convert it to a 2-column dataframe by 
exploding each list to cover + # one row per element for a dict of lists, the first level/column will + # be filled in with dict keys for a list of N lists, 0,1,...,N will be + # used to fill the first level/column + elif isinstance(entity, (dict, list)): + # convert dict of lists to 2-column dataframe + entity = pd.Series(entity).explode() + self._dataframe = pd.DataFrame( + {data_cols[0]: entity.index.to_list(), data_cols[1]: entity.values} + ) + + # if a 2d numpy ndarray is passed, store it as both a DataFrame and an + # ndarray in the state dict + elif isinstance(data, np.ndarray) and data.ndim == 2: + self._state_dict["data"] = data + self._dataframe = pd.DataFrame(data) + # if a dict of labels was passed, use keys as column names in the + # DataFrame, translate the dataframe, and store the dict of labels + # in the state dict + if isinstance(labels, dict) and len(labels) == len(self._dataframe.columns): + self._dataframe.columns = labels.keys() + self._state_dict["labels"] = labels + + for col in self._dataframe: + self._dataframe[col] = pd.Categorical.from_codes( + self._dataframe[col], categories=labels[col] + ) + + # create an empty Entity + else: + self._dataframe = pd.DataFrame() + + # assign a new or existing column of the dataframe to hold cell weights + self._dataframe, self._cell_weight_col = assign_weights( + self._dataframe, weights=weights, weight_col=weight_col + ) + # import ipdb; ipdb.set_trace() + # store a list of columns that hold entity data (not properties or + # weights) + # self._data_cols = list(self._dataframe.columns.drop(self._cell_weight_col)) + self._data_cols = [] + for col in data_cols: + # TODO: default arguments fail for empty Entity; data_cols has two elements but _dataframe has only one element + if isinstance(col, int): + self._data_cols.append(self._dataframe.columns[col]) + else: + self._data_cols.append(col) + + # each entity data column represents one dimension of the data + # (data updates can only add or remove rows, 
so this isn't stored in + # state dict) + self._dimsize = len(self._data_cols) + + # remove duplicate rows and aggregate cell weights as needed + # import ipdb; ipdb.set_trace() + self._dataframe, _ = remove_row_duplicates( + self._dataframe, + self._data_cols, + weight_col=self._cell_weight_col, + aggregateby=aggregateby, + ) + + # set the dtype of entity data columns to categorical (simplifies + # encoding, etc.) + ### This is automatically done in remove_row_duplicates + # self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + # "category" + # ) + + # create properties + item_levels = [ + (level, item) + for level, col in enumerate(self._data_cols) + for item in self.dataframe[col].cat.categories + ] + index = pd.MultiIndex.from_tuples(item_levels, names=[level_col, id_col]) + data = [(i, 1, {}) for i in range(len(index))] + self._properties = pd.DataFrame( + data=data, index=index, columns=["uid", "weight", misc_props_col] + ).sort_index() + self._misc_props_col = misc_props_col + if properties is not None: + self.assign_properties(properties) + + @property + def data(self): + """Sparse representation of the data table as an incidence tensor + + This can also be thought of as an encoding of `dataframe`, where items in each column of + the data table are translated to their int position in the `self.labels[column]` list + Returns + ------- + numpy.ndarray + 2D array of ints representing rows of the underlying data table as indices in an incidence tensor + + See Also + -------- + labels, dataframe + + """ + # generate if not already stored in state dict + if "data" not in self._state_dict: + if self.empty: + self._state_dict["data"] = np.zeros((0, 0), dtype=int) + else: + # assumes dtype of data cols is already converted to categorical + # and state dict has been properly cleared after updates + self._state_dict["data"] = ( + self._dataframe[self._data_cols] + .apply(lambda x: x.cat.codes) + .to_numpy() + ) + + return 
self._state_dict["data"] + + @property + def labels(self): + """Labels of all items in each column of the underlying data table + + Returns + ------- + dict of lists + dict of {column name: [item labels]} + The order of [item labels] corresponds to the int encoding of each item in `self.data`. + + See Also + -------- + data, dataframe + """ + # generate if not already stored in state dict + if "labels" not in self._state_dict: + # assumes dtype of data cols is already converted to categorical + # and state dict has been properly cleared after updates + self._state_dict["labels"] = { + col: self._dataframe[col].cat.categories.to_list() + for col in self._data_cols + } + + return self._state_dict["labels"] + + @property + def cell_weights(self): + """Cell weights corresponding to each row of the underlying data table + + Returns + ------- + dict of {tuple: int or float} + Keyed by row of data table (as a tuple) + """ + # generate if not already stored in state dict + if "cell_weights" not in self._state_dict: + if self.empty: + self._state_dict["cell_weights"] = {} + else: + self._state_dict["cell_weights"] = self._dataframe.set_index( + self._data_cols + )[self._cell_weight_col].to_dict() + + return self._state_dict["cell_weights"] + + @property + def dimensions(self): + """Dimensions of data i.e., the number of distinct items in each level (column) of the underlying data table + + Returns + ------- + tuple of ints + Length and order corresponds to columns of `self.dataframe` (excluding cell weight column) + """ + # generate if not already stored in state dict + if "dimensions" not in self._state_dict: + if self.empty: + self._state_dict["dimensions"] = tuple() + else: + self._state_dict["dimensions"] = tuple( + self._dataframe[self._data_cols].nunique() + ) + + return self._state_dict["dimensions"] + + @property + def dimsize(self): + """Number of levels (columns) in the underlying data table + + Returns + ------- + int + Equal to length of `self.dimensions` + """ 
+ return self._dimsize + + @property + def properties(self) -> pd.DataFrame: + # Dev Note: Not sure what this contains, when running tests it contained an empty pandas series + """Properties assigned to items in the underlying data table + + Returns + ------- + pandas.DataFrame + """ + + return self._properties + + @property + def uid(self): + # Dev Note: This also returned nothing in my harry potter dataset, not sure if it was supposed to contain anything + """User-defined unique identifier for the `Entity` + + Returns + ------- + hashable + """ + return self._uid + + @property + def uidset(self): + """Labels of all items in level 0 (first column) of the underlying data table + + Returns + ------- + frozenset + + See Also + -------- + children : Labels of all items in level 1 (second column) + uidset_by_level, uidset_by_column : + Labels of all items in any level (column); specified by level index or column name + """ + return self.uidset_by_level(0) + + @property + def children(self): + """Labels of all items in level 1 (second column) of the underlying data table + + Returns + ------- + frozenset + + See Also + -------- + uidset : Labels of all items in level 0 (first column) + uidset_by_level, uidset_by_column : + Labels of all items in any level (column); specified by level index or column name + """ + return self.uidset_by_level(1) + +
+    def uidset_by_level(self, level):
+        """Labels of all items in a particular level (column) of the underlying data table
+
+        Parameters
+        ----------
+        level : int
+
+        Returns
+        -------
+        frozenset
+
+        See Also
+        --------
+        uidset : Labels of all items in level 0 (first column)
+        children : Labels of all items in level 1 (second column)
+        uidset_by_column : Same functionality, takes the column name instead of level index
+        """
+        if self.is_empty(level):
+            return {}
+        col = self._data_cols[level]
+        return self.uidset_by_column(col)
+ +
+    def uidset_by_column(self, column):
+        # Dev Note: This threw an error when trying it on the harry potter dataset,
+        # when trying 0, or 1 for column. I'm not sure how this should be used
+        """Labels of all items in a particular column (level) of the underlying data table
+
+        Parameters
+        ----------
+        column : Hashable
+            Name of a column in `self.dataframe`
+
+        Returns
+        -------
+        frozenset
+
+        See Also
+        --------
+        uidset : Labels of all items in level 0 (first column)
+        children : Labels of all items in level 1 (second column)
+        uidset_by_level : Same functionality, takes the level index instead of column name
+        """
+        # generate if not already stored in state dict
+        if "uidset" not in self._state_dict:
+            self._state_dict["uidset"] = {}
+        if column not in self._state_dict["uidset"]:
+            self._state_dict["uidset"][column] = set(
+                self._dataframe[column].dropna().unique()
+            )
+
+        return self._state_dict["uidset"][column]
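
The compute-once caching used throughout `Entity` (derived values stored in a state dict that is cleared whenever the data changes) can be sketched in isolation. A toy version with hypothetical names, not the actual `Entity` API:

```python
class CachedTable:
    """Toy version of the _state_dict pattern: derived values are computed
    once, cached, and the cache is cleared whenever the rows change."""

    def __init__(self, rows):
        self._rows = list(rows)
        self._state = {}

    def uidset(self, column):
        key = ("uidset", column)
        if key not in self._state:  # compute on first access only
            self._state[key] = {row[column] for row in self._rows}
        return self._state[key]

    def add_row(self, row):
        self._rows.append(row)
        self._state.clear()  # invalidate every cached value
```

Clearing the whole cache on mutation trades some recomputation for simplicity: no per-entry invalidation logic is needed.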
+ + @property + def elements(self): + """System of sets representation of the first two levels (columns) of the underlying data table + + Each item in level 0 (first column) defines a set containing all the level 1 + (second column) items with which it appears in the same row of the underlying + data table + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {level 0 item : AttrList(level 1 items)} + + See Also + -------- + incidence_dict : same data as dict of list + memberships : + dual of this representation, + i.e., each item in level 1 (second column) defines a set + elements_by_level, elements_by_column : + system of sets representation of any two levels (columns); specified by level index or column name + + """ + if self._dimsize < 2: + return {k: AttrList(entity=self, key=(0, k)) for k in self.uidset} + + return self.elements_by_level(0, 1) + + @property + def incidence_dict(self) -> dict[T, list[T]]: + """System of sets representation of the first two levels (columns) of the underlying data table + + Returns + ------- + dict of list + System of sets representation as dict of {level 0 item : AttrList(level 1 items)} + + See Also + -------- + elements : same data as dict of AttrList + + """ + return {item: elements.data for item, elements in self.elements.items()} + + @property + def memberships(self): + """System of sets representation of the first two levels (columns) of the + underlying data table + + Each item in level 1 (second column) defines a set containing all the level 0 + (first column) items with which it appears in the same row of the underlying + data table + + Returns + ------- + dict of `AttrList` + System of sets representation as dict of {level 1 item : AttrList(level 0 items)} + + See Also + -------- + elements : dual of this representation i.e., each item in level 0 (first column) defines a set + elements_by_level, elements_by_column : + system of sets representation of any two levels (columns); specified 
by level index or column name + + """ + + return self.elements_by_level(1, 0) + +
+    def elements_by_level(self, level1, level2):
+        """System of sets representation of two levels (columns) of the underlying data table
+
+        Each item in level1 defines a set containing all the level2 items
+        with which it appears in the same row of the underlying data table
+
+        Properties can be accessed and assigned to items in level1
+
+        Parameters
+        ----------
+        level1 : int
+            index of level whose items define sets
+        level2 : int
+            index of level whose items are elements in the system of sets
+
+        Returns
+        -------
+        dict of `AttrList`
+            System of sets representation as dict of {level1 item : AttrList(level2 items)}
+
+        See Also
+        --------
+        elements, memberships : dual system of sets representations of the first two levels (columns)
+        elements_by_column : same functionality, takes column names instead of level indices
+
+        """
+        col1 = self._data_cols[level1]
+        col2 = self._data_cols[level2]
+        return self.elements_by_column(col1, col2)
+ +
+    def elements_by_column(self, col1, col2):
+        """System of sets representation of two columns (levels) of the underlying data table
+
+        Each item in col1 defines a set containing all the col2 items
+        with which it appears in the same row of the underlying data table
+
+        Properties can be accessed and assigned to items in col1
+
+        Parameters
+        ----------
+        col1 : Hashable
+            name of column whose items define sets
+        col2 : Hashable
+            name of column whose items are elements in the system of sets
+
+        Returns
+        -------
+        dict of `AttrList`
+            System of sets representation as dict of {col1 item : AttrList(col2 items)}
+
+        See Also
+        --------
+        elements, memberships : dual system of sets representations of the first two columns (levels)
+        elements_by_level : same functionality, takes level indices instead of column names
+
+        """
+        if "elements" not in self._state_dict:
+            self._state_dict["elements"] = defaultdict(dict)
+        if col2 not in self._state_dict["elements"][col1]:
+            level = self.index(col1)
+            elements = self._dataframe.groupby(col1)[col2].unique().to_dict()
+            self._state_dict["elements"][col1][col2] = {
+                item: AttrList(entity=self, key=(level, item), initlist=elem)
+                for item, elem in elements.items()
+            }
+
+        return self._state_dict["elements"][col1][col2]
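
The core grouping here (one set per `col1` item, containing the unique `col2` items from its rows) can be shown without pandas. A minimal sketch with a hypothetical helper name:

```python
def group_unique(rows, col1=0, col2=1):
    """{col1 item: unique col2 items in first-seen order} — a pandas-free
    sketch of a groupby-then-unique over a 2-column data table."""
    out = {}
    for row in rows:
        bucket = out.setdefault(row[col1], [])
        if row[col2] not in bucket:
            bucket.append(row[col2])
    return out

# Duplicate ("e1", "a") rows collapse to a single membership
rows = [("e1", "a"), ("e1", "b"), ("e2", "a"), ("e1", "a")]
grouped = group_unique(rows)
```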
+ + @property + def dataframe(self): + """The underlying data table stored by the Entity + + Returns + ------- + pandas.DataFrame + """ + return self._dataframe + + @property + def isstatic(self): + # Dev Note: I'm guessing this is no longer necessary? + """Whether to treat the underlying data as static or not + + If True, the underlying data may not be altered, and the state_dict will never be cleared + Otherwise, rows may be added to and removed from the data table, and updates will clear the state_dict + + Returns + ------- + bool + """ + return self._static + +
+    def size(self, level=0):
+        """The number of items in a level of the underlying data table
+
+        Equivalent to ``self.dimensions[level]``
+
+        Parameters
+        ----------
+        level : int, default=0
+
+        Returns
+        -------
+        int
+
+        See Also
+        --------
+        dimensions
+        """
+        # TODO: Since `level` is not validated, we assume that self.dimensions is
+        # an array large enough to access index `level`
+        return self.dimensions[level]
+
+    @property
+    def empty(self):
+        """Whether the underlying data table is empty or not
+
+        Returns
+        -------
+        bool
+
+        See Also
+        --------
+        is_empty : for checking whether a specified level (column) is empty
+        dimsize : 0 if empty
+        """
+        return self._dimsize == 0
+
+    def is_empty(self, level=0):
+        """Whether a specified level (column) of the underlying data table is empty or not
+
+        Returns
+        -------
+        bool
+
+        See Also
+        --------
+        empty : for checking whether the underlying data table is empty
+        size : number of items in a level (column); 0 if level is empty
+        """
+        return self.empty or self.size(level) == 0
+ + def __len__(self): + """Number of items in level 0 (first column) + + Returns + ------- + int + """ + return self.dimensions[0] + + def __contains__(self, item): + """Whether an item is contained within any level of the data + + Parameters + ---------- + item : str + + Returns + ------- + bool + """ + for labels in self.labels.values(): + if item in labels: + return True + return False + + def __getitem__(self, item): + """Access into the system of sets representation of the first two levels (columns) given by `elements` + + Can be used to access and assign properties to an ``item`` in level 0 (first column) + + Parameters + ---------- + item : str + label of an item in level 0 (first column) + + Returns + ------- + AttrList : + list of level 1 items in the set defined by ``item`` + + See Also + -------- + uidset, elements + """ + return self.elements[item] + + def __iter__(self): + """Iterates over items in level 0 (first column) of the underlying data table + + Returns + ------- + Iterator + + See Also + -------- + uidset, elements + """ + return iter(self.elements) + + def __call__(self, label_index=0): + # Dev Note (Madelyn) : I don't think this is the intended use of __call__, can we change/deprecate? + """Iterates over items labels in a specified level (column) of the underlying data table + + Parameters + ---------- + label_index : int + level index + + Returns + ------- + Iterator + + See Also + -------- + labels + """ + return iter(self.labels[self._data_cols[label_index]]) + + # def __repr__(self): + # """String representation of the Entity + + # e.g., "Entity(uid, [level 0 items], {item: {property name: property value}})" + + # Returns + # ------- + # str + # """ + # return "hypernetx.classes.entity.Entity" + + # def __str__(self): + # return "<class 'hypernetx.classes.entity.Entity'>" + +
+    def index(self, column, value=None):
+        """Get level index corresponding to a column and (optionally) the index of a value in that column
+
+        The index of ``value`` is its position in the list given by ``self.labels[column]``, which is used
+        in the integer encoding of the data table ``self.data``
+
+        Parameters
+        ----------
+        column : str
+            name of a column in self.dataframe
+        value : str, optional
+            label of an item in the specified column
+
+        Returns
+        -------
+        int or (int, int)
+            level index corresponding to column, index of value if provided
+
+        See Also
+        --------
+        indices : for finding indices of multiple values in a column
+        level : same functionality, search for the value without specifying column
+        """
+        if "keyindex" not in self._state_dict:
+            self._state_dict["keyindex"] = {}
+        if column not in self._state_dict["keyindex"]:
+            self._state_dict["keyindex"][column] = self._dataframe[
+                self._data_cols
+            ].columns.get_loc(column)
+
+        if value is None:
+            return self._state_dict["keyindex"][column]
+
+        if "index" not in self._state_dict:
+            self._state_dict["index"] = defaultdict(dict)
+        if value not in self._state_dict["index"][column]:
+            self._state_dict["index"][column][value] = self._dataframe[
+                column
+            ].cat.categories.get_loc(value)
+
+        return (
+            self._state_dict["keyindex"][column],
+            self._state_dict["index"][column][value],
+        )
+ +
[docs] def indices(self, column, values): + """Get indices of one or more value(s) in a column + + Parameters + ---------- + column : str + values : str or iterable of str + + Returns + ------- + list of int + indices of values + + See Also + -------- + index : for finding level index of a column and index of a single value + """ + if isinstance(values, Hashable): + values = [values] + + if "index" not in self._state_dict: + self._state_dict["index"] = defaultdict(dict) + for v in values: + if v not in self._state_dict["index"][column]: + self._state_dict["index"][column][v] = self._dataframe[ + column + ].cat.categories.get_loc(v) + + return [self._state_dict["index"][column][v] for v in values]
+ +
[docs] def translate(self, level, index): + """Given indices of a level and value(s), return the corresponding value label(s) + + Parameters + ---------- + level : int + level index + index : int or list of int + value index or indices + + Returns + ------- + str or list of str + label(s) corresponding to value index or indices + + See Also + -------- + translate_arr : translate a full row of value indices across all levels (columns) + """ + column = self._data_cols[level] + + if isinstance(index, (int, np.integer)): + return self.labels[column][index] + + return [self.labels[column][i] for i in index]
+ +
[docs] def translate_arr(self, coords): + """Translate a full encoded row of the data table e.g., a row of ``self.data`` + + Parameters + ---------- + coords : tuple of ints + encoded value indices, with one value index for each level of the data + + Returns + ------- + list of str + full row of translated value labels + """ + assert len(coords) == self._dimsize + translation = [] + for level, index in enumerate(coords): + translation.append(self.translate(level, index)) + + return translation
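`translate` and `translate_arr` invert the integer encoding by indexing into the per-column category lists. A small self-contained sketch of that decoding step (hypothetical data, pandas only):

```python
import pandas as pd

df = pd.DataFrame(
    {"edges": ["e1", "e2", "e1"], "nodes": ["a", "a", "b"]}
).astype("category")

# labels: per-column category lists, in code order
labels = {col: list(df[col].cat.categories) for col in df.columns}

# one encoded row of the data table, e.g. a row of ``self.data``
coords = (0, 1)  # (edge code, node code)

# decode each code by position within its column's categories
row = [labels[col][i] for col, i in zip(df.columns, coords)]
```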
+ +
[docs] def level(self, item, min_level=0, max_level=None, return_index=True): + """First level containing the given item label + + Order of levels corresponds to order of columns in `self.dataframe` + + Parameters + ---------- + item : str + min_level, max_level : int, optional + inclusive bounds on range of levels to search for item + return_index : bool, default=True + If True, return index of item within the level + + Returns + ------- + int, (int, int), or None + index of first level containing the item, index of item if `return_index=True` + returns None if item is not found + + See Also + -------- + index, indices : for finding level and/or value indices when the column is known + """ + if max_level is None or max_level >= self._dimsize: + max_level = self._dimsize - 1 + + columns = self._data_cols[min_level : max_level + 1] + levels = range(min_level, max_level + 1) + + for col, lev in zip(columns, levels): + if item in self.labels[col]: + if return_index: + return self.index(col, item) + + return lev + + print(f'"{item}" not found.') + return None
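A stripped-down version of the level search above, scanning columns left to right over plain pandas structures (illustrative only; the real method also returns the value index and honors `min_level`/`max_level`):

```python
import pandas as pd

# hypothetical data table where the label "e2" appears in both columns
df = pd.DataFrame({"edges": ["e1", "e2"], "nodes": ["a", "e2"]}).astype("category")
labels = {col: list(df[col].cat.categories) for col in df.columns}

def first_level(item):
    # index of the first (leftmost) column whose labels contain the item
    for lev, col in enumerate(df.columns):
        if item in labels[col]:
            return lev
    return None
```

Note that an item appearing in several levels always resolves to the leftmost one.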
+ +
[docs] def add(self, *args): + """Updates the underlying data table with new entity data from multiple sources + + Parameters + ---------- + *args + variable length argument list of Entity and/or representations of entity data + + Returns + ------- + self : Entity + + Warnings + -------- + Adding an element directly to an Entity will not add the + element to any Hypergraphs constructed from that Entity, and will cause an error. Use + :func:`Hypergraph.add_edge <classes.hypergraph.Hypergraph.add_edge>` or + :func:`Hypergraph.add_node_to_edge <classes.hypergraph.Hypergraph \ + .add_node_to_edge>` instead. + + See Also + -------- + add_element : update from a single source + Hypergraph.add_edge, Hypergraph.add_node_to_edge : for adding elements to a Hypergraph + + """ + for item in args: + self.add_element(item) + return self
+ +
[docs] def add_elements_from(self, arg_set): + """Adds arguments from an iterable to the data table one at a time + + .. deprecated:: 2.0.0 + Duplicates `add` + + Parameters + ---------- + arg_set : iterable + list of Entity and/or representations of entity data + + Returns + ------- + self : Entity + + """ + for item in arg_set: + self.add_element(item) + return self
+ +
[docs] def add_element(self, data): + """Updates the underlying data table with new entity data + + Supports adding from either an existing Entity or a representation of entity data + (a data table and a labeled system of sets are both supported representations) + + Parameters + ---------- + data : Entity, `pandas.DataFrame`, or dict of lists or sets + new entity data + + Returns + ------- + self : Entity + + Warnings + -------- + Adding an element directly to an Entity will not add the + element to any Hypergraphs constructed from that Entity, and will cause an error. Use + `Hypergraph.add_edge` or `Hypergraph.add_node_to_edge` instead. + + See Also + -------- + add : takes multiple sources of new entity data as variable length argument list + Hypergraph.add_edge, Hypergraph.add_node_to_edge : for adding elements to a Hypergraph + + """ + if isinstance(data, Entity): + df = data.dataframe + self.__add_from_dataframe(df) + + if isinstance(data, dict): + df = pd.DataFrame.from_dict(data) + self.__add_from_dataframe(df) + + if isinstance(data, pd.DataFrame): + self.__add_from_dataframe(data) + + return self
+ + def __add_from_dataframe(self, df): + """Helper function to append rows to `self.dataframe` + + Parameters + ---------- + data : pd.DataFrame + + Returns + ------- + self : Entity + + """ + if all(col in df for col in self._data_cols): + new_data = pd.concat((self._dataframe, df), ignore_index=True) + new_data[self._cell_weight_col] = new_data[self._cell_weight_col].fillna(1) + + self._dataframe, _ = remove_row_duplicates( + new_data, + self._data_cols, + weights=self._cell_weight_col, + ) + + self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + "category" + ) + + self._state_dict.clear() + +
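`__add_from_dataframe` delegates the row merging to `remove_row_duplicates`, a HyperNetX helper; under its default summing behavior the append-and-deduplicate step is roughly a concat followed by a groupby-sum, sketched here with plain pandas and invented column names:

```python
import pandas as pd

old = pd.DataFrame({"edges": ["e1"], "nodes": ["a"], "cell_weights": [2.0]})
new = pd.DataFrame({"edges": ["e1", "e2"], "nodes": ["a", "b"]})  # no weights yet

merged = pd.concat((old, new), ignore_index=True)
# rows appended without a weight default to weight 1
merged["cell_weights"] = merged["cell_weights"].fillna(1)

# collapse duplicate (edge, node) rows, summing their weights
merged = merged.groupby(["edges", "nodes"], as_index=False, sort=False)[
    "cell_weights"
].sum()
# re-cast the data columns as categoricals, as the real method does
merged[["edges", "nodes"]] = merged[["edges", "nodes"]].astype("category")
```

The duplicate `(e1, a)` row ends up with weight 2 + 1 = 3, while the new `(e2, b)` row keeps the filled-in weight of 1.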
[docs] def remove(self, *args): + """Removes all rows containing specified item(s) from the underlying data table + + Parameters + ---------- + *args + variable length argument list of item labels + + Returns + ------- + self : Entity + + See Also + -------- + remove_element : remove all rows containing a single specified item + + """ + for item in args: + self.remove_element(item) + return self
+ +
[docs] def remove_elements_from(self, arg_set): + """Removes all rows containing specified item(s) from the underlying data table + + .. deprecated:: 2.0.0 + Duplicates `remove` + + Parameters + ---------- + arg_set : iterable + list of item labels + + Returns + ------- + self : Entity + + """ + for item in arg_set: + self.remove_element(item) + return self
+ +
[docs] def remove_element(self, item): + """Removes all rows containing a specified item from the underlying data table + + Parameters + ---------- + item + item label + + Returns + ------- + self : Entity + + See Also + -------- + remove : same functionality, accepts variable length argument list of item labels + + """ + updated_dataframe = self._dataframe + + for column in self._dataframe: + updated_dataframe = updated_dataframe[updated_dataframe[column] != item] + + self._dataframe, _ = remove_row_duplicates( + updated_dataframe, + self._data_cols, + weights=self._cell_weight_col, + ) + self._dataframe[self._data_cols] = self._dataframe[self._data_cols].astype( + "category" + ) + + self._state_dict.clear() + for col in self._data_cols: + self._dataframe[col] = self._dataframe[col].cat.remove_unused_categories()
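The removal above is a per-column inequality filter followed by category pruning, so that unused labels do not linger in the integer encoding. The same two steps on a toy table:

```python
import pandas as pd

df = pd.DataFrame(
    {"edges": ["e1", "e2", "e2"], "nodes": ["a", "a", "b"]}
).astype("category")

item = "e2"
out = df
for col in df:
    # drop every row where any column equals the removed item
    out = out[out[col] != item]

out = out.copy()
for col in out:
    # forget labels that no longer occur, so codes stay dense
    out[col] = out[col].cat.remove_unused_categories()
```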
+ +
[docs] def encode(self, data): + """ + Encode dataframe to numpy array + + Parameters + ---------- + data : dataframe + + Returns + ------- + numpy.array + + """ + encoded_array = data.apply(lambda x: x.cat.codes).to_numpy() + return encoded_array
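`encode` is a one-liner over pandas categoricals: each cell becomes the positional code of its label within the column's (lexicographically ordered) categories. Demonstrated on a toy frame:

```python
import pandas as pd

df = pd.DataFrame(
    {"edges": ["e2", "e1", "e2"], "nodes": ["b", "a", "a"]}
).astype("category")

# same expression as the method body: label -> integer code per column
encoded = df.apply(lambda x: x.cat.codes).to_numpy()
```

With categories `['e1', 'e2']` and `['a', 'b']`, the three rows encode to `(1, 1)`, `(0, 0)`, and `(1, 0)`.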
+ +
[docs] def incidence_matrix( + self, level1=0, level2=1, weights=False, aggregateby=None, index=False + ) -> csr_matrix | None: + """Incidence matrix representation for two levels (columns) of the underlying data table + + If `level1` and `level2` contain N and M distinct items, respectively, the incidence matrix will be M x N. + In other words, the items in `level1` and `level2` correspond to the columns and rows of the incidence matrix, + respectively, in the order in which they appear in `self.labels[column1]` and `self.labels[column2]` + (`column1` and `column2` are the column labels of `level1` and `level2`) + + Parameters + ---------- + level1 : int, default=0 + index of first level (column) + level2 : int, default=1 + index of second level + weights : bool or dict, default=False + If False, all nonzero entries are 1. + If True, all nonzero entries are filled by the cell weights of the underlying + data table; use :code:`aggregateby` to specify how duplicate + entries should have their weights aggregated. + If a dict of {(level1 item, level2 item): weight value} form, + only nonzero cells in the incidence matrix will be updated by the dictionary, + i.e., `level1 item` and `level2 item` must appear in the same row at least once in the underlying data table + aggregateby : {'sum', 'count', 'mean', 'median', 'max', 'min', 'first', 'last', None}, default=None + Method to aggregate weights of duplicate rows in data table. + If None, then all cell weights will be set to 1. + + Returns + ------- + scipy.sparse.csr.csr_matrix + sparse representation of incidence matrix (i.e.
Compressed Sparse Row matrix) + + Other Parameters + ---------------- + index : bool, optional + Not used + + Note + ---- + In the context of Hypergraphs, think `level1 = edges, level2 = nodes` + """ + if self.dimsize < 2: + warnings.warn("Incidence matrix requires two levels of data.") + return None + + data_cols = [self._data_cols[level2], self._data_cols[level1]] + weights = self._cell_weight_col if weights else None + + df, weight_col = remove_row_duplicates( + self._dataframe, + data_cols, + weights=weights, + aggregateby=aggregateby, + ) + + return csr_matrix( + (df[weight_col], tuple(df[col].cat.codes for col in data_cols)) + )
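The returned matrix is assembled directly from the categorical codes of the two columns. A compact standalone illustration of the same `csr_matrix` construction on toy data, with level2 items (nodes) as rows and level1 items (edges) as columns:

```python
import pandas as pd
from scipy.sparse import csr_matrix

df = pd.DataFrame(
    {"edges": ["e1", "e1", "e2"], "nodes": ["a", "b", "b"]}
).astype("category")

# rows indexed by level2 (nodes), columns by level1 (edges)
rows = df["nodes"].cat.codes
cols = df["edges"].cat.codes
weights = [1] * len(df)  # the weights=False case: every incidence counts 1

M = csr_matrix((weights, (rows, cols)))
```

Here edge `e1` contains nodes `a` and `b`, and edge `e2` contains only `b`, so the dense form is `[[1, 0], [1, 1]]`.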
+ +
[docs] def restrict_to_levels( + self, + levels: int | Iterable[int], + weights: bool = False, + aggregateby: str | None = "sum", + **kwargs, + ) -> Entity: + """Create a new Entity by restricting to a subset of levels (columns) in the + underlying data table + + Parameters + ---------- + levels : array-like of int + indices of a subset of levels (columns) of data + weights : bool, default=False + If True, aggregate existing cell weights to get new cell weights + Otherwise, all new cell weights will be 1 + aggregateby : {'sum', 'first', 'last', 'count', 'mean', 'median', 'max', \ + 'min', None}, optional + Method to aggregate weights of duplicate rows in data table + If None or `weights`=False then all new cell weights will be 1 + **kwargs + Extra arguments to `Entity` constructor + + Returns + ------- + Entity + + Raises + ------ + KeyError + If `levels` contains any invalid values + + See Also + -------- + EntitySet + """ + + levels = np.asarray(levels) + invalid_levels = (levels < 0) | (levels >= self.dimsize) + if invalid_levels.any(): + raise KeyError(f"Invalid levels: {levels[invalid_levels]}") + + cols = [self._data_cols[lev] for lev in levels] + + if weights: + weights = self._cell_weight_col + cols.append(weights) + kwargs.update(weights=weights) + + properties = self.properties.loc[levels] + properties.index = properties.index.remove_unused_levels() + level_map = {old: new for new, old in enumerate(levels)} + new_levels = properties.index.levels[0].map(level_map) + properties.index = properties.index.set_levels(new_levels, level=0) + level_col, id_col = properties.index.names + + return self.__class__( + entity=self.dataframe[cols], + data_cols=cols, + aggregateby=aggregateby, + properties=properties, + misc_props_col=self._misc_props_col, + level_col=level_col, + id_col=id_col, + **kwargs, + )
+ +
[docs] def restrict_to_indices(self, indices, level=0, **kwargs): + """Create a new Entity by restricting the data table to rows containing specific items in a given level + + Parameters + ---------- + indices : int or iterable of int + indices of item label(s) in `level` to restrict to + level : int, default=0 + level index + **kwargs + Extra arguments to `Entity` constructor + + Returns + ------- + Entity + """ + column = self._dataframe[self._data_cols[level]] + values = self.translate(level, indices) + entity = self._dataframe.loc[column.isin(values)].copy() + + for col in self._data_cols: + entity[col] = entity[col].cat.remove_unused_categories() + restricted = self.__class__( + entity=entity, misc_props_col=self._misc_props_col, **kwargs + ) + + if not self.properties.empty: + prop_idx = [ + (lv, uid) + for lv in range(restricted.dimsize) + for uid in restricted.uidset_by_level(lv) + ] + properties = self.properties.loc[prop_idx] + restricted.assign_properties(properties) + return restricted
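The row restriction above is an `isin` filter over translated labels followed by category pruning; sketched with plain pandas (hypothetical labels, no property handling):

```python
import pandas as pd

df = pd.DataFrame(
    {"edges": ["e1", "e2", "e3"], "nodes": ["a", "b", "a"]}
).astype("category")
labels = df["edges"].cat.categories  # the level's label list

indices = [0, 2]
values = [labels[i] for i in indices]          # translate indices -> labels
sub = df.loc[df["edges"].isin(values)].copy()  # keep only rows with those labels

for col in sub:
    # drop labels that no longer occur in the restricted table
    sub[col] = sub[col].cat.remove_unused_categories()
```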
+ +
[docs] def assign_properties( + self, + props: pd.DataFrame | dict[int, dict[T, dict[Any, Any]]], + misc_col: Optional[str] = None, + level_col=0, + id_col=1, + ) -> None: + """Assign new properties to items in the data table, update :attr:`properties` + + Parameters + ---------- + props : pandas.DataFrame or doubly-nested dict + See documentation of the `properties` parameter in :class:`Entity` + level_col, id_col, misc_col : str, optional + column names corresponding to the levels, items, and misc. properties; + if None, default to :attr:`_level_col`, :attr:`_id_col`, :attr:`_misc_props_col`, + respectively. + + See Also + -------- + properties + """ + # mapping from user-specified level, id, misc column names to internal names + ### This will fail if there isn't a level column + + if isinstance(props, pd.DataFrame): + ### Fix to check the shape of properties or redo properties format + column_map = { + old: new + for old, new in zip( + (level_col, id_col, misc_col), + (*self.properties.index.names, self._misc_props_col), + ) + if old is not None + } + props = props.rename(columns=column_map) + props = props.rename_axis(index=column_map) + self._properties_from_dataframe(props) + + if isinstance(props, dict): + ### Expects nested dictionary with keys corresponding to level and id + self._properties_from_dict(props)
+ + def _properties_from_dataframe(self, props: pd.DataFrame) -> None: + """Private handler for updating :attr:`properties` from a DataFrame + + Parameters + ---------- + props + + Notes + ----- + For clarity in in-line developer comments: + + idx-level + refers generally to a level of a MultiIndex + level + refers specifically to the idx-level in the MultiIndex of :attr:`properties` + that stores the level/column id for the item + """ + # names of property table idx-levels for level and item id, respectively + # ``item`` used instead of ``id`` to avoid redefining python built-in func `id` + level, item = self.properties.index.names + if props.index.nlevels > 1: # props has MultiIndex + # drop all idx-levels from props other than level and id (if present) + extra_levels = [ + idx_lev for idx_lev in props.index.names if idx_lev not in (level, item) + ] + props = props.reset_index(level=extra_levels) + + try: + # if props index is already in the correct format, + # enforce the correct idx-level ordering + props.index = props.index.reorder_levels((level, item)) + except AttributeError: # props is not in (level, id) MultiIndex format + # if the index matches level or id, drop index to column + if props.index.name in (level, item): + props = props.reset_index() + index_cols = [item] + if level in props: + index_cols.insert(0, level) + try: + props = props.set_index(index_cols, verify_integrity=True) + except ValueError: + warnings.warn( + "duplicate (level, ID) rows will be dropped after first occurrence" + ) + props = props.drop_duplicates(index_cols) + props = props.set_index(index_cols) + + if self._misc_props_col in props: + try: + props[self._misc_props_col] = props[self._misc_props_col].apply( + literal_eval + ) + except ValueError: + pass # data already parsed, no literal eval needed + else: + warnings.warn("parsed property dict column from string literal") + + if props.index.nlevels == 1: + props = props.reindex(self.properties.index, level=1) + + # combine with 
existing properties + # non-null values in new props override existing value + properties = props.combine_first(self.properties) + # update misc. column to combine existing and new misc. property dicts + # new props override existing value for overlapping misc. property dict keys + properties[self._misc_props_col] = self.properties[ + self._misc_props_col + ].combine( + properties[self._misc_props_col], + lambda x, y: {**(x if pd.notna(x) else {}), **(y if pd.notna(y) else {})}, + fill_value={}, + ) + self._properties = properties.sort_index() + + def _properties_from_dict(self, props: dict[int, dict[T, dict[Any, Any]]]) -> None: + """Private handler for updating :attr:`properties` from a doubly-nested dict + + Parameters + ---------- + props + """ + # TODO: there may be a more efficient way to convert this to a dataframe instead + # of updating one-by-one via nested loop, but checking whether each prop_name + # belongs in a designated existing column or the misc. property dict column + # makes it more challenging + # For now: only use nested loop update if non-misc. 
columns currently exist + if len(self.properties.columns) > 1: + for level in props: + for item in props[level]: + for prop_name, prop_val in props[level][item].items(): + self.set_property(item, prop_name, prop_val, level) + else: + item_keys = pd.MultiIndex.from_tuples( + [(level, item) for level in props for item in props[level]], + names=self.properties.index.names, + ) + props_data = [props[level][item] for level, item in item_keys] + props = pd.DataFrame({self._misc_props_col: props_data}, index=item_keys) + self._properties_from_dataframe(props) + + def _property_loc(self, item: T) -> tuple[int, T]: + """Get index in :attr:`properties` of an item of unspecified level + + Parameters + ---------- + item : hashable + name of an item + + Returns + ------- + item_key : tuple of (int, hashable) + ``(level, item)`` + + Raises + ------ + KeyError + If `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `item` appears in multiple levels, returns the first (closest to 0) + + """ + try: + item_loc = self.properties.xs(item, level=1, drop_level=False).index + except KeyError as ex: # item not in df + raise KeyError(f"no properties initialized for 'item': {item}") from ex + + try: + item_key = item_loc.item() + except ValueError: + item_loc, _ = item_loc.sortlevel(sort_remaining=False) + item_key = item_loc[0] + warnings.warn(f"item found in multiple levels: {tuple(item_loc)}") + return item_key + +
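The merge step in `_properties_from_dataframe` combines the two property tables with `combine_first` and then merges the miscellaneous property dicts key-by-key, with new values winning on overlap. The same two-step pattern on a flat index (the real code uses a `(level, id)` MultiIndex and the `_misc_props_col` name; both are simplified here):

```python
import pandas as pd

misc = "properties"  # stands in for self._misc_props_col
existing = pd.DataFrame({misc: [{"color": "red"}, {}]}, index=["a", "b"])
new = pd.DataFrame({misc: [{"size": 2}]}, index=["a"])

# non-null values in the new props override existing values
merged = new.combine_first(existing)

# merge the misc. property dicts key-by-key; new values win on overlap
merged[misc] = existing[misc].combine(
    merged[misc],
    lambda x, y: {**(x if pd.notna(x) else {}), **(y if pd.notna(y) else {})},
    fill_value={},
)
```

Item `a` ends up with both its old `color` and its new `size` property, while `b` is untouched.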
[docs] def set_property( + self, + item: T, + prop_name: Any, + prop_val: Any, + level: Optional[int] = None, + ) -> None: + """Set a property of an item + + Parameters + ---------- + item : hashable + name of an item + prop_name : hashable + name of the property to set + prop_val : any + value of the property to set + level : int, optional + level index of the item; + required if `item` is not already in :attr:`properties` + + Raises + ------ + ValueError + If `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_property, get_properties + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError as ex: + raise ValueError( + "cannot infer 'level' when initializing 'item' properties" + ) from ex + + if prop_name in self.properties: + self._properties.loc[item_key, prop_name] = prop_val + else: + try: + self._properties.loc[item_key, self._misc_props_col].update( + {prop_name: prop_val} + ) + except KeyError: + self._properties.loc[item_key, :] = { + self._misc_props_col: {prop_name: prop_val} + }
+ +
[docs] def get_property(self, item: T, prop_name: Any, level: Optional[int] = None) -> Any: + """Get a property of an item + + Parameters + ---------- + item : hashable + name of an item + prop_name : hashable + name of the property to get + level : int, optional + level index of the item + + Returns + ------- + prop_val : any + value of the property + + Raises + ------ + KeyError + if (`level`, `item`) is not in :attr:`properties`, + or if `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_properties, set_property + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError: + raise # item not in properties + + try: + prop_val = self.properties.loc[item_key, prop_name] + except KeyError as ex: + if ex.args[0] == prop_name: + prop_val = self.properties.loc[item_key, self._misc_props_col].get( + prop_name + ) + else: + raise KeyError( + f"no properties initialized for ('level','item'): {item_key}" + ) from ex + + return prop_val
+ +
[docs] def get_properties(self, item: T, level: Optional[int] = None) -> dict[Any, Any]: + """Get all properties of an item + + Parameters + ---------- + item : hashable + name of an item + level : int, optional + level index of the item + + Returns + ------- + prop_vals : dict + ``{named property: property value, ..., + misc. property column name: {property name: property value}}`` + + Raises + ------ + KeyError + if (`level`, `item`) is not in :attr:`properties`, + or if `level` is not provided and `item` is not in :attr:`properties` + + Warns + ----- + UserWarning + If `level` is not provided and `item` appears in multiple levels, + assumes the first (closest to 0) + + See Also + -------- + get_property, set_property + """ + if level is not None: + item_key = (level, item) + else: + try: + item_key = self._property_loc(item) + except KeyError: + raise + + try: + prop_vals = self.properties.loc[item_key].to_dict() + except KeyError as ex: + raise KeyError( + f"no properties initialized for ('level','item'): {item_key}" + ) from ex + + return prop_vals
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/classes/entityset.html b/_modules/hypernetx/classes/entityset.html new file mode 100644 index 00000000..b937de86 --- /dev/null +++ b/_modules/hypernetx/classes/entityset.html @@ -0,0 +1,773 @@ + + + + + + hypernetx.classes.entityset — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for hypernetx.classes.entityset

+from __future__ import annotations
+
+import warnings
+from ast import literal_eval
+from collections import OrderedDict
+from collections.abc import Iterable, Sequence
+from typing import Mapping
+from typing import Optional, Any, TypeVar, Union
+from pprint import pformat
+
+import numpy as np
+import pandas as pd
+
+from hypernetx.classes import Entity
+from hypernetx.classes.helpers import AttrList
+
+# from hypernetx.utils.log import get_logger
+
+# _log = get_logger("entity_set")
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class EntitySet(Entity): + """Class for handling 2-dimensional (i.e., system of sets, bipartite) data when + building network-like models, i.e., :class:`Hypergraph` + + Parameters + ---------- + entity : Entity, pandas.DataFrame, dict of lists or sets, or list of lists or sets, optional + If an ``Entity`` with N levels or a ``DataFrame`` with N columns, + represents N-dimensional entity data (data table). + If N > 2, only considers levels (columns) `level1` and `level2`. + Otherwise, represents 2-dimensional entity data (system of sets). + data : numpy.ndarray, optional + 2D M x N ``ndarray`` of ``ints`` (data table); + sparse representation of an N-dimensional incidence tensor with M nonzero cells. + If N > 2, only considers levels (columns) `level1` and `level2`. + Ignored if `entity` is provided. + labels : collections.OrderedDict of lists, optional + User-specified labels in corresponding order to ``ints`` in `data`. + For M x N `data`, N > 2, `labels` must contain either 2 or N keys. + If N keys, only considers labels for levels (columns) `level1` and `level2`. + Ignored if `entity` is provided or `data` is not provided. + level1, level2 : str or int, default=0,1 + Each item in `level1` defines a set containing all the `level2` items with which + it appears in the same row of the underlying data table. + If ``int``, gives the index of a level; + if ``str``, gives the name of a column in `entity`. + Ignored if `entity`, `data` (if `entity` not provided), and `labels` all (if + provided) represent 1- or 2-dimensional data (set or system of sets). + weights : str or sequence of float, optional + User-specified cell weights corresponding to entity data. + If sequence of ``floats`` and `entity` or `data` defines a data table, + length must equal the number of rows. + If sequence of ``floats`` and `entity` defines a system of sets, + length must equal the total sum of the sizes of all sets. 
+ If ``str`` and `entity` is a ``DataFrame``, + must be the name of a column in `entity`. + Otherwise, weight for all cells is assumed to be 1. + Ignored if `entity` is an ``Entity`` and `keep_weights`=True. + keep_weights : bool, default=True + Whether to preserve any existing cell weights; + ignored if `entity` is not an ``Entity``. + cell_properties : str, list of str, pandas.DataFrame, or doubly-nested dict, optional + User-specified properties to be assigned to cells of the incidence matrix, i.e., + rows in a data table; pairs of (set, element of set) in a system of sets. + See Notes for detailed explanation. + Ignored if underlying data is 1-dimensional (set). + If doubly-nested dict, + ``{level1 item: {level2 item: {cell property name: cell property value}}}``. + misc_cell_props_col : str, default='cell_properties' + Column name for miscellaneous cell properties; see Notes for explanation. + kwargs + Keyword arguments passed to the ``Entity`` constructor, e.g., `static`, + `uid`, `aggregateby`, `properties`, etc. See :class:`Entity` for documentation + of these parameters. + + Notes + ----- + A **cell property** is a named attribute assigned jointly to a set and one of its + elements, i.e, a cell of the incidence matrix. + + When an ``Entity`` or ``DataFrame`` is passed to the `entity` parameter of the + constructor, it should represent a data table: + + +--------------+--------------+--------------+-------+--------------+ + | Column_1 | Column_2 | Column_3 | [...] | Column_N | + +==============+==============+==============+=======+==============+ + | level 1 item | level 2 item | level 3 item | ... | level N item | + +--------------+--------------+--------------+-------+--------------+ + | ... | ... | ... | ... | ... | + +--------------+--------------+--------------+-------+--------------+ + + Assuming the default values for parameters `level1`, `level2`, the data table will + be restricted to the set system defined by Column 1 and Column 2. 
+ Since each row of the data table represents an incidence or cell, values from other + columns may contain data that should be converted to cell properties. + + By passing a **column name or list of column names** as `cell_properties`, each + given column will be preserved in the :attr:`cell_properties` as an explicit cell + property type. An additional column in :attr:`cell_properties` will be created to + store a ``dict`` of miscellaneous cell properties, which will store cell properties + of types that have not been explicitly defined and do not have a dedicated column + (which may be assigned after construction). The name of the miscellaneous column is + determined by `misc_cell_props_col`. + + You can also pass a **pre-constructed table** to `cell_properties` as a + ``DataFrame``: + + +----------+----------+----------------------------+-------+-----------------------+ + | Column_1 | Column_2 | [explicit cell prop. type] | [...] | misc. cell properties | + +==========+==========+============================+=======+=======================+ + | level 1 | level 2 | cell property value | ... | {cell property name: | + | item | item | | | cell property value} | + +----------+----------+----------------------------+-------+-----------------------+ + | ... | ... | ... | ... | ... | + +----------+----------+----------------------------+-------+-----------------------+ + + Column 1 and Column 2 must have the same names as the corresponding columns in the + `entity` data table, and `misc_cell_props_col` can be used to specify the name of the + column to be used for miscellaneous cell properties. If no column by that name is + found, a new column will be created and populated with empty ``dicts``. All other + columns will be considered explicit cell property types. The order of the columns + does not matter. 
+ + Both of these methods assume that there are no row duplicates in the tables passed + to `entity` and/or `cell_properties`; if duplicates are found, all but the first + occurrence will be dropped. + + """ + + def __init__( + self, + entity: Optional[ + pd.DataFrame + | np.ndarray + | Mapping[T, Iterable[T]] + | Iterable[Iterable[T]] + | Mapping[T, Mapping[T, Mapping[T, Any]]] + ] = None, + data: Optional[np.ndarray] = None, + labels: Optional[OrderedDict[T, Sequence[T]]] = None, + level1: str | int = 0, + level2: str | int = 1, + weight_col: str | int = "cell_weights", + weights: Sequence[float] | float | int | str = 1, + # keep_weights: bool = True, + cell_properties: Optional[ + Sequence[T] | pd.DataFrame | dict[T, dict[T, dict[Any, Any]]] + ] = None, + misc_cell_props_col: str = "cell_properties", + uid: Optional[Hashable] = None, + aggregateby: Optional[str] = "sum", + properties: Optional[pd.DataFrame | dict[int, dict[T, dict[Any, Any]]]] = None, + misc_props_col: str = "properties", + # level_col: str = "level", + # id_col: str = "id", + **kwargs, + ): + self._misc_cell_props_col = misc_cell_props_col + + # if the entity data is passed as an Entity, get its underlying data table and + # proceed to the case for entity data passed as a DataFrame + # if isinstance(entity, Entity): + # # _log.info(f"Changing entity from type {Entity} to {type(entity.dataframe)}") + # if keep_weights: + # # preserve original weights + # weights = entity._cell_weight_col + # entity = entity.dataframe + + # if the entity data is passed as a DataFrame, restrict to two columns if needed + if isinstance(entity, pd.DataFrame) and len(entity.columns) > 2: + # _log.info(f"Processing parameter of 'entity' of type {type(entity)}...") + # metadata columns are not considered levels of data, + # remove them before indexing by level + # if isinstance(cell_properties, str): + # cell_properties = [cell_properties] + + prop_cols = [] + if isinstance(cell_properties, Sequence): + for col in 
{*cell_properties, self._misc_cell_props_col}: + if col in entity: + # _log.debug(f"Adding column to prop_cols: {col}") + prop_cols.append(col) + + # meta_cols = prop_cols + # if weights in entity and weights not in meta_cols: + # meta_cols.append(weights) + # # _log.debug(f"meta_cols: {meta_cols}") + if weight_col in prop_cols: + prop_cols.remove(weight_col) + if not weight_col in entity: + entity[weight_col] = weights + + # if both levels are column names, no need to index by level + if isinstance(level1, int): + level1 = entity.columns[level1] + if isinstance(level2, int): + level2 = entity.columns[level2] + # if isinstance(level1, str) and isinstance(level2, str): + columns = [level1, level2, weight_col] + prop_cols + # if one or both of the levels are given by index, get column name + # else: + # all_columns = entity.columns.drop(meta_cols) + # columns = [ + # all_columns[lev] if isinstance(lev, int) else lev + # for lev in (level1, level2) + # ] + + # if there is a column for cell properties, convert to separate DataFrame + # if len(prop_cols) > 0: + # cell_properties = entity[[*columns, *prop_cols]] + + # if there is a column for weights, preserve it + # if weights in entity and weights not in prop_cols: + # columns.append(weights) + # _log.debug(f"columns: {columns}") + + # pass level1, level2, and weights (optional) to Entity constructor + entity = entity[columns] + + # if a 2D ndarray is passed, restrict to two columns if needed + elif isinstance(data, np.ndarray) and data.ndim == 2 and data.shape[1] > 2: + # _log.info(f"Processing parameter 'data' of type {type(data)}...") + data = data[:, (level1, level2)] + + # if a dict of labels is provided, restrict to labels for two columns if needed + if isinstance(labels, dict) and len(labels) > 2: + label_keys = list(labels) + columns = (label_keys[level1], label_keys[level2]) + labels = {col: labels[col] for col in columns} + # _log.debug(f"Restricted labels to columns:\n{pformat(labels)}") + + # _log.info( + # 
f"Creating instance of {Entity} using reformatted params: \n\tentity: {type(entity)} \n\tdata: {type(data)} \n\tlabels: {type(labels)}, \n\tweights: {weights}, \n\tkwargs: {kwargs}" + # ) + # _log.debug(f"entity:\n{pformat(entity)}") + # _log.debug(f"data: {pformat(data)}") + super().__init__( + entity=entity, + data=data, + labels=labels, + uid=uid, + weight_col=weight_col, + weights=weights, + aggregateby=aggregateby, + properties=properties, + misc_props_col=misc_props_col, + **kwargs, + ) + + # if underlying data is 2D (system of sets), create and assign cell properties + if self.dimsize == 2: + # self._cell_properties = pd.DataFrame( + # columns=[*self._data_cols, self._misc_cell_props_col] + # ) + self._cell_properties = pd.DataFrame(self._dataframe) + self._cell_properties.set_index(self._data_cols, inplace=True) + if isinstance(cell_properties, (dict, pd.DataFrame)): + self.assign_cell_properties(cell_properties) + else: + self._cell_properties = None + + @property + def cell_properties(self) -> Optional[pd.DataFrame]: + """Properties assigned to cells of the incidence matrix + + Returns + ------- + pandas.Series, optional + Returns None if :attr:`dimsize` < 2 + """ + return self._cell_properties + + @property + def memberships(self) -> dict[str, AttrList[str]]: + """Extends :attr:`Entity.memberships` + + Each item in level 1 (second column) defines a set containing all the level 0 + (first column) items with which it appears in the same row of the underlying + data table. + + Returns + ------- + dict of AttrList + System of sets representation as dict of + ``{level 1 item: AttrList(level 0 items)}``. + + See Also + -------- + elements : dual of this representation, + i.e., each item in level 0 (first column) defines a set + restrict_to_levels : for more information on how memberships work for + 1-dimensional (set) data + """ + if self._dimsize == 1: + return self._state_dict.get("memberships") + + return super().memberships + +
[docs] def restrict_to_levels( + self, + levels: int | Iterable[int], + weights: bool = False, + aggregateby: Optional[str] = "sum", + keep_memberships: bool = True, + **kwargs, + ) -> EntitySet: + """Extends :meth:`Entity.restrict_to_levels` + + Parameters + ---------- + levels : array-like of int + indices of a subset of levels (columns) of data + weights : bool, default=False + If True, aggregate existing cell weights to get new cell weights. + Otherwise, all new cell weights will be 1. + aggregateby : {'sum', 'first', 'last', 'count', 'mean', 'median', 'max', \ + 'min', None}, optional + Method to aggregate weights of duplicate rows in data table + If None or `weights`=False then all new cell weights will be 1 + keep_memberships : bool, default=True + Whether to preserve membership information for the discarded level when + the new ``EntitySet`` is restricted to a single level + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + EntitySet + + Raises + ------ + KeyError + If `levels` contains any invalid values + """ + restricted = super().restrict_to_levels( + levels, + weights, + aggregateby, + misc_cell_props_col=self._misc_cell_props_col, + **kwargs, + ) + + if keep_memberships: + # use original memberships to set memberships for the new EntitySet + # TODO: This assumes levels=[1], add explicit checks for other cases + restricted._state_dict["memberships"] = self.memberships + + return restricted
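The `keep_memberships` idea above can be sketched with plain dicts (names and data below are illustrative, not taken from the library): when a two-column system of sets is restricted to level 1, the node-to-edges membership map computed from the discarded level 0 column is carried over instead of being lost.

```python
# Hypothetical sketch of keep_memberships: the membership map built from
# the discarded level-0 (edge) column survives the restriction.
incidences = [("e1", "a"), ("e1", "b"), ("e2", "b")]

# memberships: level-1 item -> set of level-0 items it appears with
memberships = {}
for edge, node in incidences:
    memberships.setdefault(node, set()).add(edge)

# restriction keeps only the level-1 items ...
restricted_items = sorted({node for _, node in incidences})

# ... but stashes the original membership map in the new state dict
restricted_state = {"items": restricted_items, "memberships": memberships}
```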
+ +
[docs] def restrict_to(self, indices: int | Iterable[int], **kwargs) -> EntitySet: + """Alias of :meth:`restrict_to_indices` with default parameter `level`=0 + + Parameters + ---------- + indices : array_like of int + indices of item label(s) in `level` to restrict to + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + EntitySet + + See Also + -------- + restrict_to_indices + """ + restricted = self.restrict_to_indices( + indices, misc_cell_props_col=self._misc_cell_props_col, **kwargs + ) + # cell_properties is None when dimsize < 2; guard before checking empty + if self.cell_properties is not None and not self.cell_properties.empty: + cell_properties = self.cell_properties.loc[ + list(restricted.uidset) + ].reset_index() + restricted.assign_cell_properties(cell_properties) + return restricted
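The way cell properties follow a restriction can be replayed on a plain pandas frame (column names and data below are illustrative, not the library's): keep only the rows of the MultiIndexed property table whose level-0 label survives, then hand the result back as a flat frame.

```python
import pandas as pd

# Illustrative cell-property table indexed by (edge, node) pairs.
cell_props = pd.DataFrame(
    {"edges": ["e1", "e1", "e2"], "nodes": ["a", "b", "b"], "w": [0.5, 0.1, 0.2]}
).set_index(["edges", "nodes"])

surviving = ["e1"]  # e.g. the uidset of the restricted entity

# .loc with a list of level-0 labels keeps every row under those labels.
restricted_props = cell_props.loc[surviving].reset_index()
```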
+ +
[docs] def assign_cell_properties( + self, + cell_props: pd.DataFrame | dict[T, dict[T, dict[Any, Any]]], + misc_col: Optional[str] = None, + replace: bool = False, + ) -> None: + """Assign new properties to cells of the incidence matrix and update + :attr:`properties` + + Parameters + ---------- + cell_props : pandas.DataFrame, dict of iterables, or doubly-nested dict + See documentation of the `cell_properties` parameter in :class:`EntitySet` + misc_col : str, optional + name of column to be used for miscellaneous cell property dicts + replace : bool, default=False + If True, replace existing :attr:`cell_properties` with result; + otherwise update with new values from result + + Raises + ------ + AttributeError + Not supported for :attr:`dimsize`=1 + """ + if self.dimsize < 2: + raise AttributeError( + f"cell properties are not supported for 'dimsize'={self.dimsize}" + ) + + misc_col = misc_col or self._misc_cell_props_col + try: + cell_props = cell_props.rename( + columns={misc_col: self._misc_cell_props_col} + ) + except AttributeError: # handle cell props in nested dict format + self._cell_properties_from_dict(cell_props) + else: # handle cell props in DataFrame format + self._cell_properties_from_dataframe(cell_props)
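The duck-typing dispatch used above can be isolated in a few lines (the function name and misc-column default below are hypothetical): a DataFrame has `.rename(columns=...)` while a nested dict does not, so the `AttributeError` decides which private handler runs.

```python
import pandas as pd

# Hypothetical standalone version of the try/except dispatch:
# DataFrame -> rename succeeds -> DataFrame handler;
# nested dict -> AttributeError -> dict handler.
def dispatch(cell_props, misc_col="cell_properties"):
    try:
        cell_props = cell_props.rename(columns={misc_col: misc_col})
    except AttributeError:   # nested-dict input: dicts have no .rename
        return "from_dict"
    else:                    # DataFrame input
        return "from_dataframe"
```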
+ + def _cell_properties_from_dataframe(self, cell_props: pd.DataFrame) -> None: + """Private handler for updating :attr:`cell_properties` from a DataFrame + + Parameters + ---------- + cell_props : DataFrame + """ + if cell_props.index.nlevels > 1: + extra_levels = [ + idx_lev + for idx_lev in cell_props.index.names + if idx_lev not in self._data_cols + ] + cell_props = cell_props.reset_index(level=extra_levels) + + misc_col = self._misc_cell_props_col + + try: + cell_props.index = cell_props.index.reorder_levels(self._data_cols) + except AttributeError: + if cell_props.index.name in self._data_cols: + cell_props = cell_props.reset_index() + + try: + cell_props = cell_props.set_index( + self._data_cols, verify_integrity=True + ) + except ValueError: + warnings.warn( + "duplicate cell rows will be dropped after first occurrence" + ) + cell_props = cell_props.drop_duplicates(self._data_cols) + cell_props = cell_props.set_index(self._data_cols) + + if misc_col in cell_props: + try: + cell_props[misc_col] = cell_props[misc_col].apply(literal_eval) + except ValueError: + pass # data already parsed, no literal eval needed + else: + warnings.warn("parsed cell property dict column from string literal") + + cell_properties = cell_props.combine_first(self.cell_properties) + + self._cell_properties = cell_properties.sort_index() + + def _cell_properties_from_dict( + self, cell_props: dict[T, dict[T, dict[Any, Any]]] + ) -> None: + """Private handler for updating :attr:`cell_properties` from a doubly-nested dict + + Parameters + ---------- + cell_props : doubly-nested dict + """ + # TODO: there may be a more efficient way to convert this to a dataframe instead + # of updating one-by-one via nested loop, but checking whether each 
prop_name + # belongs in a designated existing column or the misc. property dict column + # makes it more challenging. + # For now: only use nested loop update if non-misc. columns currently exist + if len(self.cell_properties.columns) > 1: + for item1 in cell_props: + for item2 in cell_props[item1]: + for prop_name, prop_val in cell_props[item1][item2].items(): + self.set_cell_property(item1, item2, prop_name, prop_val) + else: + cells = pd.MultiIndex.from_tuples( + [(item1, item2) for item1 in cell_props for item2 in cell_props[item1]], + names=self._data_cols, + ) + props_data = [cell_props[item1][item2] for item1, item2 in cells] + cell_props = pd.DataFrame( + {self._misc_cell_props_col: props_data}, index=cells + ) + self._cell_properties_from_dataframe(cell_props) + +
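The bulk-conversion branch above (nested dict to a one-column frame on a MultiIndex) can be run standalone; the column names and tiny input below are illustrative:

```python
import pandas as pd

# Flatten a doubly-nested dict of cell properties into a single
# misc-property column indexed by (edge, node) pairs.
cell_props = {"e1": {"a": {"w": 0.5}, "b": {"w": 0.1}}, "e2": {"b": {"w": 0.2}}}

cells = pd.MultiIndex.from_tuples(
    [(e, n) for e in cell_props for n in cell_props[e]], names=["edges", "nodes"]
)
props_data = [cell_props[e][n] for e, n in cells]
frame = pd.DataFrame({"cell_properties": props_data}, index=cells)
```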
[docs] def collapse_identical_elements( + self, return_equivalence_classes: bool = False, **kwargs + ) -> EntitySet | tuple[EntitySet, dict[str, list[str]]]: + """Create a new :class:`EntitySet` by collapsing sets with the same set elements + + Each item in level 0 (first column) defines a set containing all the level 1 + (second column) items with which it appears in the same row of the underlying + data table. + + Parameters + ---------- + return_equivalence_classes : bool, default=False + If True, return a dictionary of equivalence classes keyed by new edge names + **kwargs + Extra arguments to :class:`EntitySet` constructor + + Returns + ------- + new_entity : EntitySet + new :class:`EntitySet` with identical sets collapsed; + if all sets are unique, the system of sets will be the same as the original. + equivalence_classes : dict of lists, optional + if `return_equivalence_classes`=True, + ``{collapsed set label: [level 0 item labels]}``. + """ + # group by level 0 (set), aggregate level 1 (set elements) as frozenset + collapse = ( + self._dataframe[self._data_cols] + .groupby(self._data_cols[0], as_index=False) + .agg(frozenset) + ) + + # aggregation method to rename equivalence classes as [first item]: [# items] + agg_kwargs = {"name": (self._data_cols[0], lambda x: f"{x.iloc[0]}: {len(x)}")} + if return_equivalence_classes: + # aggregation method to list all items in each equivalence class + agg_kwargs.update(equivalence_class=(self._data_cols[0], list)) + # group by frozenset of level 1 items (set elements), aggregate to get names of + # equivalence classes and (optionally) list of level 0 items (sets) in each + collapse = collapse.groupby(self._data_cols[1], as_index=False).agg( + **agg_kwargs + ) + # convert to nested dict representation of collapsed system of sets + collapse = collapse.set_index("name") + new_entity_dict = collapse[self._data_cols[1]].to_dict() + # construct new EntitySet from system of sets + new_entity = EntitySet(new_entity_dict, 
**kwargs) + + if return_equivalence_classes: + # lists of equivalent sets, keyed by equivalence class name + equivalence_classes = collapse.equivalence_class.to_dict() + return new_entity, equivalence_classes + return new_entity
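The two groupby stages above can be replayed on a plain DataFrame (the set system and column names below are illustrative): first aggregate each set to a frozenset of its elements, then group sets with equal contents and rename each equivalence class as `"[first member]: [class size]"`.

```python
import pandas as pd

# e1 and e2 both contain {1, 2}; e3 contains {1, 2, 3}.
df = pd.DataFrame(
    {"edges": ["e1", "e1", "e2", "e2", "e3", "e3", "e3"],
     "nodes": [1, 2, 1, 2, 1, 2, 3]}
)

# Stage 1: each set becomes a (hashable) frozenset of its elements.
collapse = df.groupby("edges", as_index=False).agg(frozenset)

# Stage 2: group equal frozensets; name classes and list their members.
collapse = collapse.groupby("nodes", as_index=False).agg(
    name=("edges", lambda x: f"{x.iloc[0]}: {len(x)}"),
    equivalence_class=("edges", list),
)
```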
+ +
[docs] def set_cell_property( + self, item1: T, item2: T, prop_name: Any, prop_val: Any + ) -> None: + """Set a property of a cell i.e., incidence between items of different levels + + Parameters + ---------- + item1 : hashable + name of an item in level 0 + item2 : hashable + name of an item in level 1 + prop_name : hashable + name of the cell property to set + prop_val : any + value of the cell property to set + + See Also + -------- + get_cell_property, get_cell_properties + """ + if item2 in self.elements[item1]: + if prop_name in self.properties: + self._cell_properties.loc[(item1, item2), prop_name] = pd.Series( + [prop_val] + ) + else: + try: + self._cell_properties.loc[ + (item1, item2), self._misc_cell_props_col + ].update({prop_name: prop_val}) + except KeyError: + self._cell_properties.loc[(item1, item2), :] = { + self._misc_cell_props_col: {prop_name: prop_val} + }
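The two branches of `set_cell_property` reduce to a simple rule that can be sketched with a plain dict standing in for one row of the cell-properties table (the function and misc-column names below are hypothetical): a property that already has a dedicated column is written directly, anything else is folded into the miscellaneous per-cell dict.

```python
# Plain-dict sketch of the dedicated-column vs. misc-dict branches.
def set_prop(cell, prop_name, prop_val, misc_col="cell_properties"):
    if prop_name in cell and prop_name != misc_col:
        cell[prop_name] = prop_val                           # dedicated column
    else:
        cell.setdefault(misc_col, {})[prop_name] = prop_val  # misc dict

cell = {"weight": 1.0, "cell_properties": {}}
set_prop(cell, "weight", 2.0)           # overwrites the named column
set_prop(cell, "name", "related_to")    # lands in the misc dict
```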
+ +
[docs] def get_cell_property(self, item1: T, item2: T, prop_name: Any) -> Any: + """Get a property of a cell i.e., incidence between items of different levels + + Parameters + ---------- + item1 : hashable + name of an item in level 0 + item2 : hashable + name of an item in level 1 + prop_name : hashable + name of the cell property to get + + Returns + ------- + prop_val : any + value of the cell property + + See Also + -------- + get_cell_properties, set_cell_property + """ + try: + cell_props = self.cell_properties.loc[(item1, item2)] + except KeyError: + raise + # TODO: raise informative exception + + try: + prop_val = cell_props.loc[prop_name] + except KeyError: + prop_val = cell_props.loc[self._misc_cell_props_col].get(prop_name) + + return prop_val
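The lookup order in `get_cell_property` mirrors the setter: named columns are tried first, then the misc. property dict, so an unknown name yields `None` rather than a `KeyError`. A plain-dict sketch (hypothetical names):

```python
# Row stand-in: one cell of the incidence matrix with its properties.
def get_prop(row, prop_name, misc_col="cell_properties"):
    if prop_name in row and prop_name != misc_col:
        return row[prop_name]                 # named column wins
    return row.get(misc_col, {}).get(prop_name)  # fall back to misc dict

row = {"weight": 0.5, "cell_properties": {"name": "related_to"}}
```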
+ +
[docs] def get_cell_properties(self, item1: T, item2: T) -> dict[Any, Any]: + """Get all properties of a cell, i.e., incidence between items of different + levels + + Parameters + ---------- + item1 : hashable + name of an item in level 0 + item2 : hashable + name of an item in level 1 + + Returns + ------- + dict + ``{named cell property: cell property value, ..., misc. cell property column + name: {cell property name: cell property value}}`` + + See Also + -------- + get_cell_property, set_cell_property + """ + try: + cell_props = self.cell_properties.loc[(item1, item2)] + except KeyError: + raise + # TODO: raise informative exception + + return cell_props.to_dict()
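The shape of the returned dict can be seen by selecting one row of a MultiIndexed frame by its full `(edge, node)` key and serializing it with `.to_dict()` (column names below are illustrative):

```python
import pandas as pd

# One-row cell-properties table: a named column plus a misc-dict column.
cp = pd.DataFrame(
    {"edges": ["e1"], "nodes": ["a"], "weight": [0.5],
     "cell_properties": [{"name": "related_to"}]}
).set_index(["edges", "nodes"])

# .loc with the full MultiIndex key returns the row as a Series.
props = cp.loc[("e1", "a")].to_dict()
```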
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/classes/hypergraph.html b/_modules/hypernetx/classes/hypergraph.html new file mode 100644 index 00000000..3b070274 --- /dev/null +++ b/_modules/hypernetx/classes/hypergraph.html @@ -0,0 +1,2502 @@ + + + + + + hypernetx.classes.hypergraph — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for hypernetx.classes.hypergraph

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+from __future__ import annotations
+
+import pickle
+import warnings
+from copy import deepcopy
+from collections import defaultdict
+from collections.abc import Sequence, Iterable
+from typing import Optional, Any, TypeVar, Union, Mapping, Hashable
+
+import networkx as nx
+import numpy as np
+import pandas as pd
+from networkx.algorithms import bipartite
+from scipy.sparse import coo_matrix, csr_matrix
+
+from hypernetx.classes import Entity, EntitySet
+from hypernetx.exception import HyperNetXError
+from hypernetx.utils.decorators import warn_nwhy
+from hypernetx.classes.helpers import merge_nested_dicts, dict_depth
+
+__all__ = ["Hypergraph"]
+
+T = TypeVar("T", bound=Union[str, int])
+
+
+
[docs]class Hypergraph: + """ + Parameters + ---------- + + setsystem : (optional) dict of iterables, dict of dicts,iterable of iterables, + pandas.DataFrame, numpy.ndarray, default = None + See SetSystem above for additional setsystem requirements. + + edge_col : (optional) str | int, default = 0 + column index (or name) in pandas.dataframe or numpy.ndarray, + used for (hyper)edge ids. Will be used to reference edgeids for + all set systems. + + node_col : (optional) str | int, default = 1 + column index (or name) in pandas.dataframe or numpy.ndarray, + used for node ids. Will be used to reference nodeids for all set systems. + + cell_weight_col : (optional) str | int, default = None + column index (or name) in pandas.dataframe or numpy.ndarray used for + referencing cell weights. For a dict of dicts references key in cell + property dicts. + + cell_weights : (optional) Sequence[float,int] | int | float , default = 1.0 + User specified cell_weights or default cell weight. + Sequential values are only used if setsystem is a + dataframe or ndarray in which case the sequence must + have the same length and order as these objects. + Sequential values are ignored for dataframes if cell_weight_col is already + a column in the data frame. + If cell_weights is assigned a single value + then it will be used as default for missing values or when no cell_weight_col + is given. + + cell_properties : (optional) Sequence[int | str] | Mapping[T,Mapping[T,Mapping[str,Any]]], + default = None + Column names from pd.DataFrame to use as cell properties + or a dict assigning cell_property to incidence pairs of edges and + nodes. Will generate a misc_cell_properties, which may have variable lengths per cell. + + misc_cell_properties : (optional) str | int, default = None + Column name of dataframe corresponding to a column of variable + length property dictionaries for the cell. Ignored for other setsystem + types. 
+ + aggregateby : (optional) str, dict, default = 'first' + By default duplicate edge,node incidences will be dropped unless + specified with `aggregateby`. + See pandas.DataFrame.agg() methods for additional syntax and usage + information. + + edge_properties : (optional) pd.DataFrame | dict, default = None + Properties associated with edge ids. + First column of dataframe or keys of dict link to edge ids in + setsystem. + + node_properties : (optional) pd.DataFrame | dict, default = None + Properties associated with node ids. + First column of dataframe or keys of dict link to node ids in + setsystem. + + properties : (optional) pd.DataFrame | dict, default = None + Concatenation/union of edge_properties and node_properties. + By default, the object id is used and should be the first column of + the dataframe, or key in the dict. If there are nodes and edges + with the same ids and different properties then use the edge_properties + and node_properties keywords. + + misc_properties : (optional) int | str, default = None + Column of property dataframes with dtype=dict. Intended for variable + length property dictionaries for the objects. + + edge_weight_prop : (optional) str, default = None, + Name of property in edge_properties to use for weight. + + node_weight_prop : (optional) str, default = None, + Name of property in node_properties to use for weight. + + weight_prop : (optional) str, default = None + Name of property in properties to use for 'weight' + + default_edge_weight : (optional) int | float, default = 1 + Used when edge weight property is missing or undefined. + + default_node_weight : (optional) int | float, default = 1 + Used when node weight property is missing or undefined + + name : (optional) str, default = None + Name assigned to hypergraph + + + ====================== + Hypergraphs in HNX 2.0 + ====================== + + An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: + V = nodes (vertices) and E = (hyper)edges. 
+ + HNX allows for multi-edges by distinguishing edges by + their identifiers instead of their contents. For example, if + V = {1,2,3} and E = {e1,e2,e3}, + where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, + the edges e1 and e2 contain the same set of nodes and yet + are distinct and are distinguishable within H = (V,E). + + New as of version 2.0, HNX provides methods to easily store and + access additional metadata such as cell, edge, and node weights. + Metadata associated with (edge,node) incidences + are referenced as **cell_properties**. + Metadata associated with a single edge or node is referenced + as its **properties**. + + The fundamental object needed to create a hypergraph is a **setsystem**. The + setsystem defines the many-to-many relationships between edges and nodes in + the hypergraph. Cell properties for the incidence pairs can be defined within + the setsystem or in a separate pandas.Dataframe or dict. + Edge and node properties are defined with a pandas.DataFrame or dict. + + SetSystems + ---------- + There are five types of setsystems currently accepted by the library. + + 1. **iterable of iterables** : Barebones hypergraph uses Pandas default + indexing to generate hyperedge ids. Elements must be hashable.: :: + + >>> H = Hypergraph([{1,2},{1,2},{1,2,3}]) + + 2. **dictionary of iterables** : the most basic way to express many-to-many + relationships providing edge ids. The elements of the iterables must be + hashable): :: + + >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]}) + + 3. **dictionary of dictionaries** : allows cell properties to be assigned + to a specific (edge, node) incidence. 
This is particularly useful when + there are variable length dictionaries assigned to each pair: :: + + >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.1, 'name': 'related_to', + >>> 'startdate': '05.13.2020'}}, + >>> 'e2':{ 1: {'w':0.52, 'name': 'owned_by'}, + >>> 2: {'w':0.2}}, + >>> 'e3':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.2, 'name': 'owner_of'}, + >>> 3: {'w':1, 'type': 'relationship'}} + + >>> H = Hypergraph(d, cell_weight_col='w') + + 4. **pandas.DataFrame** For large datasets and for datasets with cell + properties it is most efficient to construct a hypergraph directly from + a pandas.DataFrame. Incidence pairs are in the first two columns. + Cell properties shared by all incidence pairs can be placed in their own + column of the dataframe. Variable length dictionaries of cell properties + particular to only some of the incidence pairs may be placed in a single + column of the dataframe. Representing the data above as a dataframe df: + + +-----------+-----------+-----------+-----------------------------------+ + | col1 | col2 | w | col3 | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 1 | 0.5 | {'name':'related_to'} | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 2 | 0.1 | {"name":"related_to", | + | | | | "startdate":"05.13.2020"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 1 | 0.52 | {"name":"owned_by"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 2 | 0.2 | | + +-----------+-----------+-----------+-----------------------------------+ + | ... | ... | ... | {...} | + +-----------+-----------+-----------+-----------------------------------+ + + The first row of the dataframe is used to reference each column. :: + + >>> H = Hypergraph(df,edge_col="col1",node_col="col2", + >>> cell_weight_col="w",misc_cell_properties="col3") + + 5. 
**numpy.ndarray** For homogeneous datasets given in an ndarray a + pandas dataframe is generated and column names are added from the + edge_col and node_col arguments. Cell properties containing multiple data + types are added with a separate dataframe or dict and passed through the + cell_properties keyword. :: + + >>> arr = np.array([['e1','1'],['e1','2'], + >>> ['e2','1'],['e2','2'], + >>> ['e3','1'],['e3','2'],['e3','3']]) + >>> H = hnx.Hypergraph(arr, edge_col='col1', node_col='col2') + + + Edge and Node Properties + ------------------------ + Properties specific to a single edge or node are passed through the + keywords: **edge_properties, node_properties, properties**. + Properties may be passed as dataframes or dicts. + The first column or index of the dataframe, or the keys of the dict, + correspond to the edge and/or node identifiers. + If identifiers are shared among edges and nodes, or are distinct + for edges and nodes, properties may be combined into a single + object and passed to the **properties** keyword. For example: + + +-----------+-----------+---------------------------------------+ + | id | weight | properties | + +-----------+-----------+---------------------------------------+ + | e1 | 5.0 | {'type':'event'} | + +-----------+-----------+---------------------------------------+ + | e2 | 0.52 | {"name":"owned_by"} | + +-----------+-----------+---------------------------------------+ + | ... | ... | {...} | + +-----------+-----------+---------------------------------------+ + | 1 | 1.2 | {'color':'red'} | + +-----------+-----------+---------------------------------------+ + | 2 | .003 | {'name':'Fido','color':'brown'} | + +-----------+-----------+---------------------------------------+ + | 3 | 1.0 | {} | + +-----------+-----------+---------------------------------------+ + + A properties dictionary should have the format: :: + + dp = {id1 : {prop1:val1, prop2:val2,...}, id2 : ... 
} + + A properties dataframe may be used for nodes and edges sharing ids + but differing in cell properties by adding a level index using 0 + for edges and 1 for nodes: + + +-----------+-----------+-----------+---------------------------+ + | level | id | weight | properties | + +-----------+-----------+-----------+---------------------------+ + | 0 | e1 | 5.0 | {'type':'event'} | + +-----------+-----------+-----------+---------------------------+ + | 0 | e2 | 0.52 | {"name":"owned_by"} | + +-----------+-----------+-----------+---------------------------+ + | ... | ... | ... | {...} | + +-----------+-----------+-----------+---------------------------+ + | 1 | 1 | 1.2 | {'color':'red'} | + +-----------+-----------+-----------+---------------------------+ + | 1 | 2 | .003 | {'name':'Fido','color':'brown'} | + +-----------+-----------+-----------+---------------------------+ + | ... | ... | ... | {...} | + +-----------+-----------+-----------+---------------------------+ + + + + Weights + ------- + The default key for cell and object weights is "weight". The default value + is 1. Weights may be assigned and/or a new default prescribed in the + constructor using **cell_weight_col** and **cell_weights** for incidence pairs, + and using **edge_weight_prop, node_weight_prop, weight_prop, + default_edge_weight,** and **default_node_weight** for node and edge weights. 
+ + """ + + @warn_nwhy + def __init__( + self, + setsystem: Optional[ + pd.DataFrame + | np.ndarray + | Mapping[T, Iterable[T]] + | Iterable[Iterable[T]] + | Mapping[T, Mapping[T, Mapping[str, Any]]] + ] = None, + edge_col: str | int = 0, + node_col: str | int = 1, + cell_weight_col: Optional[str | int] = "cell_weights", + cell_weights: Sequence[float] | float = 1.0, + cell_properties: Optional[ + Sequence[str | int] | Mapping[T, Mapping[T, Mapping[str, Any]]] + ] = None, + misc_cell_properties_col: Optional[str | int] = None, + aggregateby: str | dict[str, str] = "first", + edge_properties: Optional[pd.DataFrame | dict[T, dict[Any, Any]]] = None, + node_properties: Optional[pd.DataFrame | dict[T, dict[Any, Any]]] = None, + properties: Optional[ + pd.DataFrame | dict[T, dict[Any, Any]] | dict[T, dict[T, dict[Any, Any]]] + ] = None, + misc_properties_col: Optional[str | int] = None, + edge_weight_prop_col: str | int = "weight", + node_weight_prop_col: str | int = "weight", + weight_prop_col: str | int = "weight", + default_edge_weight: Optional[float | None] = None, + default_node_weight: Optional[float | None] = None, + default_weight: float = 1.0, + name: Optional[str] = None, + **kwargs, + ): + self.name = name or "" + self.misc_cell_properties_col = misc_cell_properties = ( + misc_cell_properties_col or "cell_properties" + ) + self.misc_properties_col = misc_properties_col = ( + misc_properties_col or "properties" + ) + self.default_edge_weight = default_edge_weight = ( + default_edge_weight or default_weight + ) + self.default_node_weight = default_node_weight = ( + default_node_weight or default_weight + ) + ### cell properties + + if setsystem is None: #### Empty Case + + self._edges = EntitySet({}) + self._nodes = EntitySet({}) + self._state_dict = {} + + else: #### DataFrame case + if isinstance(setsystem, pd.DataFrame): + if isinstance(edge_col, int): + self._edge_col = edge_col = setsystem.columns[edge_col] + if isinstance(edge_col, int): + setsystem = 
setsystem.rename(columns={edge_col: "edges"}) + self._edge_col = edge_col = "edges" + else: + self._edge_col = edge_col + + if isinstance(node_col, int): + self._node_col = node_col = setsystem.columns[node_col] + if isinstance(node_col, int): + setsystem = setsystem.rename(columns={node_col: "nodes"}) + self._node_col = node_col = "nodes" + else: + self._node_col = node_col + + entity = setsystem.copy() + + if isinstance(cell_weight_col, int): + self._cell_weight_col = setsystem.columns[cell_weight_col] + else: + self._cell_weight_col = cell_weight_col + + if cell_weight_col in entity: + entity = entity.fillna({cell_weight_col: cell_weights}) + else: + entity[cell_weight_col] = cell_weights + + if isinstance(cell_properties, Sequence): + cell_properties = [ + c + for c in cell_properties + if not c in [edge_col, node_col, cell_weight_col] + ] + cols = [edge_col, node_col, cell_weight_col] + cell_properties + entity = entity[cols] + elif isinstance(cell_properties, dict): + cp = [] + for idx in entity.index: + edge, node = entity.iloc[idx][[edge_col, node_col]].values + cp.append(cell_properties[edge][node]) + entity["cell_properties"] = cp + + else: ### Cases Other than DataFrame + self._edge_col = edge_col = edge_col or "edges" + if node_col == 1: + self._node_col = node_col = "nodes" + else: + self._node_col = node_col + self._cell_weight_col = cell_weight_col + + if isinstance(setsystem, np.ndarray): + if setsystem.shape[1] != 2: + raise HyperNetXError("Numpy array must have exactly 2 columns.") + entity = pd.DataFrame(setsystem, columns=[edge_col, node_col]) + entity[cell_weight_col] = cell_weights + + elif isinstance(setsystem, dict): + ## check if it is a dict of iterables or a nested dict. if the latter then pull + ## out the nested dicts as cell properties. 
+ ## cell properties must be of the same type as setsystem + + entity = pd.Series(setsystem).explode() + entity = pd.DataFrame( + {edge_col: entity.index.to_list(), node_col: entity.values} + ) + + if dict_depth(setsystem) > 2: + cell_props = dict(setsystem) + if isinstance(cell_properties, dict): + ## if setsystem is a dict then cell properties must be a dict + cell_properties = merge_nested_dicts( + cell_props, cell_properties + ) + else: + cell_properties = cell_props + + df = setsystem + cp = [] + wt = [] + for idx in entity.index: + edge, node = entity.values[idx][[0, 1]] + wt.append(df[edge][node].get(cell_weight_col, cell_weights)) + cp.append(df[edge][node]) + entity[self._cell_weight_col] = wt + entity["cell_properties"] = cp + + else: + entity[self._cell_weight_col] = cell_weights + + elif isinstance(setsystem, Iterable): + entity = pd.Series(setsystem).explode() + entity = pd.DataFrame( + {edge_col: entity.index.to_list(), node_col: entity.values} + ) + entity["cell_weights"] = cell_weights + + else: + raise HyperNetXError( + "setsystem is not supported or is in the wrong format." 
+ ) + + def props2dict(df=None): + if df is None: + return {} + elif isinstance(df, pd.DataFrame): + return df.set_index(df.columns[0]).to_dict(orient="index") + else: + return dict(df) + + if properties is None: + if edge_properties is not None or node_properties is not None: + if edge_properties is not None: + edge_properties = props2dict(edge_properties) + for e in entity[edge_col].unique(): + if not e in edge_properties: + edge_properties[e] = {} + for v in edge_properties.values(): + v.setdefault(edge_weight_prop_col, default_edge_weight) + else: + edge_properties = {} + if node_properties is not None: + node_properties = props2dict(node_properties) + for nd in entity[node_col].unique(): + if not nd in node_properties: + node_properties[nd] = {} + for v in node_properties.values(): + v.setdefault(node_weight_prop_col, default_node_weight) + else: + node_properties = {} + properties = {0: edge_properties, 1: node_properties} + else: + if isinstance(properties, pd.DataFrame): + if weight_prop_col in properties.columns: + properties = properties.fillna( + {weight_prop_col: default_weight} + ) + elif misc_properties_col in properties.columns: + for idx in properties.index: + if not isinstance( + properties[misc_properties_col][idx], dict + ): + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + else: + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + else: + properties[weight_prop_col] = default_weight + if isinstance(properties, dict): + if dict_depth(properties) <= 2: + properties = pd.DataFrame( + [ + {"id": k, misc_properties_col: v} + for k, v in properties.items() + ] + ) + for idx in properties.index: + if isinstance(properties[misc_properties_col][idx], dict): + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + else: + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + elif set(properties.keys()) == {0, 1}: + 
edge_properties = properties[0] + for e in entity[edge_col].unique(): + if not e in edge_properties: + edge_properties[e] = { + edge_weight_prop_col: default_edge_weight + } + else: + edge_properties[e].setdefault( + edge_weight_prop_col, default_edge_weight + ) + node_properties = properties[1] + for nd in entity[node_col].unique(): + if not nd in node_properties: + node_properties[nd] = { + node_weight_prop_col: default_node_weight + } + else: + node_properties[nd].setdefault( + node_weight_prop_col, default_node_weight + ) + for idx in properties.index: + if not isinstance( + properties[misc_properties_col][idx], dict + ): + properties[misc_properties_col][idx] = { + weight_prop_col: default_weight + } + else: + properties[misc_properties_col][idx].setdefault( + weight_prop_col, default_weight + ) + + self.E = EntitySet( + entity=entity, + level1=edge_col, + level2=node_col, + weight_col=cell_weight_col, + weights=cell_weights, + cell_properties=cell_properties, + misc_cell_props_col=misc_cell_properties_col or "cell_properties", + aggregateby=aggregateby or "sum", + properties=properties, + misc_props_col=misc_properties_col, + ) + + self._edges = self.E + self._nodes = self.E.restrict_to_levels([1]) + self._dataframe = self.E.cell_properties.reset_index() + self._data_cols = data_cols = [self._edge_col, self._node_col] + self._dataframe[data_cols] = self._dataframe[data_cols].astype("category") + + self.__dict__.update(locals()) + self._set_default_state() + + @property + def edges(self): + """ + Object associated with self._edges. + + Returns + ------- + EntitySet + """ + return self._edges + + @property + def nodes(self): + """ + Object associated with self._nodes. + + Returns + ------- + EntitySet + """ + return self._nodes + + @property + def dataframe(self): + """Returns dataframe of incidence pairs and their properties. 
+ + Returns + ------- + pd.DataFrame + """ + return self._dataframe + + @property + def properties(self): + """Returns dataframe of edge and node properties. + + Returns + ------- + pd.DataFrame + """ + return self.E.properties + + @property + def edge_props(self): + """Dataframe of edge properties + indexed on edge ids + + Returns + ------- + pd.DataFrame + """ + return self.E.properties.loc[0] + + @property + def node_props(self): + """Dataframe of node properties + indexed on node ids + + Returns + ------- + pd.DataFrame + """ + return self.E.properties.loc[1] + + @property + def incidence_dict(self): + """ + Dictionary keyed by edge uids with values the uids of nodes in each + edge + + Returns + ------- + dict + + """ + return self.E.incidence_dict + + @property + def shape(self): + """ + (number of nodes, number of edges) + + Returns + ------- + tuple + + """ + return len(self._nodes.elements), len(self._edges.elements) + + def __str__(self): + """ + String representation of hypergraph + + Returns + ------- + str + + """ + return f"{self.name}, <class 'hypernetx.classes.hypergraph.Hypergraph'>" + + def __repr__(self): + """ + String representation of hypergraph + + Returns + ------- + str + + """ + return f"{self.name}, hypernetx.classes.hypergraph.Hypergraph" + + def __len__(self): + """ + Number of nodes + + Returns + ------- + int + + """ + return len(self._nodes) + + def __iter__(self): + """ + Iterate over the nodes of the hypergraph + + Returns + ------- + dict_keyiterator + + """ + return iter(self.nodes) + + def __contains__(self, item): + """ + Returns boolean indicating if item is in self.nodes + + Parameters + ---------- + item : hashable or Entity + + """ + return item in self.nodes + + def __getitem__(self, node): + """ + Returns the neighbors of node + + Parameters + ---------- + node : Entity or hashable + If hashable, then must be uid of node in hypergraph + + Returns + ------- + neighbors(node) : iterator + + """ + return self.neighbors(node) 
+ +
[docs] def get_cell_properties( + self, edge: str, node: str, prop_name: Optional[str] = None + ) -> Any | dict[str, Any]: + """Get cell properties on a specified edge and node + + Parameters + ---------- + edge : str + edgeid + node : str + nodeid + prop_name : str, optional + name of a cell property; if None, all cell properties will be returned + + Returns + ------- + : int or str or dict of {str: any} + cell property value if `prop_name` is provided, otherwise ``dict`` of all + cell properties and values + """ + if prop_name is None: + return self.edges.get_cell_properties(edge, node) + + return self.edges.get_cell_property(edge, node, prop_name)
+ +
[docs] def get_properties(self, id, level=None, prop_name=None): + """Returns an object's specific property or all properties + + Parameters + ---------- + id : hashable + edge or node id + level : int | None, optional, default = None + if separate edge and node properties then enter 0 for edges + and 1 for nodes. + prop_name : str | None, optional, default = None + if None then all properties associated with the object will be + returned. + + Returns + ------- + : str or dict + single property or dictionary of properties + """ + if prop_name is None: + return self.E.get_properties(id, level=level) + else: + return self.E.get_property(id, prop_name, level=level)
+ +
[docs] @warn_nwhy + def get_linegraph(self, s=1, edges=True): + """ + Creates an :term:`s-linegraph` for the Hypergraph. + If edges=True (default) then the edges will be the vertices of the line + graph. Two vertices are connected by an s-line-graph edge if the + corresponding hypergraph edges intersect in at least s hypergraph nodes. + If edges=False, the hypergraph nodes will be the vertices of the line + graph. Two vertices are connected if the nodes they correspond to share + at least s incident hyperedges. + + Parameters + ---------- + s : int + The width of the connections. + edges : bool, optional, default = True + Determine if edges or nodes will be the vertices in the linegraph. + + Returns + ------- + nx.Graph + A NetworkX graph. + """ + d = self._state_dict + key = "sedgelg" if edges else "snodelg" + if s in d[key]: + return d[key][s] + + if edges: + A, Amap = self.edge_adjacency_matrix(s=s, index=True) + Amaplst = [(k, self.edge_props.loc[k].to_dict()) for k in Amap] + else: + A, Amap = self.adjacency_matrix(s=s, index=True) + Amaplst = [(k, self.node_props.loc[k].to_dict()) for k in Amap] + + ### TODO: add key function to compute weights lambda x,y : funcval + + A = np.array(np.nonzero(A)) + e1 = np.array([Amap[idx] for idx in A[0]]) + e2 = np.array([Amap[idx] for idx in A[1]]) + A = np.array([e1, e2]).T + g = nx.Graph() + g.add_edges_from(A) + g.add_nodes_from(Amaplst) + d[key][s] = g + return g
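The line-graph construction above can be sketched without the HyperNetX API. This is a minimal toy version (helper name `s_line_graph_edges` and set system `H` are illustrative, not part of the library): two hyperedges become adjacent in the s-line graph when they intersect in at least s nodes.

```python
from itertools import combinations

def s_line_graph_edges(setsystem, s=1):
    """Return sorted pairs of hyperedge ids that intersect in >= s nodes."""
    return sorted(
        tuple(sorted((e1, e2)))
        for e1, e2 in combinations(setsystem, 2)
        if len(set(setsystem[e1]) & set(setsystem[e2])) >= s
    )

H = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {5, 6}, "D": {6}}
print(s_line_graph_edges(H, s=1))  # [('A', 'B'), ('C', 'D')]
print(s_line_graph_edges(H, s=2))  # [('A', 'B')]
```

The actual method instead thresholds the cached sparse edge-adjacency matrix and wraps the result in an `nx.Graph`, which scales better than pairwise set intersection.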
+ +
[docs] def set_state(self, **kwargs): + """ + Allow state_dict updates from outside of class. Use with caution. + + Parameters + ---------- + **kwargs + key=value pairs to save in state dictionary + """ + self._state_dict.update(kwargs)
+ + def _set_default_state(self): + """Populate state_dict with default values""" + self._state_dict = {} + + self._state_dict["dataframe"] = df = self.dataframe + self._state_dict["labels"] = { + "edges": np.array(df[self._edge_col].cat.categories), + "nodes": np.array(df[self._node_col].cat.categories), + } + self._state_dict["data"] = np.array( + [df[self._edge_col].cat.codes, df[self._node_col].cat.codes], dtype=int + ).T + self._state_dict["snodelg"] = dict() ### s: nx.graph + self._state_dict["sedgelg"] = dict() + self._state_dict["neighbors"] = defaultdict(dict) ### s: {node: neighbors} + self._state_dict["edge_neighbors"] = defaultdict( + dict + ) ### s: {edge: edge_neighbors} + self._state_dict["adjacency_matrix"] = dict() ### s: scipy.sparse.csr_matrix + self._state_dict["edge_adjacency_matrix"] = dict() + +
[docs] def edge_size_dist(self): + """ + Returns the size for each edge + + Returns + ------- + np.array + + """ + + if "edge_size_dist" not in self._state_dict: + dist = np.array(np.sum(self.incidence_matrix(), axis=0))[0].tolist() + self.set_state(edge_size_dist=dist) + return dist + else: + return self._state_dict["edge_size_dist"]
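As the implementation suggests, edge sizes are just the column sums of the node-by-edge incidence matrix. A small numpy sketch (toy matrix `M`, not library code):

```python
import numpy as np

# Toy node-by-edge incidence matrix: rows = nodes, columns = edges.
M = np.array([
    [1, 0, 0],   # node a in edge 0
    [1, 1, 0],   # node b in edges 0 and 1
    [0, 1, 1],   # node c in edges 1 and 2
])

# The edge size distribution is the column sum of the incidence matrix.
edge_size_dist = M.sum(axis=0).tolist()
print(edge_size_dist)  # [2, 2, 1]
```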
+ +
[docs] def degree(self, node, s=1, max_size=None): + """ + The number of edges of size at least s (and at most max_size, if + given) that contain node. + + Parameters + ---------- + node : hashable + identifier for the node. + s : positive integer, optional, default 1 + smallest size of edge to consider in degree + max_size : positive integer or None, optional, default = None + largest size of edge to consider in degree + + Returns + ------- + : int + + """ + if s == 1 and max_size is None: + return len(self.E.memberships[node]) + else: + memberships = set() + for edge in self.E.memberships[node]: + size = len(self.edges[edge]) + if size >= s and (max_size is None or size <= max_size): + memberships.add(edge) + + return len(memberships)
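The size-filtered degree logic can be sketched as a plain comprehension over a membership dict (the helper `degree` and the toy `edges`/`memberships` dicts below are illustrative, assuming the same edge-size filter as above):

```python
def degree(memberships, edges, node, s=1, max_size=None):
    """Number of edges containing `node` whose size lies in [s, max_size]."""
    return sum(
        1
        for e in memberships[node]
        if len(edges[e]) >= s and (max_size is None or len(edges[e]) <= max_size)
    )

edges = {"E1": {"a", "b"}, "E2": {"a", "b", "c"}, "E3": {"a"}}
memberships = {"a": {"E1", "E2", "E3"}, "b": {"E1", "E2"}, "c": {"E2"}}
print(degree(memberships, edges, "a"))                   # 3
print(degree(memberships, edges, "a", s=2))              # 2
print(degree(memberships, edges, "a", s=2, max_size=2))  # 1
```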
+ +
[docs] def size(self, edge, nodeset=None): + """ + The number of nodes in nodeset that belong to edge. + If nodeset is None then returns the size of edge + + Parameters + ---------- + edge : hashable + The uid of an edge in the hypergraph + + Returns + ------- + size : int + + """ + if nodeset is not None: + return len(set(nodeset).intersection(set(self.edges[edge]))) + + return len(self.edges[edge])
+ +
[docs] def number_of_nodes(self, nodeset=None): + """ + The number of nodes in nodeset belonging to hypergraph. + + Parameters + ---------- + nodeset : an iterable of Entities, optional, default = None + If None, then return the number of nodes in hypergraph. + + Returns + ------- + number_of_nodes : int + + """ + if nodeset is not None: + return len([n for n in self.nodes if n in nodeset]) + + return len(self.nodes)
+ +
[docs] def number_of_edges(self, edgeset=None): + """ + The number of edges in edgeset belonging to hypergraph. + + Parameters + ---------- + edgeset : an iterable of Entities, optional, default = None + If None, then return the number of edges in hypergraph. + + Returns + ------- + number_of_edges : int + """ + if edgeset: + return len([e for e in self.edges if e in edgeset]) + + return len(self.edges)
+ +
[docs] def order(self): + """ + The number of nodes in hypergraph. + + Returns + ------- + order : int + """ + return len(self.nodes)
+ +
[docs] def dim(self, edge): + """ + Same as size(edge)-1. + """ + return self.size(edge) - 1
+ +
[docs] def neighbors(self, node, s=1): + """ + The nodes in hypergraph which share s edge(s) with node. + + Parameters + ---------- + node : hashable or Entity + uid for a node in hypergraph or the node Entity + + s : int, list, optional, default = 1 + Minimum number of edges shared by neighbors with node. + + Returns + ------- + neighbors : list + s-neighbors share at least s edges in the hypergraph + + """ + if node not in self.nodes: + print(f"{node} is not in hypergraph {self.name}.") + return None + if node in self._state_dict["neighbors"][s]: + return self._state_dict["neighbors"][s][node] + else: + M = self.incidence_matrix() + rdx = self._state_dict["labels"]["nodes"] + jdx = np.where(rdx == node) + idx = (M[jdx].dot(M.T) >= s) * 1 + idx = np.nonzero(idx)[1] + neighbors = list(rdx[idx]) + if len(neighbors) > 0: + neighbors.remove(node) + self._state_dict["neighbors"][s][node] = neighbors + else: + self._state_dict["neighbors"][s][node] = [] + return neighbors
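The s-neighbor computation above (a thresholded row of `M @ M.T`) is equivalent to counting shared edges directly. A toy sketch (helper `s_neighbors` is illustrative, not the library API, which works on the cached sparse incidence matrix):

```python
def s_neighbors(setsystem, node, s=1):
    """Nodes sharing at least s hyperedges with `node`."""
    memberships = {e for e, elems in setsystem.items() if node in elems}
    counts = {}
    for e in memberships:
        for v in setsystem[e]:
            if v != node:
                counts[v] = counts.get(v, 0) + 1
    return sorted(v for v, k in counts.items() if k >= s)

H = {"E1": {"a", "b", "c"}, "E2": {"a", "b"}, "E3": {"b", "d"}}
print(s_neighbors(H, "a"))       # ['b', 'c']
print(s_neighbors(H, "a", s=2))  # ['b']
```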
+ +
[docs] def edge_neighbors(self, edge, s=1): + """ + The edges in hypergraph which share s node(s) with edge. + + Parameters + ---------- + edge : hashable or Entity + uid for an edge in hypergraph or the edge Entity + + s : int, list, optional, default = 1 + Minimum number of nodes shared with edge by its neighbors. + + Returns + ------- + : list + List of edge neighbors + + """ + + if edge not in self.edges: + print(f"{edge} is not in hypergraph {self.name}.") + return None + if edge in self._state_dict["edge_neighbors"][s]: + return self._state_dict["edge_neighbors"][s][edge] + else: + M = self.incidence_matrix() + cdx = self._state_dict["labels"]["edges"] + jdx = np.where(cdx == edge) + idx = (M.T[jdx].dot(M) >= s) * 1 + idx = np.nonzero(idx)[1] + edge_neighbors = list(cdx[idx]) + if len(edge_neighbors) > 0: + edge_neighbors.remove(edge) + self._state_dict["edge_neighbors"][s][edge] = edge_neighbors + else: + self._state_dict["edge_neighbors"][s][edge] = [] + return edge_neighbors
+ +
[docs] def incidence_matrix(self, weights=False, index=False): + """ + An incidence matrix for the hypergraph indexed by nodes x edges. + + Parameters + ---------- + weights : bool, default =False + If False all nonzero entries are 1. + If True and self.static all nonzero entries are filled by + self.edges.cell_weights dictionary values. + + index : boolean, optional, default = False + If True return will include a dictionary of node uid : row number + and edge uid : column number + + Returns + ------- + incidence_matrix : scipy.sparse.csr.csr_matrix or np.ndarray + + row_index : list + index of node ids for rows + + col_index : list + index of edge ids for columns + + """ + sdkey = "incidence_matrix" + if weights: + sdkey = "weighted_" + sdkey + + if sdkey in self._state_dict: + M = self._state_dict[sdkey] + else: + df = self.dataframe + data_cols = [self._node_col, self._edge_col] + if weights == True: + data = df[self._cell_weight_col].values + M = csr_matrix( + (data, tuple(np.array(df[col].cat.codes) for col in data_cols)) + ) + else: + M = csr_matrix( + ( + [1] * len(df), + tuple(np.array(df[col].cat.codes) for col in data_cols), + ) + ) + self._state_dict[sdkey] = M + + if index == True: + rdx = self.dataframe[self._node_col].cat.categories + cdx = self.dataframe[self._edge_col].cat.categories + + return M, rdx, cdx + else: + return M
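A dense toy version of the incidence-matrix construction (the real method builds a `scipy.sparse.csr_matrix` from pandas category codes; the helper below is an illustrative sketch only):

```python
import numpy as np

def incidence_matrix(setsystem):
    """Dense node-by-edge incidence matrix plus row and column labels."""
    edges = sorted(setsystem)
    nodes = sorted(set().union(*setsystem.values()))
    M = np.zeros((len(nodes), len(edges)), dtype=int)
    for j, e in enumerate(edges):
        for v in setsystem[e]:
            M[nodes.index(v), j] = 1
    return M, nodes, edges

M, nodes, edges = incidence_matrix({"E1": {"a", "b"}, "E2": {"b", "c"}})
print(nodes, edges)  # ['a', 'b', 'c'] ['E1', 'E2']
print(M)
# [[1 0]
#  [1 1]
#  [0 1]]
```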
+ +
[docs] def adjacency_matrix(self, s=1, index=False, remove_empty_rows=False): + """ + The :term:`s-adjacency matrix` for the hypergraph. + + Parameters + ---------- + s : int, optional, default = 1 + + index: boolean, optional, default = False + if True, will return the index of ids for rows and columns + + remove_empty_rows: boolean, optional, default = False + + Returns + ------- + adjacency_matrix : scipy.sparse.csr.csr_matrix + + node_index : list + index of ids for rows and columns + + """ + try: + A = self._state_dict["adjacency_matrix"][s] + except KeyError: + M = self.incidence_matrix() + A = M @ (M.T) + A.setdiag(0) + A = (A >= s) * 1 + self._state_dict["adjacency_matrix"][s] = A + if index == True: + return A, self._state_dict["labels"]["nodes"] + else: + return A
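The three-step recipe in the body — multiply the incidence matrix by its transpose, zero the diagonal, threshold at s — can be checked with a dense numpy sketch (toy matrix `M`; the library uses sparse matrices and caches the result):

```python
import numpy as np

# Node-by-edge incidence matrix for edges {a,b,c}, {b,c}, {c,d}.
M = np.array([
    [1, 0, 0],  # a
    [1, 1, 0],  # b
    [1, 1, 1],  # c
    [0, 0, 1],  # d
])

def s_adjacency(M, s=1):
    A = M @ M.T              # A[i, j] = number of edges shared by nodes i, j
    np.fill_diagonal(A, 0)   # a node is not its own neighbor
    return (A >= s).astype(int)

A1 = s_adjacency(M, s=1)
A2 = s_adjacency(M, s=2)
print(A2.tolist())  # only b and c share >= 2 edges
```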
+ +
[docs] def edge_adjacency_matrix(self, s=1, index=False): + """ + The :term:`s-adjacency matrix` for the dual hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + index: boolean, optional, default = False + if True, will return the index of ids for rows and columns + + Returns + ------- + edge_adjacency_matrix : scipy.sparse.csr.csr_matrix + + edge_index : list + index of ids for rows and columns + + Notes + ----- + This is also the adjacency matrix for the line graph. + Two edges are s-adjacent if they share at least s nodes. + + """ + try: + A = self._state_dict["edge_adjacency_matrix"][s] + except KeyError: + M = self.incidence_matrix() + A = (M.T) @ (M) + A.setdiag(0) + A = (A >= s) * 1 + self._state_dict["edge_adjacency_matrix"][s] = A + if index == True: + return A, self._state_dict["labels"]["edges"] + else: + return A
+ +
[docs] def auxiliary_matrix(self, s=1, node=True, index=False): + """ + The unweighted :term:`s-edge or node auxiliary matrix` for hypergraph + + Parameters + ---------- + s : int, optional, default = 1 + node : bool, optional, default = True + whether to return based on node or edge adjacencies + + Returns + ------- + auxiliary_matrix : scipy.sparse.csr.csr_matrix + Node/Edge adjacency matrix with empty rows and columns + removed + index : np.array + row and column index of userids + + """ + if node == True: + A, Amap = self.adjacency_matrix(s, index=True) + else: + A, Amap = self.edge_adjacency_matrix(s, index=True) + + idx = np.nonzero(np.sum(A, axis=1))[0] + if len(idx) < A.shape[0]: + B = A[idx][:, idx] + else: + B = A + if index: + return B, Amap[idx] + else: + return B
+ +
[docs] def bipartite(self): + """ + Constructs the NetworkX bipartite graph associated to hypergraph. + + Returns + ------- + bipartite : nx.Graph + + Notes + ----- + Creates a bipartite networkx graph from hypergraph. + The nodes and (hyper)edges of hypergraph become the nodes of bipartite + graph. For every (hyper)edge e in the hypergraph and node n in e there + is an edge (n,e) in the graph. + + """ + B = nx.Graph() + B.add_nodes_from(self.edges, bipartite=0) + B.add_nodes_from(self.nodes, bipartite=1) + B.add_edges_from([(v, e) for e in self.edges for v in self.edges[e]]) + return B
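The bipartite edge set described in the notes is just every (node, hyperedge) incidence pair. A toy sketch without networkx (names are illustrative):

```python
# Hyperedges on one side, nodes on the other; a graph edge (v, e)
# exists whenever node v belongs to hyperedge e.
H = {"E1": {"a", "b"}, "E2": {"b", "c"}}
bipartite_edges = sorted((v, e) for e in H for v in H[e])
print(bipartite_edges)  # [('a', 'E1'), ('b', 'E1'), ('b', 'E2'), ('c', 'E2')]
```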
+ +
[docs] def dual(self, name=None, switch_names=True): + """ + Constructs a new hypergraph with roles of edges and nodes of hypergraph + reversed. + + Parameters + ---------- + name : hashable, optional + + switch_names : bool, optional, default = True + reverses edge_col and node_col names + unless edge_col = 'edges' and node_col = 'nodes' + + Returns + ------- + : hypergraph + + """ + dfp = deepcopy(self.edges.properties) + dfp = dfp.reset_index() + dfp.level = dfp.level.apply(lambda x: 1 * (x == 0)) + dfp = dfp.set_index(["level", "id"]) + + edge, node, wt = self._edge_col, self._node_col, self._cell_weight_col + df = deepcopy(self.dataframe) + cprops = [col for col in df.columns if not col in [edge, node, wt]] + + df[[edge, node]] = df[[node, edge]] + if switch_names == True and not ( + self._edge_col == "edges" and self._node_col == "nodes" + ): + # if switch_names == False or (self._edge_col == 'edges' and self._node_col == 'nodes'): + df = df.rename(columns={edge: self._node_col, node: self._edge_col}) + node = self._edge_col + edge = self._node_col + + return Hypergraph( + df, + edge_col=edge, + node_col=node, + cell_weight_col=wt, + cell_properties=cprops, + properties=dfp, + name=name, + )
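Conceptually, the dual just transposes the incidence relation: each node becomes an edge whose elements are the hyperedges that contained it. A toy sketch of that idea (the real method also carries over the properties dataframes and column names):

```python
def dual(setsystem):
    """Set system of the dual: nodes become edges and vice versa."""
    out = {}
    for e, elems in setsystem.items():
        for v in elems:
            out.setdefault(v, set()).add(e)
    return out

H = {"E1": {"a", "b"}, "E2": {"b", "c"}}
print(dual(H) == {"a": {"E1"}, "b": {"E1", "E2"}, "c": {"E2"}})  # True
print(dual(dual(H)) == H)  # the dual is an involution on set systems: True
```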
+ +
[docs] def collapse_edges( + self, + name=None, + return_equivalence_classes=False, + use_reps=None, + return_counts=None, + ): + """ + Constructs a new hypergraph gotten by identifying edges containing the + same nodes + + Parameters + ---------- + name : hashable, optional, default = None + + return_equivalence_classes: boolean, optional, default = False + Returns a dictionary of edge equivalence classes keyed by frozen + sets of nodes + + Returns + ------- + new hypergraph : Hypergraph + Equivalent edges are collapsed to a single edge named by a + representative of the equivalent edges followed by a colon and the + number of edges it represents. + + equivalence_classes : dict + A dictionary keyed by representative edge names with values equal + to the edges in its equivalence class + + Notes + ----- + Two edges are identified if their respective elements are the same. + Using this as an equivalence relation, the uids of the edges are + partitioned into equivalence classes. + + A single edge from the collapsed edges followed by a colon and the + number of elements in its equivalence class is used as the uid for + the new edge. + + """ + if use_reps is not None or return_counts is not None: + msg = """ + use_reps and return_counts are no longer supported keyword + arguments and will throw an error in the next release. + collapsed hypergraph automatically names collapsed objects by a + string "rep:count" + """ + warnings.warn(msg, DeprecationWarning) + + temp = self.edges.collapse_identical_elements( + return_equivalence_classes=return_equivalence_classes + ) + + if return_equivalence_classes: + return Hypergraph(temp[0].incidence_dict, name), temp[1] + + return Hypergraph(temp.incidence_dict, name)
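The equivalence-class idea in the notes — group edges by their frozen node sets, then rename each class "rep:count" — can be sketched as follows (the helper `collapse_edges` is illustrative; the library delegates to `EntitySet.collapse_identical_elements`):

```python
from collections import defaultdict

def collapse_edges(setsystem):
    """Group edges with identical node sets; name each class 'rep:count'."""
    classes = defaultdict(list)
    for e, elems in setsystem.items():
        classes[frozenset(elems)].append(e)
    return {
        f"{min(map(str, members))}:{len(members)}": set(nodes)
        for nodes, members in classes.items()
    }

H = {"E1": {"a", "b"}, "E2": {"a", "b"}, "E3": {"c"}}
print(collapse_edges(H))  # keys: 'E1:2' and 'E3:1'
```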
+ +
[docs] def collapse_nodes( + self, + name=None, + return_equivalence_classes=False, + use_reps=None, + return_counts=None, + ): + """ + Constructs a new hypergraph gotten by identifying nodes contained by + the same edges + + Parameters + ---------- + name: str, optional, default = None + + return_equivalence_classes: boolean, optional, default = False + Returns a dictionary of node equivalence classes keyed by frozen + sets of edges + + use_reps : boolean, optional, default = False - Deprecated, this no + longer works and will be removed. Choose a single element from the + collapsed nodes as uid for the new node, otherwise uses a frozen + set of the uids of nodes in the equivalence class + + return_counts: boolean, - Deprecated, this no longer works and will be + removed if use_reps is True the new nodes have uids given by a + tuple of the rep and the count + + Returns + ------- + new hypergraph : Hypergraph + + Notes + ----- + Two nodes are identified if their respective memberships are the same. + Using this as an equivalence relation, the uids of the nodes are + partitioned into equivalence classes. A single member of the + equivalence class is chosen to represent the class followed by the + number of members of the class. + + Example + ------- + + >>> h = Hypergraph(EntitySet('example',elements=[Entity('E1', / + ['a','b']),Entity('E2',['a','b'])])) + >>> h.incidence_dict + {'E1': {'a', 'b'}, 'E2': {'a', 'b'}} + >>> h.collapse_nodes().incidence_dict + {'E1': {frozenset({'a', 'b'})}, 'E2': {frozenset({'a', 'b'})}} + ### Fix this + >>> h.collapse_nodes(use_reps=True).incidence_dict + {'E1': {('a', 2)}, 'E2': {('a', 2)}} + + """ + if use_reps is not None or return_counts is not None: + msg = """ + use_reps and return_counts are no longer supported keyword arguments and will throw + an error in the next release. 
+ collapsed hypergraph automatically names collapsed objects by a string "rep:count" + """ + warnings.warn(msg, DeprecationWarning) + + temp = self.dual().edges.collapse_identical_elements( + return_equivalence_classes=return_equivalence_classes + ) + + if return_equivalence_classes: + return Hypergraph(temp[0].incidence_dict).dual(), temp[1] + + return Hypergraph(temp.incidence_dict, name).dual()
+ +
[docs] def collapse_nodes_and_edges( + self, + name=None, + return_equivalence_classes=False, + use_reps=None, + return_counts=None, + ): + """ + Returns a new hypergraph by collapsing nodes and edges. + + Parameters + ---------- + + name: str, optional, default = None + + use_reps: boolean, optional, default = False + Choose a single element from the collapsed elements as a + representative + + return_counts: boolean, optional, default = True + if use_reps is True the new elements are keyed by a tuple of the + rep and the count + + return_equivalence_classes: boolean, optional, default = False + Returns a dictionary of edge equivalence classes keyed by frozen + sets of nodes + + Returns + ------- + new hypergraph : Hypergraph + + Notes + ----- + Collapses the Nodes and Edges EntitySets. Two nodes(edges) are + duplicates if their respective memberships(elements) are the same. + Using this as an equivalence relation, the uids of the nodes(edges) + are partitioned into equivalence classes. A single member of the + equivalence class is chosen to represent the class followed by the + number of members of the class. + + Example + ------- + + >>> h = Hypergraph(EntitySet('example',elements=[Entity('E1', / + ['a','b']),Entity('E2',['a','b'])])) + >>> h.incidence_dict + {'E1': {'a', 'b'}, 'E2': {'a', 'b'}} + >>> h.collapse_nodes_and_edges().incidence_dict ### Fix this + {('E1', 2): {('a', 2)}} + + """ + if use_reps is not None or return_counts is not None: + msg = """ + use_reps and return_counts are no longer supported keyword + arguments and will throw an error in the next release. 
+ collapsed hypergraph automatically names collapsed objects by a + string "rep:count" + """ + warnings.warn(msg, DeprecationWarning) + + if return_equivalence_classes: + temp, neq = self.collapse_nodes( + name="temp", return_equivalence_classes=True + ) + ntemp, eeq = temp.collapse_edges(name=name, return_equivalence_classes=True) + return ntemp, neq, eeq + + temp = self.collapse_nodes(name="temp") + return temp.collapse_edges(name=name)
+ +
[docs] def restrict_to_nodes(self, nodes, name=None): + """New hypergraph gotten by restricting to nodes + + Parameters + ---------- + nodes : Iterable + nodeids to restrict to + + Returns + ------- + : hnx.Hypergraph + + """ + keys = set(self._state_dict["labels"]["nodes"]).difference(nodes) + return self.remove(keys, level=1)
+ +
[docs] def restrict_to_edges(self, edges, name=None): + """New hypergraph gotten by restricting to edges + + Parameters + ---------- + edges : Iterable + edgeids to restrict to + + Returns + ------- + hnx.Hypergraph + + """ + keys = set(self._state_dict["labels"]["edges"]).difference(edges) + return self.remove(keys, level=0)
+ +
[docs] def remove_edges(self, keys, name=None): + return self.remove(keys, level=0, name=name)
+ +
[docs] def remove_nodes(self, keys, name=None): + return self.remove(keys, level=1, name=name)
+ +
[docs] def remove(self, keys, level=None, name=None): + """Creates a new hypergraph with nodes and/or edges indexed by keys + removed. More efficient for creating a restricted hypergraph if the + restricted set is greater than what is being removed. + + Parameters + ---------- + keys : list | tuple | set | Hashable + node and/or edge id(s) to restrict to + level : None, optional + Enter 0 to remove edges with ids in keys. + Enter 1 to remove nodes with ids in keys. + If None then all objects in nodes and edges with the id will + be removed. + name : str, optional + Name of new hypergraph + + Returns + ------- + : hnx.Hypergraph + + """ + rdfprop = self.properties.copy() + rdf = self.dataframe.copy() + if isinstance(keys, (list, tuple, set)): + nkeys = keys + elif isinstance(keys, Hashable): + nkeys = list() + nkeys.append(keys) + else: + raise TypeError("`keys` parameter must be list | tuple | set | Hashable") + if level == 0: + kdx = set(nkeys).intersection(set(self._state_dict["labels"]["edges"])) + for k in kdx: + rdfprop = rdfprop.drop((0, k)) + rdf = rdf.loc[~(rdf[self._edge_col].isin(kdx))] + elif level == 1: + kdx = set(nkeys).intersection(set(self._state_dict["labels"]["nodes"])) + for k in kdx: + rdfprop = rdfprop.drop((1, k)) + rdf = rdf.loc[~(rdf[self._node_col].isin(kdx))] + else: + rdfprop = rdfprop.reset_index() + kdx = set(nkeys).intersection(rdfprop.id.unique()) + rdfprop = rdfprop.set_index("id") + rdfprop = rdfprop.drop(index=kdx) + rdf = rdf.loc[~(rdf[self._edge_col].isin(kdx))] + rdf = rdf.loc[~(rdf[self._node_col].isin(kdx))] + + return Hypergraph( + setsystem=rdf, + edge_col=self._edge_col, + node_col=self._node_col, + cell_weight_col=self._cell_weight_col, + misc_cell_properties_col=self.edges._misc_cell_props_col, + properties=rdfprop, + misc_properties_col=self.edges._misc_props_col, + name=name, + )
+ +
[docs] def toplexes(self, name=None): + """ + Returns a :term:`simple hypergraph` corresponding to self. + + Warning + ------- + Collapsing is no longer supported inside the toplexes method. Instead + generate a new collapsed hypergraph and compute the toplexes of the + new hypergraph. + + Parameters + ---------- + name: str, optional, default = None + """ + + thdict = {} + for e in self.edges: + thdict[e] = self.edges[e] + + tops = [] + for e in self.edges: + flag = True + old_tops = list(tops) + for top in old_tops: + if set(thdict[e]).issubset(thdict[top]): + flag = False + break + + if set(thdict[top]).issubset(thdict[e]): + tops.remove(top) + if flag: + tops += [e] + return self.restrict_to_edges(tops, name=name)
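A toplex is an edge whose node set is not properly contained in any other edge; the method keeps exactly those. A compact sketch (helper name illustrative; unlike the method above, it assumes duplicate edges have already been collapsed):

```python
def toplexes(setsystem):
    """Edge ids whose node sets are not properly contained in another edge."""
    return [
        e
        for e in setsystem
        if not any(
            f != e and set(setsystem[e]) < set(setsystem[f]) for f in setsystem
        )
    ]

H = {"E1": {"a", "b", "c"}, "E2": {"a", "b"}, "E3": {"c", "d"}}
print(toplexes(H))  # ['E1', 'E3'] -- E2 is swallowed by E1
```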
+ +
[docs] def is_connected(self, s=1, edges=False): + """ + Determines if hypergraph is :term:`s-connected <s-connected, + s-node-connected>`. + + Parameters + ---------- + s: int, optional, default 1 + + edges: boolean, optional, default = False + If True, will determine if s-edge-connected. + For s=1 s-edge-connected is the same as s-connected. + + Returns + ------- + is_connected : boolean + + Notes + ----- + + A hypergraph is s node connected if for any two nodes v0,vn + there exists a sequence of nodes v0,v1,v2,...,v(n-1),vn + such that every consecutive pair of nodes v(i),v(i+1) + share at least s edges. + + A hypergraph is s edge connected if for any two edges e0,en + there exists a sequence of edges e0,e1,e2,...,e(n-1),en + such that every consecutive pair of edges e(i),e(i+1) + share at least s nodes. + + """ + + g = self.get_linegraph(s=s, edges=edges) + is_connected = None + + try: + is_connected = nx.is_connected(g) + except nx.NetworkXPointlessConcept: + warnings.warn("Graph is null; ") + is_connected = False + + return is_connected
+ +
[docs] def singletons(self): + """ + Returns a list of singleton edges. A singleton edge is an edge of + size 1 with a node of degree 1. + + Returns + ------- + singles : list + A list of edge uids. + """ + + M, _, cdict = self.incidence_matrix(index=True) + # which axis has fewest members? if 1 then columns + idx = np.argmax(M.shape).tolist() + # we add down the row index if there are fewer columns + cols = M.sum(idx) + singles = [] + # index along opposite axis with one entry each + for c in np.nonzero((cols - 1 == 0))[(idx + 1) % 2]: + # if the singleton entry in that column is also + # singleton in its row find the entry + if idx == 0: + r = np.argmax(M.getcol(c)) + # and get its sum + s = np.sum(M.getrow(r)) + # if this is also 1 then the entry in r,c represents a + # singleton so we want to change that entry to 0 and + # remove the row. this means we want to remove the + # edge corresponding to c + if s == 1: + singles.append(cdict[c]) + else: # switch the role of r and c + r = np.argmax(M.getrow(c)) + s = np.sum(M.getcol(r)) + if s == 1: + singles.append(cdict[r]) + return singles
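A singleton edge, as defined above, has size 1 and its lone node appears in no other edge. The matrix bookkeeping in the method reduces to a simple degree check, sketched here on a toy set system (helper name illustrative):

```python
def singletons(setsystem):
    """Edges of size 1 whose only node has degree 1."""
    degree = {}
    for elems in setsystem.values():
        for v in elems:
            degree[v] = degree.get(v, 0) + 1
    return [
        e
        for e, elems in setsystem.items()
        if len(elems) == 1 and degree[next(iter(elems))] == 1
    ]

H = {"E1": {"a", "b"}, "E2": {"b"}, "E3": {"c"}}
print(singletons(H))  # ['E3'] -- E2 has size 1, but node b has degree 2
```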
+ +
[docs] def remove_singletons(self, name=None): + """ + Constructs clone of hypergraph with singleton edges removed. + + Returns + ------- + new hypergraph : Hypergraph + + """ + singletons = self.singletons() + if len(singletons) > len(self.edges) / 2: + E = [e for e in self.edges if e not in singletons] + return self.restrict_to_edges(E, name=name) + else: + return self.remove(singletons, level=0, name=name)
+ +
[docs] def s_connected_components(self, s=1, edges=True, return_singletons=False): + """ + Returns a generator for the :term:`s-edge-connected components + <s-edge-connected component>` + or the :term:`s-node-connected components <s-connected component, + s-node-connected component>` of the hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + edges : boolean, optional, default = True + If True will return edge components, if False will return node + components + return_singletons : bool, optional, default = False + + Notes + ----- + If edges=True, this method returns the s-edge-connected components as + lists of lists of edge uids. + An s-edge-component has the property that for any two edges e1 and e2 + there is a sequence of edges starting with e1 and ending with e2 + such that pairwise adjacent edges in the sequence intersect in at least + s nodes. If s=1 these are the path components of the hypergraph. + + If edges=False this method returns s-node-connected components. + A list of sets of uids of the nodes which are s-walk connected. + Two nodes v1 and v2 are s-walk-connected if there is a + sequence of nodes starting with v1 and ending with v2 such that + pairwise adjacent nodes in the sequence share s edges. If s=1 these + are the path components of the hypergraph. + + Example + ------- + >>> S = {'A':{1,2,3},'B':{2,3,4},'C':{5,6},'D':{6}} + >>> H = Hypergraph(S) + + >>> list(H.s_components(edges=True)) + [{'C', 'D'}, {'A', 'B'}] + >>> list(H.s_components(edges=False)) + [{1, 2, 3, 4}, {5, 6}] + + Yields + ------ + s_connected_components : iterator + Iterator returns sets of uids of the edges (or nodes) in the + s-edge(node) components of hypergraph. + + """ + g = self.get_linegraph(s, edges=edges) + for c in nx.connected_components(g): + if not return_singletons and len(c) == 1: + continue + yield c
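The components described above are just the connected components of the s-line graph. A self-contained sketch using the docstring's own example, via BFS over an edge-adjacency dict (helper name illustrative; the method delegates to `nx.connected_components`):

```python
def s_edge_components(setsystem, s=1):
    """Connected components of the s-line graph, as sets of edge ids."""
    adj = {
        e: {
            f
            for f in setsystem
            if f != e and len(set(setsystem[e]) & set(setsystem[f])) >= s
        }
        for e in setsystem
    }
    seen, comps = set(), []
    for e in setsystem:
        if e in seen:
            continue
        comp, stack = set(), [e]
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        comps.append(comp)
    return comps

H = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {5, 6}, "D": {6}}
print(s_edge_components(H, s=1))  # components {A, B} and {C, D}
print(s_edge_components(H, s=2))  # A-B still joined; C and D split apart
```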
+ +
[docs] def s_component_subgraphs( + self, s=1, edges=True, return_singletons=False, name=None + ): + """ + + Returns a generator for the induced subgraphs of s_connected + components. Removes singletons unless return_singletons is set to True. + Computed using s-linegraph generated either by the hypergraph + (edges=True) or its dual (edges = False) + + Parameters + ---------- + s : int, optional, default 1 + + edges : boolean, optional, default = True + Determines if edge or node components are desired. Returns + subgraphs equal to the hypergraph restricted to each set of + nodes(edges) in the s-connected components or s-edge-connected + components + return_singletons : bool, optional + + Yields + ------ + s_component_subgraphs : iterator + Iterator returns subgraphs generated by the edges (or nodes) in the + s-edge(node) components of hypergraph. + + """ + for idx, c in enumerate( + self.s_components(s=s, edges=edges, return_singletons=return_singletons) + ): + if edges: + yield self.restrict_to_edges(c, name=f"{name or self.name}:{idx}") + else: + yield self.restrict_to_nodes(c, name=f"{name or self.name}:{idx}")
+ +
[docs] def s_components(self, s=1, edges=True, return_singletons=True): + """ + Same as s_connected_components + + See Also + -------- + s_connected_components + """ + return self.s_connected_components( + s=s, edges=edges, return_singletons=return_singletons + )
+ +
[docs] def connected_components(self, edges=False): + """ + Same as :meth:`s_connected_components` with s=1, but nodes are returned + by default. Return iterator. + + See Also + -------- + s_connected_components + """ + return self.s_connected_components(edges=edges, return_singletons=True)
+ +
[docs] def connected_component_subgraphs(self, return_singletons=True, name=None): + """ + Same as :meth:`s_component_subgraphs` with s=1. Returns iterator + + See Also + -------- + s_component_subgraphs + """ + return self.s_component_subgraphs( + return_singletons=return_singletons, name=name + )
+ +
[docs] def components(self, edges=False): + """ + Same as :meth:`s_connected_components` with s=1, but nodes are returned + by default. Return iterator. + + See Also + -------- + s_connected_components + """ + return self.s_connected_components(s=1, edges=edges)
+ +
[docs] def component_subgraphs(self, return_singletons=False, name=None): + """ + Same as :meth:`s_component_subgraphs` with s=1. Returns iterator. + + See Also + -------- + s_component_subgraphs + """ + return self.s_component_subgraphs( + return_singletons=return_singletons, name=name + )
+ +
[docs] def node_diameters(self, s=1): + """ + Returns the node diameters of the connected components in hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + maximum diameter : int + + list of diameters : list + List of node diameters for the s-components in hypergraph + + list of components : list + List of the node uids in the s-components + """ + A, coldict = self.adjacency_matrix(s=s, index=True) + G = nx.from_scipy_sparse_matrix(A) + diams = [] + comps = [] + for c in nx.connected_components(G): + diamc = nx.diameter(G.subgraph(c)) + temp = set() + for e in c: + temp.add(coldict[e]) + comps.append(temp) + diams.append(diamc) + loc = np.argmax(diams).tolist() + return diams[loc], diams, comps
+ +
[docs] def edge_diameters(self, s=1): + """ + Returns the edge diameters of the s_edge_connected component subgraphs + in hypergraph. + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + maximum diameter : int + + list of diameters : list + List of edge_diameters for s-edge component subgraphs in hypergraph + + list of component : list + List of the edge uids in the s-edge component subgraphs. + + """ + A, coldict = self.edge_adjacency_matrix(s=s, index=True) + G = nx.from_scipy_sparse_matrix(A) + diams = [] + comps = [] + for c in nx.connected_components(G): + diamc = nx.diameter(G.subgraph(c)) + temp = set() + for e in c: + temp.add(coldict[e]) + comps.append(temp) + diams.append(diamc) + loc = np.argmax(diams).tolist() + return diams[loc], diams, comps
+ +
[docs] def diameter(self, s=1): + """ + Returns the length of the longest shortest s-walk between nodes in + hypergraph + + Parameters + ---------- + s : int, optional, default 1 + + Returns + ------- + diameter : int + + Raises + ------ + HyperNetXError + If hypergraph is not s-connected + + Notes + ----- + Two nodes are s-adjacent if they share at least s edges. + Two nodes v_start and v_end are s-walk connected if there is a + sequence of nodes v_start, v_1, v_2, ... v_n-1, v_end such that + consecutive nodes are s-adjacent. If the graph is not connected, + an error will be raised. + + """ + A = self.adjacency_matrix(s=s) + G = nx.from_scipy_sparse_matrix(A) + if nx.is_connected(G): + return nx.diameter(G) + + raise HyperNetXError(f"Hypergraph is not s-connected. s={s}")
+ +
[docs] def edge_diameter(self, s=1): + """ + Returns the length of the longest shortest s-walk between edges in + hypergraph + + Parameters + ---------- + s : int, optional, default 1 + + Return + ------ + edge_diameter : int + + Raises + ------ + HyperNetXError + If hypergraph is not s-edge-connected + + Notes + ----- + Two edges are s-adjacent if they share s nodes. + Two nodes e_start and e_end are s-walk connected if there is a + sequence of edges e_start, e_1, e_2, ... e_n-1, e_end such that + consecutive edges are s-adjacent. If the graph is not connected, an + error will be raised. + + """ + A = self.edge_adjacency_matrix(s=s) + G = nx.from_scipy_sparse_matrix(A) + if nx.is_connected(G): + return nx.diameter(G) + + raise HyperNetXError(f"Hypergraph is not s-connected. s={s}")
+ +
[docs] def distance(self, source, target, s=1): + """ + Returns the shortest s-walk distance between two nodes in the + hypergraph. + + Parameters + ---------- + source : node.uid or node + a node in the hypergraph + + target : node.uid or node + a node in the hypergraph + + s : positive integer + the number of edges + + Returns + ------- + s-walk distance : int + + See Also + -------- + edge_distance + + Notes + ----- + The s-distance is the shortest s-walk length between the nodes. + An s-walk between nodes is a sequence of nodes that pairwise share + at least s edges. The length of the shortest s-walk is 1 less than + the number of nodes in the path sequence. + + Uses the networkx shortest_path_length method on the graph + generated by the s-adjacency matrix. + + """ + g = self.get_linegraph(s=s, edges=False) + try: + dist = nx.shortest_path_length(g, source, target) + except (nx.NetworkXNoPath, nx.NodeNotFound): + warnings.warn(f"No {s}-path between {source} and {target}") + dist = np.inf + + return dist
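As the Notes above say, the s-distance is one less than the number of nodes in the shortest s-walk, and the library computes it via nx.shortest_path_length on the s-adjacency graph. A stdlib-only sketch of the same idea (hypothetical helper name, toy data):

```python
from collections import deque
from itertools import combinations

def s_distance(edges, source, target, s=1):
    """Shortest s-walk length between two nodes; float('inf') if none.

    edges: dict of edge id -> set of nodes. Two nodes are s-adjacent
    when they co-occur in at least s edges.
    """
    # count co-occurrences for every unordered node pair
    count = {}
    for members in edges.values():
        for u, v in combinations(members, 2):
            key = frozenset((u, v))
            count[key] = count.get(key, 0) + 1
    adj = {}
    for key, k in count.items():
        if k >= s:
            u, v = tuple(key)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    # BFS from source over the s-adjacency relation
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return float("inf")

edges = {"e1": {"a", "b"}, "e2": {"b", "c"}, "e3": {"b", "c"}, "e4": {"c", "d"}}
print(s_distance(edges, "a", "d", s=1))  # walk a-b-c-d
print(s_distance(edges, "a", "d", s=2))  # only the pair b, c shares two edges
```

Raising s prunes adjacencies, so distances can only grow (or become infinite), mirroring the warning-and-np.inf behavior of the method above.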
+ +
[docs] def edge_distance(self, source, target, s=1): + """XX TODO: still need to return path and translate into user defined + nodes and edges Returns the shortest s-walk distance between two edges + in the hypergraph. + + Parameters + ---------- + source : edge.uid or edge + an edge in the hypergraph + + target : edge.uid or edge + an edge in the hypergraph + + s : positive integer + the number of intersections between pairwise consecutive edges + + TODO: add edge weights + weight : None or string, optional, default = None + if None then all edges have weight 1. If string then edge attribute + string is used if available. + + + Returns + ------- + s- walk distance : the shortest s-walk edge distance + A shortest s-walk is computed as a sequence of edges, + the s-walk distance is the number of edges in the sequence + minus 1. If no such path exists returns np.inf. + + See Also + -------- + distance + + Notes + ----- + The s-distance is the shortest s-walk length between the edges. + An s-walk between edges is a sequence of edges such that + consecutive pairwise edges intersect in at least s nodes. The + length of the shortest s-walk is 1 less than the number of edges + in the path sequence. + + Uses the networkx shortest_path_length method on the graph + generated by the s-edge_adjacency matrix. + + """ + g = self.get_linegraph(s=s, edges=True) + try: + edge_dist = nx.shortest_path_length(g, source, target) + except (nx.NetworkXNoPath, nx.NodeNotFound): + warnings.warn(f"No {s}-path between {source} and {target}") + edge_dist = np.inf + + return edge_dist
+ +
[docs] def incidence_dataframe( + self, sort_rows=False, sort_columns=False, cell_weights=True + ): + """ + Returns a pandas dataframe for hypergraph indexed by the nodes and + with column headers given by the edge names. + + Parameters + ---------- + sort_rows : bool, optional, default = False + sort rows based on hashable node names + sort_columns : bool, optional, default = False + sort columns based on hashable edge names + cell_weights : bool, optional, default = True + if False, all nonzero cells are set to 1 + + """ + + ## An entity dataframe is already an incidence dataframe. + df = self.E.dataframe.pivot( + index=self.E._data_cols[1], + columns=self.E._data_cols[0], + values=self.E._cell_weight_col, + ).fillna(0) + + if sort_rows: + df = df.sort_index("index") + if sort_columns: + df = df.sort_index("columns") + if not cell_weights: + df[df > 0] = 1 + + return df
+ +
[docs] @classmethod + @warn_nwhy + def from_bipartite(cls, B, set_names=("edges", "nodes"), name=None, **kwargs): + """ + Static method creates a Hypergraph from a bipartite graph. + + Parameters + ---------- + + B: nx.Graph() + A networkx bipartite graph. Each node in the graph has a property + 'bipartite' taking the value of 0 or 1 indicating a 2-coloring of + the graph. + + set_names: iterable of length 2, optional, default = ['edges','nodes'] + Category names assigned to the graph nodes associated to each + bipartite set + + name: hashable, optional + + Returns + ------- + : Hypergraph + + Notes + ----- + A partition for the nodes in a bipartite graph generates a hypergraph. + + >>> import networkx as nx + >>> B = nx.Graph() + >>> B.add_nodes_from([1, 2, 3, 4], bipartite=0) + >>> B.add_nodes_from(['a', 'b', 'c'], bipartite=1) + >>> B.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), / + (3, 'c'), (4, 'a')]) + >>> H = Hypergraph.from_bipartite(B) + >>> H.nodes, H.edges + # output: (EntitySet(_:Nodes,[1, 2, 3, 4],{}), / + # EntitySet(_:Edges,['b', 'c', 'a'],{})) + + """ + + edges = [] + nodes = [] + for n, d in B.nodes(data=True): + if d["bipartite"] == 1: + nodes.append(n) + else: + edges.append(n) + + if not bipartite.is_bipartite_node_set(B, nodes): + raise HyperNetXError( + "Error: Method requires a 2-coloring of a bipartite graph." + ) + + elist = [] + for e in list(B.edges): + if e[0] in edges: + elist.append([e[0], e[1]]) + else: + elist.append([e[1], e[0]]) + df = pd.DataFrame(elist, columns=set_names) + return Hypergraph(df, name=name, **kwargs)
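The elist loop in from_bipartite does one job: orient every bipartite link as an (edge, node) pair according to the 2-coloring. Stripped of the NetworkX types, that bookkeeping is roughly (hypothetical names, toy data):

```python
def bipartite_to_incidences(node_sides, links):
    """Orient bipartite links into (edge, node) pairs.

    node_sides: dict of vertex -> 0 (edge side) or 1 (node side),
    like the 'bipartite' attribute in the method above.
    links: iterable of 2-tuples joining the two sides.
    """
    pairs = []
    for u, v in links:
        if node_sides[u] == 0:          # u plays the role of a hyperedge
            pairs.append((u, v))
        else:                           # v must be the hyperedge side
            pairs.append((v, u))
    return pairs

sides = {1: 0, 2: 0, "a": 1, "b": 1, "c": 1}
links = [(1, "a"), ("b", 1), (2, "b"), ("c", 2)]
print(bipartite_to_incidences(sides, links))
```

The resulting pairs are exactly the rows of the two-column DataFrame handed to the Hypergraph constructor.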
+ +
[docs] @classmethod + def from_incidence_matrix( + cls, + M, + node_names=None, + edge_names=None, + node_label="nodes", + edge_label="edges", + name=None, + key=None, + **kwargs, + ): + """ + Same as from_numpy_array. + """ + return Hypergraph.from_numpy_array( + M, + node_names=node_names, + edge_names=edge_names, + node_label=node_label, + edge_label=edge_label, + name=name, + key=key, + )
+ +
[docs] @classmethod + @warn_nwhy + def from_numpy_array( + cls, + M, + node_names=None, + edge_names=None, + node_label="nodes", + edge_label="edges", + name=None, + key=None, + **kwargs, + ): + """ + Create a hypergraph from a real valued matrix represented as a 2-dimensional numpy array. + The matrix is converted to a matrix of 0's and 1's so that any truthy cells are converted to 1's and + all others to 0's. + + Parameters + ---------- + M : real valued array-like object, 2 dimensions + representing a real valued matrix with rows corresponding to nodes and columns to edges + + node_names : object, array-like, default=None + List of node names must be the same length as M.shape[0]. + If None then the node names correspond to row indices with 'v' prepended. + + edge_names : object, array-like, default=None + List of edge names must have the same length as M.shape[1]. + If None then the edge names correspond to column indices with 'e' prepended. + + name : hashable + + key : (optional) function + boolean function to be evaluated on each cell of the array, + must be applicable to numpy.array + + Returns + ------- + : Hypergraph + + Note + ---- + The constructor does not generate empty edges. + All zero columns in M are removed and the names corresponding to these + edges are discarded. + + + """ + # Create names for nodes and edges + # Validate the size of the node and edge arrays + + M = np.array(M) + if len(M.shape) != (2): + raise HyperNetXError("Input requires a 2 dimensional numpy array") + # apply boolean key if available + if key is not None: + M = key(M) + + if node_names is not None: + nodenames = np.array(node_names) + if len(nodenames) != M.shape[0]: + raise HyperNetXError( + "Number of node names does not match number of rows."
+ ) + else: + nodenames = np.array([f"v{idx}" for idx in range(M.shape[0])]) + + if edge_names is not None: + edgenames = np.array(edge_names) + if len(edgenames) != M.shape[1]: + raise HyperNetXError( + "Number of edge_names does not match number of columns." + ) + else: + edgenames = np.array([f"e{jdx}" for jdx in range(M.shape[1])]) + + df = pd.DataFrame(M, columns=edgenames, index=nodenames) + return Hypergraph.from_incidence_dataframe(df, name=name)
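The note above says all-zero columns never become edges. A stdlib-only sketch of the row/column naming and empty-column filtering (hypothetical helper name; the real constructor goes through numpy and pandas):

```python
def matrix_to_edges(M, node_names=None, edge_names=None):
    """Convert a 0/1 incidence matrix (list of rows) into edge -> node lists.

    Truthy cells become incidences; columns with no truthy cell are dropped,
    mirroring the 'no empty edges' rule in the constructor above.
    """
    n_rows = len(M)
    n_cols = len(M[0]) if M else 0
    # default names mirror the 'v'/'e' prefixes documented above
    nodes = node_names or [f"v{i}" for i in range(n_rows)]
    edges = edge_names or [f"e{j}" for j in range(n_cols)]
    result = {}
    for j in range(n_cols):
        members = [nodes[i] for i in range(n_rows) if M[i][j]]
        if members:                      # skip empty (all-zero) columns
            result[edges[j]] = members
    return result

M = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 0]]
print(matrix_to_edges(M))
```

Column e2 is all zeros, so it is silently dropped, just as the docstring warns.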
+ +
[docs] @classmethod + @warn_nwhy + def from_incidence_dataframe( + cls, + df, + columns=None, + rows=None, + edge_col: str = "edges", + node_col: str = "nodes", + name=None, + fillna=0, + transpose=False, + transforms=[], + key=None, + return_only_dataframe=False, + **kwargs, + ): + """ + Create a hypergraph from a Pandas Dataframe object, which has values equal + to the incidence matrix of a hypergraph. Its index will identify the nodes + and its columns will identify its edges. + + Parameters + ---------- + df : Pandas.Dataframe + a real valued dataframe with a single index + + columns : (optional) list, default = None + restricts df to the columns with headers in this list. + + rows : (optional) list, default = None + restricts df to the rows indexed by the elements in this list. + + name : (optional) string, default = None + + fillna : float, default = 0 + a real value to place in empty cell, all-zero columns will not + generate an edge. + + transpose : (optional) bool, default = False + option to transpose the dataframe, in this case df.Index will + identify the edges and df.columns will identify the nodes, transpose is + applied before transforms and key + + transforms : (optional) list, default = [] + optional list of transformations to apply to each column, + of the dataframe using pd.DataFrame.apply(). + Transformations are applied in the order they are + given (ex. abs). To apply transforms to rows or for additional + functionality, consider transforming df using pandas.DataFrame + methods prior to generating the hypergraph. + + key : (optional) function, default = None + boolean function to be applied to dataframe. will be applied to + entire dataframe. + + return_only_dataframe : (optional) bool, default = False + to use the incidence_dataframe with cell_properties or properties, set this + to true and use it as the setsystem in the Hypergraph constructor. 
+ + See also + -------- + from_numpy_array + + + Returns + ------- + : Hypergraph + + """ + + if not isinstance(df, pd.DataFrame): + raise HyperNetXError("Error: Input object must be a pandas dataframe.") + + if columns: + df = df[columns] + if rows: + df = df.loc[rows] + + df = df.fillna(fillna) + if transpose: + df = df.transpose() + + for t in transforms: + df = df.apply(t) + if key: + mat = key(df.values) * 1 + else: + mat = df.values * 1 + + cols = df.columns + rows = df.index + CM = coo_matrix(mat) + c1 = CM.row + c1 = [rows[c1[idx]] for idx in range(len(c1))] + c2 = CM.col + c2 = [cols[c2[idx]] for idx in range(len(c2))] + c3 = CM.data + + dfnew = pd.DataFrame({edge_col: c2, node_col: c1, "cell_weights": c3}) + if return_only_dataframe: + return dfnew + else: + return Hypergraph( + dfnew, + edge_col=edge_col, + node_col=node_col, + weights="cell_weights", + name=name, + )
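The coo_matrix pass above is a wide-to-long conversion: one (edge, node, cell weight) row per nonzero cell of the incidence table. In plain Python (hypothetical names) the same reshaping looks like:

```python
def incidence_to_long(index, columns, values):
    """Flatten a wide incidence table into (edge, node, weight) triples.

    index: row labels (nodes); columns: column labels (edges);
    values: 2D list of cell weights, zeros meaning 'no incidence'.
    """
    triples = []
    for i, node in enumerate(index):
        for j, edge in enumerate(columns):
            w = values[i][j]
            if w:                        # zero cells produce no incidence
                triples.append((edge, node, w))
    return triples

rows = ["x", "y"]
cols = ["e1", "e2"]
vals = [[2, 0],
        [1, 3]]
print(incidence_to_long(rows, cols, vals))
```

The triples correspond to the edge_col, node_col, and "cell_weights" columns of the DataFrame built above.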
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/drawing/rubber_band.html b/_modules/hypernetx/drawing/rubber_band.html new file mode 100644 index 00000000..83652b6f --- /dev/null +++ b/_modules/hypernetx/drawing/rubber_band.html @@ -0,0 +1,613 @@ + + + + + + hypernetx.drawing.rubber_band — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for hypernetx.drawing.rubber_band

+# Copyright © 2018 Battelle Memorial Institute
+# All rights reserved.
+
+from hypernetx import Hypergraph
+from hypernetx.drawing.util import (
+    get_frozenset_label,
+    get_collapsed_size,
+    get_set_layering,
+    inflate_kwargs,
+    transpose_inflated_kwargs,
+)
+
+import matplotlib.pyplot as plt
+from matplotlib.collections import PolyCollection
+
+import networkx as nx
+
+
+import numpy as np
+from scipy.spatial.distance import pdist
+from scipy.spatial import ConvexHull
+
+# increases the default figure size to 8in square.
+plt.rcParams["figure.figsize"] = (8, 8)
+
+N_CONTROL_POINTS = 24
+
+theta = np.linspace(0, 2 * np.pi, N_CONTROL_POINTS + 1)[:-1]
+
+cp = np.vstack((np.cos(theta), np.sin(theta))).T
+
+
+def layout_node_link(H, layout=nx.spring_layout, **kwargs):
+    """
+    Helper function to use a NetworkX-like graph layout algorithm on a Hypergraph
+
+    The hypergraph is converted to a bipartite graph, allowing the usual graph layout
+    techniques to be applied.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    layout: function
+        the layout algorithm which accepts a NetworkX graph and keyword arguments
+    kwargs: dict
+        Keyword arguments are passed through to the layout algorithm
+
+    Returns
+    -------
+    dict
+        mapping of node and edge positions to R^2
+    """
+    return layout(H.bipartite(), **kwargs)
+
+
+def get_default_radius(H, pos):
+    """
+    Calculate a reasonable default node radius
+
+    This function iterates over the hyper edges and finds the most distant
+    pair of points given the positions provided. Then, the node radius is a fraction
+    of the median of this distance taken across all hyper-edges.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    pos: dict
+        mapping of node and edge positions to R^2
+
+    Returns
+    -------
+    float
+        the recommended radius
+
+    """
+    if len(H) > 1:
+        return 0.0125 * np.median(
+            [pdist(np.vstack(list(map(pos.get, H.nodes)))).max() for nodes in H.edges()]
+        )
+    return 1
+
+
+def draw_hyper_edge_labels(H, polys, labels={}, ax=None, **kwargs):
+    """
+    Draws a label on the hyper edge boundary.
+
+    Should be passed Matplotlib PolyCollection representing the hyper-edges, see
+    the return value of draw_hyper_edges.
+
+    The label will be drawn on the least curvy part of the polygon, and will be
+    aligned parallel to the orientation of the polygon where it is drawn.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    polys: PolyCollection
+        collection of polygons returned by draw_hyper_edges
+    labels: dict
+        mapping of node id to string label
+    ax: Axis
+        matplotlib axis on which the plot is rendered
+    kwargs: dict
+        Keyword arguments are passed through to Matplotlib's annotate function.
+
+    """
+    ax = ax or plt.gca()
+
+    params = transpose_inflated_kwargs(inflate_kwargs(H.edges, kwargs))
+
+    for edge, path, params in zip(H.edges, polys.get_paths(), params):
+        s = labels.get(edge, edge)
+
+        # calculate the xy location of the annotation
+        # this is the midpoint of the pair of adjacent points the most distant
+        d = ((path.vertices[:-1] - path.vertices[1:]) ** 2).sum(axis=1)
+        i = d.argmax()
+
+        x1, x2 = path.vertices[i : i + 2]
+        x, y = x2 - x1
+        theta = 360 * np.arctan2(y, x) / (2 * np.pi)
+        theta = (theta + 360) % 360
+
+        while theta > 90:
+            theta -= 180
+
+        # the string is a comma separated list of the edge uid
+        ax.annotate(
+            s, (x1 + x2) / 2, rotation=theta, ha="center", va="center", **params
+        )
+
+
+def layout_hyper_edges(H, pos, node_radius={}, dr=None):
+    """
+    Draws a convex hull for each edge in H.
+
+    Position of the nodes in the graph is specified by the position dictionary,
+    pos. Convex hulls are spaced out such that if one set contains another, the
+    convex hull will surround the contained set. The amount of spacing added
+    between hulls is specified by the parameter, dr.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    pos: dict
+        mapping of node and edge positions to R^2
+    node_radius: dict
+        mapping of node to R^1 (radius of each node)
+    dr: float
+        the spacing between concentric rings
+    ax: Axis
+        matplotlib axis on which the plot is rendered
+
+    Returns
+    -------
+    dict
+        A mapping from hyper edge ids to paths (Nx2 numpy matrices)
+    """
+
+    if len(node_radius):
+        r0 = min(node_radius.values())
+    else:
+        r0 = get_default_radius(H, pos)
+
+    dr = dr or r0
+
+    levels = get_set_layering(H)
+
+    radii = {
+        v: {v: i for i, v in enumerate(sorted(e, key=levels.get))}
+        for v, e in H.dual().edges.elements.items()
+    }
+
+    def get_padded_hull(uid, edge):
+        # make sure the edge contains at least one node
+        if len(edge):
+            points = np.vstack(
+                [
+                    cp * (node_radius.get(v, r0) + dr * (2 + radii[v][uid])) + pos[v]
+                    for v in edge
+                ]
+            )
+        # if not, draw an empty edge centered around the location of the edge node (in the bipartite graph)
+        else:
+            points = 4 * r0 * cp + pos[uid]
+
+        hull = ConvexHull(points)
+
+        return hull.points[hull.vertices]
+
+    return [get_padded_hull(uid, list(H.edges[uid])) for uid in H.edges]
+
+
+def draw_hyper_edges(H, pos, ax=None, node_radius={}, dr=None, **kwargs):
+    """
+    Draws a convex hull around the nodes contained within each edge in H
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    pos: dict
+        mapping of node and edge positions to R^2
+    node_radius: dict
+        mapping of node to R^1 (radius of each node)
+    dr: float
+        the spacing between concentric rings
+    ax: Axis
+        matplotlib axis on which the plot is rendered
+    kwargs: dict
+        keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor
+
+    Returns
+    -------
+    PolyCollection
+        a Matplotlib PolyCollection that can be further styled
+    """
+    points = layout_hyper_edges(H, pos, node_radius=node_radius, dr=dr)
+
+    polys = PolyCollection(points, **inflate_kwargs(H.edges, kwargs))
+
+    (ax or plt.gca()).add_collection(polys)
+
+    return polys
+
+
+def draw_hyper_nodes(H, pos, node_radius={}, r0=None, ax=None, **kwargs):
+    """
+    Draws a circle for each node in H.
+
+    The position of each node is specified by a dictionary/list-like, pos,
+    where pos[v] is the xy-coordinate for the vertex. The radius of each node
+    can be specified as a dictionary where node_radius[v] is the radius. If a
+    node is missing from this dictionary, or the node_radius is not specified at
+    all, a sensible default radius is chosen based on distances between nodes
+    given by pos.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    pos: dict
+        mapping of node and edge positions to R^2
+    node_radius: dict
+        mapping of node to R^1 (radius of each node)
+    r0: float
+        minimum distance that concentric rings start from the node position
+    ax: Axis
+        matplotlib axis on which the plot is rendered
+    kwargs: dict
+        keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor
+
+    Returns
+    -------
+    PolyCollection
+        a Matplotlib PolyCollection that can be further styled
+    """
+
+    ax = ax or plt.gca()
+
+    r0 = r0 or get_default_radius(H, pos)
+
+    points = [node_radius.get(v, r0) * cp + pos[v] for v in H.nodes]
+
+    kwargs.setdefault("facecolors", "black")
+
+    circles = PolyCollection(points, **inflate_kwargs(H, kwargs))
+
+    ax.add_collection(circles)
+
+    return circles
+
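draw_hyper_nodes builds each node's polygon as node_radius * cp + pos[v], where cp holds N_CONTROL_POINTS points on the unit circle (defined near the top of the module). A stdlib-only sketch of that construction, without NumPy (hypothetical helper name):

```python
import math

N_CONTROL_POINTS = 24

def circle_points(center, radius, n=N_CONTROL_POINTS):
    """Approximate a disk boundary by n points, like radius * cp + pos."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n),
         cy + radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

pts = circle_points((0.0, 0.0), 1.0)
print(len(pts), pts[0])
```

More control points give a rounder-looking node at the cost of larger PolyCollections; 24 is a reasonable compromise for figures of the default size.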
+
+def draw_hyper_labels(H, pos, node_radius={}, ax=None, labels={}, **kwargs):
+    """
+    Draws text labels for the hypergraph nodes.
+
+    The label is drawn to the right of the node. The node radius is needed (see
+    draw_hyper_nodes) so the text can be offset appropriately as the node size
+    changes.
+
+    The text label can be customized by passing in a dictionary, labels, mapping
+    a node to its custom label. By default, the label is the string
+    representation of the node.
+
+    Keyword arguments are passed through to Matplotlib's annotate function.
+
+    Parameters
+    ----------
+    H: Hypergraph
+        the entity to be drawn
+    pos: dict
+        mapping of node and edge positions to R^2
+    node_radius: dict
+        mapping of node to R^1 (radius of each node)
+    ax: Axis
+        matplotlib axis on which the plot is rendered
+    labels: dict
+        mapping of node to text label
+    kwargs: dict
+        keyword arguments passed to matplotlib.annotate
+
+    """
+    ax = ax or plt.gca()
+
+    params = transpose_inflated_kwargs(inflate_kwargs(H.nodes, kwargs))
+
+    for v, v_kwargs in zip(H.nodes, params):
+        xy = np.array([node_radius.get(v, 0), 0]) + pos[v]
+        ax.annotate(
+            labels.get(v, v),
+            xy,
+            **{
+                k: (
+                    d[v]
+                    if hasattr(d, "__getitem__") and type(d) not in {str, tuple}
+                    else d
+                )
+                for k, d in kwargs.items()
+            }
+        )
+
+
+
[docs]def draw( + H, + pos=None, + with_color=True, + with_node_counts=False, + with_edge_counts=False, + layout=nx.spring_layout, + layout_kwargs={}, + ax=None, + node_radius=None, + edges_kwargs={}, + nodes_kwargs={}, + edge_labels={}, + edge_labels_kwargs={}, + node_labels={}, + node_labels_kwargs={}, + with_edge_labels=True, + with_node_labels=True, + label_alpha=0.35, + return_pos=False, +): + """ + Draw a hypergraph as a Matplotlib figure + + By default this will draw a colorful "rubber band" like hypergraph, where + convex hulls represent edges and are drawn around the nodes they contain. + + This is a convenience function that wraps calls with sensible parameters to + the following lower-level drawing functions: + + * draw_hyper_edges, + * draw_hyper_edge_labels, + * draw_hyper_labels, and + * draw_hyper_nodes + + The default layout algorithm is nx.spring_layout, but other layouts can be + passed in. The Hypergraph is converted to a bipartite graph, and the layout + algorithm is passed the bipartite graph. + + If you have a pre-determined layout, you can pass in a "pos" dictionary. + This is a dictionary mapping from node id's to x-y coordinates. For example: + + >>> pos = { + >>> 'A': (0, 0), + >>> 'B': (1, 2), + >>> 'C': (5, -3) + >>> } + + will position the nodes {A, B, C} manually at the locations specified. The + coordinate system is in Matplotlib "data coordinates", and the figure will + be centered within the figure. + + By default, this will draw in a new figure, but the axis to render in can be + specified using :code:`ax`. + + This approach works well for small hypergraphs, and does not guarantee + a rigorously "correct" drawing. Overlapping of sets in the drawing generally + implies that the sets intersect, but sometimes sets overlap if there is no + intersection. It is not possible, in general, to draw a "correct" hypergraph + this way for an arbitrary hypergraph, in the same way that not all graphs + have planar drawings. 
+ + Parameters + ---------- + H: Hypergraph + the entity to be drawn + pos: dict + mapping of node and edge positions to R^2 + with_color: bool + set to False to disable color cycling of edges + with_node_counts: bool + set to True to replace the label for collapsed nodes with the number of elements + with_edge_counts: bool + set to True to label collapsed edges with number of elements + layout: function + layout algorithm to compute + layout_kwargs: dict + keyword arguments passed to layout function + ax: Axis + matplotlib axis on which the plot is rendered + edges_kwargs: dict + keyword arguments passed to matplotlib.collections.PolyCollection for edges + node_radius: None, int, float, or dict + radius of all nodes, or dictionary of node:value; the default (None) calculates radius based on number of collapsed nodes; reasonable values range between 1 and 3 + nodes_kwargs: dict + keyword arguments passed to matplotlib.collections.PolyCollection for nodes + edge_labels_kwargs: dict + keyword arguments passed to matplotlib.annotate for edge labels + node_labels_kwargs: dict + keyword arguments passed to matplotlib.annotate for node labels + with_edge_labels: bool + set to False to make edge labels invisible + with_node_labels: bool + set to False to make node labels invisible + label_alpha: float + the transparency (alpha) of the box behind text drawn in the figure + """ + + ax = ax or plt.gca() + + if pos is None: + pos = layout_node_link(H, layout=layout, **layout_kwargs) + + r0 = get_default_radius(H, pos) + a0 = np.pi * r0**2 + + def get_node_radius(v): + if node_radius is None: + return np.sqrt(a0 * get_collapsed_size(v) / np.pi) + elif hasattr(node_radius, "get"): + return node_radius.get(v, 1) * r0 + return node_radius * r0 + + # guarantee that node radius is a dictionary mapping nodes to values + node_radius = {v: get_node_radius(v) for v in H.nodes} + + # for convenience, we are using setdefault to mutate the argument + # however, we need to copy this to
prevent side-effects + edges_kwargs = edges_kwargs.copy() + edges_kwargs.setdefault("edgecolors", plt.cm.tab10(np.arange(len(H.edges)) % 10)) + edges_kwargs.setdefault("facecolors", "none") + + polys = draw_hyper_edges(H, pos, node_radius=node_radius, ax=ax, **edges_kwargs) + + if with_edge_labels: + labels = get_frozenset_label( + H.edges, count=with_edge_counts, override=edge_labels + ) + + draw_hyper_edge_labels( + H, + polys, + color=edges_kwargs["edgecolors"], + backgroundcolor=(1, 1, 1, label_alpha), + labels=labels, + ax=ax, + **edge_labels_kwargs + ) + + if with_node_labels: + labels = get_frozenset_label( + H.nodes, count=with_node_counts, override=node_labels + ) + + draw_hyper_labels( + H, + pos, + node_radius=node_radius, + labels=labels, + ax=ax, + va="center", + xytext=(5, 0), + textcoords="offset points", + backgroundcolor=(1, 1, 1, label_alpha), + **node_labels_kwargs + ) + + draw_hyper_nodes(H, pos, node_radius=node_radius, ax=ax, **nodes_kwargs) + + if len(H.nodes) == 1: + x, y = pos[list(H.nodes)[0]] + s = 20 + + ax.axis([x - s, x + s, y - s, y + s]) + else: + ax.axis("equal") + + ax.axis("off") + if return_pos: + return pos
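One subtlety in draw: node_radius may be None, a scalar, or a dict, and the inner get_node_radius collapses the three cases into one dict before drawing. That normalization, isolated as a hypothetical standalone function (the area-proportional default mirrors the np.sqrt branch above):

```python
import math

def normalize_node_radius(nodes, node_radius, r0, a0, collapsed_size=lambda v: 1):
    """Reduce the None/scalar/dict forms of node_radius to one dict."""
    def radius_of(v):
        if node_radius is None:
            # default: area proportional to the node's collapsed size
            return math.sqrt(a0 * collapsed_size(v) / math.pi)
        if hasattr(node_radius, "get"):
            # dict form: per-node multiplier on the base radius r0
            return node_radius.get(v, 1) * r0
        # scalar form: uniform multiplier on r0
        return node_radius * r0
    return {v: radius_of(v) for v in nodes}

r0 = 0.5
a0 = math.pi * r0 ** 2
print(normalize_node_radius(["a", "b"], None, r0, a0))
print(normalize_node_radius(["a", "b"], 2, r0, a0))
print(normalize_node_radius(["a"], {"a": 3}, r0, a0))
```

This is why the docstring can say "reasonable values range between 1 and 3": user-supplied values scale the computed base radius rather than being absolute data-coordinate sizes.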
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/hypernetx/reports/descriptive_stats.html b/_modules/hypernetx/reports/descriptive_stats.html new file mode 100644 index 00000000..f259cd82 --- /dev/null +++ b/_modules/hypernetx/reports/descriptive_stats.html @@ -0,0 +1,515 @@ + + + + + + hypernetx.reports.descriptive_stats — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for hypernetx.reports.descriptive_stats

+"""
+This module contains methods which compute various distributions for hypergraphs:
+    * Edge size distribution
+    * Node degree distribution
+    * Component size distribution
+    * Toplex size distribution
+    * Diameter
+
+Also computes general hypergraph information: number of nodes, edges, cells, aspect ratio, incidence matrix density
+"""
+from collections import Counter
+import numpy as np
+from hypernetx.utils.decorators import not_implemented_for
+
+
+
[docs]def centrality_stats(X): + """ + Computes basic centrality statistics for X + + Parameters + ---------- + X : + an iterable of numbers + + Returns + ------- + [min, max, mean, median, standard deviation] : list + List of centrality statistics for X + """ + return [ + min(X), + max(X), + np.mean(X).tolist(), + np.median(X).tolist(), + np.std(X).tolist(), + ]
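centrality_stats packs five summary statistics into one list. A stdlib-only equivalent using the statistics module (note np.std computes the population standard deviation, so pstdev is the matching choice):

```python
import statistics

def centrality_stats(X):
    """[min, max, mean, median, population standard deviation] of X."""
    xs = list(X)
    return [min(xs), max(xs), statistics.fmean(xs),
            statistics.median(xs), statistics.pstdev(xs)]

print(centrality_stats([1, 2, 3, 4]))
```

Any of the distributions below (edge sizes, degrees, component sizes) can be summarized this way.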
+ + +
[docs]def edge_size_dist(H, aggregated=False): + """ + Computes edge sizes of a hypergraph. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + edge sizes and counts. If aggregated is False, returns a + list of edge sizes in H. + + Returns + ------- + edge_size_dist : list or dict + List of edge sizes or dictionary of edge size distribution. + + """ + if aggregated: + return Counter(H.edge_size_dist()) + else: + return H.edge_size_dist()
+ + +
[docs]def degree_dist(H, aggregated=False): + """ + Computes degrees of nodes of a hypergraph. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + degrees and counts. If aggregated is False, returns a + list of degrees in H. + + Returns + ------- + degree_dist : list or dict + List of degrees or dictionary of degree distribution + """ + distr = [H.degree(n) for n in H.nodes] + if aggregated: + return Counter(distr) + else: + return distr
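Every *_dist helper in this module follows the same pattern: collect a list of sizes, then optionally aggregate it with collections.Counter. For example, with a toy degree list:

```python
from collections import Counter

# degrees as a raw list (aggregated=False) ...
degrees = [1, 2, 2, 3, 2, 1]
# ... and as a degree -> count mapping (aggregated=True)
print(sorted(Counter(degrees).items()))
```

The Counter form is what you would feed to a histogram or log-log degree plot.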
+ + +
[docs]def comp_dist(H, aggregated=False): + """ + Computes component sizes, number of nodes. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + component sizes (number of nodes) and counts. If aggregated + is False, returns a list of component sizes in H. + + Returns + ------- + comp_dist : list or dictionary + List of component sizes or dictionary of component size distribution + + See Also + -------- + s_comp_dist + + """ + + distr = [len(c) for c in H.components()] + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]def s_comp_dist(H, s=1, aggregated=False, edges=True, return_singletons=True): + """ + Computes s-component sizes, counting nodes or edges. + + Parameters + ---------- + H : Hypergraph + s : positive integer, default is 1 + aggregated : + If aggregated is True, returns a dictionary of + s-component sizes and counts in H. If aggregated is + False, returns a list of s-component sizes in H. + edges : + If edges is True, the component size is number of edges. + If edges is False, the component size is number of nodes. + return_singletons : bool, optional, default=True + + Returns + ------- + s_comp_dist : list or dictionary + List of component sizes or dictionary of component size distribution in H + + See Also + -------- + comp_dist + + """ + distr = list() + comps = H.s_connected_components( + s=s, edges=edges, return_singletons=return_singletons + ) + + distr = [len(c) for c in comps] + + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]@not_implemented_for("static") +def toplex_dist(H, aggregated=False): + """ + + Computes toplex sizes for hypergraph H. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + toplex sizes and counts in H. If aggregated + is False, returns a list of toplex sizes in H. + + Returns + ------- + toplex_dist : list or dictionary + List of toplex sizes or dictionary of toplex size distribution in H + """ + distr = [H.size(e) for e in H.toplexes().edges] + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]def s_node_diameter_dist(H): + """ + Parameters + ---------- + H : Hypergraph + + Returns + ------- + s_node_diameter_dist : list + List of s-node-diameters for hypergraph H starting with s=1 + and going up as long as the hypergraph is s-node-connected + """ + i = 1 + diams = [] + while H.is_connected(s=i): + diams.append(H.diameter(s=i)) + i += 1 + return diams
+ + +
[docs]def s_edge_diameter_dist(H): + """ + Parameters + ---------- + H : Hypergraph + + Returns + ------- + s_edge_diameter_dist : list + List of s-edge-diameters for hypergraph H starting with s=1 + and going up as long as the hypergraph is s-edge-connected + """ + i = 1 + diams = [] + while H.is_connected(s=i, edges=True): + diams.append(H.edge_diameter(s=i)) + i += 1 + return diams
+ + +
[docs]def info(H, node=None, edge=None): + """ + Print a summary of simple statistics for H + + Parameters + ---------- + H : Hypergraph + node : optional + a node uid from the hypergraph + edge : optional + an edge uid from the hypergraph + + Returns + ------- + info : string + Returns a string of statistics of the size, + aspect ratio, and density of the hypergraph. + Print the string to see it formatted. + + """ + if not H.edges.elements: + return f"Hypergraph {H.name} is empty." + report = info_dict(H, node=node, edge=edge) + info = "" + if node: + info += f"Node '{node}' has the following properties:\n" + info += f"Degree: {report['degree']}\n" + info += f"Contained in: {report['membs']}\n" + info += f"Neighbors: {report['neighbors']}" + elif edge: + info += f"Edge '{edge}' has the following properties:\n" + info += f"Size: {report['size']}\n" + info += f"Elements: {report['elements']}" + else: + info += f"Number of Rows: {report['nrows']}\n" + info += f"Number of Columns: {report['ncols']}\n" + info += f"Aspect Ratio: {report['aspect ratio']}\n" + info += f"Number of non-empty Cells: {report['ncells']}\n" + info += f"Density: {report['density']}" + return info
+ + +
[docs]def info_dict(H, node=None, edge=None): + """ + Create a summary of simple statistics for H + + Parameters + ---------- + H : Hypergraph + node : optional + a node uid from the hypergraph + edge : optional + an edge uid from the hypergraph + + Returns + ------- + info_dict : dict + Returns a dictionary of statistics of the size, + aspect ratio, and density of the hypergraph. + + """ + report = dict() + if len(H.edges.elements) == 0: + return {} + + if node: + report["membs"] = list(H.dual().edges[node]) + report["degree"] = len(report["membs"]) + report["neighbors"] = H.neighbors(node) + return report + if edge: + report["size"] = H.size(edge) + report["elements"] = list(H.edges[edge]) + return report + else: + lnodes, ledges = H.shape + M = H.incidence_matrix(index=False) + ncells = M.nnz + + report["nrows"] = lnodes + report["ncols"] = ledges + report["aspect ratio"] = lnodes / ledges + report["ncells"] = ncells + report["density"] = ncells / (lnodes * ledges) + return report
+ + +
[docs]def dist_stats(H): + """ + Computes many basic hypergraph stats and puts them all into a single dictionary object + + * nrows = number of nodes (rows in the incidence matrix) + * ncols = number of edges (columns in the incidence matrix) + * aspect ratio = nrows/ncols + * ncells = number of filled cells in incidence matrix + * density = ncells/(nrows*ncols) + * node degree list = degree_dist(H) + * node degree centrality stats = centrality_stats(degree_dist(H)) + * node degree hist = Counter(degree_dist(H)) + * max node degree = max(degree_dist(H)) + * edge size list = edge_size_dist(H) + * edge size centrality stats = centrality_stats(edge_size_dist(H)) + * edge size hist = Counter(edge_size_dist(H)) + * max edge size = max(edge_size_dist(H)) + * comp nodes list = s_comp_dist(H, s=1, edges=False) + * comp nodes centrality stats = centrality_stats(s_comp_dist(H, s=1, edges=False)) + * comp nodes hist = Counter(s_comp_dist(H, s=1, edges=False)) + * comp edges list = s_comp_dist(H, s=1, edges=True) + * comp edges centrality stats = centrality_stats(s_comp_dist(H, s=1, edges=True)) + * comp edges hist = Counter(s_comp_dist(H, s=1, edges=True)) + * num comps = len(s_comp_dist(H)) + + Parameters + ---------- + H : Hypergraph + + Returns + ------- + dist_stats : dict + Dictionary which keeps track of each of the above items (e.g., basic['nrows'] = the number of nodes in H) + """ + stats = H._state_dict.get("dist_stats", None) + if stats is not None: + return H._state_dict["dist_stats"] + + cstats = ["min", "max", "mean", "median", "std"] + basic = dict() + + # Number of rows (nodes), columns (edges), and aspect ratio + basic["nrows"] = len(H.nodes) + basic["ncols"] = len(H.edges) + basic["aspect ratio"] = basic["nrows"] / basic["ncols"] + + # Number of cells and density + M = H.incidence_matrix(index=False) + basic["ncells"] = M.nnz + basic["density"] = basic["ncells"] / (basic["nrows"] * basic["ncols"]) + + # Node degree distribution + basic["node degree list"] = sorted(degree_dist(H), reverse=True) + 
basic["node degree centrality stats"] = dict( + zip(cstats, centrality_stats(basic["node degree list"])) + ) + basic["node degree hist"] = Counter(basic["node degree list"]) + basic["max node degree"] = max(basic["node degree list"]) + + # Edge size distribution + basic["edge size list"] = sorted(H.edge_size_dist(), reverse=True) + basic["edge size centrality stats"] = dict( + zip(cstats, centrality_stats(basic["edge size list"])) + ) + basic["edge size hist"] = Counter(basic["edge size list"]) + basic["max edge size"] = max(basic["edge size hist"]) + + # Component size distribution (nodes) + basic["comp nodes list"] = sorted(s_comp_dist(H, edges=False), reverse=True) + basic["comp nodes hist"] = Counter(basic["comp nodes list"]) + basic["comp nodes centrality stats"] = dict( + zip(cstats, centrality_stats(basic["comp nodes list"])) + ) + + # Component size distribution (edges) + basic["comp edges list"] = sorted(s_comp_dist(H, edges=True), reverse=True) + basic["comp edges hist"] = Counter(basic["comp edges list"]) + basic["comp edges centrality stats"] = dict( + zip(cstats, centrality_stats(basic["comp edges list"])) + ) + + # Number of components + basic["num comps"] = len(basic["comp nodes list"]) + + # # Diameters + # basic['s edge diam list'] = s_edge_diameter_dist(H) + # basic['s node diam list'] = s_node_diameter_dist(H) + H.set_state(dist_stats=basic) + return basic
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_modules/index.html b/_modules/index.html new file mode 100644 index 00000000..03d073c7 --- /dev/null +++ b/_modules/index.html @@ -0,0 +1,141 @@ + + + + + + Overview: module code — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ + +
+
+ + + + \ No newline at end of file diff --git a/_modules/reports/descriptive_stats.html b/_modules/reports/descriptive_stats.html new file mode 100644 index 00000000..445c1506 --- /dev/null +++ b/_modules/reports/descriptive_stats.html @@ -0,0 +1,515 @@ + + + + + + reports.descriptive_stats — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for reports.descriptive_stats

+"""
+This module contains methods which compute various distributions for hypergraphs:
+    * Edge size distribution
+    * Node degree distribution
+    * Component size distribution
+    * Toplex size distribution
+    * Diameter
+
+Also computes general hypergraph information: number of nodes, edges, cells, aspect ratio, incidence matrix density
+"""
+from collections import Counter
+import numpy as np
+from hypernetx.utils.decorators import not_implemented_for
+
+
+
[docs]def centrality_stats(X): + """ + Computes basic centrality statistics for X + + Parameters + ---------- + X : + an iterable of numbers + + Returns + ------- + [min, max, mean, median, standard deviation] : list + List of centrality statistics for X + """ + return [ + min(X), + max(X), + np.mean(X).tolist(), + np.median(X).tolist(), + np.std(X).tolist(), + ]
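centrality_stats depends on numpy. As a quick sanity check in an environment without it, the same five statistics can be reproduced with only the standard library; basic_stats below is a hypothetical stand-in, not part of HyperNetX (statistics.pstdev matches np.std's default population standard deviation):

```python
from statistics import mean, median, pstdev

def basic_stats(X):
    # Same order as centrality_stats: [min, max, mean, median, std].
    X = list(X)
    return [min(X), max(X), mean(X), median(X), pstdev(X)]

stats = basic_stats([1, 2, 2, 3])
print(stats)
```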
+ + +
[docs]def edge_size_dist(H, aggregated=False): + """ + Computes edge sizes of a hypergraph. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + edge sizes and counts. If aggregated is False, returns a + list of edge sizes in H. + + Returns + ------- + edge_size_dist : list or dict + List of edge sizes or dictionary of edge size distribution. + + """ + if aggregated: + return Counter(H.edge_size_dist()) + else: + return H.edge_size_dist()
+ + +
[docs]def degree_dist(H, aggregated=False): + """ + Computes degrees of nodes of a hypergraph. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + degrees and counts. If aggregated is False, returns a + list of degrees in H. + + Returns + ------- + degree_dist : list or dict + List of degrees or dictionary of degree distribution + """ + distr = [H.degree(n) for n in H.nodes] + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]def comp_dist(H, aggregated=False): + """ + Computes component sizes, number of nodes. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + component sizes (number of nodes) and counts. If aggregated + is False, returns a list of component sizes in H. + + Returns
 + ------- + comp_dist : list or dictionary + List of component sizes or dictionary of component size distribution + + See Also + -------- + s_comp_dist + + """ + + distr = [len(c) for c in H.components()] + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]def s_comp_dist(H, s=1, aggregated=False, edges=True, return_singletons=True): + """ + Computes s-component sizes, counting nodes or edges. + + Parameters + ---------- + H : Hypergraph + s : positive integer, default is 1 + aggregated : + If aggregated is True, returns a dictionary of + s-component sizes and counts in H. If aggregated is + False, returns a list of s-component sizes in H. + edges : + If edges is True, the component size is number of edges. + If edges is False, the component size is number of nodes. + return_singletons : bool, optional, default=True + + Returns + ------- + s_comp_dist : list or dictionary + List of component sizes or dictionary of component size distribution in H + + See Also + -------- + comp_dist + + """ + distr = list() + comps = H.s_connected_components( + s=s, edges=edges, return_singletons=return_singletons + ) + + distr = [len(c) for c in comps] + + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]@not_implemented_for("static") +def toplex_dist(H, aggregated=False): + """ + + Computes toplex sizes for hypergraph H. + + Parameters + ---------- + H : Hypergraph + aggregated : + If aggregated is True, returns a dictionary of + toplex sizes and counts in H. If aggregated + is False, returns a list of toplex sizes in H. + + Returns + ------- + toplex_dist : list or dictionary + List of toplex sizes or dictionary of toplex size distribution in H + """ + distr = [H.size(e) for e in H.toplexes().edges] + if aggregated: + return Counter(distr) + else: + return distr
+ + +
[docs]def s_node_diameter_dist(H): + """ + Parameters + ---------- + H : Hypergraph + + Returns + ------- + s_node_diameter_dist : list + List of s-node-diameters for hypergraph H starting with s=1 + and going up as long as the hypergraph is s-node-connected + """ + i = 1 + diams = [] + while H.is_connected(s=i): + diams.append(H.diameter(s=i)) + i += 1 + return diams
+ + +
[docs]def s_edge_diameter_dist(H): + """ + Parameters + ---------- + H : Hypergraph + + Returns + ------- + s_edge_diameter_dist : list + List of s-edge-diameters for hypergraph H starting with s=1 + and going up as long as the hypergraph is s-edge-connected + """ + i = 1 + diams = [] + while H.is_connected(s=i, edges=True): + diams.append(H.edge_diameter(s=i)) + i += 1 + return diams
+ + +
[docs]def info(H, node=None, edge=None): + """ + Print a summary of simple statistics for H + + Parameters + ---------- + H : Hypergraph + node : optional + a node uid from the hypergraph + edge : optional + an edge uid from the hypergraph + + Returns + ------- + info : string + Returns a string of statistics of the size, + aspect ratio, and density of the hypergraph. + Print the string to see it formatted. + + """ + if not H.edges.elements: + return f"Hypergraph {H.name} is empty." + report = info_dict(H, node=node, edge=edge) + info = "" + if node: + info += f"Node '{node}' has the following properties:\n" + info += f"Degree: {report['degree']}\n" + info += f"Contained in: {report['membs']}\n" + info += f"Neighbors: {report['neighbors']}" + elif edge: + info += f"Edge '{edge}' has the following properties:\n" + info += f"Size: {report['size']}\n" + info += f"Elements: {report['elements']}" + else: + info += f"Number of Rows: {report['nrows']}\n" + info += f"Number of Columns: {report['ncols']}\n" + info += f"Aspect Ratio: {report['aspect ratio']}\n" + info += f"Number of non-empty Cells: {report['ncells']}\n" + info += f"Density: {report['density']}" + return info
+ + +
[docs]def info_dict(H, node=None, edge=None): + """ + Create a summary of simple statistics for H + + Parameters + ---------- + H : Hypergraph + node : optional + a node uid from the hypergraph + edge : optional + an edge uid from the hypergraph + + Returns + ------- + info_dict : dict + Returns a dictionary of statistics of the size, + aspect ratio, and density of the hypergraph. + + """ + report = dict() + if len(H.edges.elements) == 0: + return {} + + if node: + report["membs"] = list(H.dual().edges[node]) + report["degree"] = len(report["membs"]) + report["neighbors"] = H.neighbors(node) + return report + if edge: + report["size"] = H.size(edge) + report["elements"] = list(H.edges[edge]) + return report + else: + lnodes, ledges = H.shape + M = H.incidence_matrix(index=False) + ncells = M.nnz + + report["nrows"] = lnodes + report["ncols"] = ledges + report["aspect ratio"] = lnodes / ledges + report["ncells"] = ncells + report["density"] = ncells / (lnodes * ledges) + return report
+ + +
[docs]def dist_stats(H): + """ + Computes many basic hypergraph stats and puts them all into a single dictionary object + + * nrows = number of nodes (rows in the incidence matrix) + * ncols = number of edges (columns in the incidence matrix) + * aspect ratio = nrows/ncols + * ncells = number of filled cells in incidence matrix + * density = ncells/(nrows*ncols) + * node degree list = degree_dist(H) + * node degree centrality stats = centrality_stats(degree_dist(H)) + * node degree hist = Counter(degree_dist(H)) + * max node degree = max(degree_dist(H)) + * edge size list = edge_size_dist(H) + * edge size centrality stats = centrality_stats(edge_size_dist(H)) + * edge size hist = Counter(edge_size_dist(H)) + * max edge size = max(edge_size_dist(H)) + * comp nodes list = s_comp_dist(H, s=1, edges=False) + * comp nodes centrality stats = centrality_stats(s_comp_dist(H, s=1, edges=False)) + * comp nodes hist = Counter(s_comp_dist(H, s=1, edges=False)) + * comp edges list = s_comp_dist(H, s=1, edges=True) + * comp edges centrality stats = centrality_stats(s_comp_dist(H, s=1, edges=True)) + * comp edges hist = Counter(s_comp_dist(H, s=1, edges=True)) + * num comps = len(s_comp_dist(H)) + + Parameters + ---------- + H : Hypergraph + + Returns + ------- + dist_stats : dict + Dictionary which keeps track of each of the above items (e.g., basic['nrows'] = the number of nodes in H) + """ + stats = H._state_dict.get("dist_stats", None) + if stats is not None: + return H._state_dict["dist_stats"] + + cstats = ["min", "max", "mean", "median", "std"] + basic = dict() + + # Number of rows (nodes), columns (edges), and aspect ratio + basic["nrows"] = len(H.nodes) + basic["ncols"] = len(H.edges) + basic["aspect ratio"] = basic["nrows"] / basic["ncols"] + + # Number of cells and density + M = H.incidence_matrix(index=False) + basic["ncells"] = M.nnz + basic["density"] = basic["ncells"] / (basic["nrows"] * basic["ncols"]) + + # Node degree distribution + basic["node degree list"] = sorted(degree_dist(H), reverse=True) + 
basic["node degree centrality stats"] = dict( + zip(cstats, centrality_stats(basic["node degree list"])) + ) + basic["node degree hist"] = Counter(basic["node degree list"]) + basic["max node degree"] = max(basic["node degree list"]) + + # Edge size distribution + basic["edge size list"] = sorted(H.edge_size_dist(), reverse=True) + basic["edge size centrality stats"] = dict( + zip(cstats, centrality_stats(basic["edge size list"])) + ) + basic["edge size hist"] = Counter(basic["edge size list"]) + basic["max edge size"] = max(basic["edge size hist"]) + + # Component size distribution (nodes) + basic["comp nodes list"] = sorted(s_comp_dist(H, edges=False), reverse=True) + basic["comp nodes hist"] = Counter(basic["comp nodes list"]) + basic["comp nodes centrality stats"] = dict( + zip(cstats, centrality_stats(basic["comp nodes list"])) + ) + + # Component size distribution (edges) + basic["comp edges list"] = sorted(s_comp_dist(H, edges=True), reverse=True) + basic["comp edges hist"] = Counter(basic["comp edges list"]) + basic["comp edges centrality stats"] = dict( + zip(cstats, centrality_stats(basic["comp edges list"])) + ) + + # Number of components + basic["num comps"] = len(basic["comp nodes list"]) + + # # Diameters + # basic['s edge diam list'] = s_edge_diameter_dist(H) + # basic['s node diam list'] = s_node_diameter_dist(H) + H.set_state(dist_stats=basic) + return basic
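The list/Counter split used throughout this module is easy to exercise by hand. A minimal stdlib-only sketch, using a toy dict-of-sets set system (the dict H below is a hypothetical stand-in for a Hypergraph, not the hnx API):

```python
from collections import Counter

# Hypothetical toy set system: edge id -> set of nodes.
H = {"e1": {1, 2}, "e2": {1, 2}, "e3": {1, 2, 3}}

# Edge sizes, mirroring edge_size_dist: list form vs aggregated Counter form.
edge_sizes = sorted((len(members) for members in H.values()), reverse=True)
print(edge_sizes)           # [3, 2, 2]
print(Counter(edge_sizes))  # size -> count

# Node degrees, mirroring degree_dist.
degree = Counter(n for members in H.values() for n in members)
degrees = sorted(degree.values(), reverse=True)
print(degrees)              # [3, 3, 1]
```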
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/_sources/algorithms/algorithms.rst.txt b/_sources/algorithms/algorithms.rst.txt new file mode 100644 index 00000000..7e160c16 --- /dev/null +++ b/_sources/algorithms/algorithms.rst.txt @@ -0,0 +1,61 @@ +algorithms package +================== + +Submodules +---------- + +algorithms.contagion module +--------------------------- + +.. automodule:: algorithms.contagion + :members: + :undoc-members: + :show-inheritance: + +algorithms.generative\_models module +------------------------------------ + +.. automodule:: algorithms.generative_models + :members: + :undoc-members: + :show-inheritance: + +algorithms.homology\_mod2 module +-------------------------------- + +.. automodule:: algorithms.homology_mod2 + :members: + :undoc-members: + :show-inheritance: + +algorithms.hypergraph\_modularity module +---------------------------------------- + +.. automodule:: algorithms.hypergraph_modularity + :members: + :undoc-members: + :show-inheritance: + +algorithms.laplacians\_clustering module +---------------------------------------- + +.. automodule:: algorithms.laplacians_clustering + :members: + :undoc-members: + :show-inheritance: + +algorithms.s\_centrality\_measures module +----------------------------------------- + +.. automodule:: algorithms.s_centrality_measures + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: algorithms + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/algorithms/modules.rst.txt b/_sources/algorithms/modules.rst.txt new file mode 100644 index 00000000..d755574d --- /dev/null +++ b/_sources/algorithms/modules.rst.txt @@ -0,0 +1,7 @@ +algorithms +========== + +.. 
toctree:: + :maxdepth: 4 + + algorithms diff --git a/_sources/classes/classes.rst.txt b/_sources/classes/classes.rst.txt new file mode 100644 index 00000000..75542ea7 --- /dev/null +++ b/_sources/classes/classes.rst.txt @@ -0,0 +1,45 @@ +classes package +=============== + +Submodules +---------- + +classes.entity module +--------------------- + +.. automodule:: classes.entity + :members: + :undoc-members: + :show-inheritance: + +classes.entityset module +------------------------ + +.. automodule:: classes.entityset + :members: + :undoc-members: + :show-inheritance: + +classes.helpers module +---------------------- + +.. automodule:: classes.helpers + :members: + :undoc-members: + :show-inheritance: + +classes.hypergraph module +------------------------- + +.. automodule:: classes.hypergraph + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: classes + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/classes/modules.rst.txt b/_sources/classes/modules.rst.txt new file mode 100644 index 00000000..6af3efe7 --- /dev/null +++ b/_sources/classes/modules.rst.txt @@ -0,0 +1,7 @@ +classes +======= + +.. toctree:: + :maxdepth: 4 + + classes diff --git a/_sources/core.rst.txt b/_sources/core.rst.txt new file mode 100644 index 00000000..f52bb844 --- /dev/null +++ b/_sources/core.rst.txt @@ -0,0 +1,12 @@ +.. _core: + +================== +HyperNetX Packages +================== + +.. toctree:: + + Hypergraphs + Algorithms + Drawing + Reports diff --git a/_sources/drawing/drawing.rst.txt b/_sources/drawing/drawing.rst.txt new file mode 100644 index 00000000..fd619876 --- /dev/null +++ b/_sources/drawing/drawing.rst.txt @@ -0,0 +1,37 @@ +drawing package +=============== + +Submodules +---------- + +drawing.rubber\_band module +--------------------------- + +.. automodule:: drawing.rubber_band + :members: + :undoc-members: + :show-inheritance: + +drawing.two\_column module +-------------------------- + +.. 
automodule:: drawing.two_column + :members: + :undoc-members: + :show-inheritance: + +drawing.util module +------------------- + +.. automodule:: drawing.util + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: drawing + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/drawing/modules.rst.txt b/_sources/drawing/modules.rst.txt new file mode 100644 index 00000000..d0a077a0 --- /dev/null +++ b/_sources/drawing/modules.rst.txt @@ -0,0 +1,7 @@ +drawing +======= + +.. toctree:: + :maxdepth: 4 + + drawing diff --git a/_sources/glossary.rst.txt b/_sources/glossary.rst.txt new file mode 100644 index 00000000..c927c49b --- /dev/null +++ b/_sources/glossary.rst.txt @@ -0,0 +1,126 @@ +.. _glossary: + +===================== +Glossary of HNX terms +===================== + + +The HNX library centers around the idea of a :term:`hypergraph`. This glossary provides a few key terms and definitions. + + +.. glossary:: + :sorted: + + + .. // scan hypergraph.py + + Entity and Entity set + Class in entity.py. + HNX stores many of its data structures inside objects of type Entity. Entities help to ensure safe behavior, but their use is primarily technical, not mathematical. + + hypergraph + The term *hypergraph* can have many different meanings. In HNX, it means a tuple (Nodes, Edges, Incidence), where Nodes and Edges are sets, and Incidence is a function that assigns a value of True or False to every pair (n,e) in the Cartesian product Nodes x Edges. We call + - Nodes the set of nodes + - Edges the set of edges + - Incidence the incidence function + *Note* Another term for this type of object is a *multihypergraph*. The ability to work with multihypergraphs efficiently is a distinguishing feature of HNX! + + incidence + A node n is incident to an edge e in a hypergraph (Nodes, Edges, Incidence) if Incidence(n,e) = True. + + incidence matrix + A rectangular matrix constructed from a hypergraph (Nodes, Edges, Incidence) where the elements of Nodes index the matrix rows, and the elements of Edges index the matrix columns. Entry (n,e) in the incidence matrix is 1 if n and e are incident, and is 0 otherwise. + + edge nodes (aka edge elements) + The nodes (or elements) of an edge e in a hypergraph (Nodes, Edges, Incidence) are the nodes that are incident to e. + + subhypergraph + A subhypergraph of a hypergraph (Nodes, Edges, Incidence) is a hypergraph (Nodes', Edges', Incidence') such that Nodes' is a subset of Nodes, Edges' is a subset of Edges, and every incident pair (n,e) in (Nodes', Edges', Incidence') is also incident in (Nodes, Edges, Incidence). + + subhypergraph induced by a set of nodes + An induced subhypergraph of a hypergraph (Nodes, Edges, Incidence) is a subhypergraph (Nodes', Edges', Incidence') where a pair (n,e) is incident if and only if it is incident in (Nodes, Edges, Incidence). + + degree + Given a hypergraph (Nodes, Edges, Incidence), the degree of a node in Nodes is the number of edges in Edges to which the node is incident. + See also: :term:`s-degree` + + dual + The dual of a hypergraph (Nodes, Edges, Incidence) switches the roles of Nodes and Edges. More precisely, it is the hypergraph (Edges, Nodes, Incidence'), where Incidence' is the function that assigns Incidence(n,e) to each pair (e,n). The :term:`incidence matrix` of the dual hypergraph is the transpose of the incidence matrix of (Nodes, Edges, Incidence). + + toplex + A toplex in a hypergraph (Nodes, Edges, Incidence) is an edge e whose node set isn't properly contained in the node set of any other edge. That is, if f is another edge and every node incident to e is also incident to f, then the node sets of e and f are identical. + + simple hypergraph + A hypergraph for which no edge is completely contained in another. 
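The incidence matrix definition above translates directly into code. A minimal sketch, assuming a hypothetical dict-of-sets hypergraph rather than an hnx.Hypergraph:

```python
# Hypothetical hypergraph: edge id -> set of nodes.
edges = {"e1": {1, 2}, "e2": {1, 2}, "e3": {1, 2, 3}}

nodes = sorted({n for members in edges.values() for n in members})
edge_ids = sorted(edges)

# Rows are indexed by nodes, columns by edges; entry (n, e) is 1 iff n is incident to e.
M = [[1 if n in edges[e] else 0 for e in edge_ids] for n in nodes]

for n, row in zip(nodes, M):
    print(n, row)
# Node 3 belongs only to e3, so its row is [0, 0, 1].
```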
+ +------------- +S-line graphs +------------- + +HNX offers a variety of tool sets for network analysis, including s-line graphs. + + s-adjacency matrix + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a square matrix where the elements of Nodes index both rows and columns. The matrix can be weighted or unweighted. Entry (i,j) is nonzero if and only if node i and node j are incident to at least s edges in common. If it is nonzero, then it is equal to the number of shared edges (if weighted) or 1 (if unweighted). + + s-edge-adjacency matrix + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a square matrix where the elements of Edges index both rows and columns. The matrix can be weighted or unweighted. Entry (i,j) is nonzero if and only if edge i and edge j share at least s nodes, and is equal to the number of shared nodes (if weighted) or 1 (if unweighted). + + s-auxiliary matrix + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the submatrix of the :term:`s-edge-adjacency matrix` obtained by restricting to rows and columns corresponding to edges of size at least s. + + s-node-walk + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a sequence of nodes in Nodes such that each successive pair of nodes shares at least s edges in Edges. + + s-edge-walk + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a sequence of edges in Edges such that each successive pair of edges intersects in at least s nodes in Nodes. + + s-walk + Either an s-node-walk or an s-edge-walk. + + s-connected component, s-node-connected component + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-connected component is a :term:`subhypergraph` induced by a subset of Nodes with the property that there exists an s-walk between every pair of nodes in this subset. 
An s-connected component is the maximal such subset in the sense that it is not properly contained in any other subset satisfying this property. + + s-edge-connected component + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-edge-connected component is a :term:`subhypergraph` induced by a subset of Edges with the property that there exists an s-edge-walk between every pair of edges in this subset. An s-edge-connected component is the maximal such subset in the sense that it is not properly contained in any other subset satisfying this property. + + s-connected, s-node-connected + A hypergraph is s-connected if it has one s-connected component. + + s-edge-connected + A hypergraph is s-edge-connected if it has one s-edge-connected component. + + s-distance + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-distance between two nodes in Nodes is the length of the shortest :term:`s-node-walk` between them. If no s-node-walk between the pair of nodes exists, the s-distance between them is infinite. The s-distance + between edges is the length of the shortest :term:`s-edge-walk` between them. If no s-edge-walk between the pair of edges exists, then the s-distance between them is infinite. + + s-diameter + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-diameter is the maximum s-distance over all pairs of nodes in Nodes. + + s-degree + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-degree of a node is the number of edges in Edges of size at least s to which the node belongs. See also: :term:`degree` + + s-edge + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-edge is any edge of size at least s. + + s-linegraph + For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-linegraph is a graph representing + the node to node or edge to edge connections according to the *width* s of the connections. 
+ The node s-linegraph is a graph on the set Nodes. Two nodes in Nodes are adjacent in the node s-linegraph if they + share at least s incident edges in Edges; that is, there are at least s elements of Edges to which they both belong. + The edge s-linegraph is a graph on the set Edges. Two edges in Edges are adjacent in the edge s-linegraph if they + share at least s incident nodes in Nodes; that is, the edges intersect in at least s nodes in Nodes. + + .. Bipartite Condition + .. Condition imposed on instances of the class EntitySet. + .. *Entities that are elements of the same EntitySet, may not contain each other as elements.* + .. The elements and children of an EntitySet generate a specific partition for a bipartite graph. + .. The partition is isomorphic to a Hypergraph where the elements correspond to hyperedges and + .. the children correspond to the nodes. EntitySets are the basic objects used to construct dynamic hypergraphs + .. in HNX. See methods :py:meth:`classes.hypergraph.Hypergraph.bipartite` and :py:meth:`classes.hypergraph.Hypergraph.from_bipartite`. + + + + + + diff --git a/_sources/hypconstructors.rst.txt b/_sources/hypconstructors.rst.txt new file mode 100644 index 00000000..7596fcc7 --- /dev/null +++ b/_sources/hypconstructors.rst.txt @@ -0,0 +1,159 @@ + +.. _hypconstructors: + +======================= +Hypergraph Constructors +======================= + +An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: +V = nodes (vertices) and E = (hyper)edges. + +HNX allows for multi-edges by distinguishing edges by +their identifiers instead of their contents. For example, if +V = {1,2,3} and E = {e1,e2,e3}, +where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, +the edges e1 and e2 contain the same set of nodes and yet +are distinct and are distinguishable within H = (V,E). + +HNX provides methods to easily store and +access additional metadata such as cell, edge, and node weights. 
+Metadata associated with (edge,node) incidences +are referenced as **cell_properties**. +Metadata associated with a single edge or node is referenced +as its **properties**. + +The fundamental object needed to create a hypergraph is a **setsystem**. The +setsystem defines the many-to-many relationships between edges and nodes in +the hypergraph. Cell properties for the incidence pairs can be defined within +the setsystem or in a separate pandas.Dataframe or dict. +Edge and node properties are defined with a pandas.DataFrame or dict. + +SetSystems +---------- +There are five types of setsystems currently accepted by the library. + +1. **iterable of iterables** : Barebones hypergraph, which uses Pandas default + indexing to generate hyperedge ids. Elements must be hashable.: :: + + >>> H = Hypergraph([{1,2},{1,2},{1,2,3}]) + +2. **dictionary of iterables** : The most basic way to express many-to-many + relationships providing edge ids. The elements of the iterables must be + hashable): :: + + >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]}) + +3. **dictionary of dictionaries** : allows cell properties to be assigned + to a specific (edge, node) incidence. This is particularly useful when + there are variable length dictionaries assigned to each pair: :: + + >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.1, 'name': 'related_to', + >>> 'startdate': '05.13.2020'}}, + >>> 'e2':{ 1: {'w':0.52, 'name': 'owned_by'}, + >>> 2: {'w':0.2}}, + >>> 'e3':{ 1: {'w':0.5, 'name': 'related_to'}, + >>> 2: {'w':0.2, 'name': 'owner_of'}, + >>> 3: {'w':1, 'type': 'relationship'}} + + >>> H = Hypergraph(d, cell_weight_col='w') + +4. **pandas.DataFrame** For large datasets and for datasets with cell + properties it is most efficient to construct a hypergraph directly from + a pandas.DataFrame. Incidence pairs are in the first two columns. + Cell properties shared by all incidence pairs can be placed in their own + column of the dataframe. 
Variable length dictionaries of cell properties + particular to only some of the incidence pairs may be placed in a single + column of the dataframe. Representing the data above as a dataframe df: + + +-----------+-----------+-----------+-----------------------------------+ + | col1 | col2 | w | col3 | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 1 | 0.5 | {'name':'related_to'} | + +-----------+-----------+-----------+-----------------------------------+ + | e1 | 2 | 0.1 | {"name":"related_to", | + | | | | "startdate":"05.13.2020"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 1 | 0.52 | {"name":"owned_by"} | + +-----------+-----------+-----------+-----------------------------------+ + | e2 | 2 | 0.2 | | + +-----------+-----------+-----------+-----------------------------------+ + | ... | ... | ... | {...} | + +-----------+-----------+-----------+-----------------------------------+ + + The first row of the dataframe is used to reference each column. :: + + >>> H = Hypergraph(df,edge_col="col1",node_col="col2", + >>> cell_weight_col="w",misc_cell_properties="col3") + +5. **numpy.ndarray** For homogeneous datasets given in a *n x 2* ndarray a + pandas dataframe is generated and column names are added from the + edge_col and node_col arguments. Cell properties containing multiple data + types are added with a separate dataframe or dict and passed through the + cell_properties keyword. :: + + >>> arr = np.array([['e1','1'],['e1','2'], + >>> ['e2','1'],['e2','2'], + >>> ['e3','1'],['e3','2'],['e3','3']]) + >>> H = hnx.Hypergraph(arr, column_names=['col1','col2']) + + +Edge and Node Properties +------------------------ +Properties specific to edges and/or node can be passed through the +keywords: **edge_properties, node_properties, properties**. +Properties may be passed as dataframes or dicts. 
+The first column or index of the dataframe or keys of the dict keys +correspond to the edge and/or node identifiers. +If properties are specific to an id, they may be stored in a single +object and passed to the **properties** keyword. For example: + ++-----------+-----------+---------------------------------------+ +| id | weight | properties | ++-----------+-----------+---------------------------------------+ +| e1 | 5.0 | {'type':'event'} | ++-----------+-----------+---------------------------------------+ +| e2 | 0.52 | {"name":"owned_by"} | ++-----------+-----------+---------------------------------------+ +| ... | ... | {...} | ++-----------+-----------+---------------------------------------+ +| 1 | 1.2 | {'color':'red'} | ++-----------+-----------+---------------------------------------+ +| 2 | .003 | {'name':'Fido','color':'brown'} | ++-----------+-----------+---------------------------------------+ +| 3 | 1.0 | {} | ++-----------+-----------+---------------------------------------+ + +A properties dictionary should have the format: :: + + dp = {id1 : {prop1:val1, prop2,val2,...}, id2 : ... } + +A properties dataframe may be used for nodes and edges sharing ids +but differing in cell properties by adding a level index using 0 +for edges and 1 for nodes: + ++-----------+-----------+-----------+---------------------------+ +| level | id | weight | properties | ++-----------+-----------+-----------+---------------------------+ +| 0 | e1 | 5.0 | {'type':'event'} | ++-----------+-----------+-----------+---------------------------+ +| 0 | e2 | 0.52 | {"name":"owned_by"} | ++-----------+-----------+-----------+---------------------------+ +| ... | ... | ... | {...} | ++-----------+-----------+-----------+---------------------------+ +| 1 | 1.2 | {'color':'red'} | ++-----------+-----------+-----------+---------------------------+ +| 2 | .003 | {'name':'Fido','color':'brown'} | ++-----------+-----------+-----------+---------------------------+ +| ... | ... | ... 
| {...} | ++-----------+-----------+-----------+---------------------------+ + + + +Weights +------- +The default key for cell and object weights is "weight". The default value +is 1. Weights may be assigned and/or a new default prescribed in the +constructor using **cell_weight_col** and **cell_weights** for incidence pairs, +and using **edge_weight_prop, node_weight_prop, weight_prop, +default_edge_weight,** and **default_node_weight** for node and edge weights. \ No newline at end of file diff --git a/_sources/hypergraph101.rst.txt b/_sources/hypergraph101.rst.txt new file mode 100644 index 00000000..fd5ff15d --- /dev/null +++ b/_sources/hypergraph101.rst.txt @@ -0,0 +1,465 @@ +.. _hypergraph101: + +=============================================== +A Gentle Introduction to Hypergraph Mathematics +=============================================== + + +Here we gently introduce some of the basic concepts in hypergraph +modeling. We note that in order to maintain this “gentleness”, we will +be mostly avoiding the very important and legitimate issues in the +proper mathematical foundations of hypergraphs and closely related +structures, which can be very complicated. Rather we will be focusing on +only the most common cases used in most real modeling, and call a graph +or hypergraph **gentle** when they are loopless, simple, finite, +connected, and lacking empty hyperedges, isolated vertices, labels, +weights, or attributes. Additionally, the deep connections between +hypergraphs and other critical mathematical objects like partial orders, +finite topologies, and topological complexes will also be treated +elsewhere. When it comes up, below we will sometimes refer to the added +complexities which would attend if we weren’t being so “gentle”. In +general the reader is referred to [1,2] for a less gentle and more +comprehensive treatment. 
+ +Graphs and Hypergraphs +====================== + +Network science is based on the concept of a **graph** +:math:`G=\langle V,E\rangle` as a system of connections between +entities. :math:`V` is a (typically finite) set of elements, nodes, or +objects, which we formally call **“vertices”**, and :math:`E` is a set +of pairs of vertices. Given that, then for two vertices +:math:`u,v \in V`, an **edge** is a set :math:`e=\{u,v\}` in :math:`E`, +indicating that there is a connection between :math:`u` and :math:`v`. +It is then common to represent :math:`G` as either a Boolean **adjacency +matrix** :math:`A_{n \times n}` where :math:`n=|V|`, where an +:math:`i,j` entry in :math:`A` is 1 if :math:`v_i,v_j` are connected in +:math:`G`; or as an **incidence matrix** :math:`I_{n \times m}`, where +now also :math:`m=|E|`, and an :math:`i,j` entry in :math:`I` is now 1 +if the vertex :math:`v_i` is in edge :math:`e_j`. + +.. _f1: +.. figure:: images/exgraph.png + :class: with-border + :width: 300 + :align: center + + An example graph, where the numbers are edge IDs. + +.. _t1: +.. list-table:: Adjacency matrix :math:`A` of a graph. + :header-rows: 1 + :align: center + + * - + - Andrews + - Bailey + - Carter + - Davis + * - Andrews + - 0 + - 1 + - 1 + - 1 + * - Bailey + - 1 + - 0 + - 1 + - 0 + * - Carter + - 1 + - 1 + - 0 + - 1 + * - Davis + - 1 + - 0 + - 1 + - 1 + +.. _t2: +.. list-table:: Incidence matrix :math:`I` of a graph. + :header-rows: 1 + :align: center + + * - + - 1 + - 2 + - 3 + - 4 + - 5 + * - Andrews + - 1 + - 1 + - 0 + - 1 + - 0 + * - Bailey + - 0 + - 0 + - 0 + - 1 + - 1 + * - Carter + - 0 + - 1 + - 1 + - 0 + - 1 + * - Davis + - 1 + - 0 + - 1 + - 0 + - 0 + + +.. _label3: +.. figure:: images/biblio_hg.png + :class: with-border + :width: 400 + :align: center + + An example hypergraph, where similarly now the hyperedges are shown with numeric IDs. + +.. _t3: +.. list-table:: Incidence matrix I of a hypergraph. 
+ :header-rows: 1 + :align: center + + * - + - 1 + - 2 + - 3 + - 4 + - 5 + * - Andrews + - 1 + - 1 + - 0 + - 1 + - 0 + * - Bailey + - 0 + - 0 + - 0 + - 1 + - 1 + * - Carter + - 0 + - 1 + - 0 + - 0 + - 1 + * - Davis + - 1 + - 1 + - 1 + - 0 + - 0 + + + +Notice that in the incidence matrix :math:`I` of a gentle graph +:math:`G`, it is necessarily the case that every column must have +precisely two 1 entries, reflecting that every edge connects exactly two +vertices. The move to a **hypergraph** :math:`H=\langle V,E\rangle` +relaxes this requirement, in that now a **hyperedge** (although we will +still say edge when clear from context) :math:`e \in E` is a subset +:math:`e = \{ v_1, v_2, \ldots, v_k\} \subseteq V` of vertices of +arbitrary size. We call :math:`e` a :math:`k`-edge when :math:`|e|=k`. +Note that thereby a 2-edge is a graph edge, while both a singleton +:math:`e=\{v\}` and a 3-edge :math:`e=\{v_1,v_2,v_3\}`, 4-edge +:math:`e=\{v_1,v_2,v_3,v_4\}`, etc., are all hypergraph edges. In this +way, if every edge in a hypergraph :math:`H` happens to be a 2-edge, +then :math:`H` is a graph. We call such a hypergraph **2-uniform**. + +Our incidence matrix :math:`I` is now very much like that for a graph, +but the requirement that each column have exactly two 1 entries is +relaxed: the column for edge :math:`e` with size :math:`k` will have +:math:`k` 1’s. Thus :math:`I` is now a general Boolean matrix (although +with some restrictions when :math:`H` is gentle). + +Notice also that in the examples we’re showing in the figures, the graph +is closely related to the hypergraph. In fact, this particular graph is +the **2-section** or **underlying graph** of the hypergraph. It is the +graph :math:`G` recorded when only the pairwise connections in the +hypergraph :math:`H` are recognized. 
Note that while the 2-section is +always determined by the hypergraph, and is frequently used as a +simplified representation, it almost never has enough information to be +able to recover the hypergraph from it. + +Important Things About Hypergraphs +================================== + +While all graphs :math:`G` are (2-uniform) hypergraphs :math:`H`, since +they’re very special cases, general hypergraphs have some important +properties which really stand out in distinction, especially to those +already conversant with graphs. The following issues are critical for +hypergraphs, but “disappear” when considering the special case of +2-uniform hypergraphs which are graphs. + +All Hypergraphs Come in Dual Pairs +---------------------------------- + +If our incidence matrix :math:`I` is a general :math:`n \times m` +Boolean matrix, then its transpose :math:`I^T` is an :math:`m \times n` +Boolean matrix. In fact, :math:`I^T` is also the incidence matrix of a +different hypergraph called the **dual** hypergraph :math:`H^*` of +:math:`H`. In the dual :math:`H^*`, it’s just that vertices and edges +are swapped: we now have :math:`H^* = \langle E, V \rangle` where it’s +:math:`E` that is a set of vertices, and the now edges +:math:`v \in V, v \subseteq E` are subsets of those vertices. + + +.. _f3: +.. figure:: images/dual.png + :class: with-border + :width: 400 + :align: center + + The dual hypergraph :math:`H^*`. + + +Just like the “primal” hypergraph :math:`H` has a 2-section, so does the +dual. This is called the **line graph**, and it is an important +structure which records all of the incident hyperedges. Line graphs are +also used extensively in graph theory. + +Note that it follows that since every graph :math:`G` is a (2-uniform) +hypergraph :math:`H`, so therefore we can form the dual hypergraph +:math:`G^*` of :math:`G`. If a graph :math:`G` is a 2-uniform +hypergraph, is its dual :math:`G^*` also a 2-uniform hypergraph? 
In +general, no, only in the case where :math:`G` is a single cycle or a +union of cycles would that be true. Also note that in order to calculate +the line graph of a graph :math:`G`, one needs to work through its dual +hypergraph :math:`G^*`. + + +.. _f4: +.. figure:: images/dual2.png + :class: with-border + :width: 400 + :align: center + + The line graph of :math:`H`, which is the 2-section of the dual :math:`H^*`. + + + +Edge Intersections Have Size +---------------------------- + +As we’ve already seen, in a graph all the edges are size 2, whereas in a +hypergarph edges can be arbitrary size :math:`1, 2, \ldots, n`. Our +example shows a singleton, three “graph edge” pairs, and a 2-edge. + +In a gentle graph :math:`G` consider two edges +:math:`e = \{ u, v \},f=\{w,z\} \in E` and their intersection +:math:`g = e \cap f`. If :math:`g \neq \emptyset` then :math:`e` and +:math:`f` are non-disjoint, and we call them **incident**. Let +:math:`s(e,f)=|g|` be the size of that intersection. If :math:`G` is +gentle and :math:`e` and :math:`f` are incident, then :math:`s(e,f)=1`, +in that one of :math:`u,v` must be equal to one of :math:`w,z`, and +:math:`g` will be that singleton. But in a hypergraph, the intersection +:math:`g=e \cap f` of two incident edges can be any size +:math:`s(e,f) \in [1,\min(|e|,|f|)]`. This aspect, the size of the +intersection of two incident edges, is critical to understanding +hypergraph structure and properties. + +Edges Can Be Nested +------------------- + +While in a gentle graph :math:`G` two edges :math:`e` and :math:`f` can +be incident or not, in a hypergraph :math:`H` there’s another case: two +edges :math:`e` and :math:`f` may be **nested** or **included**, in that +:math:`e \subseteq f` or :math:`f \subseteq e`. That’s exactly the +condition above where :math:`s(e,f) = \min(|e|,|f|)`, which is the size +of the edge included within the including edge. 
In our example, we have +that edge 1 is included in edge 2 is included in edge 3. + +Walks Have Length and Width +--------------------------- + +A **walk** is a sequence +:math:`W = \langle { e_0, e_1, \ldots, e_N } \rangle` of edges where +each pair :math:`e_i,e_{i+1}, 0 \le i \le N-1` in the sequence are +incident. We call :math:`N` the **length** of the walk. Walks are the +*raison d’être* of both graphs and hypergraphs, in that in a graph +:math:`G` a walk :math:`W` establishes the connectivity of all the +:math:`e_i` to each other, and a way to “travel” between the ends +:math:`e_0` and :math:`e_N`. Naturally in a walk for each such pair we +can also measure the size of the intersection +:math:`s_i=s(e_i,e_{i+1}), 0 \le i \le N`. While in a gentle graph +:math:`G`, all the :math:`s_i=1`, as we’ve seen in a hypergraph +:math:`H` all these :math:`s_i` can vary widely. So for any walk +:math:`W` we can not only talk about its length :math:`N`, but also +define its **width** :math:`s(W) = \min_{0 \le i \le N} s_i` as the size +of the smallest such intersection. When a walk :math:`W` has width +:math:`s`, we call it an **:math:`s`-walk**. It follows that all walks +in a graph are 1-walks with width 1. In Fig. `5 <#swalks>`__ we see two +walks in a hypergraph. While both have length 2 (counting edgewise, and +recalling origin zero), the one on the left has width 1, and that on the +right width 3. + + +.. _f5: +.. figure:: images/swalks.png + :class: with-border + :width: 600 + :align: center + + Two hypergraph walks of length 2: (Left) A 1-walk. (Right) A 3-walk. + + +Towards Less Gentle Things +========================== + +We close with just brief mentions of more advanced issues. 
+ +:math:`s`-Walks and Hypernetwork Science +---------------------------------------- + +Network science has become a dominant force in data analytics in recent +years, including a range of methods measuring distance, connectivity, +reachability, centrality, modularity, and related things. Most all of +these concepts generalize to hypergraphs using “:math:`s`-versions” of +them. For example, the :math:`s`-distance between two vertices or +hyperedges is the length of the shortest :math:`s`-walk between them, so +that as :math:`s` goes up, requiring wider connections, the distance +will also tend to grow, so that ultimately perhaps vertices may not be +:math:`s`-reachable at all. See [2] for more details. + +Hypergraphs in Mathematics +-------------------------- + +Hypergraphs are very general objects mathematically, and are deeply +connected to a range of other essential objects and structures mostly in +discrete science. + +Most obviously, perhaps, is that there is a one-to-one relationship +between a hypergraph :math:`H = \langle V, E \rangle` and a +corresponding bipartite graph :math:`B=\langle V \sqcup E, I \rangle`. +:math:`B` is a new graph (not a hypergraph) with vertices being both the +vertices and the hyperedges from the hypergraph :math:`H`, and a +connection being a pair :math:`\{ v, e \} \in I` if and only if +:math:`v \in e` in :math:`H`. That you can go the other way to define a +hypergraph :math:`H` for every bipartite graph :math:`G` is evident, but +not all operations carry over unambiguously between hypergraphs and +their bipartite versions. + +.. _f6: +.. figure:: images/bicolored1.png + :class: with-border + :width: 200 + :align: center + + Bipartite graph. + + +Even more generally, the Boolean incidence matrix :math:`I` of a +hypergraph :math:`H` can be taken as the characteristic matrix of a +binary relation. 
When :math:`H` is gentle this is somewhat restricted, +but in general we can see that there are one-to-one relations now +between hypergraphs, binary relations, as well as bipartite graphs from +above. + +Additionally, we know that every hypergraph implies a hierarchical +structure via the fact that for every pair of incident hyperedges either +one is included in the other, or their intersection is included in both. +This creates a partial order, establishing a further one-to-one mapping +to a variety of lattice structures and dual lattice structures relating +how groups of vertices are included in groups of edges, and vice versa. +Fig. refex shows the **concept lattice** [3], perhaps the most important +of these structures, determined by our example. + +.. _f7: +.. figure:: images/ex.png + :class: with-border + :width: 450 + :align: center + + The concept lattice of the example hypergraph :math:`H`. + + +Finally, the strength of hypergraphs is their ability to model multi-way +interactions. Similarly, mathematical topology is concerned with how +multi-dimensional objects can be attached to each other, not only in +continuous spaces but also with discrete objects. In fact, a finite +topological space is a special kind of gentle hypergraph closed under +both union and intersection, and there are deep connections between +these structures and the lattices referred to above. + +In this context also an **abstract simplicial complex (ASC)** is a kind +of hypergraph where all possible included edges are present. Each +hypergraph determines such an ASC by “closing it down” by subset. ASCs +have a natural topological structure which can reveal hidden structures +measurable by homology, and are used extensively as the workhorse of +topological methods such as persistent homology. In this way hypergraphs +form a perfect bridge from network science to computational topology in +general. + +.. _f8: +.. 
figure:: images/simplicial.png + :class: with-border + :width: 400 + :align: center + + A diagram of the ASC implied by our example. Numbers here indicate the actual hyper-edges in the original hypergraph :math:`H`, where now additionally all sub-edges, including singletons, are in the ASC. + + +Non-Gentle Graphs and Hypergraphs +--------------------------------- + +Above we described our use of “gentle” graphs and hypergraphs as finite, +loopless, simple, connected, and lacking empty hyperedges, isolated +vertices, labels, weights, or attributes. But at a higher level of +generality we can also have: + +Empty Hyperedges: + If a column of :math:`I` has all zero entries. + +Isolated Vertices: + If a row of :math:`I` has all zero entries. + +Multihypergraphs: + We may choose to allow duplicated hyperedges, resulting in duplicate + columns in the incidence matrix :math:`I`. + +Self-Loops: + In a graph allowing an edge to connect to itself. + +Direction: + In an edge, where some vertices are recognized as “inputs” which + point to others recognized as “outputs”. + +Order: + In a hyperedge, where the vertices carry a particular (total) order. + In a graph, this is equivalent to being directed, but not in a + hypergraph. + +Attributes: + In general we use graphs and hypergraphs to model data, and thus + carrying attributes of different types, including weights, labels, + identifiers, types, strings, or really in principle any data object. + These attributes could be on vertices (rows of :math:`I`), edges + (columns of :math:`I`) or what we call “incidences”, related to a + particular appearnace of a particular vertex in a particular edge + (cells of :math:`I`). + +[1] Joslyn, Cliff A; Aksoy, Sinan; Callahan, Tiffany J; Hunter, LE; +Jefferson, Brett; Praggastis, Brenda; Purvine, Emilie AH; Tripodi, +Ignacio J: (2021) “Hypernetwork Science: From Multidimensional +Networks to Computational Topology”, in: *Unifying Themes in Complex +systems X: Proc. 10th Int. Conf. 
Complex Systems*, ed. D. Braha et +al., pp. 377-392, Springer, +``https://doi.org/10.1007/978-3-030-67318-5_25`` + +[2] Aksoy, Sinan G; Joslyn, Cliff A; Marrero, Carlos O; Praggastis, B; +Purvine, Emilie AH: (2020) “Hypernetwork Science via High-Order +Hypergraph Walks”, *EPJ Data Science*, v. **9**:16, +``https://doi.org/10.1140/epjds/s13688-020-00231-0`` + +[3] Ganter, Bernhard and Wille, Rudolf: (1999) *Formal Concept +Analysis*, Springer-Verlag + + diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt new file mode 100644 index 00000000..9b8d9c17 --- /dev/null +++ b/_sources/index.rst.txt @@ -0,0 +1,70 @@ +=============== +HyperNetX (HNX) +=============== + +.. image:: images/hnxbasics.png + :width: 300px + :align: right + + +`HNX`_ is a Python library for hypergraphs, the natural models for multi-dimensional network data. + +To get started, try the :ref:`interactive COLAB tutorials`. For a primer on hypergraphs, try this :ref:`gentle introduction`. To see hypergraphs at work in cutting-edge research, see our list of recent :ref:`publications`. + +Why hypergraphs? +---------------- + +Like graphs, hypergraphs capture important information about networks and relationships. But hypergraphs do more -- they model *multi-way* relationships, where ordinary graphs only capture two-way relationships. This library serves as a repository of methods and algorithms that have proven useful over years of exploration into what hypergraphs can tell us. + +As both vertex adjacency and edge +incidence are generalized to be quantities, +hypergraph paths and walks have both length and *width* because of these multiway connections. +Most graph metrics have natural generalizations to hypergraphs, but since +hypergraphs are basically set systems, they also admit to the powerful tools of algebraic topology, +including simplicial complexes and simplicial homology, to study their structure. 
+ + +Our community +------------- + +We have a growing community of users and contributors. For the latest software updates, and to learn about the development team, see the :ref:`library overview`. Have ideas to share? We'd love to hear from you! Our `orientation for contributors `_ can help you get started. + +Our values +------------- + +Our shared values as software developers guide us in our day-to-day interactions and decision-making. Our open source projects are no exception. Trust, respect, collaboration and transparency are core values we believe should live and breathe within our projects. Our community welcomes participants from around the world with different experiences, unique perspectives, and great ideas to share. See our `code of conduct `_ to learn more. + +Contact us +---------- + +Questions and comments are welcome! Contact us at + hypernetx@pnnl.gov + +Contents +-------- + +.. toctree:: + :maxdepth: 1 + + Home + overview/index + install + Glossary + core + A Gentle Introduction to Hypergraph Mathematics + Hypergraph Constructors + Visualization Widget + Algorithms: Modularity and Clustering + Publications + license + long_description + + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` + +.. _HNX: https://github.com/pnnl/HyperNetX diff --git a/_sources/install.rst.txt b/_sources/install.rst.txt new file mode 100644 index 00000000..ca103f5d --- /dev/null +++ b/_sources/install.rst.txt @@ -0,0 +1,129 @@ +******************** +Installing HyperNetX +******************** + + +Installation +############ + +The recommended installation method for most users is to create a virtual environment +and install HyperNetX from PyPi. + +.. _Github: https://github.com/pnnl/HyperNetX + +HyperNetX may be cloned or forked from Github_. + + +Prerequisites +###################### + +HyperNetX officially supports Python 3.8, 3.9, 3.10 and 3.11. 
+ + +Create a virtual environment +############################ + +Using Anaconda +************************* + + >>> conda create -n env-hnx python=3.8 -y + >>> conda activate env-hnx + +Using venv +************************* + + >>> python -m venv venv-hnx + >>> source env-hnx/bin/activate + + +Using virtualenv +************************* + + >>> virtualenv env-hnx + >>> source env-hnx/bin/activate + + +For Windows Users +****************** + +On both Windows PowerShell or Command Prompt, you can use the following command to activate your virtual environment: + + >>> .\env-hnx\Scripts\activate + + +To deactivate your environment, use: + + >>> .\env-hnx\Scripts\deactivate + + +Installing Hypernetx +#################### + +Regardless of how you install HyperNetX, ensure that your environment is activated and that you are running Python >=3.8. + +Installing from PyPi +************************* + + >>> pip install hypernetx + + +Installing from Source +************************* + +Ensure that you have ``git`` installed. 
+ + >>> git clone https://github.com/pnnl/HyperNetX.git + >>> cd HyperNetX + >>> pip install -e .['all'] + +If you are using zsh as your shell, ensure that the single quotation marks are placed outside the square brackets: + + >>> pip install -e .'[all]' + + +Post-Installation Actions +########################## + +Running Tests +************** + + >>> python -m pytest + +Interact with HyperNetX in a REPL +******************************************** + +Ensure that your environment is activated and that you run ``python`` on your terminal to open a REPL: + + >>> import hypernetx as hnx + >>> data = { 0: ('A', 'B'), 1: ('B', 'C'), 2: ('D', 'A', 'E'), 3: ('F', 'G', 'H', 'D') } + >>> H = hnx.Hypergraph(data) + >>> list(H.nodes) + ['G', 'F', 'D', 'A', 'B', 'H', 'C', 'E'] + >>> list(H.edges) + [0, 1, 2, 3] + >>> H.shape + (8, 4) + + +Other Actions if installed from source +******************************************** + +Ensure that you are at the root of the source directory before running any of the following commands: + +Viewing jupyter notebooks +-------------------------- + +The following command will automatically open the notebooks in a browser. + + >>> jupyter-notebook tutorials + + +Building documentation +----------------------- + +The following commands will build and open a local version of the documentation in a browser: + + >>> make build-docs + >>> open docs/build/index.html + + diff --git a/_sources/license.rst.txt b/_sources/license.rst.txt new file mode 100644 index 00000000..2a90cf46 --- /dev/null +++ b/_sources/license.rst.txt @@ -0,0 +1 @@ +.. include:: ../../LICENSE.rst \ No newline at end of file diff --git a/_sources/modularity.rst.txt b/_sources/modularity.rst.txt new file mode 100644 index 00000000..8738ced1 --- /dev/null +++ b/_sources/modularity.rst.txt @@ -0,0 +1,114 @@ +.. _modularity: + + +========================= +Modularity and Clustering +========================= + +.. 
image:: images/ModularityScreenShot.png + :width: 300px + :align: right + +Overview +-------- +The hypergraph_modularity submodule in HNX provides functions to compute **hypergraph modularity** for a +given partition of the vertices in a hypergraph. In general, higher modularity indicates a better +partitioning of the vertices into dense communities. + +Two functions to generate such hypergraph +partitions are provided: **Kumar's** algorithm, and the simple **Last-Step** refinement algorithm. + +The submodule also provides a function to generate the **two-section graph** for a given hypergraph which can then be used to find +vertex partitions via graph-based algorithms. + + +Installation +------------ +Since it is part of HNX, no extra installation is required. +The submodule can be imported as follows:: + + import hypernetx.algorithms.hypergraph_modularity as hmod + +Using the Tool +-------------- + + +Precomputation +^^^^^^^^^^^^^^ + +In order to make the computation of hypergraph modularity more efficient, some quantities need to be pre-computed. +Given hypergraph H, calling:: + + HG = hmod.precompute_attributes(H) + +will pre-compute quantities such as node strength (weighted degree), d-weights (total weight for each edge cardinality) and binomial coefficients. + +Modularity +^^^^^^^^^^ + +Given hypergraph HG and a partition A of its vertices, hypergraph modularity is a measure of the quality of this partition. +Random partitions typically yield modularity near zero (it can be negative) while positive modularity is indicative of the presence +of dense communities, or modules. There are several variations for the definition of hypergraph modularity, and the main difference lies in the +weight given to different edges. Modularity is computed via:: + + q = hmod.modularity(HG, A, wdc=linear) + +In a graph, an edge only links 2 nodes, so given partition A, an edge is either within a community (which increases the modularity) +or between communities. 
+ +With hypergraphs, we consider edges of size *d=2* or more. Given some vertex partition A and some *d*-edge *e*, let *c* be the number of nodes +that belong to the most represented part in *e*; if *c > d/2*, we consider this edge to be within the part. +Hyper-parameters *0 <= w(d,c) <= 1* control the weight +given to such edges. Three functions are supplied in this submodule, namely: + +**linear** + $w(d,c) = c/d$ if $c > d/2$, else $0$. +**majority** + $w(d,c) = 1$ if $c > d/2$, else $0$. +**strict** + $w(d,c) = 1$ if $c == d$, else $0$. + +The 'linear' function is used by default. More details in [2]. + +Two-section graph +^^^^^^^^^^^^^^^^^ + +There are several good partitioning algorithms for graphs such as the Louvain algorithm and ECG, a consensus clustering algorithm. +One way to obtain a partition for hypergraph HG is to build its corresponding two-section graph G and run a graph clustering algorithm. +Code is provided to build such graph via:: + + G = hmod.two_section(HG) + +which returns an igraph.Graph object. + + +Clustering Algorithms +^^^^^^^^^^^^^^^^^^^^^ + +Two clustering (vertex partitioning) algorithms are supplied. The first one is a hybrid method proposed by Kumar et al. (see [1]) +that uses the Louvain algorithm on the two-section graph, but re-weights the edges according to the distibution of vertices +from each part inside each edge. Given hypergraph HG, this is called as:: + + K = hmod.kumar(HG) + +The other supplied algorithm is a simple method to improve hypergraph modularity directely. Given some +initial partition of the vertices (for example via Louvain on the two-section graph), move vertices between parts in order +to improve hypergraph modularity. Given hypergraph HG and initial partition A, this is called as:: + + L = hmod.last_step(HG, A, wdc=linear) + +where the 'wdc' parameter is the same as in the modularity function. 
+ + +Other Features +^^^^^^^^^^^^^^ + +We represent a vertex partition A as a list of sets, but another convenient representation is via a dictionary. +We provide two utility functions to switch between these representations, namely `A = dict2part(D)` and `D = part2dict(A)`. + +References +^^^^^^^^^^ +[1] Kumar T., Vaidyanathan S., Ananthapadmanabhan H., Parthasarathy S. and Ravindran B. “A New Measure of Modularity in Hypergraphs: Theoretical Insights and Implications for Effective Clustering”. In: Cherifi H., Gaito S., Mendes J., Moro E., Rocha L. (eds) Complex Networks and Their Applications VIII. COMPLEX NETWORKS 2019. Studies in Computational Intelligence, vol 881. Springer, Cham. https://doi.org/10.1007/978-3-030-36687-2_24 + +[2] Kamiński B., Prałat P. and Théberge F. “Community Detection Algorithm Using Hypergraph Modularity”. In: Benito R.M., Cherifi C., Cherifi H., Moro E., Rocha L.M., Sales-Pardo M. (eds) Complex Networks & Their Applications IX. COMPLEX NETWORKS 2020. Studies in Computational Intelligence, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-65347-7_13 + diff --git a/_sources/overview/index.rst.txt b/_sources/overview/index.rst.txt new file mode 100644 index 00000000..2c5e4e82 --- /dev/null +++ b/_sources/overview/index.rst.txt @@ -0,0 +1,103 @@ +.. _overview: + +======== +Overview +======== + +.. image:: ../images/harrypotter_basic_hyp.png + :width: 300px + :align: right + +.. include:: ../../../LONG_DESCRIPTION.rst + +.. _colab: + +COLAB Tutorials +================ +The following tutorials may be run in your browser using Google Colab. Additional tutorials are +available on `GitHub `_. + +.. raw:: html + + + + + +Notice +====== +This material was prepared as an account of work sponsored by an agency of the United States Government.
+Neither the United States Government nor the United States Department of Energy, nor Battelle, +nor any of their employees, nor any jurisdiction or organization that has cooperated in the development of +these materials, makes any warranty, express or implied, or assumes any legal liability or responsibility +for the accuracy, completeness, or usefulness of any information, apparatus, product, software, or process +disclosed, or represents that its use would not infringe privately owned rights. +Reference herein to any specific commercial product, process, or service by trade name, +trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, +or favoring by the United States Government or any agency thereof, or Battelle Memorial Institute. +The views and opinions of authors expressed herein do not necessarily state or reflect +those of the United States Government or any agency thereof. + + +.. raw:: html + +
+
+         PACIFIC NORTHWEST NATIONAL LABORATORY
+         operated by
+         BATTELLE
+         for the
+         UNITED STATES DEPARTMENT OF ENERGY
+         under Contract DE-AC05-76RL01830
+      
+
+ +License +======= +HyperNetX is released under the 3-Clause BSD license (see :ref:`license`). + + + +.. toctree:: + :maxdepth: 2 + + +.. _HyperNetX: https://github.com/pnnl/HyperNetX +.. _HNX: https://github.com/pnnl/HyperNetX diff --git a/_sources/publications.rst.txt b/_sources/publications.rst.txt new file mode 100644 index 00000000..ef9ad873 --- /dev/null +++ b/_sources/publications.rst.txt @@ -0,0 +1,37 @@ +.. _publications: + +============ +Publications +============ + + +**Joslyn, Cliff A; Aksoy, Sinan; Callahan, Tiffany J; Hunter, LE; Jefferson, Brett; Praggastis, Brenda; Purvine, Emilie AH; Tripodi, Ignacio J: (2021)** `Hypernetwork Science: From Multidimensional Networks to Computational Topology `_, in: *Unifying Themes in Complex Systems X: Proc. 10th Int. Conf. Complex Systems*, ed. D. Braha et al., pp. 377-392, Springer, https://doi.org/10.1007/978-3-030-67318-5_25 + + +**Aksoy, Sinan G; Joslyn, Cliff A; Marrero, Carlos O; Praggastis, B; Purvine, Emilie AH: (2020)** "Hypernetwork Science via High-Order Hypergraph Walks", *EPJ Data Science*, v. **9**:16, +https://doi.org/10.1140/epjds/s13688-020-00231-0 + +**Aksoy, Sinan G; Hagberg, Aric; Joslyn, Cliff A; Kay, Bill; Purvine, Emilie; Young, Stephen J: (2022)** "Models and Methods for Sparse (Hyper)Network Science in Business, Industry, and Government", *Notices of the AMS*, v. **69**:2, pp.
287-291, +https://doi.org/10.1090/noti2424 + +**Feng, Song; Heath, Emily; Jefferson, Brett; Joslyn, CA; Kvinge, Henry; McDermott, Jason E; Mitchell, Hugh D; Praggastis, Brenda; Eisfeld, Amie J; Sims, Amy C; Thackray, Larissa B; Fan, Shufang; Walters, Kevin B; Halfmann, Peter J; Westhoff-Smith, Danielle; Tan, Qing; Menachery, Vineet D; Sheahan, Timothy P; Cockrell, Adam S; Kocher, Jacob F; Stratton, Kelly G; Heller, Natalie C; Bramer, Lisa M; Diamond, Michael S; Baric, Ralph S; Waters, Katrina M; Kawaoka, Yoshihiro; Purvine, Emilie: (2021)** "Hypergraph Models of Biological Networks to Identify Genes Critical to Pathogenic Viral Response", in: *BMC Bioinformatics*, v. **22**:287, +https://doi.org/10.1186/s12859-021-04197-2 + +**Myers, Audun; Joslyn, Cliff A; Kay, Bill; Purvine, EAH; Roek, Gregory; Shapiro, Madelyn: (2023)** "Topological Analysis of Temporal Hypergraphs", in: *Proc. Wshop. on Analysis of the Web Graph (WAW 2023)* https://arxiv.org/abs/2302.02857 and +*2022 SIAM Conf. on Mathematics of Data Science*, https://www.siam.org/Portals/0/Conferences/MDS/MDS22/MDS22_ABSTRACTS.pdf + +**Joslyn, Cliff A; Aksoy, Sinan; Arendt, Dustin; Firoz, J; Jenkins, Louis; Praggastis, Brenda; Purvine, Emilie AH; Zalewski, Marcin: (2020)** "Hypergraph Analytics of Domain Name System Relationships", in: *17th Wshop. on Algorithms and Models for the Web Graph (WAW 2020), Lecture Notes in Computer Science*, v. **12901**, ed. Kaminski, B et al., pp. 1-15, Springer, +https://doi.org/10.1007/978-3-030-48478-1_1 + +**Hayashi, Koby; Aksoy, Sinan G; Park, CH; and Park, Haesun: (2020)** "Hypergraph Random Walks, Laplacians, and Clustering", in: *Proc. 29th ACM Int. Conf. Information and Knowledge Management (CIKM 2020)*, pp. 495-504, ACM, New York, +https://doi.org/10.1145/3340531.3412034 + +**Kay, WW; Aksoy, Sinan G; Baird, Molly; Best, DM; Jenne, Helen; Joslyn, CA; Potvin, CD; Roek, Greg; Seppala, Garrett; Young, Stephen; Purvine, Emilie: (2022)** "Hypergraph Topological Features for Autoencoder-Based Intrusion Detection for Cybersecurity Data", *ML4Cyber Wshop., Int. Conf. Machine Learning 2022*, +https://icml.cc/Conferences/2022/ScheduleMultitrack?event=13458#collapse20252 + +**Liu, Xu T; Firoz, Jesun; Lumsdaine, Andrew; Joslyn, CA; Aksoy, Sinan; Amburg, Ilya; Praggastis, Brenda; Gebremedhin, Assefaw: (2022)** "High-Order Line Graphs of Non-Uniform Hypergraphs: Algorithms, Applications, and Experimental Analysis", *36th IEEE Int. Parallel and Distributed Processing Symp. (IPDPS 22)*, +https://ieeexplore.ieee.org/document/9820632 + +**Liu, Xu T; Firoz, Jesun; Lumsdaine, Andrew; Joslyn, CA; Aksoy, Sinan; Praggastis, Brenda; Gebremedhin, Assefaw: (2021)** "Parallel Algorithms for Efficient Computation of High-Order Line Graphs of Hypergraphs", in: *2021 IEEE 28th International Conference on High Performance Computing, Data, and Analytics (HiPC 2021)*, +https://doi.ieeecomputersociety.org/10.1109/HiPC53243.2021.00045 + diff --git a/_sources/reports/modules.rst.txt b/_sources/reports/modules.rst.txt new file mode 100644 index 00000000..96365041 --- /dev/null +++ b/_sources/reports/modules.rst.txt @@ -0,0 +1,7 @@ +reports +======= + +.. toctree:: + :maxdepth: 4 + + reports diff --git a/_sources/reports/reports.rst.txt b/_sources/reports/reports.rst.txt new file mode 100644 index 00000000..4dde7d91 --- /dev/null +++ b/_sources/reports/reports.rst.txt @@ -0,0 +1,21 @@ +reports package +=============== + +Submodules +---------- + +reports.descriptive\_stats module +--------------------------------- + +.. automodule:: reports.descriptive_stats + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +..
automodule:: reports + :members: + :undoc-members: + :show-inheritance: diff --git a/_sources/widget.rst.txt b/_sources/widget.rst.txt new file mode 100644 index 00000000..3c5ffcdc --- /dev/null +++ b/_sources/widget.rst.txt @@ -0,0 +1,66 @@ +.. _widget: + + +================ +Hypernetx-Widget +================ + +.. image:: images/WidgetScreenShot.png + :width: 300px + :align: right + +Overview +-------- +The HyperNetXWidget_ is an add-on for HNX that extends the built-in visualization +capabilities of HNX to a JavaScript-based interactive visualization. The tool has two main interfaces: +the hypergraph visualization and the nodes & edges panel. +You may `demo the widget here `_ + +Installation +------------ +The HypernetxWidget_ is available on `GitHub `_ and may be +installed using pip: + + >>> pip install hnxwidget + +Using the Tool +-------------- + +Layout +^^^^^^ +The hypergraph visualization is an Euler diagram that shows nodes as circles and hyperedges as outlines +enclosing the nodes/circles they contain. The visualization uses a force-directed optimization to perform +the layout. This algorithm is not perfect and sometimes gives results that the user might want to improve upon. +The visualization allows the user to drag nodes and position them directly at any time. The algorithm will +re-position any nodes that are not specified by the user. Ctrl (Windows) or Command (Mac) clicking a node +will release a pinned node, allowing it to be re-positioned by the algorithm. + +Selection +^^^^^^^^^ +Nodes and edges can be selected by clicking them. Nodes and edges can be selected independently of each other, +i.e., it is possible to select an edge without selecting the nodes it contains. Multiple nodes and edges can +be selected by holding down Shift while clicking. Shift clicking an already selected node will de-select it. +Clicking the background will de-select all nodes and edges. Dragging a selected node will drag all selected +nodes, keeping their relative placement. +Selected nodes can be hidden (having their appearance minimized) or removed completely from the visualization. +Hiding a node or edge will not cause a change in the layout, whereas removing a node or edge will. +The selection can also be expanded. Buttons in the toolbar allow for selecting all nodes contained within selected edges, +and selecting all edges containing any selected nodes. +The toolbar also contains buttons to select all nodes (or edges), un-select all nodes (or edges), +or reverse the selected nodes (or edges). An advanced user might: + +* **Select all nodes not in an edge** by selecting an edge, selecting all nodes in that edge, then reversing the node selection to select every node not in that edge. +* **Traverse the graph** by selecting a start node, then alternating between selecting all edges containing selected nodes and selecting all nodes within selected edges. +* **Pin Everything** by hitting the button to select all nodes, then dragging any node slightly to activate the pinning for all nodes. + +Side Panel +^^^^^^^^^^ +Details on nodes and edges are visible in the side panel. For both nodes and edges, a table shows the node name, degree (or size for edges), its selection state, removed state, and color. These properties can also be controlled directly from this panel. The color of nodes and edges can be set in bulk here as well, for example, coloring by degree. + +Other Features +^^^^^^^^^^^^^^ +Nodes with identical edge membership can be collapsed into a super node, which can be helpful for larger hypergraphs. Dragging any node in a super node will drag the entire super node. This feature is available as a toggle in the nodes panel. + +The hypergraph can also be visualized as a bipartite graph (similar to a traditional node-link diagram). Toggling this feature will preserve the locations of the nodes between the bipartite and the Euler diagrams. + +..
_HypernetxWidget: https://github.com/pnnl/hypernetx-widget diff --git a/_static/_sphinx_javascript_frameworks_compat.js b/_static/_sphinx_javascript_frameworks_compat.js new file mode 100644 index 00000000..81415803 --- /dev/null +++ b/_static/_sphinx_javascript_frameworks_compat.js @@ -0,0 +1,123 @@ +/* Compatability shim for jQuery and underscores.js. + * + * Copyright Sphinx contributors + * Released under the two clause BSD licence + */ + +/** + * small helper function to urldecode strings + * + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL + */ +jQuery.urldecode = function(x) { + if (!x) { + return x + } + return decodeURIComponent(x.replace(/\+/g, ' ')); +}; + +/** + * small helper function to urlencode strings + */ +jQuery.urlencode = encodeURIComponent; + +/** + * This function returns the parsed url parameters of the + * current request. Multiple values per key are supported, + * it will always return arrays of strings for the value parts. + */ +jQuery.getQueryParameters = function(s) { + if (typeof s === 'undefined') + s = document.location.search; + var parts = s.substr(s.indexOf('?') + 1).split('&'); + var result = {}; + for (var i = 0; i < parts.length; i++) { + var tmp = parts[i].split('=', 2); + var key = jQuery.urldecode(tmp[0]); + var value = jQuery.urldecode(tmp[1]); + if (key in result) + result[key].push(value); + else + result[key] = [value]; + } + return result; +}; + +/** + * highlight a given string on a jquery object by wrapping it in + * span elements with the given class name. 
+ */ +jQuery.fn.highlightText = function(text, className) { + function highlight(node, addItems) { + if (node.nodeType === 3) { + var val = node.nodeValue; + var pos = val.toLowerCase().indexOf(text); + if (pos >= 0 && + !jQuery(node.parentNode).hasClass(className) && + !jQuery(node.parentNode).hasClass("nohighlight")) { + var span; + var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.className = className; + } + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + node.parentNode.insertBefore(span, node.parentNode.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling)); + node.nodeValue = val.substr(0, pos); + if (isInSVG) { + var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); + var bbox = node.parentElement.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute('class', className); + addItems.push({ + "parent": node.parentNode, + "target": rect}); + } + } + } + else if (!jQuery(node).is("button, select, textarea")) { + jQuery.each(node.childNodes, function() { + highlight(this, addItems); + }); + } + } + var addItems = []; + var result = this.each(function() { + highlight(this, addItems); + }); + for (var i = 0; i < addItems.length; ++i) { + jQuery(addItems[i].parent).before(addItems[i].target); + } + return result; +}; + +/* + * backward compatibility for jQuery.browser + * This will be supported until firefox bug is fixed. 
+ */ +if (!jQuery.browser) { + jQuery.uaMatch = function(ua) { + ua = ua.toLowerCase(); + + var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || + /(webkit)[ \/]([\w.]+)/.exec(ua) || + /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || + /(msie) ([\w.]+)/.exec(ua) || + ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || + []; + + return { + browser: match[ 1 ] || "", + version: match[ 2 ] || "0" + }; + }; + jQuery.browser = {}; + jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; +} diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 00000000..7577acb1 --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,903 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 
10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + 
margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + +div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + 
+div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 8px 1px 5px; + border-top: 0; + border-left: 0; + 
border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: 
#0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + +dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 
0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + 
background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/check-solid.svg b/_static/check-solid.svg new file mode 100644 index 00000000..92fad4b5 --- /dev/null +++ b/_static/check-solid.svg @@ -0,0 +1,4 @@ + + + + diff --git a/_static/clipboard.min.js b/_static/clipboard.min.js new file mode 100644 index 00000000..54b3c463 --- /dev/null +++ b/_static/clipboard.min.js @@ -0,0 +1,7 @@ +/*! 
+ * clipboard.js v2.0.8 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */ +!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return n={686:function(t,e,n){"use strict";n.d(e,{default:function(){return o}});var e=n(279),i=n.n(e),e=n(370),u=n.n(e),e=n(817),c=n.n(e);function a(t){try{return document.execCommand(t)}catch(t){return}}var f=function(t){t=c()(t);return a("cut"),t};var l=function(t){var e,n,o,r=1 + + + + diff --git a/_static/copybutton.css b/_static/copybutton.css new file mode 100644 index 00000000..f1916ec7 --- /dev/null +++ b/_static/copybutton.css @@ -0,0 +1,94 @@ +/* Copy buttons */ +button.copybtn { + position: absolute; + display: flex; + top: .3em; + right: .3em; + width: 1.7em; + height: 1.7em; + opacity: 0; + transition: opacity 0.3s, border .3s, background-color .3s; + user-select: none; + padding: 0; + border: none; + outline: none; + border-radius: 0.4em; + /* The colors that GitHub uses */ + border: #1b1f2426 1px solid; + background-color: #f6f8fa; + color: #57606a; +} + +button.copybtn.success { + border-color: #22863a; + color: #22863a; +} + +button.copybtn svg { + stroke: currentColor; + width: 1.5em; + height: 1.5em; + padding: 0.1em; +} + +div.highlight { + position: relative; +} + +/* Show the copybutton */ +.highlight:hover button.copybtn, button.copybtn.success { + opacity: 1; +} + +.highlight button.copybtn:hover { + background-color: rgb(235, 235, 235); +} + +.highlight button.copybtn:active { + background-color: rgb(187, 187, 187); +} + +/** + * A minimal CSS-only tooltip copied from: + * https://codepen.io/mildrenben/pen/rVBrpK + * + * To use, write HTML like the following: + * + *

Short

+ */ + .o-tooltip--left { + position: relative; + } + + .o-tooltip--left:after { + opacity: 0; + visibility: hidden; + position: absolute; + content: attr(data-tooltip); + padding: .2em; + font-size: .8em; + left: -.2em; + background: grey; + color: white; + white-space: nowrap; + z-index: 2; + border-radius: 2px; + transform: translateX(-102%) translateY(0); + transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); +} + +.o-tooltip--left:hover:after { + display: block; + opacity: 1; + visibility: visible; + transform: translateX(-100%) translateY(0); + transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); + transition-delay: .5s; +} + +/* By default the copy button shouldn't show up when printing a page */ +@media print { + button.copybtn { + display: none; + } +} diff --git a/_static/copybutton.js b/_static/copybutton.js new file mode 100644 index 00000000..b3987037 --- /dev/null +++ b/_static/copybutton.js @@ -0,0 +1,248 @@ +// Localization support +const messages = { + 'en': { + 'copy': 'Copy', + 'copy_to_clipboard': 'Copy to clipboard', + 'copy_success': 'Copied!', + 'copy_failure': 'Failed to copy', + }, + 'es' : { + 'copy': 'Copiar', + 'copy_to_clipboard': 'Copiar al portapapeles', + 'copy_success': '¡Copiado!', + 'copy_failure': 'Error al copiar', + }, + 'de' : { + 'copy': 'Kopieren', + 'copy_to_clipboard': 'In die Zwischenablage kopieren', + 'copy_success': 'Kopiert!', + 'copy_failure': 'Fehler beim Kopieren', + }, + 'fr' : { + 'copy': 'Copier', + 'copy_to_clipboard': 'Copier dans le presse-papier', + 'copy_success': 'Copié !', + 'copy_failure': 'Échec de la copie', + }, + 'ru': { + 'copy': 'Скопировать', + 'copy_to_clipboard': 'Скопировать в буфер', + 'copy_success': 'Скопировано!', + 'copy_failure': 'Не удалось скопировать', + }, + 'zh-CN': { + 'copy': '复制', + 'copy_to_clipboard': '复制到剪贴板', + 'copy_success': '复制成功!', + 'copy_failure': '复制失败', + 
}, + 'it' : { + 'copy': 'Copiare', + 'copy_to_clipboard': 'Copiato negli appunti', + 'copy_success': 'Copiato!', + 'copy_failure': 'Errore durante la copia', + } +} + +let locale = 'en' +if( document.documentElement.lang !== undefined + && messages[document.documentElement.lang] !== undefined ) { + locale = document.documentElement.lang +} + +let doc_url_root = DOCUMENTATION_OPTIONS.URL_ROOT; +if (doc_url_root == '#') { + doc_url_root = ''; +} + +/** + * SVG files for our copy buttons + */ +let iconCheck = ` + ${messages[locale]['copy_success']} + + +` + +// If the user specified their own SVG use that, otherwise use the default +let iconCopy = ``; +if (!iconCopy) { + iconCopy = ` + ${messages[locale]['copy_to_clipboard']} + + + +` +} + +/** + * Set up copy/paste for code blocks + */ + +const runWhenDOMLoaded = cb => { + if (document.readyState != 'loading') { + cb() + } else if (document.addEventListener) { + document.addEventListener('DOMContentLoaded', cb) + } else { + document.attachEvent('onreadystatechange', function() { + if (document.readyState == 'complete') cb() + }) + } +} + +const codeCellId = index => `codecell${index}` + +// Clears selected text since ClipboardJS will select the text when copying +const clearSelection = () => { + if (window.getSelection) { + window.getSelection().removeAllRanges() + } else if (document.selection) { + document.selection.empty() + } +} + +// Changes tooltip text for a moment, then changes it back +// We want the timeout of our `success` class to be a bit shorter than the +// tooltip and icon change, so that we can hide the icon before changing back. 
+var timeoutIcon = 2000; +var timeoutSuccessClass = 1500; + +const temporarilyChangeTooltip = (el, oldText, newText) => { + el.setAttribute('data-tooltip', newText) + el.classList.add('success') + // Remove success a little bit sooner than we change the tooltip + // So that we can use CSS to hide the copybutton first + setTimeout(() => el.classList.remove('success'), timeoutSuccessClass) + setTimeout(() => el.setAttribute('data-tooltip', oldText), timeoutIcon) +} + +// Changes the copy button icon for two seconds, then changes it back +const temporarilyChangeIcon = (el) => { + el.innerHTML = iconCheck; + setTimeout(() => {el.innerHTML = iconCopy}, timeoutIcon) +} + +const addCopyButtonToCodeCells = () => { + // If ClipboardJS hasn't loaded, wait a bit and try again. This + // happens because we load ClipboardJS asynchronously. + if (window.ClipboardJS === undefined) { + setTimeout(addCopyButtonToCodeCells, 250) + return + } + + // Add copybuttons to all of our code cells + const COPYBUTTON_SELECTOR = 'div.highlight pre'; + const codeCells = document.querySelectorAll(COPYBUTTON_SELECTOR) + codeCells.forEach((codeCell, index) => { + const id = codeCellId(index) + codeCell.setAttribute('id', id) + + const clipboardButton = id => + `` + codeCell.insertAdjacentHTML('afterend', clipboardButton(id)) + }) + +function escapeRegExp(string) { + return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string +} + +/** + * Removes excluded text from a Node. + * + * @param {Node} target Node to filter. + * @param {string} exclude CSS selector of nodes to exclude. + * @returns {DOMString} Text from `target` with text removed. + */ +function filterText(target, exclude) { + const clone = target.cloneNode(true); // clone as to not modify the live DOM + if (exclude) { + // remove excluded nodes + clone.querySelectorAll(exclude).forEach(node => node.remove()); + } + return clone.innerText; +} + +// Callback when a copy button is clicked. 
Will be passed the node that was clicked +// should then grab the text and replace pieces of text that shouldn't be used in output +function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { + var regexp; + var match; + + // Do we check for line continuation characters and "HERE-documents"? + var useLineCont = !!lineContinuationChar + var useHereDoc = !!hereDocDelim + + // create regexp to capture prompt and remaining line + if (isRegexp) { + regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') + } else { + regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') + } + + const outputLines = []; + var promptFound = false; + var gotLineCont = false; + var gotHereDoc = false; + const lineGotPrompt = []; + for (const line of textContent.split('\n')) { + match = line.match(regexp) + if (match || gotLineCont || gotHereDoc) { + promptFound = regexp.test(line) + lineGotPrompt.push(promptFound) + if (removePrompts && promptFound) { + outputLines.push(match[2]) + } else { + outputLines.push(line) + } + gotLineCont = line.endsWith(lineContinuationChar) & useLineCont + if (line.includes(hereDocDelim) & useHereDoc) + gotHereDoc = !gotHereDoc + } else if (!onlyCopyPromptLines) { + outputLines.push(line) + } else if (copyEmptyLines && line.trim() === '') { + outputLines.push(line) + } + } + + // If no lines with the prompt were found then just use original lines + if (lineGotPrompt.some(v => v === true)) { + textContent = outputLines.join('\n'); + } + + // Remove a trailing newline to avoid auto-running when pasting + if (textContent.endsWith("\n")) { + textContent = textContent.slice(0, -1) + } + return textContent +} + + +var copyTargetText = (trigger) => { + var target = document.querySelector(trigger.attributes['data-clipboard-target'].value); + + // get filtered text + let exclude = '.linenos, .gp'; + + let text = 
filterText(target, exclude); + return formatCopyText(text, '', false, true, true, true, '', '') +} + + // Initialize with a callback so we can modify the text before copy + const clipboard = new ClipboardJS('.copybtn', {text: copyTargetText}) + + // Update UI with error/success messages + clipboard.on('success', event => { + clearSelection() + temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_success']) + temporarilyChangeIcon(event.trigger) + }) + + clipboard.on('error', event => { + temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_failure']) + }) +} + +runWhenDOMLoaded(addCopyButtonToCodeCells) \ No newline at end of file diff --git a/_static/copybutton_funcs.js b/_static/copybutton_funcs.js new file mode 100644 index 00000000..dbe1aaad --- /dev/null +++ b/_static/copybutton_funcs.js @@ -0,0 +1,73 @@ +function escapeRegExp(string) { + return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string +} + +/** + * Removes excluded text from a Node. + * + * @param {Node} target Node to filter. + * @param {string} exclude CSS selector of nodes to exclude. + * @returns {DOMString} Text from `target` with text removed. + */ +export function filterText(target, exclude) { + const clone = target.cloneNode(true); // clone as to not modify the live DOM + if (exclude) { + // remove excluded nodes + clone.querySelectorAll(exclude).forEach(node => node.remove()); + } + return clone.innerText; +} + +// Callback when a copy button is clicked. Will be passed the node that was clicked +// should then grab the text and replace pieces of text that shouldn't be used in output +export function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { + var regexp; + var match; + + // Do we check for line continuation characters and "HERE-documents"? 
+ var useLineCont = !!lineContinuationChar + var useHereDoc = !!hereDocDelim + + // create regexp to capture prompt and remaining line + if (isRegexp) { + regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') + } else { + regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') + } + + const outputLines = []; + var promptFound = false; + var gotLineCont = false; + var gotHereDoc = false; + const lineGotPrompt = []; + for (const line of textContent.split('\n')) { + match = line.match(regexp) + if (match || gotLineCont || gotHereDoc) { + promptFound = regexp.test(line) + lineGotPrompt.push(promptFound) + if (removePrompts && promptFound) { + outputLines.push(match[2]) + } else { + outputLines.push(line) + } + gotLineCont = line.endsWith(lineContinuationChar) & useLineCont + if (line.includes(hereDocDelim) & useHereDoc) + gotHereDoc = !gotHereDoc + } else if (!onlyCopyPromptLines) { + outputLines.push(line) + } else if (copyEmptyLines && line.trim() === '') { + outputLines.push(line) + } + } + + // If no lines with the prompt were found then just use original lines + if (lineGotPrompt.some(v => v === true)) { + textContent = outputLines.join('\n'); + } + + // Remove a trailing newline to avoid auto-running when pasting + if (textContent.endsWith("\n")) { + textContent = textContent.slice(0, -1) + } + return textContent +} diff --git a/_static/css/badge_only.css b/_static/css/badge_only.css new file mode 100644 index 00000000..c718cee4 --- /dev/null +++ b/_static/css/badge_only.css @@ -0,0 +1 @@ +.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}@font-face{font-family:FontAwesome;font-style:normal;font-weight:400;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) 
format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#FontAwesome) format("svg")}.fa:before{font-family:FontAwesome;font-style:normal;font-weight:400;line-height:1}.fa:before,a .fa{text-decoration:inherit}.fa:before,a .fa,li .fa{display:inline-block}li .fa-large:before{width:1.875em}ul.fas{list-style-type:none;margin-left:2em;text-indent:-.8em}ul.fas li .fa{width:.8em}ul.fas li .fa-large:before{vertical-align:baseline}.fa-book:before,.icon-book:before{content:"\f02d"}.fa-caret-down:before,.icon-caret-down:before{content:"\f0d7"}.fa-caret-up:before,.icon-caret-up:before{content:"\f0d8"}.fa-caret-left:before,.icon-caret-left:before{content:"\f0d9"}.fa-caret-right:before,.icon-caret-right:before{content:"\f0da"}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60}.rst-versions .rst-current-version:after{clear:both;content:"";display:block}.rst-versions .rst-current-version .fa{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions 
.rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}} \ No newline at end of file diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff b/_static/css/fonts/Roboto-Slab-Bold.woff new file mode 100644 index 00000000..6cb60000 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Bold.woff2 b/_static/css/fonts/Roboto-Slab-Bold.woff2 new file mode 100644 index 00000000..7059e231 Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Bold.woff2 differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff b/_static/css/fonts/Roboto-Slab-Regular.woff new file mode 100644 index 00000000..f815f63f Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff differ diff --git a/_static/css/fonts/Roboto-Slab-Regular.woff2 b/_static/css/fonts/Roboto-Slab-Regular.woff2 new file mode 100644 index 00000000..f2c76e5b Binary files /dev/null and b/_static/css/fonts/Roboto-Slab-Regular.woff2 differ diff --git a/_static/css/fonts/fontawesome-webfont.eot b/_static/css/fonts/fontawesome-webfont.eot new file mode 100644 index 00000000..e9f60ca9 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.eot differ diff --git a/_static/css/fonts/fontawesome-webfont.svg 
b/_static/css/fonts/fontawesome-webfont.svg new file mode 100644 index 00000000..855c845e --- /dev/null +++ b/_static/css/fonts/fontawesome-webfont.svg @@ -0,0 +1,2671 @@ + + + + +Created by FontForge 20120731 at Mon Oct 24 17:37:40 2016 + By ,,, +Copyright Dave Gandy 2016. All rights reserved. + +[2,671 added lines of SVG font glyph markup not reproduced here; the tag content was stripped during extraction] + diff --git a/_static/css/fonts/fontawesome-webfont.ttf b/_static/css/fonts/fontawesome-webfont.ttf new file mode 100644 index 00000000..35acda2f Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.ttf differ diff --git
a/_static/css/fonts/fontawesome-webfont.woff b/_static/css/fonts/fontawesome-webfont.woff new file mode 100644 index 00000000..400014a4 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff differ diff --git a/_static/css/fonts/fontawesome-webfont.woff2 b/_static/css/fonts/fontawesome-webfont.woff2 new file mode 100644 index 00000000..4d13fc60 Binary files /dev/null and b/_static/css/fonts/fontawesome-webfont.woff2 differ diff --git a/_static/css/fonts/lato-bold-italic.woff b/_static/css/fonts/lato-bold-italic.woff new file mode 100644 index 00000000..88ad05b9 Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff differ diff --git a/_static/css/fonts/lato-bold-italic.woff2 b/_static/css/fonts/lato-bold-italic.woff2 new file mode 100644 index 00000000..c4e3d804 Binary files /dev/null and b/_static/css/fonts/lato-bold-italic.woff2 differ diff --git a/_static/css/fonts/lato-bold.woff b/_static/css/fonts/lato-bold.woff new file mode 100644 index 00000000..c6dff51f Binary files /dev/null and b/_static/css/fonts/lato-bold.woff differ diff --git a/_static/css/fonts/lato-bold.woff2 b/_static/css/fonts/lato-bold.woff2 new file mode 100644 index 00000000..bb195043 Binary files /dev/null and b/_static/css/fonts/lato-bold.woff2 differ diff --git a/_static/css/fonts/lato-normal-italic.woff b/_static/css/fonts/lato-normal-italic.woff new file mode 100644 index 00000000..76114bc0 Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff differ diff --git a/_static/css/fonts/lato-normal-italic.woff2 b/_static/css/fonts/lato-normal-italic.woff2 new file mode 100644 index 00000000..3404f37e Binary files /dev/null and b/_static/css/fonts/lato-normal-italic.woff2 differ diff --git a/_static/css/fonts/lato-normal.woff b/_static/css/fonts/lato-normal.woff new file mode 100644 index 00000000..ae1307ff Binary files /dev/null and b/_static/css/fonts/lato-normal.woff differ diff --git a/_static/css/fonts/lato-normal.woff2 
b/_static/css/fonts/lato-normal.woff2 new file mode 100644 index 00000000..3bf98433 Binary files /dev/null and b/_static/css/fonts/lato-normal.woff2 differ diff --git a/_static/css/theme.css b/_static/css/theme.css new file mode 100644 index 00000000..19a446a0 --- /dev/null +++ b/_static/css/theme.css @@ -0,0 +1,4 @@ +html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}article,aside,details,figcaption,figure,footer,header,hgroup,nav,section{display:block}audio,canvas,video{display:inline-block;*display:inline;*zoom:1}[hidden],audio:not([controls]){display:none}*{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}html{font-size:100%;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}body{margin:0}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}blockquote{margin:0}dfn{font-style:italic}ins{background:#ff9;text-decoration:none}ins,mark{color:#000}mark{background:#ff0;font-style:italic;font-weight:700}.rst-content code,.rst-content tt,code,kbd,pre,samp{font-family:monospace,serif;_font-family:courier 
new,monospace;font-size:1em}pre{white-space:pre}q{quotes:none}q:after,q:before{content:"";content:none}small{font-size:85%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}dl,ol,ul{margin:0;padding:0;list-style:none;list-style-image:none}li{list-style:none}dd{margin:0}img{border:0;-ms-interpolation-mode:bicubic;vertical-align:middle;max-width:100%}svg:not(:root){overflow:hidden}figure,form{margin:0}label{cursor:pointer}button,input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}button,input{line-height:normal}button,input[type=button],input[type=reset],input[type=submit]{cursor:pointer;-webkit-appearance:button;*overflow:visible}button[disabled],input[disabled]{cursor:default}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}textarea{resize:vertical}table{border-collapse:collapse;border-spacing:0}td{vertical-align:top}.chromeframe{margin:.2em 0;background:#ccc;color:#000;padding:.2em 0}.ir{display:block;border:0;text-indent:-999em;overflow:hidden;background-color:transparent;background-repeat:no-repeat;text-align:left;direction:ltr;*line-height:0}.ir br{display:none}.hidden{display:none!important;visibility:hidden}.visuallyhidden{border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px}.visuallyhidden.focusable:active,.visuallyhidden.focusable:focus{clip:auto;height:auto;margin:0;overflow:visible;position:static;width:auto}.invisible{visibility:hidden}.relative{position:relative}big,small{font-size:100%}@media print{body,html,section{background:none!important}*{box-shadow:none!important;text-shadow:none!important;filter:none!important;-ms-filter:none!important}a,a:visited{text-decoration:underline}.ir 
a:after,a[href^="#"]:after,a[href^="javascript:"]:after{content:""}blockquote,pre{page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}img{max-width:100%!important}@page{margin:.5cm}.rst-content .toctree-wrapper>p.caption,h2,h3,p{orphans:3;widows:3}.rst-content .toctree-wrapper>p.caption,h2,h3{page-break-after:avoid}}.btn,.fa:before,.icon:before,.rst-content .admonition,.rst-content .admonition-title:before,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .code-block-caption .headerlink:before,.rst-content .danger,.rst-content .eqno .headerlink:before,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-alert,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li 
button.toctree-expand:before,input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week],select,textarea{-webkit-font-smoothing:antialiased}.clearfix{*zoom:1}.clearfix:after,.clearfix:before{display:table;content:""}.clearfix:after{clear:both}/*! + * Font Awesome 4.7.0 by @davegandy - http://fontawesome.io - @fontawesome + * License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License) + */@font-face{font-family:FontAwesome;src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713);src:url(fonts/fontawesome-webfont.eot?674f50d287a8c48dc19ba404d20fe713?#iefix&v=4.7.0) format("embedded-opentype"),url(fonts/fontawesome-webfont.woff2?af7ae505a9eed503f8b8e6982036873e) format("woff2"),url(fonts/fontawesome-webfont.woff?fee66e712a8a08eef5805a46892932ad) format("woff"),url(fonts/fontawesome-webfont.ttf?b06871f281fee6b241d60582ae9369b9) format("truetype"),url(fonts/fontawesome-webfont.svg?912ec66d7572ff821749319396470bde#fontawesomeregular) format("svg");font-weight:400;font-style:normal}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{display:inline-block;font:normal normal normal 14px/1 
FontAwesome;font-size:inherit;text-rendering:auto;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale}.fa-lg{font-size:1.33333em;line-height:.75em;vertical-align:-15%}.fa-2x{font-size:2em}.fa-3x{font-size:3em}.fa-4x{font-size:4em}.fa-5x{font-size:5em}.fa-fw{width:1.28571em;text-align:center}.fa-ul{padding-left:0;margin-left:2.14286em;list-style-type:none}.fa-ul>li{position:relative}.fa-li{position:absolute;left:-2.14286em;width:2.14286em;top:.14286em;text-align:center}.fa-li.fa-lg{left:-1.85714em}.fa-border{padding:.2em .25em .15em;border:.08em solid #eee;border-radius:.1em}.fa-pull-left{float:left}.fa-pull-right{float:right}.fa-pull-left.icon,.fa.fa-pull-left,.rst-content .code-block-caption .fa-pull-left.headerlink,.rst-content .eqno .fa-pull-left.headerlink,.rst-content .fa-pull-left.admonition-title,.rst-content code.download span.fa-pull-left:first-child,.rst-content dl dt .fa-pull-left.headerlink,.rst-content h1 .fa-pull-left.headerlink,.rst-content h2 .fa-pull-left.headerlink,.rst-content h3 .fa-pull-left.headerlink,.rst-content h4 .fa-pull-left.headerlink,.rst-content h5 .fa-pull-left.headerlink,.rst-content h6 .fa-pull-left.headerlink,.rst-content p .fa-pull-left.headerlink,.rst-content table>caption .fa-pull-left.headerlink,.rst-content tt.download span.fa-pull-left:first-child,.wy-menu-vertical li.current>a button.fa-pull-left.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-left.toctree-expand,.wy-menu-vertical li button.fa-pull-left.toctree-expand{margin-right:.3em}.fa-pull-right.icon,.fa.fa-pull-right,.rst-content .code-block-caption .fa-pull-right.headerlink,.rst-content .eqno .fa-pull-right.headerlink,.rst-content .fa-pull-right.admonition-title,.rst-content code.download span.fa-pull-right:first-child,.rst-content dl dt .fa-pull-right.headerlink,.rst-content h1 .fa-pull-right.headerlink,.rst-content h2 .fa-pull-right.headerlink,.rst-content h3 .fa-pull-right.headerlink,.rst-content h4 .fa-pull-right.headerlink,.rst-content 
h5 .fa-pull-right.headerlink,.rst-content h6 .fa-pull-right.headerlink,.rst-content p .fa-pull-right.headerlink,.rst-content table>caption .fa-pull-right.headerlink,.rst-content tt.download span.fa-pull-right:first-child,.wy-menu-vertical li.current>a button.fa-pull-right.toctree-expand,.wy-menu-vertical li.on a button.fa-pull-right.toctree-expand,.wy-menu-vertical li button.fa-pull-right.toctree-expand{margin-left:.3em}.pull-right{float:right}.pull-left{float:left}.fa.pull-left,.pull-left.icon,.rst-content .code-block-caption .pull-left.headerlink,.rst-content .eqno .pull-left.headerlink,.rst-content .pull-left.admonition-title,.rst-content code.download span.pull-left:first-child,.rst-content dl dt .pull-left.headerlink,.rst-content h1 .pull-left.headerlink,.rst-content h2 .pull-left.headerlink,.rst-content h3 .pull-left.headerlink,.rst-content h4 .pull-left.headerlink,.rst-content h5 .pull-left.headerlink,.rst-content h6 .pull-left.headerlink,.rst-content p .pull-left.headerlink,.rst-content table>caption .pull-left.headerlink,.rst-content tt.download span.pull-left:first-child,.wy-menu-vertical li.current>a button.pull-left.toctree-expand,.wy-menu-vertical li.on a button.pull-left.toctree-expand,.wy-menu-vertical li button.pull-left.toctree-expand{margin-right:.3em}.fa.pull-right,.pull-right.icon,.rst-content .code-block-caption .pull-right.headerlink,.rst-content .eqno .pull-right.headerlink,.rst-content .pull-right.admonition-title,.rst-content code.download span.pull-right:first-child,.rst-content dl dt .pull-right.headerlink,.rst-content h1 .pull-right.headerlink,.rst-content h2 .pull-right.headerlink,.rst-content h3 .pull-right.headerlink,.rst-content h4 .pull-right.headerlink,.rst-content h5 .pull-right.headerlink,.rst-content h6 .pull-right.headerlink,.rst-content p .pull-right.headerlink,.rst-content table>caption .pull-right.headerlink,.rst-content tt.download span.pull-right:first-child,.wy-menu-vertical li.current>a 
button.pull-right.toctree-expand,.wy-menu-vertical li.on a button.pull-right.toctree-expand,.wy-menu-vertical li button.pull-right.toctree-expand{margin-left:.3em}.fa-spin{-webkit-animation:fa-spin 2s linear infinite;animation:fa-spin 2s linear infinite}.fa-pulse{-webkit-animation:fa-spin 1s steps(8) infinite;animation:fa-spin 1s steps(8) infinite}@-webkit-keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}@keyframes fa-spin{0%{-webkit-transform:rotate(0deg);transform:rotate(0deg)}to{-webkit-transform:rotate(359deg);transform:rotate(359deg)}}.fa-rotate-90{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";-webkit-transform:rotate(90deg);-ms-transform:rotate(90deg);transform:rotate(90deg)}.fa-rotate-180{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";-webkit-transform:rotate(180deg);-ms-transform:rotate(180deg);transform:rotate(180deg)}.fa-rotate-270{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";-webkit-transform:rotate(270deg);-ms-transform:rotate(270deg);transform:rotate(270deg)}.fa-flip-horizontal{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";-webkit-transform:scaleX(-1);-ms-transform:scaleX(-1);transform:scaleX(-1)}.fa-flip-vertical{-ms-filter:"progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";-webkit-transform:scaleY(-1);-ms-transform:scaleY(-1);transform:scaleY(-1)}:root .fa-flip-horizontal,:root .fa-flip-vertical,:root .fa-rotate-90,:root .fa-rotate-180,:root 
.fa-rotate-270{filter:none}.fa-stack{position:relative;display:inline-block;width:2em;height:2em;line-height:2em;vertical-align:middle}.fa-stack-1x,.fa-stack-2x{position:absolute;left:0;width:100%;text-align:center}.fa-stack-1x{line-height:inherit}.fa-stack-2x{font-size:2em}.fa-inverse{color:#fff}.fa-glass:before{content:""}.fa-music:before{content:""}.fa-search:before,.icon-search:before{content:""}.fa-envelope-o:before{content:""}.fa-heart:before{content:""}.fa-star:before{content:""}.fa-star-o:before{content:""}.fa-user:before{content:""}.fa-film:before{content:""}.fa-th-large:before{content:""}.fa-th:before{content:""}.fa-th-list:before{content:""}.fa-check:before{content:""}.fa-close:before,.fa-remove:before,.fa-times:before{content:""}.fa-search-plus:before{content:""}.fa-search-minus:before{content:""}.fa-power-off:before{content:""}.fa-signal:before{content:""}.fa-cog:before,.fa-gear:before{content:""}.fa-trash-o:before{content:""}.fa-home:before,.icon-home:before{content:""}.fa-file-o:before{content:""}.fa-clock-o:before{content:""}.fa-road:before{content:""}.fa-download:before,.rst-content code.download span:first-child:before,.rst-content tt.download 
span:first-child:before{content:""}.fa-arrow-circle-o-down:before{content:""}.fa-arrow-circle-o-up:before{content:""}.fa-inbox:before{content:""}.fa-play-circle-o:before{content:""}.fa-repeat:before,.fa-rotate-right:before{content:""}.fa-refresh:before{content:""}.fa-list-alt:before{content:""}.fa-lock:before{content:""}.fa-flag:before{content:""}.fa-headphones:before{content:""}.fa-volume-off:before{content:""}.fa-volume-down:before{content:""}.fa-volume-up:before{content:""}.fa-qrcode:before{content:""}.fa-barcode:before{content:""}.fa-tag:before{content:""}.fa-tags:before{content:""}.fa-book:before,.icon-book:before{content:""}.fa-bookmark:before{content:""}.fa-print:before{content:""}.fa-camera:before{content:""}.fa-font:before{content:""}.fa-bold:before{content:""}.fa-italic:before{content:""}.fa-text-height:before{content:""}.fa-text-width:before{content:""}.fa-align-left:before{content:""}.fa-align-center:before{content:""}.fa-align-right:before{content:""}.fa-align-justify:before{content:""}.fa-list:before{content:""}.fa-dedent:before,.fa-outdent:before{content:""}.fa-indent:before{content:""}.fa-video-camera:before{content:""}.fa-image:before,.fa-photo:before,.fa-picture-o:before{content:""}.fa-pencil:before{content:""}.fa-map-marker:before{content:""}.fa-adjust:before{content:""}.fa-tint:before{content:""}.fa-edit:before,.fa-pencil-square-o:before{content:""}.fa-share-square-o:before{content:""}.fa-check-square-o:before{content:""}.fa-arrows:before{content:""}.fa-step-backward:before{content:""}.fa-fast-backward:before{content:""}.fa-backward:before{content:""}.fa-play:before{content:""}.fa-pause:before{content:""}.fa-stop:before{content:""}.fa-forward:before{content:""}.fa-fast-forward:before{content:""}.fa-step-forward:before{content:""}.fa-eject:before{content:""}.fa-chevron-left:before{content:""}.fa-chevron-right:before{content:""}.fa-plus-circle:before{content:""}.fa-minus-circle:before{content
:""}.fa-times-circle:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before{content:""}.fa-check-circle:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before{content:""}.fa-question-circle:before{content:""}.fa-info-circle:before{content:""}.fa-crosshairs:before{content:""}.fa-times-circle-o:before{content:""}.fa-check-circle-o:before{content:""}.fa-ban:before{content:""}.fa-arrow-left:before{content:""}.fa-arrow-right:before{content:""}.fa-arrow-up:before{content:""}.fa-arrow-down:before{content:""}.fa-mail-forward:before,.fa-share:before{content:""}.fa-expand:before{content:""}.fa-compress:before{content:""}.fa-plus:before{content:""}.fa-minus:before{content:""}.fa-asterisk:before{content:""}.fa-exclamation-circle:before,.rst-content .admonition-title:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before{content:""}.fa-gift:before{content:""}.fa-leaf:before{content:""}.fa-fire:before,.icon-fire:before{content:""}.fa-eye:before{content:""}.fa-eye-slash:before{content:""}.fa-exclamation-triangle:before,.fa-warning:before{content:""}.fa-plane:before{content:""}.fa-calendar:before{content:""}.fa-random:before{content:""}.fa-comment:before{content:""}.fa-magnet:before{content:""}.fa-chevron-up:before{content:""}.fa-chevron-down:before{content:""}.fa-retweet:before{content:""}.fa-shopping-cart:before{content:""}.fa-folder:before{content:""}.fa-folder-open:before{content:""}.fa-arrows-v:before{content:""}.fa-arrows-h:before{content:""}.fa-bar-chart-o:before,.fa-bar-chart:before{content:""}.fa-twitter-square:before{content:""}.fa-facebook-square:before{content:""}.fa-camera-retro:before{content:""}.fa-key:before{content:""}.fa-cogs:before,.fa-gears:before{content:""}.fa-comments:before{content:""}.fa-thumbs-o-up:before{content:""}.fa-thumbs-o-down:before{content:""}.fa-star-half:before
{content:""}.fa-heart-o:before{content:""}.fa-sign-out:before{content:""}.fa-linkedin-square:before{content:""}.fa-thumb-tack:before{content:""}.fa-external-link:before{content:""}.fa-sign-in:before{content:""}.fa-trophy:before{content:""}.fa-github-square:before{content:""}.fa-upload:before{content:""}.fa-lemon-o:before{content:""}.fa-phone:before{content:""}.fa-square-o:before{content:""}.fa-bookmark-o:before{content:""}.fa-phone-square:before{content:""}.fa-twitter:before{content:""}.fa-facebook-f:before,.fa-facebook:before{content:""}.fa-github:before,.icon-github:before{content:""}.fa-unlock:before{content:""}.fa-credit-card:before{content:""}.fa-feed:before,.fa-rss:before{content:""}.fa-hdd-o:before{content:""}.fa-bullhorn:before{content:""}.fa-bell:before{content:""}.fa-certificate:before{content:""}.fa-hand-o-right:before{content:""}.fa-hand-o-left:before{content:""}.fa-hand-o-up:before{content:""}.fa-hand-o-down:before{content:""}.fa-arrow-circle-left:before,.icon-circle-arrow-left:before{content:""}.fa-arrow-circle-right:before,.icon-circle-arrow-right:before{content:""}.fa-arrow-circle-up:before{content:""}.fa-arrow-circle-down:before{content:""}.fa-globe:before{content:""}.fa-wrench:before{content:""}.fa-tasks:before{content:""}.fa-filter:before{content:""}.fa-briefcase:before{content:""}.fa-arrows-alt:before{content:""}.fa-group:before,.fa-users:before{content:""}.fa-chain:before,.fa-link:before,.icon-link:before{content:""}.fa-cloud:before{content:""}.fa-flask:before{content:""}.fa-cut:before,.fa-scissors:before{content:""}.fa-copy:before,.fa-files-o:before{content:""}.fa-paperclip:before{content:""}.fa-floppy-o:before,.fa-save:before{content:""}.fa-square:before{content:""}.fa-bars:before,.fa-navicon:before,.fa-reorder:before{content:""}.fa-list-ul:before{content:""}.fa-list-ol:before{content:""}.fa-strikethrough:before{content:""}.fa-underline:before{content:""}.fa-table:before{content:""}.fa-magi
c:before{content:""}.fa-truck:before{content:""}.fa-pinterest:before{content:""}.fa-pinterest-square:before{content:""}.fa-google-plus-square:before{content:""}.fa-google-plus:before{content:""}.fa-money:before{content:""}.fa-caret-down:before,.icon-caret-down:before,.wy-dropdown .caret:before{content:""}.fa-caret-up:before{content:""}.fa-caret-left:before{content:""}.fa-caret-right:before{content:""}.fa-columns:before{content:""}.fa-sort:before,.fa-unsorted:before{content:""}.fa-sort-desc:before,.fa-sort-down:before{content:""}.fa-sort-asc:before,.fa-sort-up:before{content:""}.fa-envelope:before{content:""}.fa-linkedin:before{content:""}.fa-rotate-left:before,.fa-undo:before{content:""}.fa-gavel:before,.fa-legal:before{content:""}.fa-dashboard:before,.fa-tachometer:before{content:""}.fa-comment-o:before{content:""}.fa-comments-o:before{content:""}.fa-bolt:before,.fa-flash:before{content:""}.fa-sitemap:before{content:""}.fa-umbrella:before{content:""}.fa-clipboard:before,.fa-paste:before{content:""}.fa-lightbulb-o:before{content:""}.fa-exchange:before{content:""}.fa-cloud-download:before{content:""}.fa-cloud-upload:before{content:""}.fa-user-md:before{content:""}.fa-stethoscope:before{content:""}.fa-suitcase:before{content:""}.fa-bell-o:before{content:""}.fa-coffee:before{content:""}.fa-cutlery:before{content:""}.fa-file-text-o:before{content:""}.fa-building-o:before{content:""}.fa-hospital-o:before{content:""}.fa-ambulance:before{content:""}.fa-medkit:before{content:""}.fa-fighter-jet:before{content:""}.fa-beer:before{content:""}.fa-h-square:before{content:""}.fa-plus-square:before{content:""}.fa-angle-double-left:before{content:""}.fa-angle-double-right:before{content:""}.fa-angle-double-up:before{content:""}.fa-angle-double-down:before{content:""}.fa-angle-left:before{content:""}.fa-angle-right:before{content:""}.fa-angle-up:before{content:""}.fa-angle-down:before{content:""}.fa-desktop:before{content:""}.fa-l
aptop:before{content:""}.fa-tablet:before{content:""}.fa-mobile-phone:before,.fa-mobile:before{content:""}.fa-circle-o:before{content:""}.fa-quote-left:before{content:""}.fa-quote-right:before{content:""}.fa-spinner:before{content:""}.fa-circle:before{content:""}.fa-mail-reply:before,.fa-reply:before{content:""}.fa-github-alt:before{content:""}.fa-folder-o:before{content:""}.fa-folder-open-o:before{content:""}.fa-smile-o:before{content:""}.fa-frown-o:before{content:""}.fa-meh-o:before{content:""}.fa-gamepad:before{content:""}.fa-keyboard-o:before{content:""}.fa-flag-o:before{content:""}.fa-flag-checkered:before{content:""}.fa-terminal:before{content:""}.fa-code:before{content:""}.fa-mail-reply-all:before,.fa-reply-all:before{content:""}.fa-star-half-empty:before,.fa-star-half-full:before,.fa-star-half-o:before{content:""}.fa-location-arrow:before{content:""}.fa-crop:before{content:""}.fa-code-fork:before{content:""}.fa-chain-broken:before,.fa-unlink:before{content:""}.fa-question:before{content:""}.fa-info:before{content:""}.fa-exclamation:before{content:""}.fa-superscript:before{content:""}.fa-subscript:before{content:""}.fa-eraser:before{content:""}.fa-puzzle-piece:before{content:""}.fa-microphone:before{content:""}.fa-microphone-slash:before{content:""}.fa-shield:before{content:""}.fa-calendar-o:before{content:""}.fa-fire-extinguisher:before{content:""}.fa-rocket:before{content:""}.fa-maxcdn:before{content:""}.fa-chevron-circle-left:before{content:""}.fa-chevron-circle-right:before{content:""}.fa-chevron-circle-up:before{content:""}.fa-chevron-circle-down:before{content:""}.fa-html5:before{content:""}.fa-css3:before{content:""}.fa-anchor:before{content:""}.fa-unlock-alt:before{content:""}.fa-bullseye:before{content:""}.fa-ellipsis-h:before{content:""}.fa-ellipsis-v:before{content:""}.fa-rss-square:before{content:""}.fa-play-circle:before{content:""}.fa-ticket:before{content:""}.fa-minus-square:before{content:
""}.fa-minus-square-o:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before{content:""}.fa-level-up:before{content:""}.fa-level-down:before{content:""}.fa-check-square:before{content:""}.fa-pencil-square:before{content:""}.fa-external-link-square:before{content:""}.fa-share-square:before{content:""}.fa-compass:before{content:""}.fa-caret-square-o-down:before,.fa-toggle-down:before{content:""}.fa-caret-square-o-up:before,.fa-toggle-up:before{content:""}.fa-caret-square-o-right:before,.fa-toggle-right:before{content:""}.fa-eur:before,.fa-euro:before{content:""}.fa-gbp:before{content:""}.fa-dollar:before,.fa-usd:before{content:""}.fa-inr:before,.fa-rupee:before{content:""}.fa-cny:before,.fa-jpy:before,.fa-rmb:before,.fa-yen:before{content:""}.fa-rouble:before,.fa-rub:before,.fa-ruble:before{content:""}.fa-krw:before,.fa-won:before{content:""}.fa-bitcoin:before,.fa-btc:before{content:""}.fa-file:before{content:""}.fa-file-text:before{content:""}.fa-sort-alpha-asc:before{content:""}.fa-sort-alpha-desc:before{content:""}.fa-sort-amount-asc:before{content:""}.fa-sort-amount-desc:before{content:""}.fa-sort-numeric-asc:before{content:""}.fa-sort-numeric-desc:before{content:""}.fa-thumbs-up:before{content:""}.fa-thumbs-down:before{content:""}.fa-youtube-square:before{content:""}.fa-youtube:before{content:""}.fa-xing:before{content:""}.fa-xing-square:before{content:""}.fa-youtube-play:before{content:""}.fa-dropbox:before{content:""}.fa-stack-overflow:before{content:""}.fa-instagram:before{content:""}.fa-flickr:before{content:""}.fa-adn:before{content:""}.fa-bitbucket:before,.icon-bitbucket:before{content:""}.fa-bitbucket-square:before{content:""}.fa-tumblr:before{content:""}.fa-tumblr-square:before{content:""}.fa-long-arrow-down:before{content:""}.fa-long-arrow-up:before{content:""}.fa-long-arrow-left:before{content:""}.fa-long-arrow-right:before{content:""}.fa-a
pple:before{content:""}.fa-windows:before{content:""}.fa-android:before{content:""}.fa-linux:before{content:""}.fa-dribbble:before{content:""}.fa-skype:before{content:""}.fa-foursquare:before{content:""}.fa-trello:before{content:""}.fa-female:before{content:""}.fa-male:before{content:""}.fa-gittip:before,.fa-gratipay:before{content:""}.fa-sun-o:before{content:""}.fa-moon-o:before{content:""}.fa-archive:before{content:""}.fa-bug:before{content:""}.fa-vk:before{content:""}.fa-weibo:before{content:""}.fa-renren:before{content:""}.fa-pagelines:before{content:""}.fa-stack-exchange:before{content:""}.fa-arrow-circle-o-right:before{content:""}.fa-arrow-circle-o-left:before{content:""}.fa-caret-square-o-left:before,.fa-toggle-left:before{content:""}.fa-dot-circle-o:before{content:""}.fa-wheelchair:before{content:""}.fa-vimeo-square:before{content:""}.fa-try:before,.fa-turkish-lira:before{content:""}.fa-plus-square-o:before,.wy-menu-vertical li button.toctree-expand:before{content:""}.fa-space-shuttle:before{content:""}.fa-slack:before{content:""}.fa-envelope-square:before{content:""}.fa-wordpress:before{content:""}.fa-openid:before{content:""}.fa-bank:before,.fa-institution:before,.fa-university:before{content:""}.fa-graduation-cap:before,.fa-mortar-board:before{content:""}.fa-yahoo:before{content:""}.fa-google:before{content:""}.fa-reddit:before{content:""}.fa-reddit-square:before{content:""}.fa-stumbleupon-circle:before{content:""}.fa-stumbleupon:before{content:""}.fa-delicious:before{content:""}.fa-digg:before{content:""}.fa-pied-piper-pp:before{content:""}.fa-pied-piper-alt:before{content:""}.fa-drupal:before{content:""}.fa-joomla:before{content:""}.fa-language:before{content:""}.fa-fax:before{content:""}.fa-building:before{content:""}.fa-child:before{content:""}.fa-paw:before{content:""}.fa-spoon:before{content:""}.fa-cube:before{content:""}.fa-cubes:before{content:""}.fa-behance:before{content:""}.fa-behance-squa
re:before{content:""}.fa-steam:before{content:""}.fa-steam-square:before{content:""}.fa-recycle:before{content:""}.fa-automobile:before,.fa-car:before{content:""}.fa-cab:before,.fa-taxi:before{content:""}.fa-tree:before{content:""}.fa-spotify:before{content:""}.fa-deviantart:before{content:""}.fa-soundcloud:before{content:""}.fa-database:before{content:""}.fa-file-pdf-o:before{content:""}.fa-file-word-o:before{content:""}.fa-file-excel-o:before{content:""}.fa-file-powerpoint-o:before{content:""}.fa-file-image-o:before,.fa-file-photo-o:before,.fa-file-picture-o:before{content:""}.fa-file-archive-o:before,.fa-file-zip-o:before{content:""}.fa-file-audio-o:before,.fa-file-sound-o:before{content:""}.fa-file-movie-o:before,.fa-file-video-o:before{content:""}.fa-file-code-o:before{content:""}.fa-vine:before{content:""}.fa-codepen:before{content:""}.fa-jsfiddle:before{content:""}.fa-life-bouy:before,.fa-life-buoy:before,.fa-life-ring:before,.fa-life-saver:before,.fa-support:before{content:""}.fa-circle-o-notch:before{content:""}.fa-ra:before,.fa-rebel:before,.fa-resistance:before{content:""}.fa-empire:before,.fa-ge:before{content:""}.fa-git-square:before{content:""}.fa-git:before{content:""}.fa-hacker-news:before,.fa-y-combinator-square:before,.fa-yc-square:before{content:""}.fa-tencent-weibo:before{content:""}.fa-qq:before{content:""}.fa-wechat:before,.fa-weixin:before{content:""}.fa-paper-plane:before,.fa-send:before{content:""}.fa-paper-plane-o:before,.fa-send-o:before{content:""}.fa-history:before{content:""}.fa-circle-thin:before{content:""}.fa-header:before{content:""}.fa-paragraph:before{content:""}.fa-sliders:before{content:""}.fa-share-alt:before{content:""}.fa-share-alt-square:before{content:""}.fa-bomb:before{content:""}.fa-futbol-o:before,.fa-soccer-ball-o:before{content:""}.fa-tty:before{content:""}.fa-binoculars:before{content:""}.fa-plug:before{content:""}.fa-slideshare:before{content:""}.fa-twitch:before{conten
t:""}.fa-yelp:before{content:""}.fa-newspaper-o:before{content:""}.fa-wifi:before{content:""}.fa-calculator:before{content:""}.fa-paypal:before{content:""}.fa-google-wallet:before{content:""}.fa-cc-visa:before{content:""}.fa-cc-mastercard:before{content:""}.fa-cc-discover:before{content:""}.fa-cc-amex:before{content:""}.fa-cc-paypal:before{content:""}.fa-cc-stripe:before{content:""}.fa-bell-slash:before{content:""}.fa-bell-slash-o:before{content:""}.fa-trash:before{content:""}.fa-copyright:before{content:""}.fa-at:before{content:""}.fa-eyedropper:before{content:""}.fa-paint-brush:before{content:""}.fa-birthday-cake:before{content:""}.fa-area-chart:before{content:""}.fa-pie-chart:before{content:""}.fa-line-chart:before{content:""}.fa-lastfm:before{content:""}.fa-lastfm-square:before{content:""}.fa-toggle-off:before{content:""}.fa-toggle-on:before{content:""}.fa-bicycle:before{content:""}.fa-bus:before{content:""}.fa-ioxhost:before{content:""}.fa-angellist:before{content:""}.fa-cc:before{content:""}.fa-ils:before,.fa-shekel:before,.fa-sheqel:before{content:""}.fa-meanpath:before{content:""}.fa-buysellads:before{content:""}.fa-connectdevelop:before{content:""}.fa-dashcube:before{content:""}.fa-forumbee:before{content:""}.fa-leanpub:before{content:""}.fa-sellsy:before{content:""}.fa-shirtsinbulk:before{content:""}.fa-simplybuilt:before{content:""}.fa-skyatlas:before{content:""}.fa-cart-plus:before{content:""}.fa-cart-arrow-down:before{content:""}.fa-diamond:before{content:""}.fa-ship:before{content:""}.fa-user-secret:before{content:""}.fa-motorcycle:before{content:""}.fa-street-view:before{content:""}.fa-heartbeat:before{content:""}.fa-venus:before{content:""}.fa-mars:before{content:""}.fa-mercury:before{content:""}.fa-intersex:before,.fa-transgender:before{content:""}.fa-transgender-alt:before{content:""}.fa-venus-double:before{content:""}.fa-mars-double:before{content:""}.fa-venus-mars:before{content:""}.fa-m
ars-stroke:before{content:""}.fa-mars-stroke-v:before{content:""}.fa-mars-stroke-h:before{content:""}.fa-neuter:before{content:""}.fa-genderless:before{content:""}.fa-facebook-official:before{content:""}.fa-pinterest-p:before{content:""}.fa-whatsapp:before{content:""}.fa-server:before{content:""}.fa-user-plus:before{content:""}.fa-user-times:before{content:""}.fa-bed:before,.fa-hotel:before{content:""}.fa-viacoin:before{content:""}.fa-train:before{content:""}.fa-subway:before{content:""}.fa-medium:before{content:""}.fa-y-combinator:before,.fa-yc:before{content:""}.fa-optin-monster:before{content:""}.fa-opencart:before{content:""}.fa-expeditedssl:before{content:""}.fa-battery-4:before,.fa-battery-full:before,.fa-battery:before{content:""}.fa-battery-3:before,.fa-battery-three-quarters:before{content:""}.fa-battery-2:before,.fa-battery-half:before{content:""}.fa-battery-1:before,.fa-battery-quarter:before{content:""}.fa-battery-0:before,.fa-battery-empty:before{content:""}.fa-mouse-pointer:before{content:""}.fa-i-cursor:before{content:""}.fa-object-group:before{content:""}.fa-object-ungroup:before{content:""}.fa-sticky-note:before{content:""}.fa-sticky-note-o:before{content:""}.fa-cc-jcb:before{content:""}.fa-cc-diners-club:before{content:""}.fa-clone:before{content:""}.fa-balance-scale:before{content:""}.fa-hourglass-o:before{content:""}.fa-hourglass-1:before,.fa-hourglass-start:before{content:""}.fa-hourglass-2:before,.fa-hourglass-half:before{content:""}.fa-hourglass-3:before,.fa-hourglass-end:before{content:""}.fa-hourglass:before{content:""}.fa-hand-grab-o:before,.fa-hand-rock-o:before{content:""}.fa-hand-paper-o:before,.fa-hand-stop-o:before{content:""}.fa-hand-scissors-o:before{content:""}.fa-hand-lizard-o:before{content:""}.fa-hand-spock-o:before{content:""}.fa-hand-pointer-o:before{content:""}.fa-hand-peace-o:before{content:""}.fa-trademark:before{content:""}.fa-registered:before{content:""}.fa-creative-commons
:before{content:""}.fa-gg:before{content:""}.fa-gg-circle:before{content:""}.fa-tripadvisor:before{content:""}.fa-odnoklassniki:before{content:""}.fa-odnoklassniki-square:before{content:""}.fa-get-pocket:before{content:""}.fa-wikipedia-w:before{content:""}.fa-safari:before{content:""}.fa-chrome:before{content:""}.fa-firefox:before{content:""}.fa-opera:before{content:""}.fa-internet-explorer:before{content:""}.fa-television:before,.fa-tv:before{content:""}.fa-contao:before{content:""}.fa-500px:before{content:""}.fa-amazon:before{content:""}.fa-calendar-plus-o:before{content:""}.fa-calendar-minus-o:before{content:""}.fa-calendar-times-o:before{content:""}.fa-calendar-check-o:before{content:""}.fa-industry:before{content:""}.fa-map-pin:before{content:""}.fa-map-signs:before{content:""}.fa-map-o:before{content:""}.fa-map:before{content:""}.fa-commenting:before{content:""}.fa-commenting-o:before{content:""}.fa-houzz:before{content:""}.fa-vimeo:before{content:""}.fa-black-tie:before{content:""}.fa-fonticons:before{content:""}.fa-reddit-alien:before{content:""}.fa-edge:before{content:""}.fa-credit-card-alt:before{content:""}.fa-codiepie:before{content:""}.fa-modx:before{content:""}.fa-fort-awesome:before{content:""}.fa-usb:before{content:""}.fa-product-hunt:before{content:""}.fa-mixcloud:before{content:""}.fa-scribd:before{content:""}.fa-pause-circle:before{content:""}.fa-pause-circle-o:before{content:""}.fa-stop-circle:before{content:""}.fa-stop-circle-o:before{content:""}.fa-shopping-bag:before{content:""}.fa-shopping-basket:before{content:""}.fa-hashtag:before{content:""}.fa-bluetooth:before{content:""}.fa-bluetooth-b:before{content:""}.fa-percent:before{content:""}.fa-gitlab:before,.icon-gitlab:before{content:""}.fa-wpbeginner:before{content:""}.fa-wpforms:before{content:""}.fa-envira:before{content:""}.fa-universal-access:before{content:""}.fa-wheelchair-alt:before{content:""}.fa-question-circle-o:before{conten
t:""}.fa-blind:before{content:""}.fa-audio-description:before{content:""}.fa-volume-control-phone:before{content:""}.fa-braille:before{content:""}.fa-assistive-listening-systems:before{content:""}.fa-american-sign-language-interpreting:before,.fa-asl-interpreting:before{content:""}.fa-deaf:before,.fa-deafness:before,.fa-hard-of-hearing:before{content:""}.fa-glide:before{content:""}.fa-glide-g:before{content:""}.fa-sign-language:before,.fa-signing:before{content:""}.fa-low-vision:before{content:""}.fa-viadeo:before{content:""}.fa-viadeo-square:before{content:""}.fa-snapchat:before{content:""}.fa-snapchat-ghost:before{content:""}.fa-snapchat-square:before{content:""}.fa-pied-piper:before{content:""}.fa-first-order:before{content:""}.fa-yoast:before{content:""}.fa-themeisle:before{content:""}.fa-google-plus-circle:before,.fa-google-plus-official:before{content:""}.fa-fa:before,.fa-font-awesome:before{content:""}.fa-handshake-o:before{content:""}.fa-envelope-open:before{content:""}.fa-envelope-open-o:before{content:""}.fa-linode:before{content:""}.fa-address-book:before{content:""}.fa-address-book-o:before{content:""}.fa-address-card:before,.fa-vcard:before{content:""}.fa-address-card-o:before,.fa-vcard-o:before{content:""}.fa-user-circle:before{content:""}.fa-user-circle-o:before{content:""}.fa-user-o:before{content:""}.fa-id-badge:before{content:""}.fa-drivers-license:before,.fa-id-card:before{content:""}.fa-drivers-license-o:before,.fa-id-card-o:before{content:""}.fa-quora:before{content:""}.fa-free-code-camp:before{content:""}.fa-telegram:before{content:""}.fa-thermometer-4:before,.fa-thermometer-full:before,.fa-thermometer:before{content:""}.fa-thermometer-3:before,.fa-thermometer-three-quarters:before{content:""}.fa-thermometer-2:before,.fa-thermometer-half:before{content:""}.fa-thermometer-1:before,.fa-thermometer-quarter:before{content:""}.fa-thermometer-0:before,.fa-thermometer-empty:before{content:""}.fa-shower:befo
re{content:""}.fa-bath:before,.fa-bathtub:before,.fa-s15:before{content:""}.fa-podcast:before{content:""}.fa-window-maximize:before{content:""}.fa-window-minimize:before{content:""}.fa-window-restore:before{content:""}.fa-times-rectangle:before,.fa-window-close:before{content:""}.fa-times-rectangle-o:before,.fa-window-close-o:before{content:""}.fa-bandcamp:before{content:""}.fa-grav:before{content:""}.fa-etsy:before{content:""}.fa-imdb:before{content:""}.fa-ravelry:before{content:""}.fa-eercast:before{content:""}.fa-microchip:before{content:""}.fa-snowflake-o:before{content:""}.fa-superpowers:before{content:""}.fa-wpexplorer:before{content:""}.fa-meetup:before{content:""}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;margin:0;overflow:visible;clip:auto}.fa,.icon,.rst-content .admonition-title,.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content code.download span:first-child,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 .headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink,.rst-content tt.download span:first-child,.wy-dropdown .caret,.wy-inline-validate.wy-inline-validate-danger .wy-input-context,.wy-inline-validate.wy-inline-validate-info .wy-input-context,.wy-inline-validate.wy-inline-validate-success .wy-input-context,.wy-inline-validate.wy-inline-validate-warning .wy-input-context,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li button.toctree-expand{font-family:inherit}.fa:before,.icon:before,.rst-content .admonition-title:before,.rst-content .code-block-caption 
.headerlink:before,.rst-content .eqno .headerlink:before,.rst-content code.download span:first-child:before,.rst-content dl dt .headerlink:before,.rst-content h1 .headerlink:before,.rst-content h2 .headerlink:before,.rst-content h3 .headerlink:before,.rst-content h4 .headerlink:before,.rst-content h5 .headerlink:before,.rst-content h6 .headerlink:before,.rst-content p.caption .headerlink:before,.rst-content p .headerlink:before,.rst-content table>caption .headerlink:before,.rst-content tt.download span:first-child:before,.wy-dropdown .caret:before,.wy-inline-validate.wy-inline-validate-danger .wy-input-context:before,.wy-inline-validate.wy-inline-validate-info .wy-input-context:before,.wy-inline-validate.wy-inline-validate-success .wy-input-context:before,.wy-inline-validate.wy-inline-validate-warning .wy-input-context:before,.wy-menu-vertical li.current>a button.toctree-expand:before,.wy-menu-vertical li.on a button.toctree-expand:before,.wy-menu-vertical li button.toctree-expand:before{font-family:FontAwesome;display:inline-block;font-style:normal;font-weight:400;line-height:1;text-decoration:inherit}.rst-content .code-block-caption a .headerlink,.rst-content .eqno a .headerlink,.rst-content a .admonition-title,.rst-content code.download a span:first-child,.rst-content dl dt a .headerlink,.rst-content h1 a .headerlink,.rst-content h2 a .headerlink,.rst-content h3 a .headerlink,.rst-content h4 a .headerlink,.rst-content h5 a .headerlink,.rst-content h6 a .headerlink,.rst-content p.caption a .headerlink,.rst-content p a .headerlink,.rst-content table>caption a .headerlink,.rst-content tt.download a span:first-child,.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a button.toctree-expand,.wy-menu-vertical li a button.toctree-expand,a .fa,a .icon,a .rst-content .admonition-title,a .rst-content .code-block-caption .headerlink,a .rst-content .eqno .headerlink,a .rst-content code.download span:first-child,a .rst-content dl dt .headerlink,a 
.rst-content h1 .headerlink,a .rst-content h2 .headerlink,a .rst-content h3 .headerlink,a .rst-content h4 .headerlink,a .rst-content h5 .headerlink,a .rst-content h6 .headerlink,a .rst-content p.caption .headerlink,a .rst-content p .headerlink,a .rst-content table>caption .headerlink,a .rst-content tt.download span:first-child,a .wy-menu-vertical li button.toctree-expand{display:inline-block;text-decoration:inherit}.btn .fa,.btn .icon,.btn .rst-content .admonition-title,.btn .rst-content .code-block-caption .headerlink,.btn .rst-content .eqno .headerlink,.btn .rst-content code.download span:first-child,.btn .rst-content dl dt .headerlink,.btn .rst-content h1 .headerlink,.btn .rst-content h2 .headerlink,.btn .rst-content h3 .headerlink,.btn .rst-content h4 .headerlink,.btn .rst-content h5 .headerlink,.btn .rst-content h6 .headerlink,.btn .rst-content p .headerlink,.btn .rst-content table>caption .headerlink,.btn .rst-content tt.download span:first-child,.btn .wy-menu-vertical li.current>a button.toctree-expand,.btn .wy-menu-vertical li.on a button.toctree-expand,.btn .wy-menu-vertical li button.toctree-expand,.nav .fa,.nav .icon,.nav .rst-content .admonition-title,.nav .rst-content .code-block-caption .headerlink,.nav .rst-content .eqno .headerlink,.nav .rst-content code.download span:first-child,.nav .rst-content dl dt .headerlink,.nav .rst-content h1 .headerlink,.nav .rst-content h2 .headerlink,.nav .rst-content h3 .headerlink,.nav .rst-content h4 .headerlink,.nav .rst-content h5 .headerlink,.nav .rst-content h6 .headerlink,.nav .rst-content p .headerlink,.nav .rst-content table>caption .headerlink,.nav .rst-content tt.download span:first-child,.nav .wy-menu-vertical li.current>a button.toctree-expand,.nav .wy-menu-vertical li.on a button.toctree-expand,.nav .wy-menu-vertical li button.toctree-expand,.rst-content .btn .admonition-title,.rst-content .code-block-caption .btn .headerlink,.rst-content .code-block-caption .nav .headerlink,.rst-content .eqno .btn 
.headerlink,.rst-content .eqno .nav .headerlink,.rst-content .nav .admonition-title,.rst-content code.download .btn span:first-child,.rst-content code.download .nav span:first-child,.rst-content dl dt .btn .headerlink,.rst-content dl dt .nav .headerlink,.rst-content h1 .btn .headerlink,.rst-content h1 .nav .headerlink,.rst-content h2 .btn .headerlink,.rst-content h2 .nav .headerlink,.rst-content h3 .btn .headerlink,.rst-content h3 .nav .headerlink,.rst-content h4 .btn .headerlink,.rst-content h4 .nav .headerlink,.rst-content h5 .btn .headerlink,.rst-content h5 .nav .headerlink,.rst-content h6 .btn .headerlink,.rst-content h6 .nav .headerlink,.rst-content p .btn .headerlink,.rst-content p .nav .headerlink,.rst-content table>caption .btn .headerlink,.rst-content table>caption .nav .headerlink,.rst-content tt.download .btn span:first-child,.rst-content tt.download .nav span:first-child,.wy-menu-vertical li .btn button.toctree-expand,.wy-menu-vertical li.current>a .btn button.toctree-expand,.wy-menu-vertical li.current>a .nav button.toctree-expand,.wy-menu-vertical li .nav button.toctree-expand,.wy-menu-vertical li.on a .btn button.toctree-expand,.wy-menu-vertical li.on a .nav button.toctree-expand{display:inline}.btn .fa-large.icon,.btn .fa.fa-large,.btn .rst-content .code-block-caption .fa-large.headerlink,.btn .rst-content .eqno .fa-large.headerlink,.btn .rst-content .fa-large.admonition-title,.btn .rst-content code.download span.fa-large:first-child,.btn .rst-content dl dt .fa-large.headerlink,.btn .rst-content h1 .fa-large.headerlink,.btn .rst-content h2 .fa-large.headerlink,.btn .rst-content h3 .fa-large.headerlink,.btn .rst-content h4 .fa-large.headerlink,.btn .rst-content h5 .fa-large.headerlink,.btn .rst-content h6 .fa-large.headerlink,.btn .rst-content p .fa-large.headerlink,.btn .rst-content table>caption .fa-large.headerlink,.btn .rst-content tt.download span.fa-large:first-child,.btn .wy-menu-vertical li button.fa-large.toctree-expand,.nav 
.fa-large.icon,.nav .fa.fa-large,.nav .rst-content .code-block-caption .fa-large.headerlink,.nav .rst-content .eqno .fa-large.headerlink,.nav .rst-content .fa-large.admonition-title,.nav .rst-content code.download span.fa-large:first-child,.nav .rst-content dl dt .fa-large.headerlink,.nav .rst-content h1 .fa-large.headerlink,.nav .rst-content h2 .fa-large.headerlink,.nav .rst-content h3 .fa-large.headerlink,.nav .rst-content h4 .fa-large.headerlink,.nav .rst-content h5 .fa-large.headerlink,.nav .rst-content h6 .fa-large.headerlink,.nav .rst-content p .fa-large.headerlink,.nav .rst-content table>caption .fa-large.headerlink,.nav .rst-content tt.download span.fa-large:first-child,.nav .wy-menu-vertical li button.fa-large.toctree-expand,.rst-content .btn .fa-large.admonition-title,.rst-content .code-block-caption .btn .fa-large.headerlink,.rst-content .code-block-caption .nav .fa-large.headerlink,.rst-content .eqno .btn .fa-large.headerlink,.rst-content .eqno .nav .fa-large.headerlink,.rst-content .nav .fa-large.admonition-title,.rst-content code.download .btn span.fa-large:first-child,.rst-content code.download .nav span.fa-large:first-child,.rst-content dl dt .btn .fa-large.headerlink,.rst-content dl dt .nav .fa-large.headerlink,.rst-content h1 .btn .fa-large.headerlink,.rst-content h1 .nav .fa-large.headerlink,.rst-content h2 .btn .fa-large.headerlink,.rst-content h2 .nav .fa-large.headerlink,.rst-content h3 .btn .fa-large.headerlink,.rst-content h3 .nav .fa-large.headerlink,.rst-content h4 .btn .fa-large.headerlink,.rst-content h4 .nav .fa-large.headerlink,.rst-content h5 .btn .fa-large.headerlink,.rst-content h5 .nav .fa-large.headerlink,.rst-content h6 .btn .fa-large.headerlink,.rst-content h6 .nav .fa-large.headerlink,.rst-content p .btn .fa-large.headerlink,.rst-content p .nav .fa-large.headerlink,.rst-content table>caption .btn .fa-large.headerlink,.rst-content table>caption .nav .fa-large.headerlink,.rst-content tt.download .btn 
span.fa-large:first-child,.rst-content tt.download .nav span.fa-large:first-child,.wy-menu-vertical li .btn button.fa-large.toctree-expand,.wy-menu-vertical li .nav button.fa-large.toctree-expand{line-height:.9em}.btn .fa-spin.icon,.btn .fa.fa-spin,.btn .rst-content .code-block-caption .fa-spin.headerlink,.btn .rst-content .eqno .fa-spin.headerlink,.btn .rst-content .fa-spin.admonition-title,.btn .rst-content code.download span.fa-spin:first-child,.btn .rst-content dl dt .fa-spin.headerlink,.btn .rst-content h1 .fa-spin.headerlink,.btn .rst-content h2 .fa-spin.headerlink,.btn .rst-content h3 .fa-spin.headerlink,.btn .rst-content h4 .fa-spin.headerlink,.btn .rst-content h5 .fa-spin.headerlink,.btn .rst-content h6 .fa-spin.headerlink,.btn .rst-content p .fa-spin.headerlink,.btn .rst-content table>caption .fa-spin.headerlink,.btn .rst-content tt.download span.fa-spin:first-child,.btn .wy-menu-vertical li button.fa-spin.toctree-expand,.nav .fa-spin.icon,.nav .fa.fa-spin,.nav .rst-content .code-block-caption .fa-spin.headerlink,.nav .rst-content .eqno .fa-spin.headerlink,.nav .rst-content .fa-spin.admonition-title,.nav .rst-content code.download span.fa-spin:first-child,.nav .rst-content dl dt .fa-spin.headerlink,.nav .rst-content h1 .fa-spin.headerlink,.nav .rst-content h2 .fa-spin.headerlink,.nav .rst-content h3 .fa-spin.headerlink,.nav .rst-content h4 .fa-spin.headerlink,.nav .rst-content h5 .fa-spin.headerlink,.nav .rst-content h6 .fa-spin.headerlink,.nav .rst-content p .fa-spin.headerlink,.nav .rst-content table>caption .fa-spin.headerlink,.nav .rst-content tt.download span.fa-spin:first-child,.nav .wy-menu-vertical li button.fa-spin.toctree-expand,.rst-content .btn .fa-spin.admonition-title,.rst-content .code-block-caption .btn .fa-spin.headerlink,.rst-content .code-block-caption .nav .fa-spin.headerlink,.rst-content .eqno .btn .fa-spin.headerlink,.rst-content .eqno .nav .fa-spin.headerlink,.rst-content .nav .fa-spin.admonition-title,.rst-content code.download 
.btn span.fa-spin:first-child,.rst-content code.download .nav span.fa-spin:first-child,.rst-content dl dt .btn .fa-spin.headerlink,.rst-content dl dt .nav .fa-spin.headerlink,.rst-content h1 .btn .fa-spin.headerlink,.rst-content h1 .nav .fa-spin.headerlink,.rst-content h2 .btn .fa-spin.headerlink,.rst-content h2 .nav .fa-spin.headerlink,.rst-content h3 .btn .fa-spin.headerlink,.rst-content h3 .nav .fa-spin.headerlink,.rst-content h4 .btn .fa-spin.headerlink,.rst-content h4 .nav .fa-spin.headerlink,.rst-content h5 .btn .fa-spin.headerlink,.rst-content h5 .nav .fa-spin.headerlink,.rst-content h6 .btn .fa-spin.headerlink,.rst-content h6 .nav .fa-spin.headerlink,.rst-content p .btn .fa-spin.headerlink,.rst-content p .nav .fa-spin.headerlink,.rst-content table>caption .btn .fa-spin.headerlink,.rst-content table>caption .nav .fa-spin.headerlink,.rst-content tt.download .btn span.fa-spin:first-child,.rst-content tt.download .nav span.fa-spin:first-child,.wy-menu-vertical li .btn button.fa-spin.toctree-expand,.wy-menu-vertical li .nav button.fa-spin.toctree-expand{display:inline-block}.btn.fa:before,.btn.icon:before,.rst-content .btn.admonition-title:before,.rst-content .code-block-caption .btn.headerlink:before,.rst-content .eqno .btn.headerlink:before,.rst-content code.download span.btn:first-child:before,.rst-content dl dt .btn.headerlink:before,.rst-content h1 .btn.headerlink:before,.rst-content h2 .btn.headerlink:before,.rst-content h3 .btn.headerlink:before,.rst-content h4 .btn.headerlink:before,.rst-content h5 .btn.headerlink:before,.rst-content h6 .btn.headerlink:before,.rst-content p .btn.headerlink:before,.rst-content table>caption .btn.headerlink:before,.rst-content tt.download span.btn:first-child:before,.wy-menu-vertical li button.btn.toctree-expand:before{opacity:.5;-webkit-transition:opacity .05s ease-in;-moz-transition:opacity .05s ease-in;transition:opacity .05s ease-in}.btn.fa:hover:before,.btn.icon:hover:before,.rst-content 
.btn.admonition-title:hover:before,.rst-content .code-block-caption .btn.headerlink:hover:before,.rst-content .eqno .btn.headerlink:hover:before,.rst-content code.download span.btn:first-child:hover:before,.rst-content dl dt .btn.headerlink:hover:before,.rst-content h1 .btn.headerlink:hover:before,.rst-content h2 .btn.headerlink:hover:before,.rst-content h3 .btn.headerlink:hover:before,.rst-content h4 .btn.headerlink:hover:before,.rst-content h5 .btn.headerlink:hover:before,.rst-content h6 .btn.headerlink:hover:before,.rst-content p .btn.headerlink:hover:before,.rst-content table>caption .btn.headerlink:hover:before,.rst-content tt.download span.btn:first-child:hover:before,.wy-menu-vertical li button.btn.toctree-expand:hover:before{opacity:1}.btn-mini .fa:before,.btn-mini .icon:before,.btn-mini .rst-content .admonition-title:before,.btn-mini .rst-content .code-block-caption .headerlink:before,.btn-mini .rst-content .eqno .headerlink:before,.btn-mini .rst-content code.download span:first-child:before,.btn-mini .rst-content dl dt .headerlink:before,.btn-mini .rst-content h1 .headerlink:before,.btn-mini .rst-content h2 .headerlink:before,.btn-mini .rst-content h3 .headerlink:before,.btn-mini .rst-content h4 .headerlink:before,.btn-mini .rst-content h5 .headerlink:before,.btn-mini .rst-content h6 .headerlink:before,.btn-mini .rst-content p .headerlink:before,.btn-mini .rst-content table>caption .headerlink:before,.btn-mini .rst-content tt.download span:first-child:before,.btn-mini .wy-menu-vertical li button.toctree-expand:before,.rst-content .btn-mini .admonition-title:before,.rst-content .code-block-caption .btn-mini .headerlink:before,.rst-content .eqno .btn-mini .headerlink:before,.rst-content code.download .btn-mini span:first-child:before,.rst-content dl dt .btn-mini .headerlink:before,.rst-content h1 .btn-mini .headerlink:before,.rst-content h2 .btn-mini .headerlink:before,.rst-content h3 .btn-mini .headerlink:before,.rst-content h4 .btn-mini 
.headerlink:before,.rst-content h5 .btn-mini .headerlink:before,.rst-content h6 .btn-mini .headerlink:before,.rst-content p .btn-mini .headerlink:before,.rst-content table>caption .btn-mini .headerlink:before,.rst-content tt.download .btn-mini span:first-child:before,.wy-menu-vertical li .btn-mini button.toctree-expand:before{font-size:14px;vertical-align:-15%}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning,.wy-alert{padding:12px;line-height:24px;margin-bottom:24px;background:#e7f2fa}.rst-content .admonition-title,.wy-alert-title{font-weight:700;display:block;color:#fff;background:#6ab0de;padding:6px 12px;margin:-12px -12px 12px}.rst-content .danger,.rst-content .error,.rst-content .wy-alert-danger.admonition,.rst-content .wy-alert-danger.admonition-todo,.rst-content .wy-alert-danger.attention,.rst-content .wy-alert-danger.caution,.rst-content .wy-alert-danger.hint,.rst-content .wy-alert-danger.important,.rst-content .wy-alert-danger.note,.rst-content .wy-alert-danger.seealso,.rst-content .wy-alert-danger.tip,.rst-content .wy-alert-danger.warning,.wy-alert.wy-alert-danger{background:#fdf3f2}.rst-content .danger .admonition-title,.rst-content .danger .wy-alert-title,.rst-content .error .admonition-title,.rst-content .error .wy-alert-title,.rst-content .wy-alert-danger.admonition-todo .admonition-title,.rst-content .wy-alert-danger.admonition-todo .wy-alert-title,.rst-content .wy-alert-danger.admonition .admonition-title,.rst-content .wy-alert-danger.admonition .wy-alert-title,.rst-content .wy-alert-danger.attention .admonition-title,.rst-content .wy-alert-danger.attention .wy-alert-title,.rst-content .wy-alert-danger.caution .admonition-title,.rst-content .wy-alert-danger.caution .wy-alert-title,.rst-content .wy-alert-danger.hint 
.admonition-title,.rst-content .wy-alert-danger.hint .wy-alert-title,.rst-content .wy-alert-danger.important .admonition-title,.rst-content .wy-alert-danger.important .wy-alert-title,.rst-content .wy-alert-danger.note .admonition-title,.rst-content .wy-alert-danger.note .wy-alert-title,.rst-content .wy-alert-danger.seealso .admonition-title,.rst-content .wy-alert-danger.seealso .wy-alert-title,.rst-content .wy-alert-danger.tip .admonition-title,.rst-content .wy-alert-danger.tip .wy-alert-title,.rst-content .wy-alert-danger.warning .admonition-title,.rst-content .wy-alert-danger.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-danger .admonition-title,.wy-alert.wy-alert-danger .rst-content .admonition-title,.wy-alert.wy-alert-danger .wy-alert-title{background:#f29f97}.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .warning,.rst-content .wy-alert-warning.admonition,.rst-content .wy-alert-warning.danger,.rst-content .wy-alert-warning.error,.rst-content .wy-alert-warning.hint,.rst-content .wy-alert-warning.important,.rst-content .wy-alert-warning.note,.rst-content .wy-alert-warning.seealso,.rst-content .wy-alert-warning.tip,.wy-alert.wy-alert-warning{background:#ffedcc}.rst-content .admonition-todo .admonition-title,.rst-content .admonition-todo .wy-alert-title,.rst-content .attention .admonition-title,.rst-content .attention .wy-alert-title,.rst-content .caution .admonition-title,.rst-content .caution .wy-alert-title,.rst-content .warning .admonition-title,.rst-content .warning .wy-alert-title,.rst-content .wy-alert-warning.admonition .admonition-title,.rst-content .wy-alert-warning.admonition .wy-alert-title,.rst-content .wy-alert-warning.danger .admonition-title,.rst-content .wy-alert-warning.danger .wy-alert-title,.rst-content .wy-alert-warning.error .admonition-title,.rst-content .wy-alert-warning.error .wy-alert-title,.rst-content .wy-alert-warning.hint .admonition-title,.rst-content .wy-alert-warning.hint 
.wy-alert-title,.rst-content .wy-alert-warning.important .admonition-title,.rst-content .wy-alert-warning.important .wy-alert-title,.rst-content .wy-alert-warning.note .admonition-title,.rst-content .wy-alert-warning.note .wy-alert-title,.rst-content .wy-alert-warning.seealso .admonition-title,.rst-content .wy-alert-warning.seealso .wy-alert-title,.rst-content .wy-alert-warning.tip .admonition-title,.rst-content .wy-alert-warning.tip .wy-alert-title,.rst-content .wy-alert.wy-alert-warning .admonition-title,.wy-alert.wy-alert-warning .rst-content .admonition-title,.wy-alert.wy-alert-warning .wy-alert-title{background:#f0b37e}.rst-content .note,.rst-content .seealso,.rst-content .wy-alert-info.admonition,.rst-content .wy-alert-info.admonition-todo,.rst-content .wy-alert-info.attention,.rst-content .wy-alert-info.caution,.rst-content .wy-alert-info.danger,.rst-content .wy-alert-info.error,.rst-content .wy-alert-info.hint,.rst-content .wy-alert-info.important,.rst-content .wy-alert-info.tip,.rst-content .wy-alert-info.warning,.wy-alert.wy-alert-info{background:#e7f2fa}.rst-content .note .admonition-title,.rst-content .note .wy-alert-title,.rst-content .seealso .admonition-title,.rst-content .seealso .wy-alert-title,.rst-content .wy-alert-info.admonition-todo .admonition-title,.rst-content .wy-alert-info.admonition-todo .wy-alert-title,.rst-content .wy-alert-info.admonition .admonition-title,.rst-content .wy-alert-info.admonition .wy-alert-title,.rst-content .wy-alert-info.attention .admonition-title,.rst-content .wy-alert-info.attention .wy-alert-title,.rst-content .wy-alert-info.caution .admonition-title,.rst-content .wy-alert-info.caution .wy-alert-title,.rst-content .wy-alert-info.danger .admonition-title,.rst-content .wy-alert-info.danger .wy-alert-title,.rst-content .wy-alert-info.error .admonition-title,.rst-content .wy-alert-info.error .wy-alert-title,.rst-content .wy-alert-info.hint .admonition-title,.rst-content .wy-alert-info.hint .wy-alert-title,.rst-content 
.wy-alert-info.important .admonition-title,.rst-content .wy-alert-info.important .wy-alert-title,.rst-content .wy-alert-info.tip .admonition-title,.rst-content .wy-alert-info.tip .wy-alert-title,.rst-content .wy-alert-info.warning .admonition-title,.rst-content .wy-alert-info.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-info .admonition-title,.wy-alert.wy-alert-info .rst-content .admonition-title,.wy-alert.wy-alert-info .wy-alert-title{background:#6ab0de}.rst-content .hint,.rst-content .important,.rst-content .tip,.rst-content .wy-alert-success.admonition,.rst-content .wy-alert-success.admonition-todo,.rst-content .wy-alert-success.attention,.rst-content .wy-alert-success.caution,.rst-content .wy-alert-success.danger,.rst-content .wy-alert-success.error,.rst-content .wy-alert-success.note,.rst-content .wy-alert-success.seealso,.rst-content .wy-alert-success.warning,.wy-alert.wy-alert-success{background:#dbfaf4}.rst-content .hint .admonition-title,.rst-content .hint .wy-alert-title,.rst-content .important .admonition-title,.rst-content .important .wy-alert-title,.rst-content .tip .admonition-title,.rst-content .tip .wy-alert-title,.rst-content .wy-alert-success.admonition-todo .admonition-title,.rst-content .wy-alert-success.admonition-todo .wy-alert-title,.rst-content .wy-alert-success.admonition .admonition-title,.rst-content .wy-alert-success.admonition .wy-alert-title,.rst-content .wy-alert-success.attention .admonition-title,.rst-content .wy-alert-success.attention .wy-alert-title,.rst-content .wy-alert-success.caution .admonition-title,.rst-content .wy-alert-success.caution .wy-alert-title,.rst-content .wy-alert-success.danger .admonition-title,.rst-content .wy-alert-success.danger .wy-alert-title,.rst-content .wy-alert-success.error .admonition-title,.rst-content .wy-alert-success.error .wy-alert-title,.rst-content .wy-alert-success.note .admonition-title,.rst-content .wy-alert-success.note .wy-alert-title,.rst-content .wy-alert-success.seealso 
.admonition-title,.rst-content .wy-alert-success.seealso .wy-alert-title,.rst-content .wy-alert-success.warning .admonition-title,.rst-content .wy-alert-success.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-success .admonition-title,.wy-alert.wy-alert-success .rst-content .admonition-title,.wy-alert.wy-alert-success .wy-alert-title{background:#1abc9c}.rst-content .wy-alert-neutral.admonition,.rst-content .wy-alert-neutral.admonition-todo,.rst-content .wy-alert-neutral.attention,.rst-content .wy-alert-neutral.caution,.rst-content .wy-alert-neutral.danger,.rst-content .wy-alert-neutral.error,.rst-content .wy-alert-neutral.hint,.rst-content .wy-alert-neutral.important,.rst-content .wy-alert-neutral.note,.rst-content .wy-alert-neutral.seealso,.rst-content .wy-alert-neutral.tip,.rst-content .wy-alert-neutral.warning,.wy-alert.wy-alert-neutral{background:#f3f6f6}.rst-content .wy-alert-neutral.admonition-todo .admonition-title,.rst-content .wy-alert-neutral.admonition-todo .wy-alert-title,.rst-content .wy-alert-neutral.admonition .admonition-title,.rst-content .wy-alert-neutral.admonition .wy-alert-title,.rst-content .wy-alert-neutral.attention .admonition-title,.rst-content .wy-alert-neutral.attention .wy-alert-title,.rst-content .wy-alert-neutral.caution .admonition-title,.rst-content .wy-alert-neutral.caution .wy-alert-title,.rst-content .wy-alert-neutral.danger .admonition-title,.rst-content .wy-alert-neutral.danger .wy-alert-title,.rst-content .wy-alert-neutral.error .admonition-title,.rst-content .wy-alert-neutral.error .wy-alert-title,.rst-content .wy-alert-neutral.hint .admonition-title,.rst-content .wy-alert-neutral.hint .wy-alert-title,.rst-content .wy-alert-neutral.important .admonition-title,.rst-content .wy-alert-neutral.important .wy-alert-title,.rst-content .wy-alert-neutral.note .admonition-title,.rst-content .wy-alert-neutral.note .wy-alert-title,.rst-content .wy-alert-neutral.seealso .admonition-title,.rst-content .wy-alert-neutral.seealso 
.wy-alert-title,.rst-content .wy-alert-neutral.tip .admonition-title,.rst-content .wy-alert-neutral.tip .wy-alert-title,.rst-content .wy-alert-neutral.warning .admonition-title,.rst-content .wy-alert-neutral.warning .wy-alert-title,.rst-content .wy-alert.wy-alert-neutral .admonition-title,.wy-alert.wy-alert-neutral .rst-content .admonition-title,.wy-alert.wy-alert-neutral .wy-alert-title{color:#404040;background:#e1e4e5}.rst-content .wy-alert-neutral.admonition-todo a,.rst-content .wy-alert-neutral.admonition a,.rst-content .wy-alert-neutral.attention a,.rst-content .wy-alert-neutral.caution a,.rst-content .wy-alert-neutral.danger a,.rst-content .wy-alert-neutral.error a,.rst-content .wy-alert-neutral.hint a,.rst-content .wy-alert-neutral.important a,.rst-content .wy-alert-neutral.note a,.rst-content .wy-alert-neutral.seealso a,.rst-content .wy-alert-neutral.tip a,.rst-content .wy-alert-neutral.warning a,.wy-alert.wy-alert-neutral a{color:#2980b9}.rst-content .admonition-todo p:last-child,.rst-content .admonition p:last-child,.rst-content .attention p:last-child,.rst-content .caution p:last-child,.rst-content .danger p:last-child,.rst-content .error p:last-child,.rst-content .hint p:last-child,.rst-content .important p:last-child,.rst-content .note p:last-child,.rst-content .seealso p:last-child,.rst-content .tip p:last-child,.rst-content .warning p:last-child,.wy-alert p:last-child{margin-bottom:0}.wy-tray-container{position:fixed;bottom:0;left:0;z-index:600}.wy-tray-container li{display:block;width:300px;background:transparent;color:#fff;text-align:center;box-shadow:0 5px 5px 0 rgba(0,0,0,.1);padding:0 24px;min-width:20%;opacity:0;height:0;line-height:56px;overflow:hidden;-webkit-transition:all .3s ease-in;-moz-transition:all .3s ease-in;transition:all .3s ease-in}.wy-tray-container li.wy-tray-item-success{background:#27ae60}.wy-tray-container li.wy-tray-item-info{background:#2980b9}.wy-tray-container li.wy-tray-item-warning{background:#e67e22}.wy-tray-container 
li.wy-tray-item-danger{background:#e74c3c}.wy-tray-container li.on{opacity:1;height:56px}@media screen and (max-width:768px){.wy-tray-container{bottom:auto;top:0;width:100%}.wy-tray-container li{width:100%}}button{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle;cursor:pointer;line-height:normal;-webkit-appearance:button;*overflow:visible}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}button[disabled]{cursor:default}.btn{display:inline-block;border-radius:2px;line-height:normal;white-space:nowrap;text-align:center;cursor:pointer;font-size:100%;padding:6px 12px 8px;color:#fff;border:1px solid rgba(0,0,0,.1);background-color:#27ae60;text-decoration:none;font-weight:400;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 2px -1px hsla(0,0%,100%,.5),inset 0 -2px 0 0 rgba(0,0,0,.1);outline-none:false;vertical-align:middle;*display:inline;zoom:1;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-transition:all .1s linear;-moz-transition:all .1s linear;transition:all .1s linear}.btn-hover{background:#2e8ece;color:#fff}.btn:hover{background:#2cc36b;color:#fff}.btn:focus{background:#2cc36b;outline:0}.btn:active{box-shadow:inset 0 -1px 0 0 rgba(0,0,0,.05),inset 0 2px 0 0 rgba(0,0,0,.1);padding:8px 12px 6px}.btn:visited{color:#fff}.btn-disabled,.btn-disabled:active,.btn-disabled:focus,.btn-disabled:hover,.btn:disabled{background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled = 
false);filter:alpha(opacity=40);opacity:.4;cursor:not-allowed;box-shadow:none}.btn::-moz-focus-inner{padding:0;border:0}.btn-small{font-size:80%}.btn-info{background-color:#2980b9!important}.btn-info:hover{background-color:#2e8ece!important}.btn-neutral{background-color:#f3f6f6!important;color:#404040!important}.btn-neutral:hover{background-color:#e5ebeb!important;color:#404040}.btn-neutral:visited{color:#404040!important}.btn-success{background-color:#27ae60!important}.btn-success:hover{background-color:#295!important}.btn-danger{background-color:#e74c3c!important}.btn-danger:hover{background-color:#ea6153!important}.btn-warning{background-color:#e67e22!important}.btn-warning:hover{background-color:#e98b39!important}.btn-invert{background-color:#222}.btn-invert:hover{background-color:#2f2f2f!important}.btn-link{background-color:transparent!important;color:#2980b9;box-shadow:none;border-color:transparent!important}.btn-link:active,.btn-link:hover{background-color:transparent!important;color:#409ad5!important;box-shadow:none}.btn-link:visited{color:#9b59b6}.wy-btn-group .btn,.wy-control .btn{vertical-align:middle}.wy-btn-group{margin-bottom:24px;*zoom:1}.wy-btn-group:after,.wy-btn-group:before{display:table;content:""}.wy-btn-group:after{clear:both}.wy-dropdown{position:relative;display:inline-block}.wy-dropdown-active .wy-dropdown-menu{display:block}.wy-dropdown-menu{position:absolute;left:0;display:none;float:left;top:100%;min-width:100%;background:#fcfcfc;z-index:100;border:1px solid #cfd7dd;box-shadow:0 2px 2px 0 rgba(0,0,0,.1);padding:12px}.wy-dropdown-menu>dd>a{display:block;clear:both;color:#404040;white-space:nowrap;font-size:90%;padding:0 12px;cursor:pointer}.wy-dropdown-menu>dd>a:hover{background:#2980b9;color:#fff}.wy-dropdown-menu>dd.divider{border-top:1px solid #cfd7dd;margin:6px 0}.wy-dropdown-menu>dd.search{padding-bottom:12px}.wy-dropdown-menu>dd.search 
input[type=search]{width:100%}.wy-dropdown-menu>dd.call-to-action{background:#e3e3e3;text-transform:uppercase;font-weight:500;font-size:80%}.wy-dropdown-menu>dd.call-to-action:hover{background:#e3e3e3}.wy-dropdown-menu>dd.call-to-action .btn{color:#fff}.wy-dropdown.wy-dropdown-up .wy-dropdown-menu{bottom:100%;top:auto;left:auto;right:0}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu{background:#fcfcfc;margin-top:2px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a{padding:6px 12px}.wy-dropdown.wy-dropdown-bubble .wy-dropdown-menu a:hover{background:#2980b9;color:#fff}.wy-dropdown.wy-dropdown-left .wy-dropdown-menu{right:0;left:auto;text-align:right}.wy-dropdown-arrow:before{content:" ";border-bottom:5px solid #f5f5f5;border-left:5px solid transparent;border-right:5px solid transparent;position:absolute;display:block;top:-4px;left:50%;margin-left:-3px}.wy-dropdown-arrow.wy-dropdown-arrow-left:before{left:11px}.wy-form-stacked select{display:block}.wy-form-aligned .wy-help-inline,.wy-form-aligned input,.wy-form-aligned label,.wy-form-aligned select,.wy-form-aligned textarea{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-form-aligned .wy-control-group>label{display:inline-block;vertical-align:middle;width:10em;margin:6px 12px 0 0;float:left}.wy-form-aligned .wy-control{float:left}.wy-form-aligned .wy-control label{display:block}.wy-form-aligned .wy-control select{margin-top:6px}fieldset{margin:0}fieldset,legend{border:0;padding:0}legend{width:100%;white-space:normal;margin-bottom:24px;font-size:150%;*margin-left:-7px}label,legend{display:block}label{margin:0 0 
.3125em;color:#333;font-size:90%}input,select,textarea{font-size:100%;margin:0;vertical-align:baseline;*vertical-align:middle}.wy-control-group{margin-bottom:24px;max-width:1200px;margin-left:auto;margin-right:auto;*zoom:1}.wy-control-group:after,.wy-control-group:before{display:table;content:""}.wy-control-group:after{clear:both}.wy-control-group.wy-control-group-required>label:after{content:" *";color:#e74c3c}.wy-control-group .wy-form-full,.wy-control-group .wy-form-halves,.wy-control-group .wy-form-thirds{padding-bottom:12px}.wy-control-group .wy-form-full input[type=color],.wy-control-group .wy-form-full input[type=date],.wy-control-group .wy-form-full input[type=datetime-local],.wy-control-group .wy-form-full input[type=datetime],.wy-control-group .wy-form-full input[type=email],.wy-control-group .wy-form-full input[type=month],.wy-control-group .wy-form-full input[type=number],.wy-control-group .wy-form-full input[type=password],.wy-control-group .wy-form-full input[type=search],.wy-control-group .wy-form-full input[type=tel],.wy-control-group .wy-form-full input[type=text],.wy-control-group .wy-form-full input[type=time],.wy-control-group .wy-form-full input[type=url],.wy-control-group .wy-form-full input[type=week],.wy-control-group .wy-form-full select,.wy-control-group .wy-form-halves input[type=color],.wy-control-group .wy-form-halves input[type=date],.wy-control-group .wy-form-halves input[type=datetime-local],.wy-control-group .wy-form-halves input[type=datetime],.wy-control-group .wy-form-halves input[type=email],.wy-control-group .wy-form-halves input[type=month],.wy-control-group .wy-form-halves input[type=number],.wy-control-group .wy-form-halves input[type=password],.wy-control-group .wy-form-halves input[type=search],.wy-control-group .wy-form-halves input[type=tel],.wy-control-group .wy-form-halves input[type=text],.wy-control-group .wy-form-halves input[type=time],.wy-control-group .wy-form-halves input[type=url],.wy-control-group 
.wy-form-halves input[type=week],.wy-control-group .wy-form-halves select,.wy-control-group .wy-form-thirds input[type=color],.wy-control-group .wy-form-thirds input[type=date],.wy-control-group .wy-form-thirds input[type=datetime-local],.wy-control-group .wy-form-thirds input[type=datetime],.wy-control-group .wy-form-thirds input[type=email],.wy-control-group .wy-form-thirds input[type=month],.wy-control-group .wy-form-thirds input[type=number],.wy-control-group .wy-form-thirds input[type=password],.wy-control-group .wy-form-thirds input[type=search],.wy-control-group .wy-form-thirds input[type=tel],.wy-control-group .wy-form-thirds input[type=text],.wy-control-group .wy-form-thirds input[type=time],.wy-control-group .wy-form-thirds input[type=url],.wy-control-group .wy-form-thirds input[type=week],.wy-control-group .wy-form-thirds select{width:100%}.wy-control-group .wy-form-full{float:left;display:block;width:100%;margin-right:0}.wy-control-group .wy-form-full:last-child{margin-right:0}.wy-control-group .wy-form-halves{float:left;display:block;margin-right:2.35765%;width:48.82117%}.wy-control-group .wy-form-halves:last-child,.wy-control-group .wy-form-halves:nth-of-type(2n){margin-right:0}.wy-control-group .wy-form-halves:nth-of-type(odd){clear:left}.wy-control-group .wy-form-thirds{float:left;display:block;margin-right:2.35765%;width:31.76157%}.wy-control-group .wy-form-thirds:last-child,.wy-control-group .wy-form-thirds:nth-of-type(3n){margin-right:0}.wy-control-group .wy-form-thirds:nth-of-type(3n+1){clear:left}.wy-control-group.wy-control-group-no-input .wy-control,.wy-control-no-input{margin:6px 0 0;font-size:90%}.wy-control-no-input{display:inline-block}.wy-control-group.fluid-input input[type=color],.wy-control-group.fluid-input input[type=date],.wy-control-group.fluid-input input[type=datetime-local],.wy-control-group.fluid-input input[type=datetime],.wy-control-group.fluid-input input[type=email],.wy-control-group.fluid-input 
input[type=month],.wy-control-group.fluid-input input[type=number],.wy-control-group.fluid-input input[type=password],.wy-control-group.fluid-input input[type=search],.wy-control-group.fluid-input input[type=tel],.wy-control-group.fluid-input input[type=text],.wy-control-group.fluid-input input[type=time],.wy-control-group.fluid-input input[type=url],.wy-control-group.fluid-input input[type=week]{width:100%}.wy-form-message-inline{padding-left:.3em;color:#666;font-size:90%}.wy-form-message{display:block;color:#999;font-size:70%;margin-top:.3125em;font-style:italic}.wy-form-message p{font-size:inherit;font-style:italic;margin-bottom:6px}.wy-form-message p:last-child{margin-bottom:0}input{line-height:normal}input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;*overflow:visible}input[type=color],input[type=date],input[type=datetime-local],input[type=datetime],input[type=email],input[type=month],input[type=number],input[type=password],input[type=search],input[type=tel],input[type=text],input[type=time],input[type=url],input[type=week]{-webkit-appearance:none;padding:6px;display:inline-block;border:1px solid #ccc;font-size:80%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;box-shadow:inset 0 1px 3px #ddd;border-radius:0;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}input[type=datetime-local]{padding:.34375em 
.625em}input[disabled]{cursor:default}input[type=checkbox],input[type=radio]{padding:0;margin-right:.3125em;*height:13px;*width:13px}input[type=checkbox],input[type=radio],input[type=search]{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}input[type=color]:focus,input[type=date]:focus,input[type=datetime-local]:focus,input[type=datetime]:focus,input[type=email]:focus,input[type=month]:focus,input[type=number]:focus,input[type=password]:focus,input[type=search]:focus,input[type=tel]:focus,input[type=text]:focus,input[type=time]:focus,input[type=url]:focus,input[type=week]:focus{outline:0;outline:thin dotted\9;border-color:#333}input.no-focus:focus{border-color:#ccc!important}input[type=checkbox]:focus,input[type=file]:focus,input[type=radio]:focus{outline:thin dotted #333;outline:1px auto #129fea}input[type=color][disabled],input[type=date][disabled],input[type=datetime-local][disabled],input[type=datetime][disabled],input[type=email][disabled],input[type=month][disabled],input[type=number][disabled],input[type=password][disabled],input[type=search][disabled],input[type=tel][disabled],input[type=text][disabled],input[type=time][disabled],input[type=url][disabled],input[type=week][disabled]{cursor:not-allowed;background-color:#fafafa}input:focus:invalid,select:focus:invalid,textarea:focus:invalid{color:#e74c3c;border:1px solid #e74c3c}input:focus:invalid:focus,select:focus:invalid:focus,textarea:focus:invalid:focus{border-color:#e74c3c}input[type=checkbox]:focus:invalid:focus,input[type=file]:focus:invalid:focus,input[type=radio]:focus:invalid:focus{outline-color:#e74c3c}input.wy-input-large{padding:12px;font-size:100%}textarea{overflow:auto;vertical-align:top;width:100%;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif}select,textarea{padding:.5em .625em;display:inline-block;border:1px solid 
#ccc;font-size:80%;box-shadow:inset 0 1px 3px #ddd;-webkit-transition:border .3s linear;-moz-transition:border .3s linear;transition:border .3s linear}select{border:1px solid #ccc;background-color:#fff}select[multiple]{height:auto}select:focus,textarea:focus{outline:0}input[readonly],select[disabled],select[readonly],textarea[disabled],textarea[readonly]{cursor:not-allowed;background-color:#fafafa}input[type=checkbox][disabled],input[type=radio][disabled]{cursor:not-allowed}.wy-checkbox,.wy-radio{margin:6px 0;color:#404040;display:block}.wy-checkbox input,.wy-radio input{vertical-align:baseline}.wy-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.wy-input-prefix,.wy-input-suffix{white-space:nowrap;padding:6px}.wy-input-prefix .wy-input-context,.wy-input-suffix .wy-input-context{line-height:27px;padding:0 8px;display:inline-block;font-size:80%;background-color:#f3f6f6;border:1px solid #ccc;color:#999}.wy-input-suffix .wy-input-context{border-left:0}.wy-input-prefix .wy-input-context{border-right:0}.wy-switch{position:relative;display:block;height:24px;margin-top:12px;cursor:pointer}.wy-switch:before{left:0;top:0;width:36px;height:12px;background:#ccc}.wy-switch:after,.wy-switch:before{position:absolute;content:"";display:block;border-radius:4px;-webkit-transition:all .2s ease-in-out;-moz-transition:all .2s ease-in-out;transition:all .2s ease-in-out}.wy-switch:after{width:18px;height:18px;background:#999;left:-3px;top:-3px}.wy-switch span{position:absolute;left:48px;display:block;font-size:12px;color:#ccc;line-height:1}.wy-switch.active:before{background:#1e8449}.wy-switch.active:after{left:24px;background:#27ae60}.wy-switch.disabled{cursor:not-allowed;opacity:.8}.wy-control-group.wy-control-group-error .wy-form-message,.wy-control-group.wy-control-group-error>label{color:#e74c3c}.wy-control-group.wy-control-group-error input[type=color],.wy-control-group.wy-control-group-error 
input[type=date],.wy-control-group.wy-control-group-error input[type=datetime-local],.wy-control-group.wy-control-group-error input[type=datetime],.wy-control-group.wy-control-group-error input[type=email],.wy-control-group.wy-control-group-error input[type=month],.wy-control-group.wy-control-group-error input[type=number],.wy-control-group.wy-control-group-error input[type=password],.wy-control-group.wy-control-group-error input[type=search],.wy-control-group.wy-control-group-error input[type=tel],.wy-control-group.wy-control-group-error input[type=text],.wy-control-group.wy-control-group-error input[type=time],.wy-control-group.wy-control-group-error input[type=url],.wy-control-group.wy-control-group-error input[type=week],.wy-control-group.wy-control-group-error textarea{border:1px solid #e74c3c}.wy-inline-validate{white-space:nowrap}.wy-inline-validate .wy-input-context{padding:.5em .625em;display:inline-block;font-size:80%}.wy-inline-validate.wy-inline-validate-success .wy-input-context{color:#27ae60}.wy-inline-validate.wy-inline-validate-danger .wy-input-context{color:#e74c3c}.wy-inline-validate.wy-inline-validate-warning .wy-input-context{color:#e67e22}.wy-inline-validate.wy-inline-validate-info .wy-input-context{color:#2980b9}.rotate-90{-webkit-transform:rotate(90deg);-moz-transform:rotate(90deg);-ms-transform:rotate(90deg);-o-transform:rotate(90deg);transform:rotate(90deg)}.rotate-180{-webkit-transform:rotate(180deg);-moz-transform:rotate(180deg);-ms-transform:rotate(180deg);-o-transform:rotate(180deg);transform:rotate(180deg)}.rotate-270{-webkit-transform:rotate(270deg);-moz-transform:rotate(270deg);-ms-transform:rotate(270deg);-o-transform:rotate(270deg);transform:rotate(270deg)}.mirror{-webkit-transform:scaleX(-1);-moz-transform:scaleX(-1);-ms-transform:scaleX(-1);-o-transform:scaleX(-1);transform:scaleX(-1)}.mirror.rotate-90{-webkit-transform:scaleX(-1) rotate(90deg);-moz-transform:scaleX(-1) rotate(90deg);-ms-transform:scaleX(-1) 
rotate(90deg);-o-transform:scaleX(-1) rotate(90deg);transform:scaleX(-1) rotate(90deg)}.mirror.rotate-180{-webkit-transform:scaleX(-1) rotate(180deg);-moz-transform:scaleX(-1) rotate(180deg);-ms-transform:scaleX(-1) rotate(180deg);-o-transform:scaleX(-1) rotate(180deg);transform:scaleX(-1) rotate(180deg)}.mirror.rotate-270{-webkit-transform:scaleX(-1) rotate(270deg);-moz-transform:scaleX(-1) rotate(270deg);-ms-transform:scaleX(-1) rotate(270deg);-o-transform:scaleX(-1) rotate(270deg);transform:scaleX(-1) rotate(270deg)}@media only screen and (max-width:480px){.wy-form button[type=submit]{margin:.7em 0 0}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=text],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week],.wy-form label{margin-bottom:.3em;display:block}.wy-form input[type=color],.wy-form input[type=date],.wy-form input[type=datetime-local],.wy-form input[type=datetime],.wy-form input[type=email],.wy-form input[type=month],.wy-form input[type=number],.wy-form input[type=password],.wy-form input[type=search],.wy-form input[type=tel],.wy-form input[type=time],.wy-form input[type=url],.wy-form input[type=week]{margin-bottom:0}.wy-form-aligned .wy-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.wy-form-aligned .wy-control{margin:1.5em 0 0}.wy-form-message,.wy-form-message-inline,.wy-form .wy-help-inline{display:block;font-size:80%;padding:6px 0}}@media screen and (max-width:768px){.tablet-hide{display:none}}@media screen and (max-width:480px){.mobile-hide{display:none}}.float-left{float:left}.float-right{float:right}.full-width{width:100%}.rst-content table.docutils,.rst-content 
table.field-list,.wy-table{border-collapse:collapse;border-spacing:0;empty-cells:show;margin-bottom:24px}.rst-content table.docutils caption,.rst-content table.field-list caption,.wy-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.rst-content table.docutils td,.rst-content table.docutils th,.rst-content table.field-list td,.rst-content table.field-list th,.wy-table td,.wy-table th{font-size:90%;margin:0;overflow:visible;padding:8px 16px}.rst-content table.docutils td:first-child,.rst-content table.docutils th:first-child,.rst-content table.field-list td:first-child,.rst-content table.field-list th:first-child,.wy-table td:first-child,.wy-table th:first-child{border-left-width:0}.rst-content table.docutils thead,.rst-content table.field-list thead,.wy-table thead{color:#000;text-align:left;vertical-align:bottom;white-space:nowrap}.rst-content table.docutils thead th,.rst-content table.field-list thead th,.wy-table thead th{font-weight:700;border-bottom:2px solid #e1e4e5}.rst-content table.docutils td,.rst-content table.field-list td,.wy-table td{background-color:transparent;vertical-align:middle}.rst-content table.docutils td p,.rst-content table.field-list td p,.wy-table td p{line-height:18px}.rst-content table.docutils td p:last-child,.rst-content table.field-list td p:last-child,.wy-table td p:last-child{margin-bottom:0}.rst-content table.docutils .wy-table-cell-min,.rst-content table.field-list .wy-table-cell-min,.wy-table .wy-table-cell-min{width:1%;padding-right:0}.rst-content table.docutils .wy-table-cell-min input[type=checkbox],.rst-content table.field-list .wy-table-cell-min input[type=checkbox],.wy-table .wy-table-cell-min input[type=checkbox]{margin:0}.wy-table-secondary{color:grey;font-size:90%}.wy-table-tertiary{color:grey;font-size:80%}.rst-content table.docutils:not(.field-list) tr:nth-child(2n-1) td,.wy-table-backed,.wy-table-odd td,.wy-table-striped tr:nth-child(2n-1) 
td{background-color:#f3f6f6}.rst-content table.docutils,.wy-table-bordered-all{border:1px solid #e1e4e5}.rst-content table.docutils td,.wy-table-bordered-all td{border-bottom:1px solid #e1e4e5;border-left:1px solid #e1e4e5}.rst-content table.docutils tbody>tr:last-child td,.wy-table-bordered-all tbody>tr:last-child td{border-bottom-width:0}.wy-table-bordered{border:1px solid #e1e4e5}.wy-table-bordered-rows td{border-bottom:1px solid #e1e4e5}.wy-table-bordered-rows tbody>tr:last-child td{border-bottom-width:0}.wy-table-horizontal td,.wy-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #e1e4e5}.wy-table-horizontal tbody>tr:last-child td{border-bottom-width:0}.wy-table-responsive{margin-bottom:24px;max-width:100%;overflow:auto}.wy-table-responsive table{margin-bottom:0!important}.wy-table-responsive table td,.wy-table-responsive table th{white-space:nowrap}a{color:#2980b9;text-decoration:none;cursor:pointer}a:hover{color:#3091d1}a:visited{color:#9b59b6}html{height:100%}body,html{overflow-x:hidden}body{font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;font-weight:400;color:#404040;min-height:100%;background:#edf0f2}.wy-text-left{text-align:left}.wy-text-center{text-align:center}.wy-text-right{text-align:right}.wy-text-large{font-size:120%}.wy-text-normal{font-size:100%}.wy-text-small,small{font-size:80%}.wy-text-strike{text-decoration:line-through}.wy-text-warning{color:#e67e22!important}a.wy-text-warning:hover{color:#eb9950!important}.wy-text-info{color:#2980b9!important}a.wy-text-info:hover{color:#409ad5!important}.wy-text-success{color:#27ae60!important}a.wy-text-success:hover{color:#36d278!important}.wy-text-danger{color:#e74c3c!important}a.wy-text-danger:hover{color:#ed7669!important}.wy-text-neutral{color:#404040!important}a.wy-text-neutral:hover{color:#595959!important}.rst-content .toctree-wrapper>p.caption,h1,h2,h3,h4,h5,h6,legend{margin-top:0;font-weight:700;font-family:Roboto 
Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif}p{line-height:24px;font-size:16px;margin:0 0 24px}h1{font-size:175%}.rst-content .toctree-wrapper>p.caption,h2{font-size:150%}h3{font-size:125%}h4{font-size:115%}h5{font-size:110%}h6{font-size:100%}hr{display:block;height:1px;border:0;border-top:1px solid #e1e4e5;margin:24px 0;padding:0}.rst-content code,.rst-content tt,code{white-space:nowrap;max-width:100%;background:#fff;border:1px solid #e1e4e5;font-size:75%;padding:0 5px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#e74c3c;overflow-x:auto}.rst-content tt.code-large,code.code-large{font-size:90%}.rst-content .section ul,.rst-content .toctree-wrapper ul,.rst-content section ul,.wy-plain-list-disc,article ul{list-style:disc;line-height:24px;margin-bottom:24px}.rst-content .section ul li,.rst-content .toctree-wrapper ul li,.rst-content section ul li,.wy-plain-list-disc li,article ul li{list-style:disc;margin-left:24px}.rst-content .section ul li p:last-child,.rst-content .section ul li ul,.rst-content .toctree-wrapper ul li p:last-child,.rst-content .toctree-wrapper ul li ul,.rst-content section ul li p:last-child,.rst-content section ul li ul,.wy-plain-list-disc li p:last-child,.wy-plain-list-disc li ul,article ul li p:last-child,article ul li ul{margin-bottom:0}.rst-content .section ul li li,.rst-content .toctree-wrapper ul li li,.rst-content section ul li li,.wy-plain-list-disc li li,article ul li li{list-style:circle}.rst-content .section ul li li li,.rst-content .toctree-wrapper ul li li li,.rst-content section ul li li li,.wy-plain-list-disc li li li,article ul li li li{list-style:square}.rst-content .section ul li ol li,.rst-content .toctree-wrapper ul li ol li,.rst-content section ul li ol li,.wy-plain-list-disc li ol li,article ul li ol li{list-style:decimal}.rst-content .section ol,.rst-content .section ol.arabic,.rst-content .toctree-wrapper ol,.rst-content .toctree-wrapper ol.arabic,.rst-content 
section ol,.rst-content section ol.arabic,.wy-plain-list-decimal,article ol{list-style:decimal;line-height:24px;margin-bottom:24px}.rst-content .section ol.arabic li,.rst-content .section ol li,.rst-content .toctree-wrapper ol.arabic li,.rst-content .toctree-wrapper ol li,.rst-content section ol.arabic li,.rst-content section ol li,.wy-plain-list-decimal li,article ol li{list-style:decimal;margin-left:24px}.rst-content .section ol.arabic li ul,.rst-content .section ol li p:last-child,.rst-content .section ol li ul,.rst-content .toctree-wrapper ol.arabic li ul,.rst-content .toctree-wrapper ol li p:last-child,.rst-content .toctree-wrapper ol li ul,.rst-content section ol.arabic li ul,.rst-content section ol li p:last-child,.rst-content section ol li ul,.wy-plain-list-decimal li p:last-child,.wy-plain-list-decimal li ul,article ol li p:last-child,article ol li ul{margin-bottom:0}.rst-content .section ol.arabic li ul li,.rst-content .section ol li ul li,.rst-content .toctree-wrapper ol.arabic li ul li,.rst-content .toctree-wrapper ol li ul li,.rst-content section ol.arabic li ul li,.rst-content section ol li ul li,.wy-plain-list-decimal li ul li,article ol li ul li{list-style:disc}.wy-breadcrumbs{*zoom:1}.wy-breadcrumbs:after,.wy-breadcrumbs:before{display:table;content:""}.wy-breadcrumbs:after{clear:both}.wy-breadcrumbs>li{display:inline-block;padding-top:5px}.wy-breadcrumbs>li.wy-breadcrumbs-aside{float:right}.rst-content .wy-breadcrumbs>li code,.rst-content .wy-breadcrumbs>li tt,.wy-breadcrumbs>li .rst-content tt,.wy-breadcrumbs>li code{all:inherit;color:inherit}.breadcrumb-item:before{content:"/";color:#bbb;font-size:13px;padding:0 6px 0 3px}.wy-breadcrumbs-extra{margin-bottom:0;color:#b3b3b3;font-size:80%;display:inline-block}@media screen and (max-width:480px){.wy-breadcrumbs-extra,.wy-breadcrumbs li.wy-breadcrumbs-aside{display:none}}@media print{.wy-breadcrumbs 
li.wy-breadcrumbs-aside{display:none}}html{font-size:16px}.wy-affix{position:fixed;top:1.618em}.wy-menu a:hover{text-decoration:none}.wy-menu-horiz{*zoom:1}.wy-menu-horiz:after,.wy-menu-horiz:before{display:table;content:""}.wy-menu-horiz:after{clear:both}.wy-menu-horiz li,.wy-menu-horiz ul{display:inline-block}.wy-menu-horiz li:hover{background:hsla(0,0%,100%,.1)}.wy-menu-horiz li.divide-left{border-left:1px solid #404040}.wy-menu-horiz li.divide-right{border-right:1px solid #404040}.wy-menu-horiz a{height:32px;display:inline-block;line-height:32px;padding:0 16px}.wy-menu-vertical{width:300px}.wy-menu-vertical header,.wy-menu-vertical p.caption{color:#55a5d9;height:32px;line-height:32px;padding:0 1.618em;margin:12px 0 0;display:block;font-weight:700;text-transform:uppercase;font-size:85%;white-space:nowrap}.wy-menu-vertical ul{margin-bottom:0}.wy-menu-vertical li.divide-top{border-top:1px solid #404040}.wy-menu-vertical li.divide-bottom{border-bottom:1px solid #404040}.wy-menu-vertical li.current{background:#e3e3e3}.wy-menu-vertical li.current a{color:grey;border-right:1px solid #c9c9c9;padding:.4045em 2.427em}.wy-menu-vertical li.current a:hover{background:#d6d6d6}.rst-content .wy-menu-vertical li tt,.wy-menu-vertical li .rst-content tt,.wy-menu-vertical li code{border:none;background:inherit;color:inherit;padding-left:0;padding-right:0}.wy-menu-vertical li button.toctree-expand{display:block;float:left;margin-left:-1.2em;line-height:18px;color:#4d4d4d;border:none;background:none;padding:0}.wy-menu-vertical li.current>a,.wy-menu-vertical li.on a{color:#404040;font-weight:700;position:relative;background:#fcfcfc;border:none;padding:.4045em 1.618em}.wy-menu-vertical li.current>a:hover,.wy-menu-vertical li.on a:hover{background:#fcfcfc}.wy-menu-vertical li.current>a:hover button.toctree-expand,.wy-menu-vertical li.on a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.current>a button.toctree-expand,.wy-menu-vertical li.on a 
button.toctree-expand{display:block;line-height:18px;color:#333}.wy-menu-vertical li.toctree-l1.current>a{border-bottom:1px solid #c9c9c9;border-top:1px solid #c9c9c9}.wy-menu-vertical .toctree-l1.current .toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .toctree-l11>ul{display:none}.wy-menu-vertical .toctree-l1.current .current.toctree-l2>ul,.wy-menu-vertical .toctree-l2.current .current.toctree-l3>ul,.wy-menu-vertical .toctree-l3.current .current.toctree-l4>ul,.wy-menu-vertical .toctree-l4.current .current.toctree-l5>ul,.wy-menu-vertical .toctree-l5.current .current.toctree-l6>ul,.wy-menu-vertical .toctree-l6.current .current.toctree-l7>ul,.wy-menu-vertical .toctree-l7.current .current.toctree-l8>ul,.wy-menu-vertical .toctree-l8.current .current.toctree-l9>ul,.wy-menu-vertical .toctree-l9.current .current.toctree-l10>ul,.wy-menu-vertical .toctree-l10.current .current.toctree-l11>ul{display:block}.wy-menu-vertical li.toctree-l3,.wy-menu-vertical li.toctree-l4{font-size:.9em}.wy-menu-vertical li.toctree-l2 a,.wy-menu-vertical li.toctree-l3 a,.wy-menu-vertical li.toctree-l4 a,.wy-menu-vertical li.toctree-l5 a,.wy-menu-vertical li.toctree-l6 a,.wy-menu-vertical li.toctree-l7 a,.wy-menu-vertical li.toctree-l8 a,.wy-menu-vertical li.toctree-l9 a,.wy-menu-vertical li.toctree-l10 a{color:#404040}.wy-menu-vertical li.toctree-l2 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l3 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l4 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l5 a:hover 
button.toctree-expand,.wy-menu-vertical li.toctree-l6 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l7 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l8 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l9 a:hover button.toctree-expand,.wy-menu-vertical li.toctree-l10 a:hover button.toctree-expand{color:grey}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a,.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a,.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a,.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a,.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a,.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a,.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a,.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{display:block}.wy-menu-vertical li.toctree-l2.current>a{padding:.4045em 2.427em}.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{padding:.4045em 1.618em .4045em 4.045em}.wy-menu-vertical li.toctree-l3.current>a{padding:.4045em 4.045em}.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{padding:.4045em 1.618em .4045em 5.663em}.wy-menu-vertical li.toctree-l4.current>a{padding:.4045em 5.663em}.wy-menu-vertical li.toctree-l4.current li.toctree-l5>a{padding:.4045em 1.618em .4045em 7.281em}.wy-menu-vertical li.toctree-l5.current>a{padding:.4045em 7.281em}.wy-menu-vertical li.toctree-l5.current li.toctree-l6>a{padding:.4045em 1.618em .4045em 8.899em}.wy-menu-vertical li.toctree-l6.current>a{padding:.4045em 8.899em}.wy-menu-vertical li.toctree-l6.current li.toctree-l7>a{padding:.4045em 1.618em .4045em 10.517em}.wy-menu-vertical li.toctree-l7.current>a{padding:.4045em 10.517em}.wy-menu-vertical li.toctree-l7.current li.toctree-l8>a{padding:.4045em 1.618em .4045em 12.135em}.wy-menu-vertical li.toctree-l8.current>a{padding:.4045em 12.135em}.wy-menu-vertical li.toctree-l8.current li.toctree-l9>a{padding:.4045em 1.618em .4045em 
13.753em}.wy-menu-vertical li.toctree-l9.current>a{padding:.4045em 13.753em}.wy-menu-vertical li.toctree-l9.current li.toctree-l10>a{padding:.4045em 1.618em .4045em 15.371em}.wy-menu-vertical li.toctree-l10.current>a{padding:.4045em 15.371em}.wy-menu-vertical li.toctree-l10.current li.toctree-l11>a{padding:.4045em 1.618em .4045em 16.989em}.wy-menu-vertical li.toctree-l2.current>a,.wy-menu-vertical li.toctree-l2.current li.toctree-l3>a{background:#c9c9c9}.wy-menu-vertical li.toctree-l2 button.toctree-expand{color:#a3a3a3}.wy-menu-vertical li.toctree-l3.current>a,.wy-menu-vertical li.toctree-l3.current li.toctree-l4>a{background:#bdbdbd}.wy-menu-vertical li.toctree-l3 button.toctree-expand{color:#969696}.wy-menu-vertical li.current ul{display:block}.wy-menu-vertical li ul{margin-bottom:0;display:none}.wy-menu-vertical li ul li a{margin-bottom:0;color:#d9d9d9;font-weight:400}.wy-menu-vertical a{line-height:18px;padding:.4045em 1.618em;display:block;position:relative;font-size:90%;color:#d9d9d9}.wy-menu-vertical a:hover{background-color:#4e4a4a;cursor:pointer}.wy-menu-vertical a:hover button.toctree-expand{color:#d9d9d9}.wy-menu-vertical a:active{background-color:#2980b9;cursor:pointer;color:#fff}.wy-menu-vertical a:active button.toctree-expand{color:#fff}.wy-side-nav-search{display:block;width:300px;padding:.809em;margin-bottom:.809em;z-index:200;background-color:#2980b9;text-align:center;color:#fcfcfc}.wy-side-nav-search input[type=text]{width:100%;border-radius:50px;padding:6px 12px;border-color:#2472a4}.wy-side-nav-search img{display:block;margin:auto auto .809em;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-side-nav-search .wy-dropdown>a,.wy-side-nav-search>a{color:#fcfcfc;font-size:100%;font-weight:700;display:inline-block;padding:4px 6px;margin-bottom:.809em;max-width:100%}.wy-side-nav-search .wy-dropdown>a:hover,.wy-side-nav-search>a:hover{background:hsla(0,0%,100%,.1)}.wy-side-nav-search .wy-dropdown>a 
img.logo,.wy-side-nav-search>a img.logo{display:block;margin:0 auto;height:auto;width:auto;border-radius:0;max-width:100%;background:transparent}.wy-side-nav-search .wy-dropdown>a.icon img.logo,.wy-side-nav-search>a.icon img.logo{margin-top:.85em}.wy-side-nav-search>div.version{margin-top:-.4045em;margin-bottom:.809em;font-weight:400;color:hsla(0,0%,100%,.3)}.wy-nav .wy-menu-vertical header{color:#2980b9}.wy-nav .wy-menu-vertical a{color:#b3b3b3}.wy-nav .wy-menu-vertical a:hover{background-color:#2980b9;color:#fff}[data-menu-wrap]{-webkit-transition:all .2s ease-in;-moz-transition:all .2s ease-in;transition:all .2s ease-in;position:absolute;opacity:1;width:100%;opacity:0}[data-menu-wrap].move-center{left:0;right:auto;opacity:1}[data-menu-wrap].move-left{right:auto;left:-100%;opacity:0}[data-menu-wrap].move-right{right:-100%;left:auto;opacity:0}.wy-body-for-nav{background:#fcfcfc}.wy-grid-for-nav{position:absolute;width:100%;height:100%}.wy-nav-side{position:fixed;top:0;bottom:0;left:0;padding-bottom:2em;width:300px;overflow-x:hidden;overflow-y:hidden;min-height:100%;color:#9b9b9b;background:#343131;z-index:200}.wy-side-scroll{width:320px;position:relative;overflow-x:hidden;overflow-y:scroll;height:100%}.wy-nav-top{display:none;background:#2980b9;color:#fff;padding:.4045em .809em;position:relative;line-height:50px;text-align:center;font-size:100%;*zoom:1}.wy-nav-top:after,.wy-nav-top:before{display:table;content:""}.wy-nav-top:after{clear:both}.wy-nav-top a{color:#fff;font-weight:700}.wy-nav-top img{margin-right:12px;height:45px;width:45px;background-color:#2980b9;padding:5px;border-radius:100%}.wy-nav-top i{font-size:30px;float:left;cursor:pointer;padding-top:inherit}.wy-nav-content-wrap{margin-left:300px;background:#fcfcfc;min-height:100%}.wy-nav-content{padding:1.618em 
3.236em;height:100%;max-width:800px;margin:auto}.wy-body-mask{position:fixed;width:100%;height:100%;background:rgba(0,0,0,.2);display:none;z-index:499}.wy-body-mask.on{display:block}footer{color:grey}footer p{margin-bottom:12px}.rst-content footer span.commit tt,footer span.commit .rst-content tt,footer span.commit code{padding:0;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:1em;background:none;border:none;color:grey}.rst-footer-buttons{*zoom:1}.rst-footer-buttons:after,.rst-footer-buttons:before{width:100%;display:table;content:""}.rst-footer-buttons:after{clear:both}.rst-breadcrumbs-buttons{margin-top:12px;*zoom:1}.rst-breadcrumbs-buttons:after,.rst-breadcrumbs-buttons:before{display:table;content:""}.rst-breadcrumbs-buttons:after{clear:both}#search-results .search li{margin-bottom:24px;border-bottom:1px solid #e1e4e5;padding-bottom:24px}#search-results .search li:first-child{border-top:1px solid #e1e4e5;padding-top:24px}#search-results .search li a{font-size:120%;margin-bottom:12px;display:inline-block}#search-results .context{color:grey;font-size:90%}.genindextable li>ul{margin-left:24px}@media screen and (max-width:768px){.wy-body-for-nav{background:#fcfcfc}.wy-nav-top{display:block}.wy-nav-side{left:-300px}.wy-nav-side.shift{width:85%;left:0}.wy-menu.wy-menu-vertical,.wy-side-nav-search,.wy-side-scroll{width:auto}.wy-nav-content-wrap{margin-left:0}.wy-nav-content-wrap .wy-nav-content{padding:1.618em}.wy-nav-content-wrap.shift{position:fixed;min-width:100%;left:85%;top:0;height:100%;overflow:hidden}}@media screen and (min-width:1100px){.wy-nav-content-wrap{background:rgba(0,0,0,.05)}.wy-nav-content{margin:0;background:#fcfcfc}}@media print{.rst-versions,.wy-nav-side,footer{display:none}.wy-nav-content-wrap{margin-left:0}}.rst-versions{position:fixed;bottom:0;left:0;width:300px;color:#fcfcfc;background:#1f1d1d;font-family:Lato,proxima-nova,Helvetica Neue,Arial,sans-serif;z-index:400}.rst-versions 
a{color:#2980b9;text-decoration:none}.rst-versions .rst-badge-small{display:none}.rst-versions .rst-current-version{padding:12px;background-color:#272525;display:block;text-align:right;font-size:90%;cursor:pointer;color:#27ae60;*zoom:1}.rst-versions .rst-current-version:after,.rst-versions .rst-current-version:before{display:table;content:""}.rst-versions .rst-current-version:after{clear:both}.rst-content .code-block-caption .rst-versions .rst-current-version .headerlink,.rst-content .eqno .rst-versions .rst-current-version .headerlink,.rst-content .rst-versions .rst-current-version .admonition-title,.rst-content code.download .rst-versions .rst-current-version span:first-child,.rst-content dl dt .rst-versions .rst-current-version .headerlink,.rst-content h1 .rst-versions .rst-current-version .headerlink,.rst-content h2 .rst-versions .rst-current-version .headerlink,.rst-content h3 .rst-versions .rst-current-version .headerlink,.rst-content h4 .rst-versions .rst-current-version .headerlink,.rst-content h5 .rst-versions .rst-current-version .headerlink,.rst-content h6 .rst-versions .rst-current-version .headerlink,.rst-content p .rst-versions .rst-current-version .headerlink,.rst-content table>caption .rst-versions .rst-current-version .headerlink,.rst-content tt.download .rst-versions .rst-current-version span:first-child,.rst-versions .rst-current-version .fa,.rst-versions .rst-current-version .icon,.rst-versions .rst-current-version .rst-content .admonition-title,.rst-versions .rst-current-version .rst-content .code-block-caption .headerlink,.rst-versions .rst-current-version .rst-content .eqno .headerlink,.rst-versions .rst-current-version .rst-content code.download span:first-child,.rst-versions .rst-current-version .rst-content dl dt .headerlink,.rst-versions .rst-current-version .rst-content h1 .headerlink,.rst-versions .rst-current-version .rst-content h2 .headerlink,.rst-versions .rst-current-version .rst-content h3 .headerlink,.rst-versions 
.rst-current-version .rst-content h4 .headerlink,.rst-versions .rst-current-version .rst-content h5 .headerlink,.rst-versions .rst-current-version .rst-content h6 .headerlink,.rst-versions .rst-current-version .rst-content p .headerlink,.rst-versions .rst-current-version .rst-content table>caption .headerlink,.rst-versions .rst-current-version .rst-content tt.download span:first-child,.rst-versions .rst-current-version .wy-menu-vertical li button.toctree-expand,.wy-menu-vertical li .rst-versions .rst-current-version button.toctree-expand{color:#fcfcfc}.rst-versions .rst-current-version .fa-book,.rst-versions .rst-current-version .icon-book{float:left}.rst-versions .rst-current-version.rst-out-of-date{background-color:#e74c3c;color:#fff}.rst-versions .rst-current-version.rst-active-old-version{background-color:#f1c40f;color:#000}.rst-versions.shift-up{height:auto;max-height:100%;overflow-y:scroll}.rst-versions.shift-up .rst-other-versions{display:block}.rst-versions .rst-other-versions{font-size:90%;padding:12px;color:grey;display:none}.rst-versions .rst-other-versions hr{display:block;height:1px;border:0;margin:20px 0;padding:0;border-top:1px solid #413d3d}.rst-versions .rst-other-versions dd{display:inline-block;margin:0}.rst-versions .rst-other-versions dd a{display:inline-block;padding:6px;color:#fcfcfc}.rst-versions.rst-badge{width:auto;bottom:20px;right:20px;left:auto;border:none;max-width:300px;max-height:90%}.rst-versions.rst-badge .fa-book,.rst-versions.rst-badge .icon-book{float:none;line-height:30px}.rst-versions.rst-badge.shift-up .rst-current-version{text-align:right}.rst-versions.rst-badge.shift-up .rst-current-version .fa-book,.rst-versions.rst-badge.shift-up .rst-current-version .icon-book{float:left}.rst-versions.rst-badge>.rst-current-version{width:auto;height:30px;line-height:30px;padding:0 6px;display:block;text-align:center}@media screen and (max-width:768px){.rst-versions{width:85%;display:none}.rst-versions.shift{display:block}}.rst-content 
.toctree-wrapper>p.caption,.rst-content h1,.rst-content h2,.rst-content h3,.rst-content h4,.rst-content h5,.rst-content h6{margin-bottom:24px}.rst-content img{max-width:100%;height:auto}.rst-content div.figure,.rst-content figure{margin-bottom:24px}.rst-content div.figure .caption-text,.rst-content figure .caption-text{font-style:italic}.rst-content div.figure p:last-child.caption,.rst-content figure p:last-child.caption{margin-bottom:0}.rst-content div.figure.align-center,.rst-content figure.align-center{text-align:center}.rst-content .section>a>img,.rst-content .section>img,.rst-content section>a>img,.rst-content section>img{margin-bottom:24px}.rst-content abbr[title]{text-decoration:none}.rst-content.style-external-links a.reference.external:after{font-family:FontAwesome;content:"\f08e";color:#b3b3b3;vertical-align:super;font-size:60%;margin:0 .2em}.rst-content blockquote{margin-left:24px;line-height:24px;margin-bottom:24px}.rst-content pre.literal-block{white-space:pre;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;display:block;overflow:auto}.rst-content div[class^=highlight],.rst-content pre.literal-block{border:1px solid #e1e4e5;overflow-x:auto;margin:1px 0 24px}.rst-content div[class^=highlight] div[class^=highlight],.rst-content pre.literal-block div[class^=highlight]{padding:0;border:none;margin:0}.rst-content div[class^=highlight] td.code{width:100%}.rst-content .linenodiv pre{border-right:1px solid #e6e9ea;margin:0;padding:12px;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;user-select:none;pointer-events:none}.rst-content div[class^=highlight] pre{white-space:pre;margin:0;padding:12px;display:block;overflow:auto}.rst-content div[class^=highlight] pre .hll{display:block;margin:0 -12px;padding:0 12px}.rst-content .linenodiv pre,.rst-content div[class^=highlight] pre,.rst-content 
pre.literal-block{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;font-size:12px;line-height:1.4}.rst-content div.highlight .gp,.rst-content div.highlight span.linenos{user-select:none;pointer-events:none}.rst-content div.highlight span.linenos{display:inline-block;padding-left:0;padding-right:12px;margin-right:12px;border-right:1px solid #e6e9ea}.rst-content .code-block-caption{font-style:italic;font-size:85%;line-height:1;padding:1em 0;text-align:center}@media print{.rst-content .codeblock,.rst-content div[class^=highlight],.rst-content div[class^=highlight] pre{white-space:pre-wrap}}.rst-content .admonition,.rst-content .admonition-todo,.rst-content .attention,.rst-content .caution,.rst-content .danger,.rst-content .error,.rst-content .hint,.rst-content .important,.rst-content .note,.rst-content .seealso,.rst-content .tip,.rst-content .warning{clear:both}.rst-content .admonition-todo .last,.rst-content .admonition-todo>:last-child,.rst-content .admonition .last,.rst-content .admonition>:last-child,.rst-content .attention .last,.rst-content .attention>:last-child,.rst-content .caution .last,.rst-content .caution>:last-child,.rst-content .danger .last,.rst-content .danger>:last-child,.rst-content .error .last,.rst-content .error>:last-child,.rst-content .hint .last,.rst-content .hint>:last-child,.rst-content .important .last,.rst-content .important>:last-child,.rst-content .note .last,.rst-content .note>:last-child,.rst-content .seealso .last,.rst-content .seealso>:last-child,.rst-content .tip .last,.rst-content .tip>:last-child,.rst-content .warning .last,.rst-content .warning>:last-child{margin-bottom:0}.rst-content .admonition-title:before{margin-right:4px}.rst-content .admonition table{border-color:rgba(0,0,0,.1)}.rst-content .admonition table td,.rst-content .admonition table th{background:transparent!important;border-color:rgba(0,0,0,.1)!important}.rst-content .section ol.loweralpha,.rst-content .section 
ol.loweralpha>li,.rst-content .toctree-wrapper ol.loweralpha,.rst-content .toctree-wrapper ol.loweralpha>li,.rst-content section ol.loweralpha,.rst-content section ol.loweralpha>li{list-style:lower-alpha}.rst-content .section ol.upperalpha,.rst-content .section ol.upperalpha>li,.rst-content .toctree-wrapper ol.upperalpha,.rst-content .toctree-wrapper ol.upperalpha>li,.rst-content section ol.upperalpha,.rst-content section ol.upperalpha>li{list-style:upper-alpha}.rst-content .section ol li>*,.rst-content .section ul li>*,.rst-content .toctree-wrapper ol li>*,.rst-content .toctree-wrapper ul li>*,.rst-content section ol li>*,.rst-content section ul li>*{margin-top:12px;margin-bottom:12px}.rst-content .section ol li>:first-child,.rst-content .section ul li>:first-child,.rst-content .toctree-wrapper ol li>:first-child,.rst-content .toctree-wrapper ul li>:first-child,.rst-content section ol li>:first-child,.rst-content section ul li>:first-child{margin-top:0}.rst-content .section ol li>p,.rst-content .section ol li>p:last-child,.rst-content .section ul li>p,.rst-content .section ul li>p:last-child,.rst-content .toctree-wrapper ol li>p,.rst-content .toctree-wrapper ol li>p:last-child,.rst-content .toctree-wrapper ul li>p,.rst-content .toctree-wrapper ul li>p:last-child,.rst-content section ol li>p,.rst-content section ol li>p:last-child,.rst-content section ul li>p,.rst-content section ul li>p:last-child{margin-bottom:12px}.rst-content .section ol li>p:only-child,.rst-content .section ol li>p:only-child:last-child,.rst-content .section ul li>p:only-child,.rst-content .section ul li>p:only-child:last-child,.rst-content .toctree-wrapper ol li>p:only-child,.rst-content .toctree-wrapper ol li>p:only-child:last-child,.rst-content .toctree-wrapper ul li>p:only-child,.rst-content .toctree-wrapper ul li>p:only-child:last-child,.rst-content section ol li>p:only-child,.rst-content section ol li>p:only-child:last-child,.rst-content section ul li>p:only-child,.rst-content section ul 
li>p:only-child:last-child{margin-bottom:0}.rst-content .section ol li>ol,.rst-content .section ol li>ul,.rst-content .section ul li>ol,.rst-content .section ul li>ul,.rst-content .toctree-wrapper ol li>ol,.rst-content .toctree-wrapper ol li>ul,.rst-content .toctree-wrapper ul li>ol,.rst-content .toctree-wrapper ul li>ul,.rst-content section ol li>ol,.rst-content section ol li>ul,.rst-content section ul li>ol,.rst-content section ul li>ul{margin-bottom:12px}.rst-content .section ol.simple li>*,.rst-content .section ol.simple li ol,.rst-content .section ol.simple li ul,.rst-content .section ul.simple li>*,.rst-content .section ul.simple li ol,.rst-content .section ul.simple li ul,.rst-content .toctree-wrapper ol.simple li>*,.rst-content .toctree-wrapper ol.simple li ol,.rst-content .toctree-wrapper ol.simple li ul,.rst-content .toctree-wrapper ul.simple li>*,.rst-content .toctree-wrapper ul.simple li ol,.rst-content .toctree-wrapper ul.simple li ul,.rst-content section ol.simple li>*,.rst-content section ol.simple li ol,.rst-content section ol.simple li ul,.rst-content section ul.simple li>*,.rst-content section ul.simple li ol,.rst-content section ul.simple li ul{margin-top:0;margin-bottom:0}.rst-content .line-block{margin-left:0;margin-bottom:24px;line-height:24px}.rst-content .line-block .line-block{margin-left:24px;margin-bottom:0}.rst-content .topic-title{font-weight:700;margin-bottom:12px}.rst-content .toc-backref{color:#404040}.rst-content .align-right{float:right;margin:0 0 24px 24px}.rst-content .align-left{float:left;margin:0 24px 24px 0}.rst-content .align-center{margin:auto}.rst-content .align-center:not(table){display:block}.rst-content .code-block-caption .headerlink,.rst-content .eqno .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink,.rst-content dl dt .headerlink,.rst-content h1 .headerlink,.rst-content h2 .headerlink,.rst-content h3 .headerlink,.rst-content h4 .headerlink,.rst-content h5 .headerlink,.rst-content h6 
.headerlink,.rst-content p.caption .headerlink,.rst-content p .headerlink,.rst-content table>caption .headerlink{opacity:0;font-size:14px;font-family:FontAwesome;margin-left:.5em}.rst-content .code-block-caption .headerlink:focus,.rst-content .code-block-caption:hover .headerlink,.rst-content .eqno .headerlink:focus,.rst-content .eqno:hover .headerlink,.rst-content .toctree-wrapper>p.caption .headerlink:focus,.rst-content .toctree-wrapper>p.caption:hover .headerlink,.rst-content dl dt .headerlink:focus,.rst-content dl dt:hover .headerlink,.rst-content h1 .headerlink:focus,.rst-content h1:hover .headerlink,.rst-content h2 .headerlink:focus,.rst-content h2:hover .headerlink,.rst-content h3 .headerlink:focus,.rst-content h3:hover .headerlink,.rst-content h4 .headerlink:focus,.rst-content h4:hover .headerlink,.rst-content h5 .headerlink:focus,.rst-content h5:hover .headerlink,.rst-content h6 .headerlink:focus,.rst-content h6:hover .headerlink,.rst-content p.caption .headerlink:focus,.rst-content p.caption:hover .headerlink,.rst-content p .headerlink:focus,.rst-content p:hover .headerlink,.rst-content table>caption .headerlink:focus,.rst-content table>caption:hover .headerlink{opacity:1}.rst-content p a{overflow-wrap:anywhere}.rst-content .wy-table td p,.rst-content .wy-table td ul,.rst-content .wy-table th p,.rst-content .wy-table th ul,.rst-content table.docutils td p,.rst-content table.docutils td ul,.rst-content table.docutils th p,.rst-content table.docutils th ul,.rst-content table.field-list td p,.rst-content table.field-list td ul,.rst-content table.field-list th p,.rst-content table.field-list th ul{font-size:inherit}.rst-content .btn:focus{outline:2px solid}.rst-content table>caption .headerlink:after{font-size:12px}.rst-content .centered{text-align:center}.rst-content .sidebar{float:right;width:40%;display:block;margin:0 0 24px 24px;padding:24px;background:#f3f6f6;border:1px solid #e1e4e5}.rst-content .sidebar dl,.rst-content .sidebar p,.rst-content .sidebar 
ul{font-size:90%}.rst-content .sidebar .last,.rst-content .sidebar>:last-child{margin-bottom:0}.rst-content .sidebar .sidebar-title{display:block;font-family:Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif;font-weight:700;background:#e1e4e5;padding:6px 12px;margin:-24px -24px 24px;font-size:100%}.rst-content .highlighted{background:#f1c40f;box-shadow:0 0 0 2px #f1c40f;display:inline;font-weight:700}.rst-content .citation-reference,.rst-content .footnote-reference{vertical-align:baseline;position:relative;top:-.4em;line-height:0;font-size:90%}.rst-content .citation-reference>span.fn-bracket,.rst-content .footnote-reference>span.fn-bracket{display:none}.rst-content .hlist{width:100%}.rst-content dl dt span.classifier:before{content:" : "}.rst-content dl dt span.classifier-delimiter{display:none!important}html.writer-html4 .rst-content table.docutils.citation,html.writer-html4 .rst-content table.docutils.footnote{background:none;border:none}html.writer-html4 .rst-content table.docutils.citation td,html.writer-html4 .rst-content table.docutils.citation tr,html.writer-html4 .rst-content table.docutils.footnote td,html.writer-html4 .rst-content table.docutils.footnote tr{border:none;background-color:transparent!important;white-space:normal}html.writer-html4 .rst-content table.docutils.citation td.label,html.writer-html4 .rst-content table.docutils.footnote td.label{padding-left:0;padding-right:0;vertical-align:top}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{display:grid;grid-template-columns:auto minmax(80%,95%)}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{display:inline-grid;grid-template-columns:max-content auto}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{display:grid;grid-template-columns:auto 
auto minmax(.65rem,auto) minmax(40%,95%)}html.writer-html5 .rst-content aside.citation>span.label,html.writer-html5 .rst-content aside.footnote>span.label,html.writer-html5 .rst-content div.citation>span.label{grid-column-start:1;grid-column-end:2}html.writer-html5 .rst-content aside.citation>span.backrefs,html.writer-html5 .rst-content aside.footnote>span.backrefs,html.writer-html5 .rst-content div.citation>span.backrefs{grid-column-start:2;grid-column-end:3;grid-row-start:1;grid-row-end:3}html.writer-html5 .rst-content aside.citation>p,html.writer-html5 .rst-content aside.footnote>p,html.writer-html5 .rst-content div.citation>p{grid-column-start:4;grid-column-end:5}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.field-list,html.writer-html5 .rst-content dl.footnote{margin-bottom:24px}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dt{padding-left:1rem}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.field-list>dd,html.writer-html5 .rst-content dl.field-list>dt,html.writer-html5 .rst-content dl.footnote>dd,html.writer-html5 .rst-content dl.footnote>dt{margin-bottom:0}html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{font-size:.9rem}html.writer-html5 .rst-content dl.citation>dt,html.writer-html5 .rst-content dl.footnote>dt{margin:0 .5rem .5rem 0;line-height:1.2rem;word-break:break-all;font-weight:400}html.writer-html5 .rst-content dl.citation>dt>span.brackets:before,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:before{content:"["}html.writer-html5 .rst-content dl.citation>dt>span.brackets:after,html.writer-html5 .rst-content dl.footnote>dt>span.brackets:after{content:"]"}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref,html.writer-html5 .rst-content 
dl.footnote>dt>span.fn-backref{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a{word-break:keep-all}html.writer-html5 .rst-content dl.citation>dt>span.fn-backref>a:not(:first-child):before,html.writer-html5 .rst-content dl.footnote>dt>span.fn-backref>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content dl.citation>dd,html.writer-html5 .rst-content dl.footnote>dd{margin:0 0 .5rem;line-height:1.2rem}html.writer-html5 .rst-content dl.citation>dd p,html.writer-html5 .rst-content dl.footnote>dd p{font-size:.9rem}html.writer-html5 .rst-content aside.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content div.citation{padding-left:1rem;padding-right:1rem;font-size:.9rem;line-height:1.2rem}html.writer-html5 .rst-content aside.citation p,html.writer-html5 .rst-content aside.footnote p,html.writer-html5 .rst-content div.citation p{font-size:.9rem;line-height:1.2rem;margin-bottom:12px}html.writer-html5 .rst-content aside.citation span.backrefs,html.writer-html5 .rst-content aside.footnote span.backrefs,html.writer-html5 .rst-content div.citation span.backrefs{text-align:left;font-style:italic;margin-left:.65rem;word-break:break-word;word-spacing:-.1rem;max-width:5rem}html.writer-html5 .rst-content aside.citation span.backrefs>a,html.writer-html5 .rst-content aside.footnote span.backrefs>a,html.writer-html5 .rst-content div.citation span.backrefs>a{word-break:keep-all}html.writer-html5 .rst-content aside.citation span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content aside.footnote span.backrefs>a:not(:first-child):before,html.writer-html5 .rst-content div.citation span.backrefs>a:not(:first-child):before{content:" "}html.writer-html5 .rst-content aside.citation span.label,html.writer-html5 .rst-content aside.footnote 
span.label,html.writer-html5 .rst-content div.citation span.label{line-height:1.2rem}html.writer-html5 .rst-content aside.citation-list,html.writer-html5 .rst-content aside.footnote-list,html.writer-html5 .rst-content div.citation-list{margin-bottom:24px}html.writer-html5 .rst-content dl.option-list kbd{font-size:.9rem}.rst-content table.docutils.footnote,html.writer-html4 .rst-content table.docutils.citation,html.writer-html5 .rst-content aside.footnote,html.writer-html5 .rst-content aside.footnote-list aside.footnote,html.writer-html5 .rst-content div.citation-list>div.citation,html.writer-html5 .rst-content dl.citation,html.writer-html5 .rst-content dl.footnote{color:grey}.rst-content table.docutils.footnote code,.rst-content table.docutils.footnote tt,html.writer-html4 .rst-content table.docutils.citation code,html.writer-html4 .rst-content table.docutils.citation tt,html.writer-html5 .rst-content aside.footnote-list aside.footnote code,html.writer-html5 .rst-content aside.footnote-list aside.footnote tt,html.writer-html5 .rst-content aside.footnote code,html.writer-html5 .rst-content aside.footnote tt,html.writer-html5 .rst-content div.citation-list>div.citation code,html.writer-html5 .rst-content div.citation-list>div.citation tt,html.writer-html5 .rst-content dl.citation code,html.writer-html5 .rst-content dl.citation tt,html.writer-html5 .rst-content dl.footnote code,html.writer-html5 .rst-content dl.footnote tt{color:#555}.rst-content .wy-table-responsive.citation,.rst-content .wy-table-responsive.footnote{margin-bottom:0}.rst-content .wy-table-responsive.citation+:not(.citation),.rst-content .wy-table-responsive.footnote+:not(.footnote){margin-top:24px}.rst-content .wy-table-responsive.citation:last-child,.rst-content .wy-table-responsive.footnote:last-child{margin-bottom:24px}.rst-content table.docutils th{border-color:#e1e4e5}html.writer-html5 .rst-content table.docutils th{border:1px solid #e1e4e5}html.writer-html5 .rst-content table.docutils 
td>p,html.writer-html5 .rst-content table.docutils th>p{line-height:1rem;margin-bottom:0;font-size:.9rem}.rst-content table.docutils td .last,.rst-content table.docutils td .last>:last-child{margin-bottom:0}.rst-content table.field-list,.rst-content table.field-list td{border:none}.rst-content table.field-list td p{line-height:inherit}.rst-content table.field-list td>strong{display:inline-block}.rst-content table.field-list .field-name{padding-right:10px;text-align:left;white-space:nowrap}.rst-content table.field-list .field-body{text-align:left}.rst-content code,.rst-content tt{color:#000;font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;padding:2px 5px}.rst-content code big,.rst-content code em,.rst-content tt big,.rst-content tt em{font-size:100%!important;line-height:normal}.rst-content code.literal,.rst-content tt.literal{color:#e74c3c;white-space:normal}.rst-content code.xref,.rst-content tt.xref,a .rst-content code,a .rst-content tt{font-weight:700;color:#404040;overflow-wrap:normal}.rst-content kbd,.rst-content pre,.rst-content samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace}.rst-content a code,.rst-content a tt{color:#2980b9}.rst-content dl{margin-bottom:24px}.rst-content dl dt{font-weight:700;margin-bottom:12px}.rst-content dl ol,.rst-content dl p,.rst-content dl table,.rst-content dl ul{margin-bottom:12px}.rst-content dl dd{margin:0 0 12px 24px;line-height:24px}.rst-content dl dd>ol:last-child,.rst-content dl dd>p:last-child,.rst-content dl dd>table:last-child,.rst-content dl dd>ul:last-child{margin-bottom:0}html.writer-html4 .rst-content dl:not(.docutils),html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple){margin-bottom:24px}html.writer-html4 .rst-content dl:not(.docutils)>dt,html.writer-html5 .rst-content 
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{display:table;margin:6px 0;font-size:90%;line-height:normal;background:#e7f2fa;color:#2980b9;border-top:3px solid #6ab0de;padding:6px;position:relative}html.writer-html4 .rst-content dl:not(.docutils)>dt:before,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:before{color:#6ab0de}html.writer-html4 .rst-content dl:not(.docutils)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt{margin-bottom:6px;border:none;border-left:3px solid #ccc;background:#f0f0f0;color:#555}html.writer-html4 .rst-content dl:not(.docutils) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) dl:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt .headerlink{color:#404040;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils)>dt:first-child,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple)>dt:first-child{margin-top:0}html.writer-html4 .rst-content dl:not(.docutils) code.descclassname,html.writer-html4 .rst-content dl:not(.docutils) 
code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descclassname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{background-color:transparent;border:none;padding:0;font-size:100%!important}html.writer-html4 .rst-content dl:not(.docutils) code.descname,html.writer-html4 .rst-content dl:not(.docutils) tt.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) code.descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) tt.descname{font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .optional,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .optional{display:inline-block;padding:0 4px;color:#000;font-weight:700}html.writer-html4 .rst-content dl:not(.docutils) .property,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .property{display:inline-block;padding-right:8px;max-width:100%}html.writer-html4 .rst-content dl:not(.docutils) .k,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .k{font-style:italic}html.writer-html4 
.rst-content dl:not(.docutils) .descclassname,html.writer-html4 .rst-content dl:not(.docutils) .descname,html.writer-html4 .rst-content dl:not(.docutils) .sig-name,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descclassname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .descname,html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.citation):not(.glossary):not(.simple) .sig-name{font-family:SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace;color:#000}.rst-content .viewcode-back,.rst-content .viewcode-link{display:inline-block;color:#27ae60;font-size:80%;padding-left:24px}.rst-content .viewcode-back{display:block;float:right}.rst-content p.rubric{margin-bottom:12px;font-weight:700}.rst-content code.download,.rst-content tt.download{background:inherit;padding:inherit;font-weight:400;font-family:inherit;font-size:inherit;color:inherit;border:inherit;white-space:inherit}.rst-content code.download span:first-child,.rst-content tt.download span:first-child{-webkit-font-smoothing:subpixel-antialiased}.rst-content code.download span:first-child:before,.rst-content tt.download span:first-child:before{margin-right:4px}.rst-content .guilabel,.rst-content .menuselection{font-size:80%;font-weight:700;border-radius:4px;padding:2.4px 6px;margin:auto 2px}.rst-content .guilabel,.rst-content .menuselection{border:1px solid #7fbbe3;background:#e7f2fa}.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>.kbd,.rst-content :not(dl.option-list)>:not(dt):not(kbd):not(.kbd)>kbd{color:inherit;font-size:80%;background-color:#fff;border:1px solid #a6a6a6;border-radius:4px;box-shadow:0 2px grey;padding:2.4px 6px;margin:auto 0}.rst-content .versionmodified{font-style:italic}@media screen and (max-width:480px){.rst-content 
.sidebar{width:100%}}span[id*=MathJax-Span]{color:#404040}.math{text-align:center}@font-face{font-family:Lato;src:url(fonts/lato-normal.woff2?bd03a2cc277bbbc338d464e679fe9942) format("woff2"),url(fonts/lato-normal.woff?27bd77b9162d388cb8d4c4217c7c5e2a) format("woff");font-weight:400;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold.woff2?cccb897485813c7c256901dbca54ecf2) format("woff2"),url(fonts/lato-bold.woff?d878b6c29b10beca227e9eef4246111b) format("woff");font-weight:700;font-style:normal;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-bold-italic.woff2?0b6bb6725576b072c5d0b02ecdd1900d) format("woff2"),url(fonts/lato-bold-italic.woff?9c7e4e9eb485b4a121c760e61bc3707c) format("woff");font-weight:700;font-style:italic;font-display:block}@font-face{font-family:Lato;src:url(fonts/lato-normal-italic.woff2?4eb103b4d12be57cb1d040ed5e162e9d) format("woff2"),url(fonts/lato-normal-italic.woff?f28f2d6482446544ef1ea1ccc6dd5892) format("woff");font-weight:400;font-style:italic;font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:400;src:url(fonts/Roboto-Slab-Regular.woff2?7abf5b8d04d26a2cafea937019bca958) format("woff2"),url(fonts/Roboto-Slab-Regular.woff?c1be9284088d487c5e3ff0a10a92e58c) format("woff");font-display:block}@font-face{font-family:Roboto Slab;font-style:normal;font-weight:700;src:url(fonts/Roboto-Slab-Bold.woff2?9984f4a9bda09be08e83f2506954adbe) format("woff2"),url(fonts/Roboto-Slab-Bold.woff?bed5564a116b05148e3b3bea6fb1162a) format("woff");font-display:block} \ No newline at end of file diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 00000000..d06a71d7 --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? 
singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + 
event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 00000000..abaa5cd8 --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,14 @@ +var DOCUMENTATION_OPTIONS = { + URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), + VERSION: '2.0.4', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 00000000..a858a410 Binary files /dev/null and b/_static/file.png differ diff --git a/_static/hnx_logo_smaller.png b/_static/hnx_logo_smaller.png new file mode 100644 index 00000000..7f8d5447 Binary files /dev/null and b/_static/hnx_logo_smaller.png differ diff --git a/_static/jquery.js b/_static/jquery.js new file mode 100644 index 00000000..c4c6022f --- /dev/null +++ b/_static/jquery.js @@ -0,0 +1,2 @@ +/*! 
jQuery v3.6.0 | (c) OpenJS Foundation and other contributors | jquery.org/license */ +!function(e,t){"use strict";"object"==typeof module&&"object"==typeof module.exports?module.exports=e.document?t(e,!0):function(e){if(!e.document)throw new Error("jQuery requires a window with a document");return t(e)}:t(e)}("undefined"!=typeof window?window:this,function(C,e){"use strict";var t=[],r=Object.getPrototypeOf,s=t.slice,g=t.flat?function(e){return t.flat.call(e)}:function(e){return t.concat.apply([],e)},u=t.push,i=t.indexOf,n={},o=n.toString,v=n.hasOwnProperty,a=v.toString,l=a.call(Object),y={},m=function(e){return"function"==typeof e&&"number"!=typeof e.nodeType&&"function"!=typeof e.item},x=function(e){return null!=e&&e===e.window},E=C.document,c={type:!0,src:!0,nonce:!0,noModule:!0};function b(e,t,n){var r,i,o=(n=n||E).createElement("script");if(o.text=e,t)for(r in c)(i=t[r]||t.getAttribute&&t.getAttribute(r))&&o.setAttribute(r,i);n.head.appendChild(o).parentNode.removeChild(o)}function w(e){return null==e?e+"":"object"==typeof e||"function"==typeof e?n[o.call(e)]||"object":typeof e}var f="3.6.0",S=function(e,t){return new S.fn.init(e,t)};function p(e){var t=!!e&&"length"in e&&e.length,n=w(e);return!m(e)&&!x(e)&&("array"===n||0===t||"number"==typeof t&&0+~]|"+M+")"+M+"*"),U=new RegExp(M+"|>"),X=new RegExp(F),V=new RegExp("^"+I+"$"),G={ID:new RegExp("^#("+I+")"),CLASS:new RegExp("^\\.("+I+")"),TAG:new RegExp("^("+I+"|[*])"),ATTR:new RegExp("^"+W),PSEUDO:new RegExp("^"+F),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+R+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/HTML$/i,Q=/^(?:input|select|textarea|button)$/i,J=/^h\d$/i,K=/^[^{]+\{\s*\[native \w/,Z=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ee=/[+~]/,te=new 
RegExp("\\\\[\\da-fA-F]{1,6}"+M+"?|\\\\([^\\r\\n\\f])","g"),ne=function(e,t){var n="0x"+e.slice(1)-65536;return t||(n<0?String.fromCharCode(n+65536):String.fromCharCode(n>>10|55296,1023&n|56320))},re=/([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g,ie=function(e,t){return t?"\0"===e?"\ufffd":e.slice(0,-1)+"\\"+e.charCodeAt(e.length-1).toString(16)+" ":"\\"+e},oe=function(){T()},ae=be(function(e){return!0===e.disabled&&"fieldset"===e.nodeName.toLowerCase()},{dir:"parentNode",next:"legend"});try{H.apply(t=O.call(p.childNodes),p.childNodes),t[p.childNodes.length].nodeType}catch(e){H={apply:t.length?function(e,t){L.apply(e,O.call(t))}:function(e,t){var n=e.length,r=0;while(e[n++]=t[r++]);e.length=n-1}}}function se(t,e,n,r){var i,o,a,s,u,l,c,f=e&&e.ownerDocument,p=e?e.nodeType:9;if(n=n||[],"string"!=typeof t||!t||1!==p&&9!==p&&11!==p)return n;if(!r&&(T(e),e=e||C,E)){if(11!==p&&(u=Z.exec(t)))if(i=u[1]){if(9===p){if(!(a=e.getElementById(i)))return n;if(a.id===i)return n.push(a),n}else if(f&&(a=f.getElementById(i))&&y(e,a)&&a.id===i)return n.push(a),n}else{if(u[2])return H.apply(n,e.getElementsByTagName(t)),n;if((i=u[3])&&d.getElementsByClassName&&e.getElementsByClassName)return H.apply(n,e.getElementsByClassName(i)),n}if(d.qsa&&!N[t+" "]&&(!v||!v.test(t))&&(1!==p||"object"!==e.nodeName.toLowerCase())){if(c=t,f=e,1===p&&(U.test(t)||z.test(t))){(f=ee.test(t)&&ye(e.parentNode)||e)===e&&d.scope||((s=e.getAttribute("id"))?s=s.replace(re,ie):e.setAttribute("id",s=S)),o=(l=h(t)).length;while(o--)l[o]=(s?"#"+s:":scope")+" "+xe(l[o]);c=l.join(",")}try{return H.apply(n,f.querySelectorAll(c)),n}catch(e){N(t,!0)}finally{s===S&&e.removeAttribute("id")}}}return g(t.replace($,"$1"),e,n,r)}function ue(){var r=[];return function e(t,n){return r.push(t+" ")>b.cacheLength&&delete e[r.shift()],e[t+" "]=n}}function le(e){return e[S]=!0,e}function ce(e){var t=C.createElement("fieldset");try{return!!e(t)}catch(e){return!1}finally{t.parentNode&&t.parentNode.removeChild(t),t=null}}function 
fe(e,t){var n=e.split("|"),r=n.length;while(r--)b.attrHandle[n[r]]=t}function pe(e,t){var n=t&&e,r=n&&1===e.nodeType&&1===t.nodeType&&e.sourceIndex-t.sourceIndex;if(r)return r;if(n)while(n=n.nextSibling)if(n===t)return-1;return e?1:-1}function de(t){return function(e){return"input"===e.nodeName.toLowerCase()&&e.type===t}}function he(n){return function(e){var t=e.nodeName.toLowerCase();return("input"===t||"button"===t)&&e.type===n}}function ge(t){return function(e){return"form"in e?e.parentNode&&!1===e.disabled?"label"in e?"label"in e.parentNode?e.parentNode.disabled===t:e.disabled===t:e.isDisabled===t||e.isDisabled!==!t&&ae(e)===t:e.disabled===t:"label"in e&&e.disabled===t}}function ve(a){return le(function(o){return o=+o,le(function(e,t){var n,r=a([],e.length,o),i=r.length;while(i--)e[n=r[i]]&&(e[n]=!(t[n]=e[n]))})})}function ye(e){return e&&"undefined"!=typeof e.getElementsByTagName&&e}for(e in d=se.support={},i=se.isXML=function(e){var t=e&&e.namespaceURI,n=e&&(e.ownerDocument||e).documentElement;return!Y.test(t||n&&n.nodeName||"HTML")},T=se.setDocument=function(e){var t,n,r=e?e.ownerDocument||e:p;return r!=C&&9===r.nodeType&&r.documentElement&&(a=(C=r).documentElement,E=!i(C),p!=C&&(n=C.defaultView)&&n.top!==n&&(n.addEventListener?n.addEventListener("unload",oe,!1):n.attachEvent&&n.attachEvent("onunload",oe)),d.scope=ce(function(e){return a.appendChild(e).appendChild(C.createElement("div")),"undefined"!=typeof e.querySelectorAll&&!e.querySelectorAll(":scope fieldset div").length}),d.attributes=ce(function(e){return e.className="i",!e.getAttribute("className")}),d.getElementsByTagName=ce(function(e){return e.appendChild(C.createComment("")),!e.getElementsByTagName("*").length}),d.getElementsByClassName=K.test(C.getElementsByClassName),d.getById=ce(function(e){return a.appendChild(e).id=S,!C.getElementsByName||!C.getElementsByName(S).length}),d.getById?(b.filter.ID=function(e){var t=e.replace(te,ne);return function(e){return 
e.getAttribute("id")===t}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n=t.getElementById(e);return n?[n]:[]}}):(b.filter.ID=function(e){var n=e.replace(te,ne);return function(e){var t="undefined"!=typeof e.getAttributeNode&&e.getAttributeNode("id");return t&&t.value===n}},b.find.ID=function(e,t){if("undefined"!=typeof t.getElementById&&E){var n,r,i,o=t.getElementById(e);if(o){if((n=o.getAttributeNode("id"))&&n.value===e)return[o];i=t.getElementsByName(e),r=0;while(o=i[r++])if((n=o.getAttributeNode("id"))&&n.value===e)return[o]}return[]}}),b.find.TAG=d.getElementsByTagName?function(e,t){return"undefined"!=typeof t.getElementsByTagName?t.getElementsByTagName(e):d.qsa?t.querySelectorAll(e):void 0}:function(e,t){var n,r=[],i=0,o=t.getElementsByTagName(e);if("*"===e){while(n=o[i++])1===n.nodeType&&r.push(n);return r}return o},b.find.CLASS=d.getElementsByClassName&&function(e,t){if("undefined"!=typeof t.getElementsByClassName&&E)return t.getElementsByClassName(e)},s=[],v=[],(d.qsa=K.test(C.querySelectorAll))&&(ce(function(e){var t;a.appendChild(e).innerHTML="",e.querySelectorAll("[msallowcapture^='']").length&&v.push("[*^$]="+M+"*(?:''|\"\")"),e.querySelectorAll("[selected]").length||v.push("\\["+M+"*(?:value|"+R+")"),e.querySelectorAll("[id~="+S+"-]").length||v.push("~="),(t=C.createElement("input")).setAttribute("name",""),e.appendChild(t),e.querySelectorAll("[name='']").length||v.push("\\["+M+"*name"+M+"*="+M+"*(?:''|\"\")"),e.querySelectorAll(":checked").length||v.push(":checked"),e.querySelectorAll("a#"+S+"+*").length||v.push(".#.+[+~]"),e.querySelectorAll("\\\f"),v.push("[\\r\\n\\f]")}),ce(function(e){e.innerHTML="";var 
t=C.createElement("input");t.setAttribute("type","hidden"),e.appendChild(t).setAttribute("name","D"),e.querySelectorAll("[name=d]").length&&v.push("name"+M+"*[*^$|!~]?="),2!==e.querySelectorAll(":enabled").length&&v.push(":enabled",":disabled"),a.appendChild(e).disabled=!0,2!==e.querySelectorAll(":disabled").length&&v.push(":enabled",":disabled"),e.querySelectorAll("*,:x"),v.push(",.*:")})),(d.matchesSelector=K.test(c=a.matches||a.webkitMatchesSelector||a.mozMatchesSelector||a.oMatchesSelector||a.msMatchesSelector))&&ce(function(e){d.disconnectedMatch=c.call(e,"*"),c.call(e,"[s!='']:x"),s.push("!=",F)}),v=v.length&&new RegExp(v.join("|")),s=s.length&&new RegExp(s.join("|")),t=K.test(a.compareDocumentPosition),y=t||K.test(a.contains)?function(e,t){var n=9===e.nodeType?e.documentElement:e,r=t&&t.parentNode;return e===r||!(!r||1!==r.nodeType||!(n.contains?n.contains(r):e.compareDocumentPosition&&16&e.compareDocumentPosition(r)))}:function(e,t){if(t)while(t=t.parentNode)if(t===e)return!0;return!1},j=t?function(e,t){if(e===t)return l=!0,0;var n=!e.compareDocumentPosition-!t.compareDocumentPosition;return n||(1&(n=(e.ownerDocument||e)==(t.ownerDocument||t)?e.compareDocumentPosition(t):1)||!d.sortDetached&&t.compareDocumentPosition(e)===n?e==C||e.ownerDocument==p&&y(p,e)?-1:t==C||t.ownerDocument==p&&y(p,t)?1:u?P(u,e)-P(u,t):0:4&n?-1:1)}:function(e,t){if(e===t)return l=!0,0;var n,r=0,i=e.parentNode,o=t.parentNode,a=[e],s=[t];if(!i||!o)return e==C?-1:t==C?1:i?-1:o?1:u?P(u,e)-P(u,t):0;if(i===o)return pe(e,t);n=e;while(n=n.parentNode)a.unshift(n);n=t;while(n=n.parentNode)s.unshift(n);while(a[r]===s[r])r++;return r?pe(a[r],s[r]):a[r]==p?-1:s[r]==p?1:0}),C},se.matches=function(e,t){return se(e,null,null,t)},se.matchesSelector=function(e,t){if(T(e),d.matchesSelector&&E&&!N[t+" "]&&(!s||!s.test(t))&&(!v||!v.test(t)))try{var n=c.call(e,t);if(n||d.disconnectedMatch||e.document&&11!==e.document.nodeType)return n}catch(e){N(t,!0)}return 0":{dir:"parentNode",first:!0}," 
":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace(te,ne),e[3]=(e[3]||e[4]||e[5]||"").replace(te,ne),"~="===e[2]&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),"nth"===e[1].slice(0,3)?(e[3]||se.error(e[0]),e[4]=+(e[4]?e[5]+(e[6]||1):2*("even"===e[3]||"odd"===e[3])),e[5]=+(e[7]+e[8]||"odd"===e[3])):e[3]&&se.error(e[0]),e},PSEUDO:function(e){var t,n=!e[6]&&e[2];return G.CHILD.test(e[0])?null:(e[3]?e[2]=e[4]||e[5]||"":n&&X.test(n)&&(t=h(n,!0))&&(t=n.indexOf(")",n.length-t)-n.length)&&(e[0]=e[0].slice(0,t),e[2]=n.slice(0,t)),e.slice(0,3))}},filter:{TAG:function(e){var t=e.replace(te,ne).toLowerCase();return"*"===e?function(){return!0}:function(e){return e.nodeName&&e.nodeName.toLowerCase()===t}},CLASS:function(e){var t=m[e+" "];return t||(t=new RegExp("(^|"+M+")"+e+"("+M+"|$)"))&&m(e,function(e){return t.test("string"==typeof e.className&&e.className||"undefined"!=typeof e.getAttribute&&e.getAttribute("class")||"")})},ATTR:function(n,r,i){return function(e){var t=se.attr(e,n);return null==t?"!="===r:!r||(t+="","="===r?t===i:"!="===r?t!==i:"^="===r?i&&0===t.indexOf(i):"*="===r?i&&-1:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i;function j(e,n,r){return m(n)?S.grep(e,function(e,t){return!!n.call(e,t,e)!==r}):n.nodeType?S.grep(e,function(e){return e===n!==r}):"string"!=typeof n?S.grep(e,function(e){return-1)[^>]*|#([\w-]+))$/;(S.fn.init=function(e,t,n){var r,i;if(!e)return this;if(n=n||D,"string"==typeof e){if(!(r="<"===e[0]&&">"===e[e.length-1]&&3<=e.length?[null,e,null]:q.exec(e))||!r[1]&&t)return!t||t.jquery?(t||n).find(e):this.constructor(t).find(e);if(r[1]){if(t=t instanceof S?t[0]:t,S.merge(this,S.parseHTML(r[1],t&&t.nodeType?t.ownerDocument||t:E,!0)),N.test(r[1])&&S.isPlainObject(t))for(r in t)m(this[r])?this[r](t[r]):this.attr(r,t[r]);return this}return(i=E.getElementById(r[2]))&&(this[0]=i,this.length=1),this}return 
e.nodeType?(this[0]=e,this.length=1,this):m(e)?void 0!==n.ready?n.ready(e):e(S):S.makeArray(e,this)}).prototype=S.fn,D=S(E);var L=/^(?:parents|prev(?:Until|All))/,H={children:!0,contents:!0,next:!0,prev:!0};function O(e,t){while((e=e[t])&&1!==e.nodeType);return e}S.fn.extend({has:function(e){var t=S(e,this),n=t.length;return this.filter(function(){for(var e=0;e\x20\t\r\n\f]*)/i,he=/^$|^module$|\/(?:java|ecma)script/i;ce=E.createDocumentFragment().appendChild(E.createElement("div")),(fe=E.createElement("input")).setAttribute("type","radio"),fe.setAttribute("checked","checked"),fe.setAttribute("name","t"),ce.appendChild(fe),y.checkClone=ce.cloneNode(!0).cloneNode(!0).lastChild.checked,ce.innerHTML="",y.noCloneChecked=!!ce.cloneNode(!0).lastChild.defaultValue,ce.innerHTML="",y.option=!!ce.lastChild;var ge={thead:[1,"","
"],col:[2,"","
"],tr:[2,"","
"],td:[3,"","
"],_default:[0,"",""]};function ve(e,t){var n;return n="undefined"!=typeof e.getElementsByTagName?e.getElementsByTagName(t||"*"):"undefined"!=typeof e.querySelectorAll?e.querySelectorAll(t||"*"):[],void 0===t||t&&A(e,t)?S.merge([e],n):n}function ye(e,t){for(var n=0,r=e.length;n",""]);var me=/<|&#?\w+;/;function xe(e,t,n,r,i){for(var o,a,s,u,l,c,f=t.createDocumentFragment(),p=[],d=0,h=e.length;d\s*$/g;function je(e,t){return A(e,"table")&&A(11!==t.nodeType?t:t.firstChild,"tr")&&S(e).children("tbody")[0]||e}function De(e){return e.type=(null!==e.getAttribute("type"))+"/"+e.type,e}function qe(e){return"true/"===(e.type||"").slice(0,5)?e.type=e.type.slice(5):e.removeAttribute("type"),e}function Le(e,t){var n,r,i,o,a,s;if(1===t.nodeType){if(Y.hasData(e)&&(s=Y.get(e).events))for(i in Y.remove(t,"handle events"),s)for(n=0,r=s[i].length;n").attr(n.scriptAttrs||{}).prop({charset:n.scriptCharset,src:n.url}).on("load error",i=function(e){r.remove(),i=null,e&&t("error"===e.type?404:200,e.type)}),E.head.appendChild(r[0])},abort:function(){i&&i()}}});var _t,zt=[],Ut=/(=)\?(?=&|$)|\?\?/;S.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=zt.pop()||S.expando+"_"+wt.guid++;return this[e]=!0,e}}),S.ajaxPrefilter("json jsonp",function(e,t,n){var r,i,o,a=!1!==e.jsonp&&(Ut.test(e.url)?"url":"string"==typeof e.data&&0===(e.contentType||"").indexOf("application/x-www-form-urlencoded")&&Ut.test(e.data)&&"data");if(a||"jsonp"===e.dataTypes[0])return r=e.jsonpCallback=m(e.jsonpCallback)?e.jsonpCallback():e.jsonpCallback,a?e[a]=e[a].replace(Ut,"$1"+r):!1!==e.jsonp&&(e.url+=(Tt.test(e.url)?"&":"?")+e.jsonp+"="+r),e.converters["script json"]=function(){return o||S.error(r+" was not called"),o[0]},e.dataTypes[0]="json",i=C[r],C[r]=function(){o=arguments},n.always(function(){void 0===i?S(C).removeProp(r):C[r]=i,e[r]&&(e.jsonpCallback=t.jsonpCallback,zt.push(r)),o&&m(i)&&i(o[0]),o=i=void 0}),"script"}),y.createHTMLDocument=((_t=E.implementation.createHTMLDocument("").body).innerHTML="
",2===_t.childNodes.length),S.parseHTML=function(e,t,n){return"string"!=typeof e?[]:("boolean"==typeof t&&(n=t,t=!1),t||(y.createHTMLDocument?((r=(t=E.implementation.createHTMLDocument("")).createElement("base")).href=E.location.href,t.head.appendChild(r)):t=E),o=!n&&[],(i=N.exec(e))?[t.createElement(i[1])]:(i=xe([e],t,o),o&&o.length&&S(o).remove(),S.merge([],i.childNodes)));var r,i,o},S.fn.load=function(e,t,n){var r,i,o,a=this,s=e.indexOf(" ");return-1").append(S.parseHTML(e)).find(r):e)}).always(n&&function(e,t){a.each(function(){n.apply(this,o||[e.responseText,t,e])})}),this},S.expr.pseudos.animated=function(t){return S.grep(S.timers,function(e){return t===e.elem}).length},S.offset={setOffset:function(e,t,n){var r,i,o,a,s,u,l=S.css(e,"position"),c=S(e),f={};"static"===l&&(e.style.position="relative"),s=c.offset(),o=S.css(e,"top"),u=S.css(e,"left"),("absolute"===l||"fixed"===l)&&-1<(o+u).indexOf("auto")?(a=(r=c.position()).top,i=r.left):(a=parseFloat(o)||0,i=parseFloat(u)||0),m(t)&&(t=t.call(e,n,S.extend({},s))),null!=t.top&&(f.top=t.top-s.top+a),null!=t.left&&(f.left=t.left-s.left+i),"using"in t?t.using.call(e,f):c.css(f)}},S.fn.extend({offset:function(t){if(arguments.length)return void 0===t?this:this.each(function(e){S.offset.setOffset(this,t,e)});var e,n,r=this[0];return r?r.getClientRects().length?(e=r.getBoundingClientRect(),n=r.ownerDocument.defaultView,{top:e.top+n.pageYOffset,left:e.left+n.pageXOffset}):{top:0,left:0}:void 0},position:function(){if(this[0]){var 
e,t,n,r=this[0],i={top:0,left:0};if("fixed"===S.css(r,"position"))t=r.getBoundingClientRect();else{t=this.offset(),n=r.ownerDocument,e=r.offsetParent||n.documentElement;while(e&&(e===n.body||e===n.documentElement)&&"static"===S.css(e,"position"))e=e.parentNode;e&&e!==r&&1===e.nodeType&&((i=S(e).offset()).top+=S.css(e,"borderTopWidth",!0),i.left+=S.css(e,"borderLeftWidth",!0))}return{top:t.top-i.top-S.css(r,"marginTop",!0),left:t.left-i.left-S.css(r,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var e=this.offsetParent;while(e&&"static"===S.css(e,"position"))e=e.offsetParent;return e||re})}}),S.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(t,i){var o="pageYOffset"===i;S.fn[t]=function(e){return $(this,function(e,t,n){var r;if(x(e)?r=e:9===e.nodeType&&(r=e.defaultView),void 0===n)return r?r[i]:e[t];r?r.scrollTo(o?r.pageXOffset:n,o?n:r.pageYOffset):e[t]=n},t,e,arguments.length)}}),S.each(["top","left"],function(e,n){S.cssHooks[n]=Fe(y.pixelPosition,function(e,t){if(t)return t=We(e,n),Pe.test(t)?S(e).position()[n]+"px":t})}),S.each({Height:"height",Width:"width"},function(a,s){S.each({padding:"inner"+a,content:s,"":"outer"+a},function(r,o){S.fn[o]=function(e,t){var n=arguments.length&&(r||"boolean"!=typeof e),i=r||(!0===e||!0===t?"margin":"border");return $(this,function(e,t,n){var r;return x(e)?0===o.indexOf("outer")?e["inner"+a]:e.document.documentElement["client"+a]:9===e.nodeType?(r=e.documentElement,Math.max(e.body["scroll"+a],r["scroll"+a],e.body["offset"+a],r["offset"+a],r["client"+a])):void 0===n?S.css(e,t,i):S.style(e,t,n,i)},s,n?e:void 0,n)}})}),S.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(e,t){S.fn[t]=function(e){return this.on(t,e)}}),S.fn.extend({bind:function(e,t,n){return this.on(e,null,t,n)},unbind:function(e,t){return this.off(e,null,t)},delegate:function(e,t,n,r){return this.on(t,e,n,r)},undelegate:function(e,t,n){return 
1===arguments.length?this.off(e,"**"):this.off(t,e||"**",n)},hover:function(e,t){return this.mouseenter(e).mouseleave(t||e)}}),S.each("blur focus focusin focusout resize scroll click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup contextmenu".split(" "),function(e,n){S.fn[n]=function(e,t){return 0",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=y.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=y.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),y.elements=c+" "+a,j(b)}function f(a){var b=x[a[v]];return b||(b={},w++,a[v]=w,x[w]=b),b}function g(a,c,d){if(c||(c=b),q)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():u.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||t.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),q)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return y.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(y,b.frag)}function j(a){a||(a=b);var d=f(a);return!y.shivCSS||p||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),q||i(a,d),a}function k(a){for(var b,c=a.getElementsByTagName("*"),e=c.length,f=RegExp("^(?:"+d().join("|")+")$","i"),g=[];e--;)b=c[e],f.test(b.nodeName)&&g.push(b.applyElement(l(b)));return g}function l(a){for(var 
b,c=a.attributes,d=c.length,e=a.ownerDocument.createElement(A+":"+a.nodeName);d--;)b=c[d],b.specified&&e.setAttribute(b.nodeName,b.nodeValue);return e.style.cssText=a.style.cssText,e}function m(a){for(var b,c=a.split("{"),e=c.length,f=RegExp("(^|[\\s,>+~])("+d().join("|")+")(?=[[\\s,>+~#.:]|$)","gi"),g="$1"+A+"\\:$2";e--;)b=c[e]=c[e].split("}"),b[b.length-1]=b[b.length-1].replace(f,g),c[e]=b.join("}");return c.join("{")}function n(a){for(var b=a.length;b--;)a[b].removeNode()}function o(a){function b(){clearTimeout(g._removeSheetTimer),d&&d.removeNode(!0),d=null}var d,e,g=f(a),h=a.namespaces,i=a.parentWindow;return!B||a.printShived?a:("undefined"==typeof h[A]&&h.add(A),i.attachEvent("onbeforeprint",function(){b();for(var f,g,h,i=a.styleSheets,j=[],l=i.length,n=Array(l);l--;)n[l]=i[l];for(;h=n.pop();)if(!h.disabled&&z.test(h.media)){try{f=h.imports,g=f.length}catch(o){g=0}for(l=0;g>l;l++)n.push(f[l]);try{j.push(h.cssText)}catch(o){}}j=m(j.reverse().join("")),e=k(a),d=c(a,j)}),i.attachEvent("onafterprint",function(){n(e),clearTimeout(g._removeSheetTimer),g._removeSheetTimer=setTimeout(b,500)}),a.printShived=!0,a)}var p,q,r="3.7.3",s=a.html5||{},t=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,u=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,v="_html5shiv",w=0,x={};!function(){try{var a=b.createElement("a");a.innerHTML="",p="hidden"in a,q=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){p=!0,q=!0}}();var y={elements:s.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time 
video",version:r,shivCSS:s.shivCSS!==!1,supportsUnknownElements:q,shivMethods:s.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=y,j(b);var z=/^$|\b(?:all|print)\b/,A="html5shiv",B=!q&&function(){var c=b.documentElement;return!("undefined"==typeof b.namespaces||"undefined"==typeof b.parentWindow||"undefined"==typeof c.applyElement||"undefined"==typeof c.removeNode||"undefined"==typeof a.attachEvent)}();y.type+=" print",y.shivPrint=o,o(b),"object"==typeof module&&module.exports&&(module.exports=y)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/html5shiv.min.js b/_static/js/html5shiv.min.js new file mode 100644 index 00000000..cd1c674f --- /dev/null +++ b/_static/js/html5shiv.min.js @@ -0,0 +1,4 @@ +/** +* @preserve HTML5 Shiv 3.7.3 | @afarkas @jdalton @jon_neal @rem | MIT/GPL2 Licensed +*/ +!function(a,b){function c(a,b){var c=a.createElement("p"),d=a.getElementsByTagName("head")[0]||a.documentElement;return c.innerHTML="x",d.insertBefore(c.lastChild,d.firstChild)}function d(){var a=t.elements;return"string"==typeof a?a.split(" "):a}function e(a,b){var c=t.elements;"string"!=typeof c&&(c=c.join(" ")),"string"!=typeof a&&(a=a.join(" ")),t.elements=c+" "+a,j(b)}function f(a){var b=s[a[q]];return b||(b={},r++,a[q]=r,s[r]=b),b}function g(a,c,d){if(c||(c=b),l)return c.createElement(a);d||(d=f(c));var e;return e=d.cache[a]?d.cache[a].cloneNode():p.test(a)?(d.cache[a]=d.createElem(a)).cloneNode():d.createElem(a),!e.canHaveChildren||o.test(a)||e.tagUrn?e:d.frag.appendChild(e)}function h(a,c){if(a||(a=b),l)return a.createDocumentFragment();c=c||f(a);for(var e=c.frag.cloneNode(),g=0,h=d(),i=h.length;i>g;g++)e.createElement(h[g]);return e}function i(a,b){b.cache||(b.cache={},b.createElem=a.createElement,b.createFrag=a.createDocumentFragment,b.frag=b.createFrag()),a.createElement=function(c){return 
t.shivMethods?g(c,a,b):b.createElem(c)},a.createDocumentFragment=Function("h,f","return function(){var n=f.cloneNode(),c=n.createElement;h.shivMethods&&("+d().join().replace(/[\w\-:]+/g,function(a){return b.createElem(a),b.frag.createElement(a),'c("'+a+'")'})+");return n}")(t,b.frag)}function j(a){a||(a=b);var d=f(a);return!t.shivCSS||k||d.hasCSS||(d.hasCSS=!!c(a,"article,aside,dialog,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}mark{background:#FF0;color:#000}template{display:none}")),l||i(a,d),a}var k,l,m="3.7.3-pre",n=a.html5||{},o=/^<|^(?:button|map|select|textarea|object|iframe|option|optgroup)$/i,p=/^(?:a|b|code|div|fieldset|h1|h2|h3|h4|h5|h6|i|label|li|ol|p|q|span|strong|style|table|tbody|td|th|tr|ul)$/i,q="_html5shiv",r=0,s={};!function(){try{var a=b.createElement("a");a.innerHTML="",k="hidden"in a,l=1==a.childNodes.length||function(){b.createElement("a");var a=b.createDocumentFragment();return"undefined"==typeof a.cloneNode||"undefined"==typeof a.createDocumentFragment||"undefined"==typeof a.createElement}()}catch(c){k=!0,l=!0}}();var t={elements:n.elements||"abbr article aside audio bdi canvas data datalist details dialog figcaption figure footer header hgroup main mark meter nav output picture progress section summary template time video",version:m,shivCSS:n.shivCSS!==!1,supportsUnknownElements:l,shivMethods:n.shivMethods!==!1,type:"default",shivDocument:j,createElement:g,createDocumentFragment:h,addElements:e};a.html5=t,j(b),"object"==typeof module&&module.exports&&(module.exports=t)}("undefined"!=typeof window?window:this,document); \ No newline at end of file diff --git a/_static/js/theme.js b/_static/js/theme.js new file mode 100644 index 00000000..1fddb6ee --- /dev/null +++ b/_static/js/theme.js @@ -0,0 +1 @@ +!function(n){var e={};function t(i){if(e[i])return e[i].exports;var o=e[i]={i:i,l:!1,exports:{}};return 
n[i].call(o.exports,o,o.exports,t),o.l=!0,o.exports}t.m=n,t.c=e,t.d=function(n,e,i){t.o(n,e)||Object.defineProperty(n,e,{enumerable:!0,get:i})},t.r=function(n){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(n,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(n,"__esModule",{value:!0})},t.t=function(n,e){if(1&e&&(n=t(n)),8&e)return n;if(4&e&&"object"==typeof n&&n&&n.__esModule)return n;var i=Object.create(null);if(t.r(i),Object.defineProperty(i,"default",{enumerable:!0,value:n}),2&e&&"string"!=typeof n)for(var o in n)t.d(i,o,function(e){return n[e]}.bind(null,o));return i},t.n=function(n){var e=n&&n.__esModule?function(){return n.default}:function(){return n};return t.d(e,"a",e),e},t.o=function(n,e){return Object.prototype.hasOwnProperty.call(n,e)},t.p="",t(t.s=0)}([function(n,e,t){t(1),n.exports=t(3)},function(n,e,t){(function(){var e="undefined"!=typeof window?window.jQuery:t(2);n.exports.ThemeNav={navBar:null,win:null,winScroll:!1,winResize:!1,linkScroll:!1,winPosition:0,winHeight:null,docHeight:null,isRunning:!1,enable:function(n){var t=this;void 0===n&&(n=!0),t.isRunning||(t.isRunning=!0,e((function(e){t.init(e),t.reset(),t.win.on("hashchange",t.reset),n&&t.win.on("scroll",(function(){t.linkScroll||t.winScroll||(t.winScroll=!0,requestAnimationFrame((function(){t.onScroll()})))})),t.win.on("resize",(function(){t.winResize||(t.winResize=!0,requestAnimationFrame((function(){t.onResize()})))})),t.onResize()})))},enableSticky:function(){this.enable(!0)},init:function(n){n(document);var e=this;this.navBar=n("div.wy-side-scroll:first"),this.win=n(window),n(document).on("click","[data-toggle='wy-nav-top']",(function(){n("[data-toggle='wy-nav-shift']").toggleClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift")})).on("click",".wy-menu-vertical .current ul li a",(function(){var 
t=n(this);n("[data-toggle='wy-nav-shift']").removeClass("shift"),n("[data-toggle='rst-versions']").toggleClass("shift"),e.toggleCurrent(t),e.hashChange()})).on("click","[data-toggle='rst-current-version']",(function(){n("[data-toggle='rst-versions']").toggleClass("shift-up")})),n("table.docutils:not(.field-list,.footnote,.citation)").wrap("
"),n("table.docutils.footnote").wrap("
"),n("table.docutils.citation").wrap("
"),n(".wy-menu-vertical ul").not(".simple").siblings("a").each((function(){var t=n(this);expand=n(''),expand.on("click",(function(n){return e.toggleCurrent(t),n.stopPropagation(),!1})),t.prepend(expand)}))},reset:function(){var n=encodeURI(window.location.hash)||"#";try{var e=$(".wy-menu-vertical"),t=e.find('[href="'+n+'"]');if(0===t.length){var i=$('.document [id="'+n.substring(1)+'"]').closest("div.section");0===(t=e.find('[href="#'+i.attr("id")+'"]')).length&&(t=e.find('[href="#"]'))}if(t.length>0){$(".wy-menu-vertical .current").removeClass("current").attr("aria-expanded","false"),t.addClass("current").attr("aria-expanded","true"),t.closest("li.toctree-l1").parent().addClass("current").attr("aria-expanded","true");for(let n=1;n<=10;n++)t.closest("li.toctree-l"+n).addClass("current").attr("aria-expanded","true");t[0].scrollIntoView()}}catch(n){console.log("Error expanding nav for anchor",n)}},onScroll:function(){this.winScroll=!1;var n=this.win.scrollTop(),e=n+this.winHeight,t=this.navBar.scrollTop()+(n-this.winPosition);n<0||e>this.docHeight||(this.navBar.scrollTop(t),this.winPosition=n)},onResize:function(){this.winResize=!1,this.winHeight=this.win.height(),this.docHeight=$(document).height()},hashChange:function(){this.linkScroll=!0,this.win.one("hashchange",(function(){this.linkScroll=!1}))},toggleCurrent:function(n){var e=n.closest("li");e.siblings("li.current").removeClass("current").attr("aria-expanded","false"),e.siblings().find("li.current").removeClass("current").attr("aria-expanded","false");var t=e.find("> ul li");t.length&&(t.removeClass("current").attr("aria-expanded","false"),e.toggleClass("current").attr("aria-expanded",(function(n,e){return"true"==e?"false":"true"})))}},"undefined"!=typeof window&&(window.SphinxRtdTheme={Navigation:n.exports.ThemeNav,StickyNav:n.exports.ThemeNav}),function(){for(var n=0,e=["ms","moz","webkit","o"],t=0;t0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" 
+ V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" + v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = 
re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 00000000..d96755fd Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 00000000..7107cec9 Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 00000000..0d49244e --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: inherit; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #eeffcc; } +.highlight .c { color: #408090; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #007020; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { 
color: #408090; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #408090; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #007020 } /* Comment.Preproc */ +.highlight .cpf { color: #408090; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #408090; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #FF0000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #00A000 } /* Generic.Inserted */ +.highlight .go { color: #333333 } /* Generic.Output */ +.highlight .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #007020; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #007020 } /* Keyword.Pseudo */ +.highlight .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #902000 } /* Keyword.Type */ +.highlight .m { color: #208050 } /* Literal.Number */ +.highlight .s { color: #4070a0 } /* Literal.String */ +.highlight .na { color: #4070a0 } /* Name.Attribute */ +.highlight .nb { color: #007020 } /* Name.Builtin */ +.highlight .nc { color: #0e84b5; font-weight: bold } /* Name.Class */ +.highlight .no { color: #60add5 } /* Name.Constant */ +.highlight .nd { color: #555555; font-weight: bold } /* Name.Decorator */ 
+.highlight .ni { color: #d55537; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #007020 } /* Name.Exception */ +.highlight .nf { color: #06287e } /* Name.Function */ +.highlight .nl { color: #002070; font-weight: bold } /* Name.Label */ +.highlight .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #062873; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #bb60d5 } /* Name.Variable */ +.highlight .ow { color: #007020; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #208050 } /* Literal.Number.Bin */ +.highlight .mf { color: #208050 } /* Literal.Number.Float */ +.highlight .mh { color: #208050 } /* Literal.Number.Hex */ +.highlight .mi { color: #208050 } /* Literal.Number.Integer */ +.highlight .mo { color: #208050 } /* Literal.Number.Oct */ +.highlight .sa { color: #4070a0 } /* Literal.String.Affix */ +.highlight .sb { color: #4070a0 } /* Literal.String.Backtick */ +.highlight .sc { color: #4070a0 } /* Literal.String.Char */ +.highlight .dl { color: #4070a0 } /* Literal.String.Delimiter */ +.highlight .sd { color: #4070a0; font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #4070a0 } /* Literal.String.Double */ +.highlight .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #4070a0 } /* Literal.String.Heredoc */ +.highlight .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */ +.highlight .sx { color: #c65d09 } /* Literal.String.Other */ +.highlight .sr { color: #235388 } /* Literal.String.Regex */ +.highlight .s1 { color: #4070a0 } /* Literal.String.Single */ +.highlight .ss { color: #517918 } /* Literal.String.Symbol */ +.highlight .bp { color: #007020 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #06287e } /* Name.Function.Magic */ +.highlight .vc { color: #bb60d5 } /* Name.Variable.Class */ +.highlight .vg { color: #bb60d5 } /* 
Name.Variable.Global */ +.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */ +.highlight .vm { color: #bb60d5 } /* Name.Variable.Magic */ +.highlight .il { color: #208050 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 00000000..97d56a74 --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,566 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2023 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docUrlRoot = DOCUMENTATION_OPTIONS.URL_ROOT; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = docUrlRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = docUrlRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms) + ); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + 
Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + `Search finished, found ${resultCount} page(s) matching the search query.` + ); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + htmlElement.querySelectorAll(".headerlink").forEach((el) => { el.remove() }); + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent !== undefined) return docContent.textContent; + console.warn( + "Content block not found. Sphinx search tries to obtain it via '[role=main]'. Could you check your theme or template." 
+ ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the 
browser was quick! + if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + /** + * execute search (requires search index to be loaded) + */ + query: (query) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + // array of [docname, title, anchor, descr, score, filename] + let results = []; + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + results.push([ + docNames[file], + 
titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id] of foundEntries) { + let score = Math.round(100 * queryLower.length / entry.length) + results.push([ + docNames[file], + titles[file], + id ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // lookup as object + objectTerms.forEach((term) => + results.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + results.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) results.forEach((item) => (item[4] = Scorer.score(item))); + + // now sort the results by score (in opposite order of appearance, since the + // display function below uses pop() to retrieve items) and then + // alphabetically + results.sort((a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 
1 : -1; + }); + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + results = results.reverse(); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. 
last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + 
if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord) && !terms[word]) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord) && !titleTerms[word]) + arr.push({ files: titleTerms[word], score: Scorer.partialTitle }); + }); + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (fileMap.has(file) && fileMap.get(file).indexOf(word) === -1) + fileMap.get(file).push(word); + else fileMap.set(file, [word]); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. 
+ const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords) => { + const text = Search.htmlToText(htmlText); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 00000000..aae669d7 --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,144 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. 
+ */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + parent.insertBefore( + span, + parent.insertBefore( + document.createTextNode(val.substr(pos + text.length)), + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. 
+ */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + 
if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(SphinxHighlight.highlightSearchWords); +_ready(SphinxHighlight.initEscapeListener); diff --git a/algorithms/algorithms.html b/algorithms/algorithms.html new file mode 100644 index 00000000..45930ee6 --- /dev/null +++ b/algorithms/algorithms.html @@ -0,0 +1,3093 @@ + + + + + + + algorithms package — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

algorithms package

+
+

Submodules

+
+
+

algorithms.contagion module

+
+
+algorithms.contagion.Gillespie_SIR(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, initial_recovereds=None, rho=None, tmin=0, tmax=inf, **args)[source]
+

A continuous-time SIR model for hypergraphs similar to the model in +“The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo +https://doi.org/10.1063/5.0020034 and +implemented for networks in the EoN package by Joel C. Miller +https://epidemicsonnetworks.readthedocs.io/en/latest/

+
+
Parameters:
+
    +
  • H (HyperNetX Hypergraph object) –

  • +
  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • +
  • gamma (float) – The healing rate

  • +
  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • +
  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • +
  • initial_recovereds (list or numpy array, default: None) – An iterable of initially recovered node uids

  • +
  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals; rho and initial_infecteds cannot both be specified.

  • +
  • tmin (float, default: 0) – Time at the start of the simulation

  • +
  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t already.

  • +
  • return_full_data (bool, default: False) – This returns all the infection and recovery events at each time if True.

  • +
  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

  • +
+
+
Returns:
+

t, S, I, R – time (t), number of susceptible (S), infected (I), and recovered (R) at each time.

+
+
Return type:
+

numpy arrays

+
+
+

Notes

+

Example:

+
>>> import hypernetx.algorithms.contagion as contagion
+>>> import random
+>>> import hypernetx as hnx
+>>> n = 1000
+>>> m = 10000
+>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+>>> H = hnx.Hypergraph(hyperedgeList)
+>>> tau = {2:0.1, 3:0.1}
+>>> gamma = 0.1
+>>> tmax = 100
+>>> t, S, I, R = contagion.Gillespie_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax)
+
+
+
+ +
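The transmission_function parameter above requires a callable with the signature (node, status, edge) that returns a bool, as collective_contagion and individual_contagion do. As a sketch, a hypothetical majority rule (illustrative only, not part of HyperNetX) could look like:

```python
# A sketch of a user-defined transmission function with the required
# (node, status, edge) signature. The majority rule is hypothetical and
# not part of HyperNetX; it follows the same contract as
# collective_contagion: return True only when `node` can be infected
# through this edge, False otherwise.
def majority_contagion(node, status, edge):
    if status[node] != "S":      # only susceptible nodes can be infected
        return False
    if node not in edge:         # node must belong to the hyperedge
        return False
    neighbors = [v for v in edge if v != node]
    infected = sum(1 for v in neighbors if status[v] == "I")
    return infected > len(neighbors) / 2   # strict majority of the others
```

Such a function can then be passed as transmission_function=majority_contagion; any extra parameters it needs are forwarded through **args.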
+
+algorithms.contagion.Gillespie_SIS(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, rho=None, tmin=0, tmax=inf, return_full_data=False, sim_kwargs=None, **args)[source]
+

A continuous-time SIS model for hypergraphs similar to the model in +“The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo +https://doi.org/10.1063/5.0020034 and +implemented for networks in the EoN package by Joel C. Miller +https://epidemicsonnetworks.readthedocs.io/en/latest/

+
+
Parameters:
+
    +
  • H (HyperNetX Hypergraph object) –

  • +
  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • +
  • gamma (float) – The healing rate

  • +
  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • +
  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • +
  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals; rho and initial_infecteds cannot both be specified.

  • +
  • tmin (float, default: 0) – Time at the start of the simulation

  • +
  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t already.

  • +
  • return_full_data (bool, default: False) – This returns all the infection and recovery events at each time if True.

  • +
  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

  • +
+
+
Returns:
+

t, S, I – time (t), number of susceptible (S), and infected (I) at each time.

+
+
Return type:
+

numpy arrays

+
+
+

Notes

+

Example:

+
>>> import hypernetx.algorithms.contagion as contagion
+>>> import random
+>>> import hypernetx as hnx
+>>> n = 1000
+>>> m = 10000
+>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+>>> H = hnx.Hypergraph(hyperedgeList)
+>>> tau = {2:0.1, 3:0.1}
+>>> gamma = 0.1
+>>> tmax = 100
+>>> t, S, I = contagion.Gillespie_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax)
+
+
+
+ +
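The tau parameter above must account for every edge size present in the hypergraph. A small hypothetical helper (names are illustrative, not part of the package) can build such a dictionary directly from a hyperedge list:

```python
# Hypothetical helper (not part of HyperNetX) that builds a tau dictionary
# covering every edge size present in a hyperedge list, as the tau
# parameter requires ("must account for all edge sizes present").
def tau_for_edges(hyperedge_list, rate=0.1):
    sizes = {len(edge) for edge in hyperedge_list}   # distinct edge sizes
    return {size: rate for size in sizes}            # one rate per size

edges = [[0, 1], [1, 2, 3], [0, 2]]
tau = tau_for_edges(edges)   # {2: 0.1, 3: 0.1}
```

Using a single rate for all sizes reproduces the `tau = {2:0.1, 3:0.1}` dictionaries in the examples; size-dependent rates can be assigned afterwards.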
+
+algorithms.contagion.collective_contagion(node, status, edge)[source]
+

The collective contagion mechanism described in +“The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo +https://doi.org/10.1063/5.0020034

+
+
Parameters:
+
    +
  • node (hashable) – The node uid to infect (If it doesn’t have status “S”, it will automatically return False)

  • +
  • status (dictionary) – The nodes are keys and the values are statuses (The infected state denoted with “I”)

  • +
  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

  • +
+
+
Returns:
+

False if there is no potential to infect and True if there is.

+
+
Return type:
+

bool

+
+
+

Notes

+

Example:

+
>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
+>>> collective_contagion(0, status, (0, 1, 2))
+    True
+>>> collective_contagion(1, status, (0, 1, 2))
+    False
+>>> collective_contagion(3, status, (0, 1, 2))
+    False
+
+
+
+ +
+
+algorithms.contagion.contagion_animation(fig, H, transition_events, node_state_color_dict, edge_state_color_dict, node_radius=1, fps=1)[source]
+

A function to animate discrete-time contagion models for hypergraphs. Currently only supports a circular layout.

+
+
Parameters:
+
    +
  • fig (matplotlib Figure object) –

  • +
  • H (HyperNetX Hypergraph object) –

  • +
  • transition_events (dictionary) – The dictionary that is output from the discrete_SIS and discrete_SIR functions with return_full_data=True

  • +
  • node_state_color_dict (dictionary) – Dictionary which specifies the colors of each node state. All node states must be specified.

  • +
  • edge_state_color_dict (dictionary) – Dictionary with keys that are edge states and values which specify the colors of each edge state +(can specify an alpha parameter). All edge-dependent transition states must be specified +(most common is “I”) and there must be a default “OFF” setting.

  • +
  • node_radius (float, default: 1) – The radius of the nodes to draw

  • +
  • fps (int > 0, default: 1) – Frames per second of the animation

  • +
+
+
Return type:
+

matplotlib Animation object

+
+
+

Notes

+

Example:

+
>>> import hypernetx.algorithms.contagion as contagion
+>>> import random
+>>> import hypernetx as hnx
+>>> import matplotlib.pyplot as plt
+>>> from IPython.display import HTML
+>>> n = 1000
+>>> m = 10000
+>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+>>> H = hnx.Hypergraph(hyperedgeList)
+>>> tau = {2:0.1, 3:0.1}
+>>> gamma = 0.1
+>>> tmax = 100
+>>> dt = 0.1
+>>> transition_events = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt, return_full_data=True)
+>>> node_state_color_dict = {"S":"green", "I":"red", "R":"blue"}
+>>> edge_state_color_dict = {"S":(0, 1, 0, 0.3), "I":(1, 0, 0, 0.3), "R":(0, 0, 1, 0.3), "OFF": (1, 1, 1, 0)}
+>>> fps = 1
+>>> fig = plt.figure()
+>>> animation = contagion.contagion_animation(fig, H, transition_events, node_state_color_dict, edge_state_color_dict, node_radius=1, fps=fps)
+>>> HTML(animation.to_jshtml())
+
+
+
+ +
+
+algorithms.contagion.discrete_SIR(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, initial_recovereds=None, rho=None, tmin=0, tmax=inf, dt=1.0, return_full_data=False, **args)[source]
+

A discrete-time SIR model for hypergraphs similar to the construction described in +“The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo +https://doi.org/10.1063/5.0020034 and +“Simplicial models of social contagion” by Iacopini et al. +https://doi.org/10.1038/s41467-019-10431-6

+
+
Parameters:
+
    +
  • H (HyperNetX Hypergraph object) –

  • +
  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • +
  • gamma (float) – The healing rate

  • +
  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • +
  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • +
  • initial_recovereds (list or numpy array, default: None) – An iterable of initially recovered node uids

  • +
  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals; rho and initial_infecteds cannot both be specified.

  • +
  • tmin (float, default: 0) – Time at the start of the simulation

  • +
  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t already.

  • +
  • dt (float > 0, default: 1.0) – The time increment by which the simulation advances at each step.

  • +
  • return_full_data (bool, default: False) – This returns all the infection and recovery events at each time if True.

  • +
  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

  • +
+
+
Returns:
+

    +
  • if return_full_data

    +
    +
    dictionary

    Time as the keys and events that happen as the values.

    +
    +
    +
  • +
  • else

    +
    +
    t, S, I, Rnumpy arrays

    time (t), number of susceptible (S), infected (I), and recovered (R) at each time.

    +
    +
    +
  • +
+

+
+
+

Notes

+

Example:

+
>>> import hypernetx.algorithms.contagion as contagion
+>>> import random
+>>> import hypernetx as hnx
+>>> n = 1000
+>>> m = 10000
+>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
+>>> H = hnx.Hypergraph(hyperedgeList)
+>>> tau = {2:0.1, 3:0.1}
+>>> gamma = 0.1
+>>> tmax = 100
+>>> dt = 0.1
+>>> t, S, I, R = contagion.discrete_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
+
+
+
+ +
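The (t, S, I, R) arrays returned above can be post-processed directly; as a minimal sketch (the helper name is illustrative, not part of HyperNetX), the final attack rate is the recovered fraction at the end of the simulation:

```python
# Hypothetical post-processing of the (t, S, I, R) arrays returned by
# discrete_SIR: the final attack rate is the fraction of the population
# recovered when the simulation ends. Works on lists or numpy arrays.
def attack_rate(S, I, R):
    n = S[0] + I[0] + R[0]   # total population (conserved in SIR)
    return R[-1] / n

S, I, R = [90, 70, 60], [10, 20, 10], [0, 10, 30]
print(attack_rate(S, I, R))  # prints 0.3
```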
+
+algorithms.contagion.discrete_SIS(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, rho=None, tmin=0, tmax=100, dt=1.0, return_full_data=False, **args)[source]
+

A discrete-time SIS model for hypergraphs as implemented in +“The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo +https://doi.org/10.1063/5.0020034 and +“Simplicial models of social contagion” by Iacopini et al. +https://doi.org/10.1038/s41467-019-10431-6

+
+
Parameters:
+
    +
  • H (HyperNetX Hypergraph object) –

  • +
  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • +
  • gamma (float) – The healing rate

  • +
  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • +
  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • +
  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals; rho and initial_infecteds cannot both be specified.

  • +
  • tmin (float, default: 0) – Time at the start of the simulation

  • +
  • tmax (float, default: 100) – Time at which the simulation should be terminated if it hasn’t already.

  • +
  • dt (float > 0, default: 1.0) – Step forward in time that the simulation takes at each step.

  • +
  • return_full_data (bool, default: False) – This returns all the infection and recovery events at each time if True.

  • +
  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

Returns:

  • if return_full_data: dictionary – Time as the keys and events that happen as the values.

  • else: t, S, I (numpy arrays) – time (t), number of susceptible (S), and infected (I) at each time.

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> dt = 0.1
>>> t, S, I = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
algorithms.contagion.individual_contagion(node, status, edge)[source]

The individual contagion mechanism described in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034

Parameters:

  • node (hashable) – The node uid to infect (if it doesn’t have status “S”, it will automatically return False)

  • status (dictionary) – The nodes are keys and the values are statuses (the infected state is denoted by “I”)

  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

Returns:

  False if there is no potential to infect and True if there is.

Return type:

  bool

Notes

Example:

>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> individual_contagion(0, status, (0, 1, 3))
True
>>> individual_contagion(1, status, (0, 1, 2))
False
>>> individual_contagion(3, status, (0, 3, 4))
False
algorithms.contagion.majority_vote(node, status, edge)[source]

The majority vote contagion mechanism. If a majority of neighbors are contagious, it is possible for an individual to change their opinion. If opinions are divided equally, the choice is made randomly.

Parameters:

  • node (hashable) – The node uid to infect (if it doesn’t have status “S”, it will automatically return False)

  • status (dictionary) – The nodes are keys and the values are statuses (the infected state is denoted by “I”)

  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

Returns:

  False if there is no potential to infect and True if there is.

Return type:

  bool

Notes

Example:

>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> majority_vote(0, status, (0, 1, 2))
True
>>> majority_vote(0, status, (0, 1, 2, 3))
True
>>> majority_vote(1, status, (0, 1, 2))
False
>>> majority_vote(3, status, (0, 1, 2))
False
algorithms.contagion.threshold(node, status, edge, tau=0.1)[source]

The threshold contagion mechanism.

Parameters:

  • node (hashable) – The node uid to infect (if it doesn’t have status “S”, it will automatically return False)

  • status (dictionary) – The nodes are keys and the values are statuses (the infected state is denoted by “I”)

  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

  • tau (float between 0 and 1, default: 0.1) – The fraction of nodes in an edge that must be infected for the edge to be able to transmit to the node

Returns:

  False if there is no potential to infect and True if there is.

Return type:

  bool

Notes

Example:

>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> threshold(0, status, (0, 2, 3, 4), tau=0.2)
True
>>> threshold(0, status, (0, 2, 3, 4), tau=0.5)
False
>>> threshold(3, status, (1, 2, 3), tau=1)
False
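Reading the doctest outputs above, the rule appears to require the infected fraction among the node’s co-members to strictly exceed tau (note the tau=1 case returning False). A minimal re-implementation sketch under that assumption, not the HyperNetX source:

```python
# Sketch of the threshold rule, assuming a strict inequality so that the
# doctest outputs above (including tau=1 -> False) are reproduced.
def threshold_sketch(node, status, edge, tau=0.1):
    if status[node] != "S" or node not in edge:
        return False
    others = [v for v in edge if v != node]
    frac = sum(status[v] == "I" for v in others) / len(others)
    return frac > tau

status = {0: "S", 1: "I", 2: "I", 3: "S", 4: "R"}
print(threshold_sketch(0, status, (0, 2, 3, 4), tau=0.2))  # True  (1/3 > 0.2)
print(threshold_sketch(0, status, (0, 2, 3, 4), tau=0.5))  # False (1/3 < 0.5)
print(threshold_sketch(3, status, (1, 2, 3), tau=1))       # False (1 is not > 1)
```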
algorithms.generative_models module

algorithms.generative_models.chung_lu_hypergraph(k1, k2)[source]

A function to generate an extension of the Chung-Lu model to hypergraphs, as implemented by Mirah Shi and described for bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001

Parameters:

  • k1 (dictionary) – This is a dictionary where the keys are node ids and the values are node degrees.

  • k2 (dictionary) – This is a dictionary where the keys are edge ids and the values are edge degrees (also known as edge sizes).

Return type:

  HyperNetX Hypergraph object

Notes

The sums of k1 and k2 should be roughly the same. If they are not the same, this function returns a warning but still runs. The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported.

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> import random
>>> n = 100
>>> k1 = {i : random.randint(1, 100) for i in range(n)}
>>> k2 = {i : sorted(k1.values())[i] for i in range(n)}
>>> H = gm.chung_lu_hypergraph(k1, k2)
algorithms.generative_models.dcsbm_hypergraph(k1, k2, g1, g2, omega)[source]

A function to generate an extension of the DCSBM model to hypergraphs, as implemented by Mirah Shi and described for bipartite networks by Larremore et al. in https://doi.org/10.1103/PhysRevE.90.012805

Parameters:

  • k1 (dictionary) – This is a dictionary where the keys are node ids and the values are node degrees.

  • k2 (dictionary) – This is a dictionary where the keys are edge ids and the values are edge degrees (also known as edge sizes).

  • g1 (dictionary) – This is a dictionary where the keys are node ids and the values are the group ids to which the node belongs. The keys must match the keys of k1.

  • g2 (dictionary) – This is a dictionary where the keys are edge ids and the values are the group ids to which the edge belongs. The keys must match the keys of k2.

  • omega (2D numpy array) – This is a matrix whose entries specify the number of edges between a given node community and edge community. The number of rows must match the number of node communities and the number of columns must match the number of edge communities.

Return type:

  HyperNetX Hypergraph object

Notes

The sums of k1 and k2 should be the same. If they are not the same, this function returns a warning but still runs. The sum of k1 (and k2) and omega should be the same. If they are not the same, this function returns a warning but still runs, and the number of entries in the incidence matrix is determined by the omega matrix.

The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported.

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> import random
>>> import numpy as np
>>> n = 100
>>> k1 = {i : random.randint(1, 100) for i in range(n)}
>>> k2 = {i : sorted(k1.values())[i] for i in range(n)}
>>> g1 = {i : random.choice([0, 1]) for i in range(n)}
>>> g2 = {i : random.choice([0, 1]) for i in range(n)}
>>> omega = np.array([[100, 10], [10, 100]])
>>> H = gm.dcsbm_hypergraph(k1, k2, g1, g2, omega)
algorithms.generative_models.erdos_renyi_hypergraph(n, m, p, node_labels=None, edge_labels=None)[source]

A function to generate an Erdos-Renyi hypergraph as implemented by Mirah Shi and described for bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001

Parameters:

  • n (int) – Number of nodes

  • m (int) – Number of edges

  • p (float) – The probability that a bipartite edge is created

  • node_labels (list, default=None) – Vertex labels

  • edge_labels (list, default=None) – Hyperedge labels

Return type:

  HyperNetX Hypergraph object

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> n = 1000
>>> m = n
>>> p = 0.01
>>> H = gm.erdos_renyi_hypergraph(n, m, p)
algorithms.homology_mod2 module

Homology and Smith Normal Form

The purpose of computing the Homology groups for data generated hypergraphs is to identify data sources that correspond to interesting features in the topology of the hypergraph.

The elements of one of these Homology groups are generated by \(k\) dimensional cycles of relationships in the original data that are not bound together by higher order relationships. Ideally, we want the briefest description of these cycles; we want a minimal set of relationships exhibiting interesting cyclic behavior. This minimal set will be a basis for the Homology group.

The cyclic relationships in the data are discovered using a boundary map represented as a matrix. To discover the bases we compute the Smith Normal Form of the boundary map.

Homology Mod2

This module computes the homology groups for data represented as an abstract simplicial complex with chain groups \(\{C_k\}\) and \(Z_2\) additions. The boundary matrices are represented as rectangular matrices over \(Z_2\). These matrices are diagonalized and represented in Smith Normal Form. The kernel and image bases are computed and the Betti numbers and homology bases are returned.

Methods for obtaining SNF for Z/2Z are based on Ferrario’s work: http://www.dlfer.xyz/post/2016-10-27-smith-normal-form/

+
+
+
+
+algorithms.homology_mod2.add_to_column(M, i, j)[source]
+

Replaces column i (of M) with logical xor between column i and j

+
+
Parameters:
+
    +
  • M (np.array) – matrix

  • +
  • i (int) – index of column being altered

  • +
  • j (int) – index of column being added to altered

  • +
+
+
Returns:
+

N

+
+
Return type:
+

np.array

+
+
+
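The column operation described above can be sketched in a few lines of NumPy; this is an illustrative re-implementation of the documented behavior, not the library source:

```python
import numpy as np

# Replace column i with the XOR of columns i and j, on a boolean matrix,
# returning a new matrix and leaving M untouched.
def add_to_column(M, i, j):
    N = M.copy()
    N[:, i] = np.logical_xor(M[:, i], M[:, j])
    return N

M = np.array([[1, 0],
              [1, 1]], dtype=bool)
N = add_to_column(M, 0, 1)
print(N.astype(int))  # column 0 becomes [1^0, 1^1] = [1, 0]
```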
algorithms.homology_mod2.add_to_row(M, i, j)[source]

Replaces row i with the logical xor of rows i and j

Parameters:

  • M (np.array) –

  • i (int) – index of row being altered

  • j (int) – index of row being added to altered

Returns:

  N

Return type:

  np.array
algorithms.homology_mod2.betti(bd, k=None)[source]

Generate the kth-betti numbers for a chain complex with boundary matrices given by bd

Parameters:

  • bd (dict of k-boundary matrices keyed on dimension of domain) –

  • k (int, list or tuple, optional, default=None) – list must be min value and max value of k values inclusive; if None, then all betti numbers for dimensions of existing cells will be computed.

Returns:

  betti – A dictionary of betti numbers keyed by dimension

Return type:

  dict
algorithms.homology_mod2.betti_numbers(h, k=None)[source]

Return the kth betti numbers for the simplicial homology of the ASC associated to h

Parameters:

  • h (hnx.Hypergraph) – Hypergraph to compute the betti numbers from

  • k (int or list, optional, default=None) – list must be min value and max value of k values inclusive; if None, then all betti numbers for dimensions of existing cells will be computed.

Returns:

  betti – A dictionary of betti numbers keyed by dimension

Return type:

  dict
algorithms.homology_mod2.bkMatrix(km1basis, kbasis)[source]

Compute the boundary map from the \(C_{k-1}\) basis to the \(C_k\) basis with respect to \(Z_2\)

Parameters:

  • km1basis (indexable iterable) – Ordered list of \(k-1\) dimensional cells

  • kbasis (indexable iterable) – Ordered list of \(k\) dimensional cells

Returns:

  bk – boundary matrix in \(Z_2\) stored as boolean

Return type:

  np.array
algorithms.homology_mod2.boundary_group(image_basis)[source]

Returns a csr_matrix with rows corresponding to the elements of the group generated by image_basis over \(\mathbb{Z}_2\)

Parameters:

  image_basis (numpy.ndarray or scipy.sparse.csr_matrix) – 2d-array of basis elements

Return type:

  scipy.sparse.csr_matrix
algorithms.homology_mod2.chain_complex(h, k=None)[source]

Compute the k-chains and k-boundary maps required to compute homology for all values in k

Parameters:

  • h (hnx.Hypergraph) –

  • k (int or list of length 2, optional, default=None) – k must be an integer greater than 0 or a list of length 2 indicating min and max dimensions to be computed. E.g., if k = [1,2] then 0,1,2,3-chains and boundary maps for k=1,2,3 will be returned; if None then k = [1, max dimension of edge in h]

Returns:

  C, bd – C is a dictionary of lists; bd is a dictionary of numpy arrays

Return type:

  dict
algorithms.homology_mod2.homology_basis(bd, k=None, boundary=False, **kwargs)[source]

Compute a basis for the kth-simplicial homology group, \(H_k\), defined by a chain complex \(C\) with boundary maps given by bd \(= \{k: \partial_k\}\)

Parameters:

  • bd (dict) – dict of boundary matrices on k-chains to k-1 chains keyed on k; if krange is a tuple then all boundary matrices for k in [krange[0], ..., krange[1]] inclusive must be in the dictionary

  • k (int or list of ints, optional, default=None) – k must be a positive integer or a list of 2 integers indicating min and max dimensions to be computed; if none given, all homology groups will be computed from available boundary matrices in bd

  • boundary (bool) – option to return a basis for the boundary group from each dimension. Needed to compute the shortest generators in the homology group.

Returns:

  • basis (dict) – dict of generators as 0-1 tuples keyed by dim; basis for dimension k will be returned only if bd[k] and bd[k+1] have been provided.

  • im (dict) – dict of boundary group generators keyed by dim
algorithms.homology_mod2.hypergraph_homology_basis(h, k=None, shortest=False, interpreted=True)[source]

Computes the kth-homology groups mod 2 for the ASC associated with the hypergraph h for k in krange inclusive

Parameters:

  • h (hnx.Hypergraph) –

  • k (int or list of length 2, optional, default=None) – k must be an integer greater than 0 or a list of length 2 indicating min and max dimensions to be computed

  • shortest (bool, optional, default=False) – option to look for the shortest representative for each coset in the homology group; only good for relatively small examples

  • interpreted (bool, optional, default=True) – if True will return an explicit basis in terms of the k-chains

Returns:

  • basis (list) – list of generators as k-chains as boolean vectors

  • interpreted_basis – lists of k-chains in basis
algorithms.homology_mod2.interpret(Ck, arr, labels=None)[source]

Returns the data as represented in Ck associated with the arr

Parameters:

  • Ck (list) – a list of k-cells being referenced by arr

  • arr (np.array) – array of 0-1 vectors

  • labels (dict, optional) – dictionary of labels to associate to the nodes in the cells

Returns:

  list of k-cells referenced by data in Ck

Return type:

  list
algorithms.homology_mod2.kchainbasis(h, k)[source]

Compute the set of k dimensional cells in the abstract simplicial complex associated with the hypergraph.

Parameters:

  • h (hnx.Hypergraph) –

  • k (int) – dimension of cell

Returns:

  an ordered list of k-chains represented as tuples of length k+1

Return type:

  list

See also

hnx.hypergraph.toplexes

Notes

  • Method works best if h is simple [Berge], i.e., no edge contains another and there are no duplicate edges (toplexes).

  • Hypergraph node uids must be sortable.
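The construction can be sketched with itertools: every k-cell of the associated ASC is a sorted (k+1)-subset of some edge, de-duplicated across edges. The edge sets below are toy data, and this is an illustration of the idea rather than the library implementation:

```python
from itertools import combinations

def kchains(edges, k):
    # k-cells of the ASC: de-duplicated, sorted (k+1)-subsets of the edges
    cells = {tuple(sorted(c)) for e in edges for c in combinations(e, k + 1)}
    return sorted(cells)

edges = [{1, 2, 3}, {2, 3, 4}]
print(kchains(edges, 1))  # [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
```

Note the cell (2, 3) appears once even though it lies in both edges, which is why node uids must be sortable.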
algorithms.homology_mod2.logical_dot(ar1, ar2)[source]

Returns the boolean equivalent of the dot product mod 2 on two 1-d arrays of the same length.

Parameters:

  • ar1 (numpy.ndarray) – 1-d array

  • ar2 (numpy.ndarray) – 1-d array

Returns:

  boolean value associated with dot product mod 2

Return type:

  bool

Raises:

  HyperNetXError – If arrays are not of the same length an error will be raised.
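In other words, the mod-2 dot product is the parity of the number of positions where both vectors are 1. A sketch of that behavior (raising a plain ValueError in place of the library's HyperNetXError):

```python
import numpy as np

# Mod-2 dot product of two equal-length boolean vectors: the parity of
# the overlap. Illustrative, not the library source.
def logical_dot(ar1, ar2):
    if len(ar1) != len(ar2):
        raise ValueError("arrays must have the same length")
    return bool(np.logical_and(ar1, ar2).sum() % 2)

a = np.array([1, 0, 1], dtype=bool)
b = np.array([1, 1, 0], dtype=bool)
print(logical_dot(a, b))  # True: exactly one overlapping 1
```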
algorithms.homology_mod2.logical_matadd(mat1, mat2)[source]

Returns the boolean equivalent of matrix addition mod 2 on two binary arrays stored as type boolean

Parameters:

  • mat1 (np.ndarray) – 2-d array of boolean values

  • mat2 (np.ndarray) – 2-d array of boolean values

Returns:

  mat – boolean matrix equivalent to the mod 2 matrix addition of the matrices as matrices over Z/2Z

Return type:

  np.ndarray

Raises:

  HyperNetXError – If dimensions are not equal an error will be raised.
algorithms.homology_mod2.logical_matmul(mat1, mat2)[source]

Returns the boolean equivalent of matrix multiplication mod 2 on two binary arrays stored as type boolean

Parameters:

  • mat1 (np.ndarray) – 2-d array of boolean values

  • mat2 (np.ndarray) – 2-d array of boolean values

Returns:

  mat – boolean matrix equivalent to the mod 2 matrix multiplication of the matrices as matrices over Z/2Z

Return type:

  np.ndarray

Raises:

  HyperNetXError – If inner dimensions are not equal an error will be raised.
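Matrix multiplication over Z/2Z is simply the ordinary integer product reduced mod 2. A sketch of the documented behavior (with a plain ValueError standing in for HyperNetXError):

```python
import numpy as np

# Boolean mod-2 matrix multiplication: integer product reduced mod 2,
# returned as booleans. Illustrative, not the library source.
def logical_matmul(mat1, mat2):
    if mat1.shape[1] != mat2.shape[0]:
        raise ValueError("inner dimensions must agree")
    return (mat1.astype(int) @ mat2.astype(int)) % 2 == 1

A = np.array([[1, 1],
              [0, 1]], dtype=bool)
B = np.array([[1, 0],
              [1, 1]], dtype=bool)
print(logical_matmul(A, B).astype(int))  # [[0 1]
                                         #  [1 1]]
```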
algorithms.homology_mod2.matmulreduce(arr, reverse=False)[source]

Recursively applies a ‘logical multiplication’ to a list of boolean arrays.

For arr = [arr[0], arr[1], ..., arr[n]] returns the product arr[0]arr[1]...arr[n]. If reverse = True, returns the product arr[n]arr[n-1]...arr[0].

Parameters:

  • arr (list of np.array) – list of nxm matrices represented as np.array

  • reverse (bool, optional) – order to multiply the matrices

Returns:

  P – Product of matrices in the list

Return type:

  np.array
algorithms.homology_mod2.reduced_row_echelon_form_mod2(M)[source]

Computes the invertible transformation matrices needed to compute the reduced row echelon form of M modulo 2

Parameters:

  M (np.array) – a rectangular matrix with elements in \(Z_2\)

Returns:

  L, S, Linv – LM = S where S is the reduced echelon form of M, and M = Linv S

Return type:

  np.arrays
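Row reduction over Z/2Z works exactly like ordinary Gaussian elimination, except that row addition is XOR. A compact sketch that returns only the reduced form S (the library function additionally returns the transformation matrices L and Linv):

```python
import numpy as np

# Gaussian elimination over Z/2Z; XOR plays the role of row addition.
def rref_mod2(M):
    S = np.asarray(M, dtype=int) % 2
    rows, cols = S.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if S[i, c]), None)
        if pivot is None:
            continue
        S[[r, pivot]] = S[[pivot, r]]   # move the pivot row into place
        for i in range(rows):
            if i != r and S[i, c]:
                S[i] ^= S[r]            # XOR = row addition mod 2
        r += 1
    return S

M = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
print(rref_mod2(M))
```

The third row reduces to zero because it is the mod-2 sum of the first two, so this matrix has rank 2 over \(Z_2\).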
algorithms.homology_mod2.smith_normal_form_mod2(M)[source]

Computes the invertible transformation matrices needed to compute the Smith Normal Form of M modulo 2

Parameters:

  • M (np.array) – a rectangular matrix with data type bool

  • track (bool) – if track=True will print out the transformation as a Z/2Z matrix as it discovers L[i] and R[j]

Returns:

  L, R, S, Linv – LMR = S is the Smith Normal Form of the matrix M.

Return type:

  np.arrays

Note

Given an m x n matrix \(M\) with entries in \(Z_2\) we start with the equation \(L M R = S\), where \(L = I_m\) and \(R = I_n\) are identity matrices and \(S = M\). We repeatedly apply actions to the left and right side of the equation to transform S into a diagonal matrix. For each action applied to the left side we apply its inverse action to the right side of \(I_m\) to generate \(L^{-1}\). Finally we verify \(L M R = S\) and \(L L^{-1} = I_m\).
algorithms.homology_mod2.swap_columns(i, j, *args)[source]

Swaps the ith and jth columns of each matrix in args. Returns a list of new matrices.

Parameters:

  • i (int) –

  • j (int) –

  • args (np.arrays) –

Returns:

  list of copies of args with the ith and jth columns swapped

Return type:

  list
algorithms.homology_mod2.swap_rows(i, j, *args)[source]

Swaps the ith and jth rows of each matrix in args. Returns a list of new matrices.

Parameters:

  • i (int) –

  • j (int) –

  • args (np.arrays) –

Returns:

  list of copies of args with the ith and jth rows swapped

Return type:

  list
algorithms.hypergraph_modularity module

Hypergraph_Modularity

Modularity and clustering for hypergraphs using HyperNetX. Adapted from F. Théberge’s GitHub repository: Hypergraph Clustering. See Tutorial 13 in the tutorials folder for library usage.

References
algorithms.hypergraph_modularity.dict2part(D)[source]

Given a dictionary mapping each vertex to its part, return a partition as a list of sets; inverse function to part2dict.

Parameters:

  D (dict) – Dictionary keyed by vertices with values equal to the integer index of the partition the vertex belongs to

Returns:

  List of sets; one set for each part in the partition

Return type:

  list
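The dict2part / part2dict pair described in this module can be sketched independently of HyperNetX; toy data, illustrative only:

```python
# Round-trip between the two partition representations used by this module.
def dict2part(D):
    parts = {}
    for v, i in D.items():
        parts.setdefault(i, set()).add(v)
    return [parts[i] for i in sorted(parts)]

def part2dict(A):
    return {v: i for i, part in enumerate(A) for v in part}

D = {"a": 0, "b": 1, "c": 0}
A = dict2part(D)
print(part2dict(A) == D)  # True: the two functions are inverses
```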
algorithms.hypergraph_modularity.kumar(HG, delta=0.01)[source]

Compute a partition of the vertices in hypergraph HG as per Kumar’s algorithm [1]

Parameters:

  • HG (Hypergraph) –

  • delta (float, optional) – convergence stopping criterion

Returns:

  A partition of the vertices in HG

Return type:

  list of sets
algorithms.hypergraph_modularity.last_step(HG, L, wdc=<function linear>, delta=0.01)[source]

Given some initial partition L, compute a new partition of the vertices in HG as per the Last-Step algorithm [2]

Note

This is a very simple algorithm that tries moving nodes between communities to improve hypergraph modularity. It requires an initial non-trivial partition, which can be obtained for example via graph clustering on the 2-section of HG, or via Kumar’s algorithm.

Parameters:

  • HG (Hypergraph) –

  • L (list of sets) – some initial partition of the vertices in HG

  • wdc (func, optional) – Hyperparameter for hypergraph modularity [2]

  • delta (float, optional) – convergence stopping criterion

Returns:

  A new partition for the vertices in HG

Return type:

  list of sets
algorithms.hypergraph_modularity.linear(d, c)[source]

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This is the default choice for the modularity() and last_step() functions.

Parameters:

  • d (int) – Number of vertices in an edge

  • c (int) – Number of vertices in the majority class

Returns:

  c/d if c > d/2, else 0

Return type:

  float
algorithms.hypergraph_modularity.majority(d, c)[source]

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This corresponds to the majority rule [3]

Parameters:

  • d (int) – Number of vertices in an edge

  • c (int) – Number of vertices in the majority class

Returns:

  1 if c > d/2, else 0

Return type:

  bool
algorithms.hypergraph_modularity.modularity(HG, A, wdc=<function linear>)[source]

Computes the modularity of hypergraph HG with respect to partition A.

Parameters:

  • HG (Hypergraph) – The hypergraph, with some attributes precomputed via precompute_attributes(HG)

  • A (list of sets) – Partition of the vertices in HG

  • wdc (func, optional) – Hyperparameter for hypergraph modularity [2]

Note

For ‘wdc’, any function of the form w(d,c) that returns 0 when c <= d/2 and a value in [0,1] otherwise can be used. Default is ‘linear’; other supplied choices are ‘majority’ and ‘strict’.

Returns:

  The modularity of partition A on HG

Return type:

  float
algorithms.hypergraph_modularity.part2dict(A)[source]

Given a partition (list of sets), return a dictionary mapping each vertex to its part; inverse function to dict2part.

Parameters:

  A (list of sets) – a partition of the vertices

Returns:

  a dictionary with {vertex: partition index}

Return type:

  dict
algorithms.hypergraph_modularity.precompute_attributes(H)[source]

Precompute some values on hypergraph HG for faster computation of hypergraph modularity. This needs to be run before calling either modularity() or last_step().

Note

If HG is unweighted, v.weight is set to 1 for each vertex v in HG. The weighted degree for each vertex v is stored in v.strength. The total edge weights for each edge cardinality are stored in HG.d_weights. Binomial coefficients to speed up the modularity computation are stored in HG.bin_coef. Isolated vertices found only in edge(s) of size 1 are dropped.

Parameters:

  HG (Hypergraph) –

Returns:

  H – New hypergraph with added attributes

Return type:

  Hypergraph
algorithms.hypergraph_modularity.strict(d, c)[source]

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This corresponds to the strict rule [3]

Parameters:

  • d (int) – Number of vertices in an edge

  • c (int) – Number of vertices in the majority class

Returns:

  1 if c == d, else 0

Return type:

  bool
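The three supplied wdc hyperparameter choices (linear, majority, strict) reduce to one-liners matching the return values documented in this module; a sketch:

```python
# The three documented wdc choices for hypergraph modularity
# (d = edge size, c = number of vertices in the majority class).
def linear(d, c):
    return c / d if c > d / 2 else 0

def majority(d, c):
    return 1 if c > d / 2 else 0

def strict(d, c):
    return 1 if c == d else 0

print(linear(4, 3), majority(4, 3), strict(4, 3))  # 0.75 1 0
print(linear(4, 2), majority(4, 2), strict(4, 4))  # 0 0 1
```

All three return 0 whenever c <= d/2, which is the requirement the modularity() Note places on any user-supplied wdc.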
algorithms.hypergraph_modularity.two_section(HG)[source]

Creates a random walk based [1] 2-section igraph Graph with transition weights defined by the weights of the hyperedges.

Parameters:

  HG (Hypergraph) –

Returns:

  The 2-section graph built from HG

Return type:

  igraph.Graph
algorithms.laplacians_clustering module

Hypergraph Probability Transition Matrices, Laplacians, and Clustering

We construct hypergraph random walks utilizing optional “edge-dependent vertex weights”, which are weights associated with each vertex-hyperedge pair (i.e., cell weights on the incidence matrix). The probability transition matrix of this random walk is used to construct a normalized Laplacian matrix for the hypergraph. That normalized Laplacian then serves as the input for a spectral clustering algorithm. This spectral clustering algorithm, as well as the normalized Laplacian and other details of this methodology, are described in

K. Hayashi, S. Aksoy, C. Park, H. Park, “Hypergraph random walks, Laplacians, and clustering”, Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020. https://doi.org/10.1145/3340531.3412034

Please direct any inquiries concerning the clustering module to Sinan Aksoy, sinan.aksoy@pnnl.gov
algorithms.laplacians_clustering.get_pi(P)[source]

Returns the eigenvector corresponding to the largest eigenvalue (in magnitude), normalized so its entries sum to 1. Intended for the probability transition matrix of a random walk on a (connected) hypergraph, in which case the output can be interpreted as the stationary distribution.

Parameters:

  P (csr matrix) – Probability transition matrix

Returns:

  pi – Stationary distribution of the random walk defined by P

Return type:

  numpy.ndarray
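A dense sketch of this contract on a 2-state chain (the library works on csr matrices). One assumption worth stating: for a row-stochastic P, the stationary distribution is a *left* eigenvector, hence the eigendecomposition of P.T below:

```python
import numpy as np

# Dominant eigenvector of the transposed transition matrix, normalized
# to sum to 1. Illustrative, not the HyperNetX implementation.
def get_pi_dense(P):
    vals, vecs = np.linalg.eig(P.T)
    v = vecs[:, np.argmax(np.abs(vals))].real
    return v / v.sum()

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = get_pi_dense(P)
print(np.round(pi, 3))  # [0.667 0.333]
```

For this chain the stationary split is 2:1 toward the stickier state, and pi @ P == pi holds as expected.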
algorithms.laplacians_clustering.norm_lap(H, weights=False, index=True)[source]

Normalized Laplacian matrix of the hypergraph. Symmetrizes the probability transition matrix of a hypergraph random walk using the stationary distribution, using the digraph Laplacian defined in:

Chung, Fan. “Laplacians and the Cheeger inequality for directed graphs.” Annals of Combinatorics 9.1 (2005): 1-19.

and studied in the context of hypergraphs in:

Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. Hypergraph random walks, laplacians, and clustering. In Proceedings of CIKM 2020, (2020): 495-504.

Parameters:

  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • weights (bool, optional, default=False) – Uses cell_weights; if False, uniform weights are utilized.

  • index (bool, optional) – Whether to return the matrix-index to vertex-label mapping

Returns:

  • L (scipy.sparse.csr.csr_matrix) – Normalized Laplacian matrix of the hypergraph

  • id (list) – contains list of index of node ids for rows
algorithms.laplacians_clustering.prob_trans(H, weights=False, index=True, check_connected=True)[source]

The probability transition matrix of a random walk on the vertices of a hypergraph. At each step in the walk, the next vertex is chosen by:

  1. Selecting a hyperedge e containing the vertex, with probability proportional to w(e)

  2. Selecting a vertex v within e, with probability proportional to gamma(v,e)

If weights are not specified, then all weights are uniform and the walk is equivalent to a simple random walk. If weights are specified, the hyperedge weights w(e) are determined from the weights gamma(v,e).

Parameters:

  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • weights (bool, optional, default=False) – Use the cell_weights associated with the hypergraph; if False, uniform weights are utilized.

  • index (bool, optional) – Whether to return the matrix index to vertex label mapping

Returns:

  • P (scipy.sparse.csr.csr_matrix) – Probability transition matrix of the random walk on the hypergraph

  • index (list) – contains list of index of node ids for rows
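The two-step walk above can be sketched for the unweighted case from a dense node-by-edge incidence matrix (toy data, dense arrays instead of the csr matrices the library uses): pick an incident edge uniformly, then a vertex of that edge uniformly.

```python
import numpy as np

# Unweighted two-step hypergraph walk from an incidence matrix R.
R = np.array([[1, 0],
              [1, 1],
              [0, 1]])                              # 3 nodes, 2 edges
edge_given_node = R / R.sum(axis=1, keepdims=True)  # step 1: pick incident edge
node_given_edge = R.T / R.sum(axis=0)[:, None]      # step 2: pick vertex in edge
P = edge_given_node @ node_given_edge
print(P.sum(axis=1))  # [1. 1. 1.] -- each row is a probability distribution
```

Note that the walk can stay in place (P has a nonzero diagonal), since the current vertex is itself a member of the chosen edge.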
algorithms.laplacians_clustering.spec_clus(H, k, existing_lap=None, weights=False)[source]

Hypergraph spectral clustering of the vertex set into k disjoint clusters using the normalized hypergraph Laplacian. Equivalent to the “RDC-Spec” Algorithm 1 in:

Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. Hypergraph random walks, laplacians, and clustering. In Proceedings of CIKM 2020, (2020): 495-504.

Parameters:

  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • k (int) – Number of clusters

  • existing_lap (csr matrix, optional) – An existing Laplacian to use; otherwise, the normalized hypergraph Laplacian will be utilized

  • weights (bool, optional) – Use the cell_weights of the hypergraph; if False, uniform weights are used.

Returns:

  clusters – Vertex cluster dictionary, keyed by integers 0, ..., k-1, with lists of vertices as values.

Return type:

  dict
algorithms.s_centrality_measures module

S-Centrality Measures

We generalize graph metrics to s-metrics for a hypergraph by using its s-connected components. This is accomplished by computing the s edge-adjacency matrix and constructing the corresponding graph of the matrix. We then use existing graph metrics on this representation of the hypergraph. In essence, we construct an s-line graph corresponding to the hypergraph on which to apply our methods.

S-metrics for hypergraphs are discussed in depth in: Aksoy, S.G., Joslyn, C., Ortiz Marrero, C. et al. Hypernetwork science via high-order hypergraph walks. EPJ Data Sci. 9, 16 (2020). https://doi.org/10.1140/epjds/s13688-020-00231-0
algorithms.s_centrality_measures.s_betweenness_centrality(H, s=1, edges=True, normalized=True, return_singletons=True)[source]

A centrality measure for an s-edge (or s-node) subgraph of H based on shortest paths. Equals the betweenness centrality of vertices in the edge (node) s-linegraph.

In a graph (2-uniform hypergraph) the betweenness centrality of a vertex \(v\) is the ratio of the number of non-trivial shortest paths between any pair of vertices in the graph that pass through \(v\) divided by the total number of non-trivial shortest paths in the graph.

Let \(V\) = the set of vertices in the linegraph, \(\sigma(s,t)\) = the number of shortest paths between vertices \(s\) and \(t\), and \(\sigma(s,t|v)\) = the number of those paths that pass through vertex \(v\). Then

\[c_B(v) = \sum_{s \neq t \in V} \frac{\sigma(s, t|v)}{\sigma(s,t)}\]

Parameters:

  • H (hnx.Hypergraph) –

  • s (int) – s connectedness requirement

  • edges (bool, optional) – determines if edge or node linegraph

  • normalized (bool, optional) – If true, the betweenness values are normalized by 2/((n-1)(n-2)), where n is the number of edges in H

  • return_singletons (bool, optional) – if False will ignore singleton components of the linegraph

Returns:

  A dictionary of the s-betweenness centrality values of the edges.

Return type:

  dict
+ +
+
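The construction that underlies all of these measures is the s-line graph: two hyperedges become adjacent vertices when they share at least s nodes. A minimal standalone sketch of that construction (plain dictionaries, not the hnx API):

```python
from itertools import combinations

def s_line_graph(edges, s=1):
    """Adjacency dict of the s-line graph: two hyperedges are adjacent
    when they share at least s vertices (toy sketch, not the hnx API)."""
    adj = {name: set() for name in edges}
    for (a, ea), (b, eb) in combinations(edges.items(), 2):
        if len(set(ea) & set(eb)) >= s:
            adj[a].add(b)
            adj[b].add(a)
    return adj

# e1 and e2 share two vertices, e2 and e3 share only one.
H = {"e1": {1, 2, 3}, "e2": {2, 3, 4}, "e3": {4, 5}}
adj = s_line_graph(H, s=2)
```

Graph centralities applied to `adj` then give the corresponding s-centrality values for the hyperedges.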
algorithms.s_centrality_measures.s_closeness_centrality(H, s=1, edges=True, return_singletons=True, source=None)[source]

In a connected component, the s-closeness centrality of an edge(node) is the number of edges(nodes) in the component minus 1, divided by the sum of the distances between that edge(node) and all other edges(nodes) in the component.

\(V\) = the set of vertices in the linegraph.
\(n = |V|\)
\(d\) = shortest path distance

\[C(u) = \frac{n - 1}{\sum_{v \neq u \in V} d(v, u)}\]

Parameters:

  • H (hnx.Hypergraph) –

  • s (int, optional) –

  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • source (str, optional) – Identifier of node or edge of interest for computing centrality

Returns:

The s-closeness centrality values of the edges(nodes). If source=None, a dictionary of values for each s-edge in H is returned. If source is given, a single value is returned.

Return type:

dict or float
algorithms.s_centrality_measures.s_eccentricity(H, s=1, edges=True, source=None, return_singletons=True)[source]

The length of the longest shortest path from a vertex \(u\) to every other vertex in the s-linegraph.
\(V\) = set of vertices in the s-linegraph
\(d\) = shortest path distance

\[\text{s-ecc}(u) = \text{max}\{d(u,v): v \in V\}\]

Parameters:

  • H (hnx.Hypergraph) –

  • s (int, optional) –

  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • source (str, optional) – Identifier of node or edge of interest for computing centrality

Returns:

The s-eccentricity values of the edges(nodes). If source=None, a dictionary of values for each s-edge in H is returned. If source is given, a single value is returned. If the s-linegraph is disconnected, np.inf is returned.

Return type:

dict or float
algorithms.s_centrality_measures.s_harmonic_centrality(H, s=1, edges=True, source=None, normalized=False, return_singletons=True)[source]

A centrality measure for an s-edge subgraph of H. A value equal to 1 means the s-edge intersects every other s-edge in H. All values range between 0 and 1. Edges of size less than s return 0. If H contains only one s-edge, a 0 is returned.

The denormalized reciprocal of the harmonic mean of all distances from \(u\) to all other vertices.
\(V\) = the set of vertices in the linegraph.
\(d\) = shortest path distance

\[C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)}\]

Normalized this becomes:

\[C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)} \cdot \frac{2}{(n-1)(n-2)}\]

where \(n\) is the number of vertices.

Parameters:

  • H (hnx.Hypergraph) –

  • s (int, optional) –

  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • source (str, optional) – Identifier of node or edge of interest for computing centrality

Returns:

The s-harmonic closeness centrality values of the edges, each a number between 0 and 1 inclusive. If source=None, a dictionary of values for each s-edge in H is returned. If source is given, a single value is returned.

Return type:

dict or float
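The unnormalized sum of reciprocal distances can be computed with a plain BFS over an unweighted adjacency dict; a minimal standalone sketch (not the hnx implementation):

```python
from collections import deque

def harmonic_centrality(adj, u):
    """Sum of reciprocal shortest-path distances from every other vertex to u,
    via BFS on an unweighted adjacency dict (toy sketch, not the hnx API)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return sum(1 / d for v, d in dist.items() if v != u)

# Path graph a-b-c: the endpoint a has centrality 1/1 + 1/2 = 1.5
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
```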
algorithms.s_centrality_measures.s_harmonic_closeness_centrality(H, s=1, edge=None)[source]

Module contents

algorithms.Gillespie_SIR(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, initial_recovereds=None, rho=None, tmin=0, tmax=inf, **args)[source]

A continuous-time SIR model for hypergraphs similar to the model in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034 and implemented for networks in the EoN package by Joel C. Miller https://epidemicsonnetworks.readthedocs.io/en/latest/

Parameters:

  • H (HyperNetX Hypergraph object) –

  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • gamma (float) – The healing rate

  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • initial_recovereds (list or numpy array, default: None) – An iterable of initially recovered node uids

  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.

  • tmin (float, default: 0) – Time at the start of the simulation

  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t ended already.

  • return_full_data (bool, default: False) – If True, returns all the infection and recovery events at each time.

  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

Returns:

t, S, I, R – time (t), number of susceptible (S), infected (I), and recovered (R) at each time.

Return type:

numpy arrays

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> t, S, I, R = contagion.Gillespie_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax)
algorithms.Gillespie_SIS(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, rho=None, tmin=0, tmax=inf, return_full_data=False, sim_kwargs=None, **args)[source]

A continuous-time SIS model for hypergraphs similar to the model in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034 and implemented for networks in the EoN package by Joel C. Miller https://epidemicsonnetworks.readthedocs.io/en/latest/

Parameters:

  • H (HyperNetX Hypergraph object) –

  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • gamma (float) – The healing rate

  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.

  • tmin (float, default: 0) – Time at the start of the simulation

  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t ended already.

  • return_full_data (bool, default: False) – If True, returns all the infection and recovery events at each time.

  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

Returns:

t, S, I – time (t), number of susceptible (S), and infected (I) at each time.

Return type:

numpy arrays

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> t, S, I = contagion.Gillespie_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax)
algorithms.add_to_column(M, i, j)[source]

Replaces column i (of M) with the logical xor of columns i and j

Parameters:

  • M (np.array) – matrix

  • i (int) – index of the column being altered

  • j (int) – index of the column being added to the altered column

Returns:

N

Return type:

np.array
algorithms.add_to_row(M, i, j)[source]

Replaces row i with the logical xor of rows i and j

Parameters:

  • M (np.array) –

  • i (int) – index of the row being altered

  • j (int) – index of the row being added to the altered row

Returns:

N

Return type:

np.array
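These row/column operations amount to an xor over \(\mathbb{Z}_2\); a minimal numpy sketch (standalone illustration, not the library functions):

```python
import numpy as np

def add_to_row(M, i, j):
    """Return a copy of boolean matrix M with row i replaced by row i XOR row j."""
    N = M.copy()
    N[i] = np.logical_xor(M[i], M[j])
    return N

M = np.array([[1, 0],
              [1, 1]], dtype=bool)
N = add_to_row(M, 0, 1)   # row 0 becomes [1,0] xor [1,1] = [0,1]
```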
algorithms.betti(bd, k=None)[source]

Generate the kth-betti numbers for a chain complex with boundary matrices given by bd

Parameters:

  • bd (dict of k-boundary matrices keyed on dimension of domain) –

  • k (int, list or tuple, optional, default=None) – list must be min value and max value of k values inclusive; if None, then all betti numbers for dimensions of existing cells will be computed.

Returns:

betti – A dictionary of betti numbers keyed by dimension

Return type:

dict
algorithms.betti_numbers(h, k=None)[source]

Return the kth betti numbers for the simplicial homology of the ASC associated to h

Parameters:

  • h (hnx.Hypergraph) – Hypergraph to compute the betti numbers from

  • k (int or list, optional, default=None) – list must be min value and max value of k values inclusive; if None, then all betti numbers for dimensions of existing cells will be computed.

Returns:

betti – A dictionary of betti numbers keyed by dimension

Return type:

dict
algorithms.bkMatrix(km1basis, kbasis)[source]

Compute the boundary map from the \(C_{k-1}\) basis to the \(C_k\) basis with respect to \(\mathbb{Z}_2\)

Parameters:

  • km1basis (indexable iterable) – Ordered list of \(k-1\) dimensional cells

  • kbasis (indexable iterable) – Ordered list of \(k\) dimensional cells

Returns:

bk – boundary matrix in \(\mathbb{Z}_2\) stored as boolean

Return type:

np.array
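A boundary matrix of this shape can be sketched directly: entry (i, j) is 1 exactly when the i-th (k-1)-cell is a face of the j-th k-cell. A standalone illustration (not the library code):

```python
import numpy as np

def bk_matrix(km1basis, kbasis):
    """Mod-2 boundary matrix: bk[i, j] is True iff km1basis[i] is a face of kbasis[j]."""
    bk = np.zeros((len(km1basis), len(kbasis)), dtype=bool)
    for j, cell in enumerate(kbasis):
        for omit in range(len(cell)):
            face = cell[:omit] + cell[omit + 1:]   # drop one vertex to get a face
            bk[km1basis.index(face), j] = True
    return bk

edges = [(1, 2), (1, 3), (2, 3)]   # ordered 1-cells
triangles = [(1, 2, 3)]            # ordered 2-cells
bd2 = bk_matrix(edges, triangles)  # each edge is a face of the triangle
```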
algorithms.boundary_group(image_basis)[source]

Returns a csr_matrix with rows corresponding to the elements of the group generated by the image basis over \(\mathbb{Z}_2\)

Parameters:

image_basis (numpy.ndarray or scipy.sparse.csr_matrix) – 2d-array of basis elements

Return type:

scipy.sparse.csr_matrix
algorithms.chain_complex(h, k=None)[source]

Compute the k-chains and k-boundary maps required to compute homology for all values in k

Parameters:

  • h (hnx.Hypergraph) –

  • k (int or list of length 2, optional, default=None) – k must be an integer greater than 0 or a list of length 2 indicating min and max dimensions to be computed. E.g., if k = [1,2] then 0,1,2,3-chains and boundary maps for k=1,2,3 will be returned; if None then k = [1, max dimension of edge in h]

Returns:

C, bd – C is a dictionary of lists; bd is a dictionary of numpy arrays

Return type:

dict
algorithms.chung_lu_hypergraph(k1, k2)[source]

A function to generate an extension of the Chung-Lu hypergraph as implemented by Mirah Shi and described for bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001

Parameters:

  • k1 (dictionary) – This is a dictionary where the keys are node ids and the values are node degrees.

  • k2 (dictionary) – This is a dictionary where the keys are edge ids and the values are edge degrees, also known as edge sizes.

Return type:

HyperNetX Hypergraph object

Notes

The sums of k1 and k2 should be roughly the same. If they are not the same, this function returns a warning but still runs. The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported.

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> import random
>>> n = 100
>>> k1 = {i : random.randint(1, 100) for i in range(n)}
>>> k2 = {i : sorted(k1.values())[i] for i in range(n)}
>>> H = gm.chung_lu_hypergraph(k1, k2)
algorithms.collective_contagion(node, status, edge)[source]

The collective contagion mechanism described in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034

Parameters:

  • node (hashable) – the node uid to infect (If it doesn’t have status “S”, it will automatically return False)

  • status (dictionary) – The nodes are keys and the values are statuses (The infected state is denoted with “I”)

  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

Returns:

False if there is no potential to infect and True if there is.

Return type:

bool

Notes

Example:

>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> collective_contagion(0, status, (0, 1, 2))
True
>>> collective_contagion(1, status, (0, 1, 2))
False
>>> collective_contagion(3, status, (0, 1, 2))
False
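The mechanism above can be sketched in a few lines: the node must be susceptible and belong to the edge, and every other node in the edge must be infected. A standalone sketch consistent with the doctest (not the library source):

```python
def collective_contagion(node, status, edge):
    """True iff node is susceptible, belongs to edge, and all other edge members are infected."""
    if status.get(node) != "S" or node not in edge:
        return False
    return all(status.get(v) == "I" for v in edge if v != node)

status = {0: "S", 1: "I", 2: "I", 3: "S", 4: "R"}
```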
algorithms.contagion_animation(fig, H, transition_events, node_state_color_dict, edge_state_color_dict, node_radius=1, fps=1)[source]

A function to animate discrete-time contagion models for hypergraphs. Currently only supports a circular layout.

Parameters:

  • fig (matplotlib Figure object) –

  • H (HyperNetX Hypergraph object) –

  • transition_events (dictionary) – The dictionary that is output from the discrete_SIS and discrete_SIR functions with return_full_data=True

  • node_state_color_dict (dictionary) – Dictionary which specifies the colors of each node state. All node states must be specified.

  • edge_state_color_dict (dictionary) – Dictionary with keys that are edge states and values which specify the colors of each edge state (can specify an alpha parameter). All edge-dependent transition states must be specified (most common is “I”) and there must be a default “OFF” setting.

  • node_radius (float, default: 1) – The radius of the nodes to draw

  • fps (int > 0, default: 1) – Frames per second of the animation

Return type:

matplotlib Animation object

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> import matplotlib.pyplot as plt
>>> from IPython.display import HTML
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> dt = 0.1
>>> transition_events = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt, return_full_data=True)
>>> node_state_color_dict = {"S":"green", "I":"red", "R":"blue"}
>>> edge_state_color_dict = {"S":(0, 1, 0, 0.3), "I":(1, 0, 0, 0.3), "R":(0, 0, 1, 0.3), "OFF": (1, 1, 1, 0)}
>>> fps = 1
>>> fig = plt.figure()
>>> animation = contagion.contagion_animation(fig, H, transition_events, node_state_color_dict, edge_state_color_dict, node_radius=1, fps=fps)
>>> HTML(animation.to_jshtml())
algorithms.dcsbm_hypergraph(k1, k2, g1, g2, omega)[source]

A function to generate an extension of the DCSBM hypergraph as implemented by Mirah Shi and described for bipartite networks by Larremore et al. in https://doi.org/10.1103/PhysRevE.90.012805

Parameters:

  • k1 (dictionary) – This is a dictionary where the keys are node ids and the values are node degrees.

  • k2 (dictionary) – This is a dictionary where the keys are edge ids and the values are edge degrees, also known as edge sizes.

  • g1 (dictionary) – This is a dictionary where the keys are node ids and the values are the group ids to which the node belongs. The keys must match the keys of k1.

  • g2 (dictionary) – This is a dictionary where the keys are edge ids and the values are the group ids to which the edge belongs. The keys must match the keys of k2.

  • omega (2D numpy array) – This is a matrix with entries which specify the number of edges between a given node community and edge community. The number of rows must match the number of node communities and the number of columns must match the number of edge communities.

Return type:

HyperNetX Hypergraph object

Notes

The sums of k1 and k2 should be the same. If they are not the same, this function returns a warning but still runs. The sum of k1 (and k2) and omega should be the same. If they are not the same, this function returns a warning but still runs, and the number of entries in the incidence matrix is determined by the omega matrix.

The output currently is a static Hypergraph object. Dynamic hypergraphs are not currently supported.

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> import random
>>> import numpy as np
>>> n = 100
>>> k1 = {i : random.randint(1, 100) for i in range(n)}
>>> k2 = {i : sorted(k1.values())[i] for i in range(n)}
>>> g1 = {i : random.choice([0, 1]) for i in range(n)}
>>> g2 = {i : random.choice([0, 1]) for i in range(n)}
>>> omega = np.array([[100, 10], [10, 100]])
>>> H = gm.dcsbm_hypergraph(k1, k2, g1, g2, omega)
algorithms.dict2part(D)[source]

Given a dictionary mapping each vertex to its part, return a partition as a list of sets; inverse function to part2dict

Parameters:

D (dict) – Dictionary keyed by vertices with values equal to the integer index of the partition the vertex belongs to

Returns:

List of sets; one set for each part in the partition

Return type:

list
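A minimal sketch of this inverse pair (standalone illustration, not the library code):

```python
def dict2part(D):
    """Partition as a list of sets from a vertex -> part-index dict."""
    parts = {}
    for v, i in D.items():
        parts.setdefault(i, set()).add(v)
    return [parts[i] for i in sorted(parts)]

def part2dict(A):
    """Inverse: vertex -> part-index dict from a partition given as a list of sets."""
    return {v: i for i, part in enumerate(A) for v in part}
```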
algorithms.discrete_SIR(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, initial_recovereds=None, rho=None, tmin=0, tmax=inf, dt=1.0, return_full_data=False, **args)[source]

A discrete-time SIR model for hypergraphs similar to the construction described in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034 and “Simplicial models of social contagion” by Iacopini et al. https://doi.org/10.1038/s41467-019-10431-6

Parameters:

  • H (HyperNetX Hypergraph object) –

  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • gamma (float) – The healing rate

  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • initial_recovereds (list or numpy array, default: None) – An iterable of initially recovered node uids

  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.

  • tmin (float, default: 0) – Time at the start of the simulation

  • tmax (float, default: float('Inf')) – Time at which the simulation should be terminated if it hasn’t ended already.

  • dt (float > 0, default: 1.0) – Step forward in time that the simulation takes at each step.

  • return_full_data (bool, default: False) – If True, returns all the infection and recovery events at each time.

  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

Returns:

  • if return_full_data – dictionary: Time as the keys and events that happen as the values.

  • else – t, S, I, R (numpy arrays): time (t), number of susceptible (S), infected (I), and recovered (R) at each time.

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> dt = 0.1
>>> t, S, I, R = contagion.discrete_SIR(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
algorithms.discrete_SIS(H, tau, gamma, transmission_function=<function threshold>, initial_infecteds=None, rho=None, tmin=0, tmax=100, dt=1.0, return_full_data=False, **args)[source]

A discrete-time SIS model for hypergraphs as implemented in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034 and “Simplicial models of social contagion” by Iacopini et al. https://doi.org/10.1038/s41467-019-10431-6

Parameters:

  • H (HyperNetX Hypergraph object) –

  • tau (dictionary) – Edge sizes as keys (must account for all edge sizes present) and rates of infection for each size (float)

  • gamma (float) – The healing rate

  • transmission_function (lambda function, default: threshold) – A lambda function that has required arguments (node, status, edge) and optional arguments

  • initial_infecteds (list or numpy array, default: None) – Iterable of initially infected node uids

  • rho (float from 0 to 1, default: None) – The fraction of initially infected individuals. rho and initial_infecteds cannot both be specified.

  • tmin (float, default: 0) – Time at the start of the simulation

  • tmax (float, default: 100) – Time at which the simulation should be terminated if it hasn’t ended already.

  • dt (float > 0, default: 1.0) – Step forward in time that the simulation takes at each step.

  • return_full_data (bool, default: False) – If True, returns all the infection and recovery events at each time.

  • **args (Optional arguments to transmission function) – This allows user-defined transmission functions with extra parameters.

Returns:

  • if return_full_data – dictionary: Time as the keys and events that happen as the values.

  • else – t, S, I (numpy arrays): time (t), number of susceptible (S), and infected (I) at each time.

Notes

Example:

>>> import hypernetx.algorithms.contagion as contagion
>>> import random
>>> import hypernetx as hnx
>>> n = 1000
>>> m = 10000
>>> hyperedgeList = [random.sample(range(n), k=random.choice([2,3])) for i in range(m)]
>>> H = hnx.Hypergraph(hyperedgeList)
>>> tau = {2:0.1, 3:0.1}
>>> gamma = 0.1
>>> tmax = 100
>>> dt = 0.1
>>> t, S, I = contagion.discrete_SIS(H, tau, gamma, rho=0.1, tmin=0, tmax=tmax, dt=dt)
algorithms.erdos_renyi_hypergraph(n, m, p, node_labels=None, edge_labels=None)[source]

A function to generate an Erdos-Renyi hypergraph as implemented by Mirah Shi and described for bipartite networks by Aksoy et al. in https://doi.org/10.1093/comnet/cnx001

Parameters:

  • n (int) – Number of nodes

  • m (int) – Number of edges

  • p (float) – The probability that a bipartite edge is created

  • node_labels (list, default=None) – Vertex labels

  • edge_labels (list, default=None) – Hyperedge labels

Return type:

HyperNetX Hypergraph object

Example:

>>> import hypernetx.algorithms.generative_models as gm
>>> n = 1000
>>> m = n
>>> p = 0.01
>>> H = gm.erdos_renyi_hypergraph(n, m, p)
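Under the bipartite construction described above, each (node, edge) incidence is kept independently with probability p. A minimal standalone sketch returning a plain edge dict (not the library implementation):

```python
import random

def erdos_renyi_hypergraph(n, m, p, seed=None):
    """Each of the n*m possible node-edge incidences appears independently with probability p."""
    rng = random.Random(seed)
    return {e: [v for v in range(n) if rng.random() < p] for e in range(m)}

H = erdos_renyi_hypergraph(n=6, m=4, p=0.5, seed=1)
```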
algorithms.get_pi(P)[source]

Returns the eigenvector corresponding to the largest eigenvalue (in magnitude), normalized so its entries sum to 1. Intended for the probability transition matrix of a random walk on a (connected) hypergraph, in which case the output can be interpreted as the stationary distribution.

Parameters:

P (csr matrix) – Probability transition matrix

Returns:

pi – Stationary distribution of random walk defined by P

Return type:

numpy.ndarray
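The stationary distribution of a row-stochastic P satisfies \(\pi P = \pi\), i.e. it is the leading left eigenvector. A toy dense-numpy sketch of that computation, assuming a small dense P (not the library's sparse implementation):

```python
import numpy as np

def get_pi(P):
    """Leading left eigenvector of row-stochastic P, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)    # left eigenvectors of P = right eigenvectors of P.T
    lead = np.argmax(np.abs(vals))     # eigenvalue of largest magnitude (1 for stochastic P)
    pi = np.real(vecs[:, lead])
    return pi / pi.sum()               # normalize (also fixes an overall sign flip)

# Two-state chain: stationary distribution is (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = get_pi(P)
```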
algorithms.homology_basis(bd, k=None, boundary=False, **kwargs)[source]

Compute a basis for the kth-simplicial homology group, \(H_k\), defined by a chain complex \(C\) with boundary maps given by bd \(= \{k: \partial_k\}\)

Parameters:

  • bd (dict) – dict of boundary matrices on k-chains to k-1 chains keyed on k; if krange is a tuple then all boundary matrices for k in [krange[0], …, krange[1]] inclusive must be in the dictionary

  • k (int or list of ints, optional, default=None) – k must be a positive integer or a list of 2 integers indicating min and max dimensions to be computed; if None, all homology groups will be computed from available boundary matrices in bd

  • boundary (bool) – option to return a basis for the boundary group from each dimension. Needed to compute the shortest generators in the homology group.

Returns:

  • basis (dict) – dict of generators as 0-1 tuples keyed by dim; basis for dimension k will be returned only if bd[k] and bd[k+1] have been provided.

  • im (dict) – dict of boundary group generators keyed by dim
algorithms.hypergraph_homology_basis(h, k=None, shortest=False, interpreted=True)[source]

Computes the kth-homology groups mod 2 for the ASC associated with the hypergraph h for k in krange inclusive

Parameters:

  • h (hnx.Hypergraph) –

  • k (int or list of length 2, optional, default=None) – k must be an integer greater than 0 or a list of length 2 indicating min and max dimensions to be computed

  • shortest (bool, optional, default=False) – option to look for the shortest representative for each coset in the homology group; only practical for relatively small examples

  • interpreted (bool, optional, default=True) – if True will return an explicit basis in terms of the k-chains

Returns:

  • basis (list) – list of generators as k-chains as boolean vectors

  • interpreted_basis – lists of k-chains in basis
algorithms.individual_contagion(node, status, edge)[source]

The individual contagion mechanism described in “The effect of heterogeneity on hypergraph contagion models” by Landry and Restrepo https://doi.org/10.1063/5.0020034

Parameters:

  • node (hashable) – The node uid to infect (If it doesn’t have status “S”, it will automatically return False)

  • status (dictionary) – The nodes are keys and the values are statuses (The infected state is denoted with “I”)

  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

Returns:

False if there is no potential to infect and True if there is.

Return type:

bool

Notes

Example:

>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> individual_contagion(0, status, (0, 1, 3))
True
>>> individual_contagion(1, status, (0, 1, 2))
False
>>> individual_contagion(3, status, (0, 3, 4))
False
algorithms.interpret(Ck, arr, labels=None)[source]

Returns the k-cells in Ck referenced by the 0-1 vectors in arr

Parameters:

  • Ck (list) – a list of k-cells being referenced by arr

  • arr (np.array) – array of 0-1 vectors

  • labels (dict, optional) – dictionary of labels to associate to the nodes in the cells

Returns:

list of k-cells referenced by the data in arr

Return type:

list
algorithms.kchainbasis(h, k)[source]

Compute the set of k-dimensional cells in the abstract simplicial complex associated with the hypergraph.

Parameters:

  • h (hnx.Hypergraph) –

  • k (int) – dimension of cell

Returns:

an ordered list of k-chains represented as tuples of length k+1

Return type:

list

See also

hnx.hypergraph.toplexes

Notes

  • Method works best if h is simple [Berge], i.e., no edge contains another and there are no duplicate edges (toplexes).

  • Hypergraph node uids must be sortable.
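Under the standard ASC construction, the k-cells are the (k+1)-subsets of each hyperedge's (sortable) node set. A standalone sketch over plain sets of nodes (not the library code):

```python
from itertools import combinations

def kchainbasis(edges, k):
    """Ordered list of k-cells (tuples of length k+1) of the ASC generated by the hyperedges."""
    cells = set()
    for e in edges:
        cells.update(combinations(sorted(e), k + 1))
    return sorted(cells)

edges = [{1, 2, 3}, {3, 4}]
```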
algorithms.kumar(HG, delta=0.01)[source]

Compute a partition of the vertices in hypergraph HG as per Kumar’s algorithm [1]

Parameters:

  • HG (Hypergraph) –

  • delta (float, optional) – convergence stopping criterion

Returns:

A partition of the vertices in HG

Return type:

list of sets
algorithms.last_step(HG, L, wdc=<function linear>, delta=0.01)[source]

Given some initial partition L, compute a new partition of the vertices in HG as per the Last-Step algorithm [2]

Note

This is a very simple algorithm that tries moving nodes between communities to improve hypergraph modularity. It requires an initial non-trivial partition, which can be obtained for example via graph clustering on the 2-section of HG, or via Kumar’s algorithm.

Parameters:

  • HG (Hypergraph) –

  • L (list of sets) – some initial partition of the vertices in HG

  • wdc (func, optional) – Hyperparameter for hypergraph modularity [2]

  • delta (float, optional) – convergence stopping criterion

Returns:

A new partition for the vertices in HG

Return type:

list of sets
algorithms.linear(d, c)[source]

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This is the default choice for the modularity() and last_step() functions.

Parameters:

  • d (int) – Number of vertices in an edge

  • c (int) – Number of vertices in the majority class

Returns:

c/d if c > d/2 else 0

Return type:

float
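This hyperparameter is a one-liner; a standalone sketch matching the stated return value:

```python
def linear(d, c):
    """Weight c/d for a d-edge with c majority-class vertices; 0 unless a strict majority."""
    return c / d if c > d / 2 else 0

# A 3-edge with 2 vertices in the majority class is a strict majority, so the weight is 2/3;
# a 4-edge with 2 such vertices is only a tie, so the weight is 0.
```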
algorithms.logical_dot(ar1, ar2)[source]

Returns the boolean equivalent of the dot product mod 2 of two 1-d arrays of the same length.

Parameters:

  • ar1 (numpy.ndarray) – 1-d array

  • ar2 (numpy.ndarray) – 1-d array

Returns:

boolean value associated with dot product mod 2

Return type:

bool

Raises:

HyperNetXError – If the arrays are not of the same length an error will be raised.
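A standalone numpy sketch of the mod-2 dot product (not the library source, which raises HyperNetXError instead of ValueError):

```python
import numpy as np

def logical_dot(ar1, ar2):
    """Dot product mod 2 of two equal-length 1-d boolean arrays, as a bool."""
    if len(ar1) != len(ar2):
        raise ValueError("arrays must have the same length")
    return bool(np.logical_and(ar1, ar2).sum() % 2)

a = np.array([True, True, False])
b = np.array([True, True, True])
```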
algorithms.logical_matadd(mat1, mat2)[source]

Returns the boolean equivalent of matrix addition mod 2 on two binary arrays stored as type boolean

Parameters:

  • mat1 (np.ndarray) – 2-d array of boolean values

  • mat2 (np.ndarray) – 2-d array of boolean values

Returns:

mat – boolean matrix equivalent to the mod 2 matrix addition of the matrices as matrices over Z/2Z

Return type:

np.ndarray

Raises:

HyperNetXError – If the dimensions are not equal an error will be raised.
+
+algorithms.logical_matmul(mat1, mat2)[source]
+

Returns the boolean equivalent of matrix multiplication mod 2 on two +binary arrays stored as type boolean

+
+
Parameters:
+
    +
  • mat1 (np.ndarray) – 2-d array of boolean values

  • +
  • mat2 (np.ndarray) – 2-d array of boolean values

  • +
+
+
Returns:
+

mat – boolean matrix equivalent to the mod 2 matrix multiplication of the matrices as matrices over Z/2Z

+
+
Return type:
+

np.ndarray

+
+
Raises:
+

HyperNetXError – If inner dimensions are not equal an error will be raised.

+
+
+
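These mod-2 operations all reduce to ordinary integer arithmetic followed by a parity check. A minimal sketch of the matrix product (function name hypothetical; not the library's implementation):

```python
import numpy as np

def logical_matmul_sketch(mat1, mat2):
    """Boolean matrix product over Z/2Z: integer matmul, then parity."""
    if mat1.shape[1] != mat2.shape[0]:
        raise ValueError("inner dimensions must agree")
    return (mat1.astype(int) @ mat2.astype(int)) % 2 == 1

A = np.array([[True, True], [False, True]])
B = np.array([[True, False], [True, True]])
C = logical_matmul_sketch(A, B)  # [[False, True], [True, True]]
```

The same pattern (`% 2` on the integer result) gives logical_dot and logical_matadd.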
+ +
+
+algorithms.majority(d, c)[source]
+

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This corresponds to the majority rule [3].

+
+
Parameters:
+
    +
  • d (int) – Number of vertices in an edge

  • +
  • c (int) – Number of vertices in the majority class

  • +
+
+
Returns:
+

1 if c>d/2 else 0

+
+
Return type:
+

bool

+
+
+
+ +
+
+algorithms.majority_vote(node, status, edge)[source]
+

The majority vote contagion mechanism. If a majority of neighbors are contagious, it is possible for an individual to change their opinion. If opinions are divided equally, choose randomly.

+
+
Parameters:
+
    +
  • node (hashable) – The node uid to infect (If it doesn’t have status “S”, it will automatically return False)

  • +
  • status (dictionary) – The nodes are keys and the values are statuses (The infected state denoted with “I”)

  • +
  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

  • +
+
+
Returns:
+

False if there is no potential to infect and True if there is.

+
+
Return type:
+

bool

+
+
+

Notes

+

Example:

+
>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> majority_vote(0, status, (0, 1, 2))
True
>>> majority_vote(0, status, (0, 1, 2, 3))
True
>>> majority_vote(1, status, (0, 1, 2))
False
>>> majority_vote(3, status, (0, 1, 2))
False
+
+
+
+ +
+
+algorithms.matmulreduce(arr, reverse=False)[source]
+

Recursively applies a ‘logical multiplication’ to a list of boolean arrays.

+

For arr = [arr[0], arr[1], arr[2], …, arr[n]], returns the product arr[0]arr[1]…arr[n]. If reverse = True, returns the product arr[n]arr[n-1]…arr[0].

+
+
Parameters:
+
    +
  • arr (list of np.array) – list of nxm matrices represented as np.array

  • +
  • reverse (bool, optional) – order to multiply the matrices

  • +
+
+
Returns:
+

P – Product of matrices in the list

+
+
Return type:
+

np.array

+
+
+
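The recursion described above is simply a fold over the list; a sketch using functools.reduce with a mod-2 product helper (names hypothetical, not the library source):

```python
from functools import reduce
import numpy as np

def logical_matmul(m1, m2):
    # mod-2 boolean matrix product (assumed helper)
    return (m1.astype(int) @ m2.astype(int)) % 2 == 1

def matmulreduce_sketch(arr, reverse=False):
    # Fold the product left-to-right; reverse=True gives arr[n]...arr[0]
    if reverse:
        arr = arr[::-1]
    return reduce(logical_matmul, arr)

X = np.array([[False, True], [True, False]])  # permutation matrix
P = matmulreduce_sketch([X, X])  # X·X is the identity over Z/2Z
```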
+ +
+
+algorithms.modularity(HG, A, wdc=<function linear>)[source]
+

Computes modularity of hypergraph HG with respect to partition A.

+
+
Parameters:
+
    +
  • HG (Hypergraph) – The hypergraph with some precomputed attributes via: precompute_attributes(HG)

  • +
  • A (list of sets) – Partition of the vertices in HG

  • +
  • wdc (func, optional) – Hyperparameter for hypergraph modularity [2]

  • +
+
+
+
+

Note

+

For ‘wdc’, any function of the form w(d,c) that returns 0 when c <= d/2 and a value in [0,1] otherwise can be used. Default is ‘linear’; other supplied choices are ‘majority’ and ‘strict’.

+
+
+
Returns:
+

The modularity function for partition A on HG

+
+
Return type:
+

float

+
+
+
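The three supplied wdc choices (linear, majority, strict) follow directly from the formulas given in their entries; a sketch for reference (not the library source):

```python
def linear(d, c):
    # default: proportional reward for the majority fraction
    return c / d if c > d / 2 else 0

def majority(d, c):
    # majority rule: any strict majority counts fully
    return 1 if c > d / 2 else 0

def strict(d, c):
    # strict rule: only pure edges count
    return 1 if c == d else 0

values = (linear(4, 3), majority(4, 3), strict(4, 3))  # (0.75, 1, 0)
```

All three return 0 whenever the majority class holds no more than half the edge, as required by the wdc contract above.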
+ +
+
+algorithms.norm_lap(H, weights=False, index=True)[source]
+

Normalized Laplacian matrix of the hypergraph. Symmetrizes the probability transition matrix of a hypergraph random walk using the stationary distribution, using the digraph Laplacian defined in:

+

Chung, Fan. “Laplacians and the Cheeger inequality for directed graphs.” Annals of Combinatorics 9.1 (2005): 1-19.

+

and studied in the context of hypergraphs in:

+

Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. Hypergraph random walks, laplacians, and clustering. In Proceedings of CIKM 2020, (2020): 495-504.

+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • +
  • weights (bool, optional, default=False) – Uses cell_weights; if False, uniform weights are utilized.

  • +
  • index (bool, optional) – Whether to return matrix-index to vertex-label mapping

  • +
+
+
Returns:
+

    +
  • P (scipy.sparse.csr.csr_matrix) – Probability transition matrix of the random walk on the hypergraph

  • +
  • id (list) – contains list of index of node ids for rows

  • +
+

+
+
+
+ +
+
+algorithms.part2dict(A)[source]
+

Given a partition (list of sets), returns a dictionary mapping each vertex to its part; inverse function to dict2part.

+
+
Parameters:
+

A (list of sets) – a partition of the vertices

+
+
Returns:
+

a dictionary with {vertex: partition index}

+
+
Return type:
+

dict

+
+
+
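The mapping and its inverse, dict2part, can be sketched in a few lines (names mirror the documented functions, but this is not the library source):

```python
def part2dict_sketch(A):
    # {vertex: index of the part containing it}
    return {v: i for i, part in enumerate(A) for v in part}

def dict2part_sketch(D):
    # inverse: regroup vertices into a list of sets by part index
    parts = [set() for _ in range(max(D.values()) + 1)]
    for v, i in D.items():
        parts[i].add(v)
    return parts

A = [{"a", "b"}, {"c"}]
D = part2dict_sketch(A)  # {'a': 0, 'b': 0, 'c': 1}
```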
+ +
+
+algorithms.precompute_attributes(H)[source]
+

Precompute some values on hypergraph HG for faster computation of hypergraph modularity. This needs to be run before calling either modularity() or last_step().

+
+

Note

+

If HG is unweighted, v.weight is set to 1 for each vertex v in HG. The weighted degree of each vertex v is stored in v.strength. The total edge weights for each edge cardinality are stored in HG.d_weights. Binomial coefficients used to speed up the modularity computation are stored in HG.bin_coef. Isolated vertices found only in edge(s) of size 1 are dropped.

+
+
+
Parameters:
+

HG (Hypergraph) –

+
+
Returns:
+

H – New hypergraph with added attributes

+
+
Return type:
+

Hypergraph

+
+
+
+ +
+
+algorithms.prob_trans(H, weights=False, index=True, check_connected=True)[source]
+

The probability transition matrix of a random walk on the vertices of a hypergraph. At each step in the walk, the next vertex is chosen by:

+
    +
  1. Selecting a hyperedge e containing the vertex, with probability proportional to w(e)

  2. Selecting a vertex v within e, with probability proportional to gamma(v,e)
+

If weights are not specified, then all weights are uniform and the walk is equivalent to a simple random walk. If weights are specified, the hyperedge weights w(e) are determined from the weights gamma(v,e).

+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • +
  • weights (bool, optional, default=False) – Use the cell_weights associated with the hypergraph. If False, uniform weights are utilized.

  • +
  • index (bool, optional) – Whether to return matrix index to vertex label mapping

  • +
+
+
Returns:
+

    +
  • P (scipy.sparse.csr.csr_matrix) – Probability transition matrix of the random walk on the hypergraph

  • +
  • index (list) – contains list of index of node ids for rows

  • +
+

+
+
+
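With uniform weights, the two-step walk above gives P[u][v] = Σ over edges e containing both u and v of (1/deg(u))·(1/|e|). A dense toy sketch (edge data hypothetical; self-transitions are allowed, as in the two-step description):

```python
import numpy as np

edges = [("a", "b", "c"), ("b", "c")]  # hypothetical hyperedges
verts = sorted({v for e in edges for v in e})
idx = {v: i for i, v in enumerate(verts)}
deg = {v: sum(v in e for e in edges) for v in verts}

P = np.zeros((len(verts), len(verts)))
for e in edges:
    for u in e:
        for v in e:
            # pick an incident edge uniformly, then a vertex within it uniformly
            P[idx[u], idx[v]] += (1 / deg[u]) * (1 / len(e))
```

Each row of P sums to 1, as a probability transition matrix must.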
+ +
+
+algorithms.reduced_row_echelon_form_mod2(M)[source]
+

Computes the invertible transformation matrices needed to compute the reduced row echelon form of M modulo 2

+
+
Parameters:
+

M (np.array) – a rectangular matrix with elements in \(Z_2\)

+
+
Returns:
+

L, S, Linv – LM = S, where S is the reduced row echelon form of M, and M = Linv S

+
+
Return type:
+

np.arrays

+
+
+
+ +
+
+algorithms.s_betweenness_centrality(H, s=1, edges=True, normalized=True, return_singletons=True)[source]
+

A centrality measure for an s-edge (node) subgraph of H based on shortest paths. Equals the betweenness centrality of vertices in the edge (node) s-linegraph.

+

In a graph (2-uniform hypergraph) the betweenness centrality of a vertex \(v\) is the ratio of the number of non-trivial shortest paths between any pair of vertices in the graph that pass through \(v\) divided by the total number of non-trivial shortest paths in the graph.

+

The centrality of an edge is computed with respect to all shortest s-edge paths. Let \(V\) = the set of vertices in the linegraph, \(\sigma(s,t)\) = the number of shortest paths between vertices \(s\) and \(t\), and \(\sigma(s,t|v)\) = the number of those paths that pass through vertex \(v\).

+
+\[c_B(v) = \sum_{s \neq t \in V} \frac{\sigma(s, t|v)}{\sigma(s,t)}\]
+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) –

  • +
  • s (int) – s connectedness requirement

  • +
  • edges (bool, optional) – determines if edge or node linegraph

  • +
  • normalized (bool, optional, default=True) – If True the betweenness values are normalized by 2/((n-1)(n-2)), where n is the number of edges in H

  • +
  • return_singletons (bool, optional) – if False will ignore singleton components of linegraph

  • +
+
+
Returns:
+

A dictionary of s-betweenness centrality value of the edges.

+
+
Return type:
+

dict

+
+
+
+ +
+
+algorithms.s_closeness_centrality(H, s=1, edges=True, return_singletons=True, source=None)[source]
+

Within a connected component, the s-closeness centrality of an edge (node) is the number of edges (nodes) in the component minus 1, divided by the sum of the distances between that edge (node) and all other edges (nodes) in the component.

+

\(V\) = the set of vertices in the linegraph, \(n = |V|\), and \(d\) = shortest path distance.

+
+\[C(u) = \frac{n - 1}{\sum_{v \neq u \in V} d(v, u)}\]
+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) –

  • +
  • s (int, optional) –

  • +
  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • +
  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • +
  • source (str, optional) – Identifier of node or edge of interest for computing centrality

  • +
+
+
Returns:
+

returns the s-closeness centrality value of the edges (nodes). If source=None, a dictionary of values for each s-edge in H is returned. If source is specified, a single value is returned.

+
+
Return type:
+

dict or float

+
+
+
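On an ordinary graph, the same formula reads C(u) = (n-1)/Σ d(v,u); a BFS-based sketch on a toy adjacency list (data hypothetical, not the linegraph machinery used by the library):

```python
from collections import deque

def closeness(adj, u):
    # BFS distances from u, then (n-1) / sum of distances
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return (len(adj) - 1) / sum(d for v, d in dist.items() if v != u)

adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}  # path a-b-c
center = closeness(adj, "b")  # (3-1)/(1+1) = 1.0
```

The middle vertex of the path scores highest, as expected.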
+ +
+
+algorithms.s_eccentricity(H, s=1, edges=True, source=None, return_singletons=True)[source]
+

The length of the longest shortest path from a vertex \(u\) to every other vertex in the s-linegraph. \(V\) = set of vertices in the s-linegraph, \(d\) = shortest path distance.

+
+\[\text{s-ecc}(u) = \text{max}\{d(u,v): v \in V\}\]
+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) –

  • +
  • s (int, optional) –

  • +
  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • +
  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • +
  • source (str, optional) – Identifier of node or edge of interest for computing centrality

  • +
+
+
Returns:
+

returns the s-eccentricity value of the edges (nodes). If source=None, a dictionary of values for each s-edge in H is returned. If source is specified, a single value is returned. If the s-linegraph is disconnected, np.inf is returned.

+
+
Return type:
+

dict or float

+
+
+
+ +
+
+algorithms.s_harmonic_centrality(H, s=1, edges=True, source=None, normalized=False, return_singletons=True)[source]
+

A centrality measure for an s-edge subgraph of H. A value equal to 1 means the s-edge intersects every other s-edge in H. All values range between 0 and 1. Edges of size less than s return 0. If H contains only one s-edge, a 0 is returned.

+

The denormalized reciprocal of the harmonic mean of all distances from \(u\) to all other vertices. \(V\) = the set of vertices in the linegraph, \(d\) = shortest path distance.

+
+\[C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)}\]
+

Normalized, this becomes:

\[C(u) = \sum_{v \neq u \in V} \frac{1}{d(v, u)} \cdot \frac{2}{(n-1)(n-2)}\]

where \(n\) is the number of vertices.

+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) –

  • +
  • s (int, optional) –

  • +
  • edges (bool, optional) – Indicates if method should compute edge linegraph (default) or node linegraph.

  • +
  • return_singletons (bool, optional) – Indicates if method should return values for singleton components.

  • +
  • source (str, optional) – Identifier of node or edge of interest for computing centrality

  • +
+
+
Returns:
+

returns the s-harmonic closeness centrality value of the edges, a number between 0 and 1 inclusive. If source=None, a dictionary of values for each s-edge in H is returned. If source is specified, a single value is returned.

+
+
Return type:
+

dict or float

+
+
+
+ +
+
+algorithms.s_harmonic_closeness_centrality(H, s=1, edge=None)[source]
+
+ +
+
+algorithms.smith_normal_form_mod2(M)[source]
+

Computes the invertible transformation matrices needed to compute the Smith Normal Form of M modulo 2

+
+
Parameters:
+
    +
  • M (np.array) – a rectangular matrix with data type bool

  • +
  • track (bool) – if track=True will print out the transformation as Z/2Z matrix as it +discovers L[i] and R[j]

  • +
+
+
Returns:
+

L, R, S, Linv – LMR = S is the Smith Normal Form of the matrix M.

+
+
Return type:
+

np.arrays

+
+
+
+

Note

+

Given an m x n matrix \(M\) with entries in \(Z_2\), we start with the equation \(L M R = S\), where \(L = I_m\) and \(R = I_n\) are identity matrices and \(S = M\). We repeatedly apply actions to the left and right sides of the equation to transform S into a diagonal matrix. For each action applied to the left side, we apply its inverse action to the right side of \(I_m\) to generate \(L^{-1}\). Finally we verify: \(L M R = S\) and \(L L^{-1} = I_m\).

+
+
+ +
+
+algorithms.spec_clus(H, k, existing_lap=None, weights=False)[source]
+

Hypergraph spectral clustering of the vertex set into k disjoint clusters using the normalized hypergraph Laplacian. Equivalent to the “RDC-Spec” Algorithm 1 in:

+

Hayashi, K., Aksoy, S. G., Park, C. H., & Park, H. Hypergraph random walks, laplacians, and clustering. In Proceedings of CIKM 2020, (2020): 495-504.

+
+
Parameters:
+
    +
  • H (hnx.Hypergraph) – The hypergraph must be connected, meaning there is a path linking any two vertices

  • +
  • k (int) – Number of clusters

  • +
  • existing_lap (csr matrix, optional) – Uses this existing Laplacian if provided; otherwise, the normalized hypergraph Laplacian will be computed and utilized

  • +
  • weights (bool, optional) – Use the cell_weights of the hypergraph. If False uniform weights are used.

  • +
+
+
Returns:
+

clusters – Vertex cluster dictionary, keyed by integers 0,…,k-1, with lists of vertices as values.

+
+
Return type:
+

dict

+
+
+
+ +
+
+algorithms.strict(d, c)[source]
+

Hyperparameter for hypergraph modularity [2] for a d-edge with c vertices in the majority class. This corresponds to the strict rule [3].

+
+
Parameters:
+
    +
  • d (int) – Number of vertices in an edge

  • +
  • c (int) – Number of vertices in the majority class

  • +
+
+
Returns:
+

1 if c==d else 0

+
+
Return type:
+

bool

+
+
+
+ +
+
+algorithms.swap_columns(i, j, *args)[source]
+

Swaps the ith and jth columns of each matrix in args. Returns a list of new matrices.

+
+
Parameters:
+
    +
  • i (int) –

  • +
  • j (int) –

  • +
  • args (np.arrays) –

  • +
+
+
Returns:
+

list of copies of args with ith and jth columns swapped

+
+
Return type:
+

list

+
+
+
+ +
+
+algorithms.swap_rows(i, j, *args)[source]
+

Swaps the ith and jth rows of each matrix in args. Returns a list of new matrices.

+
+
Parameters:
+
    +
  • i (int) –

  • +
  • j (int) –

  • +
  • args (np.arrays) –

  • +
+
+
Returns:
+

list of copies of args with ith and jth row swapped

+
+
Return type:
+

list

+
+
+
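Both helpers are thin wrappers around fancy indexing; a NumPy sketch (function names hypothetical, not the library source):

```python
import numpy as np

def swap_rows_sketch(i, j, *args):
    out = []
    for m in args:
        m = m.copy()            # copies, so the inputs are untouched
        m[[i, j]] = m[[j, i]]   # exchange rows i and j
        out.append(m)
    return out

def swap_columns_sketch(i, j, *args):
    out = []
    for m in args:
        m = m.copy()
        m[:, [i, j]] = m[:, [j, i]]  # exchange columns i and j
        out.append(m)
    return out

M = np.array([[1, 2], [3, 4]])
R = swap_rows_sketch(0, 1, M)[0]     # [[3, 4], [1, 2]]
C = swap_columns_sketch(0, 1, M)[0]  # [[2, 1], [4, 3]]
```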
+ +
+
+algorithms.threshold(node, status, edge, tau=0.1)[source]
+

The threshold contagion mechanism

+
+
Parameters:
+
    +
  • node (hashable) – The node uid to infect (If it doesn’t have status “S”, it will automatically return False)

  • +
  • status (dictionary) – The nodes are keys and the values are statuses (The infected state denoted with “I”)

  • +
  • edge (iterable) – Iterable of node ids (node must be in the edge or it will automatically return False)

  • +
  • tau (float between 0 and 1, default: 0.1) – The fraction of nodes in an edge that must be infected for the edge to be able to transmit to the node

  • +
+
+
Returns:
+

False if there is no potential to infect and True if there is.

+
+
Return type:
+

bool

+
+
+

Notes

+

Example:

+
>>> status = {0:"S", 1:"I", 2:"I", 3:"S", 4:"R"}
>>> threshold(0, status, (0, 2, 3, 4), tau=0.2)
True
>>> threshold(0, status, (0, 2, 3, 4), tau=0.5)
False
>>> threshold(3, status, (1, 2, 3), tau=1)
False
+
+
+
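A sketch consistent with the doctest above, assuming the infected fraction is taken over the whole edge and compared with >= tau (both assumptions; consult the source for the exact convention):

```python
def threshold_sketch(node, status, edge, tau=0.1):
    # only susceptible nodes that belong to the edge can be infected
    if status.get(node) != "S" or node not in edge:
        return False
    infected = sum(status.get(v) == "I" for v in edge)
    return infected / len(edge) >= tau  # assumed comparison

status = {0: "S", 1: "I", 2: "I", 3: "S", 4: "R"}
r = threshold_sketch(0, status, (0, 2, 3, 4), tau=0.2)  # 1/4 >= 0.2 -> True
```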
+ +
+
+algorithms.two_section(HG)[source]
+

Creates a random-walk-based [1] 2-section igraph Graph with transition weights defined by the weights of the hyperedges.

+
+
Parameters:
+

HG (Hypergraph) –

+
+
Returns:
+

The 2-section graph built from HG

+
+
Return type:
+

igraph.Graph

+
+
+
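The 2-section replaces each hyperedge by a clique on its vertices. One common weighting in random-walk-based constructions gives each pair in edge e weight w(e)/(|e|-1); whether [1] uses exactly this scheme should be checked against the source. A sketch on hypothetical data, accumulating pair weights in a dict rather than building an igraph.Graph:

```python
from itertools import combinations
from collections import defaultdict

edges = {"e1": {"a", "b", "c"}, "e2": {"b", "c"}}  # hypothetical hyperedges
w = {"e1": 1.0, "e2": 1.0}                         # hypothetical edge weights

pair_w = defaultdict(float)
for e, members in edges.items():
    # each hyperedge contributes w(e)/(|e|-1) to every pair of its vertices
    for u, v in combinations(sorted(members), 2):
        pair_w[(u, v)] += w[e] / (len(members) - 1)
```

Pairs that co-occur in several hyperedges accumulate weight, e.g. (b, c) here.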
+ +
+

algorithms


classes package

+
+

Submodules

+
+
+

classes.entity module

+
+
+class classes.entity.Entity(entity: DataFrame | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | None = None, data_cols: Sequence[T] = [0, 1], data: ndarray | None = None, static: bool = False, labels: OrderedDict[T, Sequence[T]] | None = None, uid: Hashable | None = None, weight_col: str | int | None = 'cell_weights', weights: Sequence[float] | float | int | str | None = 1, aggregateby: str | dict | None = 'sum', properties: DataFrame | dict[int, dict[T, dict[Any, Any]]] | None = None, misc_props_col: str = 'properties', level_col: str = 'level', id_col: str = 'id')[source]
+

Bases: object

+

Base class for handling N-dimensional data when building network-like models, i.e., Hypergraph

+
+
Parameters:
+
    +
  • entity (pandas.DataFrame, dict of lists or sets, list of lists or sets, optional) – If a DataFrame with N columns, represents N-dimensional entity data (data table). Otherwise, represents 2-dimensional entity data (system of sets). TODO: Test for compatibility with list of Entities and update docs

  • +
  • data (numpy.ndarray, optional) – 2D M x N ndarray of ints (data table); sparse representation of an N-dimensional incidence tensor with M nonzero cells. Ignored if entity is provided.

  • +
  • static (bool, default=False) – If True, entity data may not be altered, and the state_dict will never be cleared. Otherwise, rows may be added to and removed from the data table, and updates will clear the state_dict.

  • +
  • labels (collections.OrderedDict of lists, optional) – User-specified labels in corresponding order to ints in data. Ignored if entity is provided or data is not provided.

  • +
  • uid (hashable, optional) – A unique identifier for the object

  • +
  • weights (str or sequence of float, optional) –

    User-specified cell weights corresponding to entity data. +If sequence of floats and entity or data defines a data table,

    +
    +

    length must equal the number of rows.

    +
    +
    +
    If sequence of floats and entity defines a system of sets,

    length must equal the total sum of the sizes of all sets.

    +
    +
    If str and entity is a DataFrame,

    must be the name of a column in entity.

    +
    +
    +

    Otherwise, weight for all cells is assumed to be 1.

    +

  • +
  • aggregateby ({'sum', 'last', 'count', 'mean', 'median', 'max', 'min', 'first', None}) – Name of the function used to aggregate cell weights of duplicate rows when entity or data defines a data table; default is 'sum'. If None, duplicate rows will be dropped without aggregating cell weights. Effectively ignored if entity defines a system of sets.

  • +
  • properties (pandas.DataFrame or doubly-nested dict, optional) – User-specified properties to be assigned to individual items in the data, i.e., cell entries in a data table; sets or set elements in a system of sets. See Notes for a detailed explanation. If a DataFrame, each row gives [optional item level, item label, optional named properties, {property name: property value}] (order of columns does not matter; see Notes for an example). If a doubly-nested dict, {item level: {item label: {property name: property value}}}.

  • +
  • misc_props_col (str, default="properties") – Column name for miscellaneous properties in properties; see Notes for explanation.

  • +
  • level_col (str, default="level") – Column name for the level index in properties; see Notes for explanation.

  • +
  • id_col (str, default="id") – Column name for the item name in properties; see Notes for explanation.

  • +
+
+
+

Notes

+

A property is a named attribute assigned to a single item in the data.

+

You can pass a table of properties to properties as a DataFrame:


Level (optional) | ID           | [explicit property type] | […] | misc. properties
-----------------|--------------|--------------------------|-----|--------------------------------
0                | level 0 item | property value           | …   | {property name: property value}
1                | level 1 item | property value           | …   | {property name: property value}
…                |              |                          |     |
N                | level N item | property value           | …   | {property name: property value}

+

The Level column is optional. If not provided, properties will be assigned by ID (i.e., if an ID appears at multiple levels, the same properties will be assigned to all occurrences).

+

The names of the Level (if provided) and ID columns must be specified by level_col and id_col. misc_props_col can be used to specify the name of the column to be used for miscellaneous properties; if no column by that name is found, a new column will be created and populated with empty dicts. All other columns will be considered explicit property types. The order of the columns does not matter.

+

This method assumes that there are no rows with the same (Level, ID); if duplicates are found, all but the first occurrence will be dropped.

+
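The doubly-nested dict form of properties described above can be written out directly; a hypothetical example (item labels invented for illustration):

```python
# {item level: {item label: {property name: property value}}}
properties = {
    0: {"edge1": {"color": "red", "weight": 2}},   # level 0 items (e.g. edges)
    1: {"nodeA": {"color": "blue"}},               # level 1 items (e.g. nodes)
}
color = properties[1]["nodeA"]["color"]  # 'blue'
```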
+
+add(*args)[source]
+

Updates the underlying data table with new entity data from multiple sources

+
+
Parameters:
+

*args – variable length argument list of Entity and/or representations of entity data

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+

Warning

+

Adding an element directly to an Entity will not add the element to any Hypergraphs constructed from that Entity, and will cause an error. Use Hypergraph.add_edge or Hypergraph.add_node_to_edge instead.

+
+
+

See also

+
+
add_element

update from a single source

+
+
+

Hypergraph.add_edge, Hypergraph.add_node_to_edge

+
+
+ +
+
+add_element(data)[source]
+

Updates the underlying data table with new entity data

+

Supports adding from either an existing Entity or a representation of entity (data table or labeled system of sets are both supported representations)

+
+
Parameters:
+

data (Entity, pandas.DataFrame, or dict of lists or sets) – new entity data

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+

Warning

+

Adding an element directly to an Entity will not add the element to any Hypergraphs constructed from that Entity, and will cause an error. Use Hypergraph.add_edge or Hypergraph.add_node_to_edge instead.

+
+
+

See also

+
+
add

takes multiple sources of new entity data as variable length argument list

+
+
+

Hypergraph.add_edge, Hypergraph.add_node_to_edge

+
+
+ +
+
+add_elements_from(arg_set)[source]
+

Adds arguments from an iterable to the data table one at a time

+
+
Deprecated since version 2.0.0: duplicates add

+
+
+
+
Parameters:
+

arg_set (iterable) – list of Entity and/or representations of entity data

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+ +
+
+assign_properties(props: DataFrame | dict[int, dict[T, dict[Any, Any]]], misc_col: str | None = None, level_col=0, id_col=1) None[source]
+

Assign new properties to items in the data table, update properties

+
+
Parameters:
+
    +
  • props (pandas.DataFrame or doubly-nested dict) – See documentation of the properties parameter in Entity

  • +
  • level_col (str, optional) – column name corresponding to the level index; if None, defaults to _level_col.

  • +
  • id_col (str, optional) – column name corresponding to the items; if None, defaults to _id_col.

  • +
  • misc_col (str, optional) – column name corresponding to the miscellaneous properties; if None, defaults to _misc_props_col.

  • +
+
+
+
+

See also

+

properties

+
+
+ +
+
+property cell_weights
+

Cell weights corresponding to each row of the underlying data table

+
+
Returns:
+

Keyed by row of the data table (as a tuple)

+
+
Return type:
+

dict of {tuple: int or float}

+
+
+
+ +
+
+property children
+

Labels of all items in level 1 (second column) of the underlying data table

+
+
Return type:
+

frozenset

+
+
+
+

See also

+
+
uidset

Labels of all items in level 0 (first column)

+
+
+

uidset_by_level, uidset_by_column

+
+
+ +
+
+property data
+

Sparse representation of the data table as an incidence tensor

+

This can also be thought of as an encoding of the dataframe, where items in each column of the data table are translated to their int position in the self.labels[column] list.

Returns:

2D array of ints representing rows of the underlying data table as indices in an incidence tensor

Return type:

numpy.ndarray

+
+

See also

+

labels, dataframe

+
+
+ +
+
+property dataframe
+

The underlying data table stored by the Entity

+
+
Return type:
+

pandas.DataFrame

+
+
+
+ +
+
+property dimensions
+

Dimensions of data i.e., the number of distinct items in each level (column) of the underlying data table

+
+
Returns:
+

Length and order correspond to the columns of self.dataframe (excluding the cell weight column)

+
+
Return type:
+

tuple of ints

+
+
+
+ +
+
+property dimsize
+

Number of levels (columns) in the underlying data table

+
+
Returns:
+

Equal to length of self.dimensions

+
+
Return type:
+

int

+
+
+
+ +
+
+property elements
+

System of sets representation of the first two levels (columns) of the underlying data table

+

Each item in level 0 (first column) defines a set containing all the level 1 (second column) items with which it appears in the same row of the underlying data table

+
+
Returns:
+

System of sets representation as dict of {level 0 item : AttrList(level 1 items)}

+
+
Return type:
+

dict of AttrList

+
+
+
+

See also

+
+
incidence_dict

same data as dict of list

+
+
memberships

dual of this representation, i.e., each item in level 1 (second column) defines a set

+
+
+

elements_by_level, elements_by_column

+
+
+ +
+
+elements_by_column(col1, col2)[source]
+

System of sets representation of two columns (levels) of the underlying data table

+

Each item in col1 defines a set containing all the col2 items with which it appears in the same row of the underlying data table

+

Properties can be accessed and assigned to items in col1

+
+
Parameters:
+
    +
  • col1 (Hashable) – name of column whose items define sets

  • +
  • col2 (Hashable) – name of column whose items are elements in the system of sets

  • +
+
+
Returns:
+

System of sets representation as dict of {col1 item : AttrList(col2 items)}

+
+
Return type:
+

dict of AttrList

+
+
+
+

See also

+

elements, memberships

+
+
elements_by_level

same functionality, takes level indices instead of column names

+
+
+
+
+ +
+
+elements_by_level(level1, level2)[source]
+

System of sets representation of two levels (columns) of the underlying data table

+

Each item in level1 defines a set containing all the level2 items with which it appears in the same row of the underlying data table

+

Properties can be accessed and assigned to items in level1

+
+
Parameters:
+
    +
  • level1 (int) – index of level whose items define sets

  • +
  • level2 (int) – index of level whose items are elements in the system of sets

  • +
+
+
Returns:
+

System of sets representation as dict of {level1 item : AttrList(level2 items)}

+
+
Return type:
+

dict of AttrList

+
+
+
+

See also

+

elements, memberships

+
+
elements_by_column

same functionality, takes column names instead of level indices

+
+
+
+
+ +
+
+property empty
+

Whether the underlying data table is empty or not

+
+
Return type:
+

bool

+
+
+
+

See also

+
+
is_empty

for checking whether a specified level (column) is empty

+
+
dimsize

0 if empty

+
+
+
+
+ +
+
+encode(data)[source]
+

Encode dataframe to numpy array

+
+
Parameters:
+

data (dataframe) –

+
+
Return type:
+

numpy.array

+
+
+
+ +
+
+get_properties(item: T, level: int | None = None) dict[Any, Any][source]
+

Get all properties of an item

+
+
Parameters:
+
    +
  • item (hashable) – name of an item

  • +
  • level (int, optional) – level index of the item

  • +
+
+
Returns:
+

prop_vals – {named property: property value, …, misc. property column name: {property name: property value}}

+
+
Return type:
+

dict

+
+
Raises:
+

KeyError – if (level, item) is not in properties, +or if level is not provided and item is not in properties

+
+
Warns:
+

UserWarning – If level is not provided and item appears in multiple levels, +assumes the first (closest to 0)

+
+
+
+

See also

+

get_property, set_property

+
+
+ +
+
+get_property(item: T, prop_name: Any, level: int | None = None) Any[source]
+

Get a property of an item

+
+
Parameters:
+
    +
  • item (hashable) – name of an item

  • +
  • prop_name (hashable) – name of the property to get

  • +
  • level (int, optional) – level index of the item

  • +
+
+
Returns:
+

prop_val – value of the property

+
+
Return type:
+

any

+
+
Raises:
+

KeyError – if (level, item) is not in properties, +or if level is not provided and item is not in properties

+
+
Warns:
+

UserWarning – If level is not provided and item appears in multiple levels, +assumes the first (closest to 0)

+
+
+ +
+ +
+
+property incidence_dict: dict[T, list[T]]
+

System of sets representation of the first two levels (columns) of the underlying data table

+
+
Returns:
+

System of sets representation as dict of {level 0 item : AttrList(level 1 items)}

+
+
Return type:
+

dict of list

+
+
+
+

See also

+
+
elements

same data as dict of AttrList

+
+
+
+
+ +
+
+incidence_matrix(level1=0, level2=1, weights=False, aggregateby=None, index=False) csr_matrix | None[source]
+

Incidence matrix representation for two levels (columns) of the underlying data table

+

If level1 and level2 contain N and M distinct items, respectively, the incidence matrix will be M x N. In other words, the items in level1 and level2 correspond to the columns and rows of the incidence matrix, respectively, in the order in which they appear in self.labels[column1] and self.labels[column2] (column1 and column2 are the column labels of level1 and level2)

+
+
Parameters:
+
    +
  • level1 (int, default=0) – index of first level (column)

  • +
  • level2 (int, default=1) – index of second level

  • +
  • weights (bool or dict, default=False) – If False, all nonzero entries are 1. If True, all nonzero entries are filled by self.cell_weight dictionary values; use aggregateby to specify how duplicate entries should have weights aggregated. If a dict of {(level1 item, level2 item): weight value} form, only nonzero cells in the incidence matrix will be updated by the dictionary, i.e., the level1 item and level2 item must appear in the same row at least once in the underlying data table

  • +
  • aggregateby ({'sum', 'count', 'last', 'mean', 'median', 'max', 'min', 'first', None}, default='count') – Method to aggregate weights of duplicate rows in the data table. If None, then all cell weights will be set to 1.

  • +
  • index (bool, optional) – Not used

  • +
+
+
Returns:
+

sparse representation of incidence matrix (i.e. Compressed Sparse Row matrix)

+
+
Return type:
+

scipy.sparse.csr.csr_matrix

+
+
+
+

Note

+

In the context of Hypergraphs, think level1 = edges, level2 = nodes
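The construction can be sketched with plain pandas and scipy (a minimal illustration of the idea, not the library's implementation; the column names and data below are made up):

```python
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix

# Toy two-column data table: level1 = edges, level2 = nodes (made-up data)
df = pd.DataFrame({"edges": ["e1", "e1", "e2"], "nodes": ["a", "b", "a"]})

# Integer-encode each level; the uniques play the role of self.labels
edge_codes, edge_labels = pd.factorize(df["edges"])
node_codes, node_labels = pd.factorize(df["nodes"])

# Rows = level2 (M nodes), columns = level1 (N edges), so the matrix is M x N
I = csr_matrix((np.ones(len(df)), (node_codes, edge_codes)),
               shape=(len(node_labels), len(edge_labels)))
```

With weights=True the ones array would instead be filled from the cell weights.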

+
+
+ +
+
+index(column, value=None)[source]
+

Get level index corresponding to a column and (optionally) the index of a value in that column

+

The index of value is its position in the list given by self.labels[column], which is used in the integer encoding of the data table self.data

+
+
Parameters:
+
    +
  • column (str) – name of a column in self.dataframe

  • +
  • value (str, optional) – label of an item in the specified column

  • +
+
+
Returns:
+

level index corresponding to column, index of value if provided

+
+
Return type:
+

int or (int, int)

+
+
+
+

See also

+
+
indices

for finding indices of multiple values in a column

+
+
level

same functionality, search for the value without specifying column
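The lookup can be imitated against a labels mapping of the shape described under labels (hypothetical data; a sketch of the behavior, not the packaged method):

```python
# Hypothetical labels mapping in the shape described for self.labels
labels = {"edges": ["e1", "e2"], "nodes": ["a", "b", "c"]}

def index(column, value=None):
    # The level index is the position of the column; the value index is the
    # position of the label in that column's list (its integer encoding)
    level = list(labels).index(column)
    if value is None:
        return level
    return level, labels[column].index(value)
```

Here index("nodes") gives 1 and index("nodes", "c") gives (1, 2).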

+
+
+
+
+ +
+
+indices(column, values)[source]
+

Get indices of one or more value(s) in a column

+
+
Parameters:
+
    +
  • column (str) –

  • +
  • values (str or iterable of str) –

  • +
+
+
Returns:
+

indices of values

+
+
Return type:
+

list of int

+
+
+
+

See also

+
+
index

for finding level index of a column and index of a single value

+
+
+
+
+ +
+
+is_empty(level=0)[source]
+

Whether a specified level (column) of the underlying data table is empty or not

+
+
Return type:
+

bool

+
+
+
+

See also

+
+
empty

for checking whether the underlying data table is empty

+
+
size

number of items in a level (column); 0 if level is empty

+
+
+
+
+ +
+
+property isstatic
+

Whether to treat the underlying data as static or not

+

If True, the underlying data may not be altered, and the state_dict will never be cleared. Otherwise, rows may be added to and removed from the data table, and updates will clear the state_dict.

+
+
Return type:
+

bool

+
+
+
+ +
+
+property labels
+

Labels of all items in each column of the underlying data table

+
+
Returns:
+

dict of {column name: [item labels]} The order of [item labels] corresponds to the int encoding of each item in self.data.

+
+
Return type:
+

dict of lists

+
+
+
+

See also

+

data, dataframe

+
+
+ +
+
+level(item, min_level=0, max_level=None, return_index=True)[source]
+

First level containing the given item label

+

Order of levels corresponds to order of columns in self.dataframe

+
+
Parameters:
+
    +
  • item (str) –

  • +
  • min_level (int, optional) – inclusive bounds on range of levels to search for item

  • +
  • max_level (int, optional) – inclusive bounds on range of levels to search for item

  • +
  • return_index (bool, default=True) – If True, return index of item within the level

  • +
+
+
Returns:
+

index of the first level containing the item, and the index of the item within that level if return_index=True; returns None if the item is not found

+
+
Return type:
+

int, (int, int), or None

+
+
+
+

See also

+

index, indices

+
+
+ +
+
+property memberships
+

System of sets representation of the first two levels (columns) of the underlying data table

+

Each item in level 1 (second column) defines a set containing all the level 0 (first column) items with which it appears in the same row of the underlying data table

+
+
Returns:
+

System of sets representation as dict of {level 1 item : AttrList(level 0 items)}

+
+
Return type:
+

dict of AttrList

+
+
+
+

See also

+
+
elements

dual of this representation i.e., each item in level 0 (first column) defines a set

+
+
+

elements_by_level, elements_by_column
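The duality between elements and memberships can be sketched in plain Python (toy data, not the library's internal representation):

```python
# elements: each level 0 item (edge) maps to its level 1 items (nodes)
elements = {"e1": ["a", "b"], "e2": ["a", "c"]}

# memberships is the dual: each node maps to the edges it belongs to
memberships = {}
for edge, nodes in elements.items():
    for node in nodes:
        memberships.setdefault(node, []).append(edge)
```

For this toy data, memberships comes out as {'a': ['e1', 'e2'], 'b': ['e1'], 'c': ['e2']}.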

+
+
+ +
+
+property properties: DataFrame
+

Properties assigned to items in the underlying data table

+
+
Return type:
+

pandas.DataFrame

+
+
+
+ +
+
+remove(*args)[source]
+

Removes all rows containing specified item(s) from the underlying data table

+
+
Parameters:
+

*args – variable length argument list of item labels

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+

See also

+
+
remove_element

remove all rows containing a single specified item

+
+
+
+
+ +
+
+remove_element(item)[source]
+

Removes all rows containing a specified item from the underlying data table

+
+
Parameters:
+

item – item label

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+

See also

+
+
remove

same functionality, accepts variable length argument list of item labels

+
+
+
+
+ +
+
+remove_elements_from(arg_set)[source]
+

Removes all rows containing specified item(s) from the underlying data table

+
+
Deprecated since version 2.0.0: duplicates the functionality of remove

+
+
+
+
Parameters:
+

arg_set (iterable) – list of item labels

+
+
Returns:
+

self

+
+
Return type:
+

Entity

+
+
+
+ +
+
+restrict_to_indices(indices, level=0, **kwargs)[source]
+

Create a new Entity by restricting the data table to rows containing specific items in a given level

+
+
Parameters:
+
    +
  • indices (int or iterable of int) – indices of item label(s) in level to restrict to

  • +
  • level (int, default=0) – level index

  • +
  • **kwargs – Extra arguments to Entity constructor

  • +
+
+
Return type:
+

Entity

+
+
+
+ +
+
+restrict_to_levels(levels: int | Iterable[int], weights: bool = False, aggregateby: str | None = 'sum', **kwargs) Entity[source]
+

Create a new Entity by restricting to a subset of levels (columns) in the +underlying data table

+
+
Parameters:
+
    +
  • levels (array-like of int) – indices of a subset of levels (columns) of data

  • +
  • weights (bool, default=False) – If True, aggregate existing cell weights to get new cell weights. Otherwise, all new cell weights will be 1

  • +
  • aggregateby ({'sum', 'first', 'last', 'count', 'mean', 'median', 'max', 'min', None}, optional) – Method to aggregate weights of duplicate rows in data table. If None or weights=False, then all new cell weights will be 1

  • +
  • **kwargs – Extra arguments to Entity constructor

  • +
+
+
Return type:
+

Entity

+
+
Raises:
+

KeyError – If levels contains any invalid values

+
+
+
+

See also

+

EntitySet

+
+
+ +
+
+set_property(item: T, prop_name: Any, prop_val: Any, level: int | None = None) None[source]
+

Set a property of an item

+
+
Parameters:
+
    +
  • item (hashable) – name of an item

  • +
  • prop_name (hashable) – name of the property to set

  • +
  • prop_val (any) – value of the property to set

  • +
  • level (int, optional) – level index of the item; required if item is not already in properties

  • +
+
+
Raises:
+

ValueError – If level is not provided and item is not in properties

+
+
Warns:
+

UserWarning – If level is not provided and item appears in multiple levels, assumes the first (closest to 0)

+
+
+ +
+ +
+
+size(level=0)[source]
+

The number of items in a level of the underlying data table

+

Equivalent to self.dimensions[level]

+
+
Parameters:
+

level (int, default=0) –

+
+
Return type:
+

int

+
+
+
+

See also

+

dimensions

+
+
+ +
+
+translate(level, index)[source]
+

Given indices of a level and value(s), return the corresponding value label(s)

+
+
Parameters:
+
    +
  • level (int) – level index

  • +
  • index (int or list of int) – value index or indices

  • +
+
+
Returns:
+

label(s) corresponding to value index or indices

+
+
Return type:
+

str or list of str

+
+
+
+

See also

+
+
translate_arr

translate a full row of value indices across all levels (columns)

+
+
+
+
+ +
+
+translate_arr(coords)[source]
+

Translate a full encoded row of the data table, e.g., a row of self.data

+
+
Parameters:
+

coords (tuple of ints) – encoded value indices, with one value index for each level of the data

+
+
Returns:
+

full row of translated value labels

+
+
Return type:
+

list of str
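Decoding runs against the same per-level labels ordering; a sketch with made-up labels (not the packaged method):

```python
# Hypothetical labels, one entry per level (column), as in self.labels
labels = {"edges": ["e1", "e2"], "nodes": ["a", "b", "c"]}

def translate_arr(coords):
    # One value index per level; look each up in that level's label list
    return [lab[idx] for lab, idx in zip(labels.values(), coords)]
```

For example, translate_arr((1, 2)) decodes to ['e2', 'c'].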

+
+
+
+ +
+
+property uid
+

User-defined unique identifier for the Entity

+
+
Return type:
+

hashable

+
+
+
+ +
+
+property uidset
+

Labels of all items in level 0 (first column) of the underlying data table

+
+
Return type:
+

frozenset

+
+
+
+

See also

+
+
children

Labels of all items in level 1 (second column)

+
+
+

uidset_by_level, uidset_by_column

+
+
+ +
+
+uidset_by_column(column)[source]
+

Labels of all items in a particular column (level) of the underlying data table

+
+
Parameters:
+

column (Hashable) – Name of a column in self.dataframe

+
+
Return type:
+

frozenset

+
+
+
+

See also

+
+
uidset

Labels of all items in level 0 (first column)

+
+
children

Labels of all items in level 1 (second column)

+
+
uidset_by_level

Same functionality, takes the level index instead of column name

+
+
+
+
+ +
+
+uidset_by_level(level)[source]
+

Labels of all items in a particular level (column) of the underlying data table

+
+
Parameters:
+

level (int) –

+
+
Return type:
+

frozenset

+
+
+
+

See also

+
+
uidset

Labels of all items in level 0 (first column)

+
+
children

Labels of all items in level 1 (second column)

+
+
uidset_by_column

Same functionality, takes the column name instead of level index

+
+
+
+
+ +
+ +
+
+

classes.entityset module

+
+
+class classes.entityset.EntitySet(entity: pd.DataFrame | np.ndarray | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | Mapping[T, Mapping[T, Mapping[T, Any]]] | None = None, data: np.ndarray | None = None, labels: OrderedDict[T, Sequence[T]] | None = None, level1: str | int = 0, level2: str | int = 1, weight_col: str | int = 'cell_weights', weights: Sequence[float] | float | int | str = 1, cell_properties: Sequence[T] | pd.DataFrame | dict[T, dict[T, dict[Any, Any]]] | None = None, misc_cell_props_col: str = 'cell_properties', uid: Hashable | None = None, aggregateby: str | None = 'sum', properties: pd.DataFrame | dict[int, dict[T, dict[Any, Any]]] | None = None, misc_props_col: str = 'properties', **kwargs)[source]
+

Bases: Entity

+

Class for handling 2-dimensional (i.e., system of sets, bipartite) data when building network-like models, i.e., Hypergraph

+
+
Parameters:
+
    +
  • entity (Entity, pandas.DataFrame, dict of lists or sets, or list of lists or sets, optional) – If an Entity with N levels or a DataFrame with N columns, represents N-dimensional entity data (data table). If N > 2, only considers levels (columns) level1 and level2. Otherwise, represents 2-dimensional entity data (system of sets).

  • data (numpy.ndarray, optional) – 2D M x N ndarray of ints (data table); sparse representation of an N-dimensional incidence tensor with M nonzero cells. If N > 2, only considers levels (columns) level1 and level2. Ignored if entity is provided.

  • labels (collections.OrderedDict of lists, optional) – User-specified labels in corresponding order to ints in data. For M x N data, N > 2, labels must contain either 2 or N keys. If N keys, only considers labels for levels (columns) level1 and level2. Ignored if entity is provided or data is not provided.

  • +
  • level1 (str or int, default=0) – Each item in level1 defines a set containing all the level2 items with which it appears in the same row of the underlying data table. If int, gives the index of a level; if str, gives the name of a column in entity. Ignored if entity, data (if entity not provided), and labels (if provided) all represent 1- or 2-dimensional data (set or system of sets).

  • level2 (str or int, default=1) – Each item in level2 belongs to the sets defined by the level1 items with which it appears in the same row of the underlying data table. If int, gives the index of a level; if str, gives the name of a column in entity. Ignored under the same conditions as level1.

  • +
  • weights (str or sequence of float, optional) – User-specified cell weights corresponding to entity data. If a sequence of floats and entity or data defines a data table, the length must equal the number of rows. If a sequence of floats and entity defines a system of sets, the length must equal the total sum of the sizes of all sets. If a str and entity is a DataFrame, it must be the name of a column in entity. Otherwise, the weight for all cells is assumed to be 1. Ignored if entity is an Entity and keep_weights=True.

  • +
  • keep_weights (bool, default=True) – Whether to preserve any existing cell weights; +ignored if entity is not an Entity.

  • +
  • cell_properties (str, list of str, pandas.DataFrame, or doubly-nested dict, optional) – User-specified properties to be assigned to cells of the incidence matrix, i.e., rows in a data table; pairs of (set, element of set) in a system of sets. See Notes for detailed explanation. Ignored if underlying data is 1-dimensional (set). If doubly-nested dict, {level1 item: {level2 item: {cell property name: cell property value}}}.

  • +
  • misc_cell_props_col (str, default='cell_properties') – Column name for miscellaneous cell properties; see Notes for explanation.

  • +
  • kwargs – Keyword arguments passed to the Entity constructor, e.g., static, uid, aggregateby, properties, etc. See Entity for documentation of these parameters.

  • +
+
+
+

Notes

+

A cell property is a named attribute assigned jointly to a set and one of its elements, i.e., a cell of the incidence matrix.

+

When an Entity or DataFrame is passed to the entity parameter of the constructor, it should represent a data table:

Column_1       Column_2       Column_3       ...   Column_N
level 1 item   level 2 item   level 3 item   ...   level N item
+

Assuming the default values for parameters level1, level2, the data table will be restricted to the set system defined by Column 1 and Column 2. Since each row of the data table represents an incidence or cell, values from other columns may contain data that should be converted to cell properties.

+

By passing a column name or list of column names as cell_properties, each given column will be preserved in the cell_properties as an explicit cell property type. An additional column in cell_properties will be created to store a dict of miscellaneous cell properties, which will store cell properties of types that have not been explicitly defined and do not have a dedicated column (which may be assigned after construction). The name of the miscellaneous column is determined by misc_cell_props_col.

+

You can also pass a pre-constructed table to cell_properties as a DataFrame:

Column_1       Column_2       [explicit cell prop. type]   ...   misc. cell properties
level 1 item   level 2 item   cell property value          ...   {cell property name: cell property value}
+

Column 1 and Column 2 must have the same names as the corresponding columns in the entity data table, and misc_cell_props_col can be used to specify the name of the column to be used for miscellaneous cell properties. If no column by that name is found, a new column will be created and populated with empty dicts. All other columns will be considered explicit cell property types. The order of the columns does not matter.

+

Both of these methods assume that there are no row duplicates in the tables passed to entity and/or cell_properties; if duplicates are found, all but the first occurrence will be dropped.
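The duplicate handling described here matches pandas' keep="first" behavior; for instance (illustrative column names):

```python
import pandas as pd

# Duplicate incidence rows in a toy data table (made-up names)
df = pd.DataFrame({"edges": ["e1", "e1", "e2"], "nodes": ["a", "a", "b"]})

# All but the first occurrence of each duplicate row are dropped
deduped = df.drop_duplicates(keep="first").reset_index(drop=True)
```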

+
+
+assign_cell_properties(cell_props: DataFrame | dict[T, dict[T, dict[Any, Any]]], misc_col: str | None = None, replace: bool = False) None[source]
+

Assign new properties to cells of the incidence matrix and update +properties

+
+
Parameters:
+
    +
  • cell_props (pandas.DataFrame, dict of iterables, or doubly-nested dict, optional) – See documentation of the cell_properties parameter in EntitySet

  • +
  • misc_col (str, optional) – name of column to be used for miscellaneous cell property dicts

  • +
  • replace (bool, default=False) – If True, replace existing cell_properties with result; +otherwise update with new values from result

  • +
+
+
Raises:
+

AttributeError – Not supported for dimsize=1

+
+
+
+ +
+
+property cell_properties: DataFrame | None
+

Properties assigned to cells of the incidence matrix

+
+
Returns:
+

Returns None if dimsize < 2

+
+
Return type:
+

pandas.Series, optional

+
+
+
+ +
+
+collapse_identical_elements(return_equivalence_classes: bool = False, **kwargs) EntitySet | tuple[classes.entityset.EntitySet, dict[str, list[str]]][source]
+

Create a new EntitySet by collapsing sets with the same set elements

+

Each item in level 0 (first column) defines a set containing all the level 1 (second column) items with which it appears in the same row of the underlying data table.

+
+
Parameters:
+
    +
  • return_equivalence_classes (bool, default=False) – If True, return a dictionary of equivalence classes keyed by new edge names

  • +
  • **kwargs – Extra arguments to EntitySet constructor

  • +
+
+
Returns:
+

    +
  • new_entity (EntitySet) – new EntitySet with identical sets collapsed; if all sets are unique, the system of sets will be the same as the original.

  • equivalence_classes (dict of lists, optional) – if return_equivalence_classes=True, {collapsed set label: [level 0 item labels]}.

  • +
+

+
+
+
+ +
+
+get_cell_properties(item1: T, item2: T) dict[Any, Any][source]
+

Get all properties of a cell, i.e., incidence between items of different +levels

+
+
Parameters:
+
    +
  • item1 (hashable) – name of an item in level 0

  • +
  • item2 (hashable) – name of an item in level 1

  • +
+
+
Returns:
+

{named cell property: cell property value, ..., misc. cell property column name: {cell property name: cell property value}}

+
+
Return type:
+

dict

+
+
+ +
+ +
+
+get_cell_property(item1: T, item2: T, prop_name: Any) Any[source]
+

Get a property of a cell i.e., incidence between items of different levels

+
+
Parameters:
+
    +
  • item1 (hashable) – name of an item in level 0

  • +
  • item2 (hashable) – name of an item in level 1

  • +
  • prop_name (hashable) – name of the cell property to get

  • +
+
+
Returns:
+

prop_val – value of the cell property

+
+
Return type:
+

any

+
+
+ +
+ +
+
+property memberships: dict[str, hypernetx.classes.helpers.AttrList[str]]
+

Extends Entity.memberships

+

Each item in level 1 (second column) defines a set containing all the level 0 (first column) items with which it appears in the same row of the underlying data table.

+
+
Returns:
+

System of sets representation as dict of +{level 1 item: AttrList(level 0 items)}.

+
+
Return type:
+

dict of AttrList

+
+
+
+

See also

+
+
elements

dual of this representation, i.e., each item in level 0 (first column) defines a set

+
+
restrict_to_levels

for more information on how memberships work for 1-dimensional (set) data

+
+
+
+
+ +
+
+restrict_to(indices: int | Iterable[int], **kwargs) EntitySet[source]
+

Alias of restrict_to_indices() with default parameter level=0

+
+
Parameters:
+
    +
  • indices (array_like of int) – indices of item label(s) in level to restrict to

  • +
  • **kwargs – Extra arguments to EntitySet constructor

  • +
+
+
Return type:
+

EntitySet

+
+
+
+

See also

+

restrict_to_indices

+
+
+ +
+
+restrict_to_levels(levels: int | Iterable[int], weights: bool = False, aggregateby: str | None = 'sum', keep_memberships: bool = True, **kwargs) EntitySet[source]
+

Extends Entity.restrict_to_levels()

+
+
Parameters:
+
    +
  • levels (array-like of int) – indices of a subset of levels (columns) of data

  • +
  • weights (bool, default=False) – If True, aggregate existing cell weights to get new cell weights. Otherwise, all new cell weights will be 1.

  • aggregateby ({'sum', 'first', 'last', 'count', 'mean', 'median', 'max', 'min', None}, optional) – Method to aggregate weights of duplicate rows in data table. If None or weights=False, then all new cell weights will be 1

  • +
  • keep_memberships (bool, default=True) – Whether to preserve membership information for the discarded level when +the new EntitySet is restricted to a single level

  • +
  • **kwargs – Extra arguments to EntitySet constructor

  • +
+
+
Return type:
+

EntitySet

+
+
Raises:
+

KeyError – If levels contains any invalid values

+
+
+
+ +
+
+set_cell_property(item1: T, item2: T, prop_name: Any, prop_val: Any) None[source]
+

Set a property of a cell i.e., incidence between items of different levels

+
+
Parameters:
+
    +
  • item1 (hashable) – name of an item in level 0

  • +
  • item2 (hashable) – name of an item in level 1

  • +
  • prop_name (hashable) – name of the cell property to set

  • +
  • prop_val (any) – value of the cell property to set

  • +
+
+
+ +
+ +
+ +
+
+

classes.helpers module

+
+
+class classes.helpers.AttrList(entity: Entity, key: tuple[int, str | int], initlist: list | None = None)[source]
+

Bases: UserList

+

Custom list wrapper for integrated property storage in Entity

+
+
Parameters:
+
    +
  • entity (hypernetx.Entity) –

  • +
  • key (tuple of (int, str or int)) – (level, item)

  • +
  • initlist (list, optional) – list of elements, passed to UserList constructor

  • +
+
+
+
+ +
+
+classes.helpers.assign_weights(df, weights=1, weight_col='cell_weights')[source]
+
+
Parameters:
+
    +
  • df (pandas.DataFrame) – A DataFrame to assign a weight column to

  • +
  • weights (array-like or Hashable, optional) – If a numpy.ndarray with the same length as df, create a new weight column with these values. If Hashable, must be the name of a column of df to assign as the weight column. Otherwise, create a new weight column assigning a weight of 1 to every row

  • weight_col (Hashable) – Name for the new column if one is created (not used if the name of an existing column is passed as weights)

  • +
+
+
Returns:
+

    +
  • df (pandas.DataFrame) – The original DataFrame with a new column added if needed

  • +
  • weight_col (str) – Name of the column assigned to hold weights

  • +
+

+
+
+
+

Note

+

TODO: move logic for default weights inside this method
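The branching described in the parameters above can be sketched as follows (a simplified stand-in for illustration, not the packaged helper):

```python
import numpy as np
import pandas as pd

def assign_weights(df, weights=1, weight_col="cell_weights"):
    # Branch 1: an array of per-row weights becomes a new column
    if isinstance(weights, np.ndarray) and len(weights) == len(df):
        df[weight_col] = weights
        return df, weight_col
    # Branch 2: a hashable naming an existing column is used as-is
    if not isinstance(weights, np.ndarray) and weights in df.columns:
        return df, weights
    # Fallback: constant weight of 1 for every row
    df[weight_col] = 1
    return df, weight_col

df, col = assign_weights(pd.DataFrame({"x": ["u", "v"]}))
```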

+
+
+ +
+
+classes.helpers.create_properties(props: DataFrame | dict[str | int, collections.abc.Iterable[str | int]] | dict[str | int, dict[str | int, dict[Any, Any]]] | None, index_cols: list[str], misc_col: str) DataFrame[source]
+

Helper function for initializing properties and cell properties

+
+
Parameters:
+
    +
  • props (pandas.DataFrame, dict of iterables, doubly-nested dict, or None) – See documentation of the properties parameter in Entity, +cell_properties parameter in EntitySet

  • +
  • index_cols (list of str) – names of columns to be used as levels of the MultiIndex

  • +
  • misc_col (str) – name of column to be used for miscellaneous property dicts

  • +
+
+
Returns:
+

with MultiIndex on index_cols; each entry of the miscellaneous column holds a dict of {property name: property value}

+
+
Return type:
+

pandas.DataFrame

+
+
+
+ +
+
+classes.helpers.dict_depth(dic, level=0)[source]
+
+ +
+
+classes.helpers.encode(data: DataFrame)[source]
+

Encode dataframe to numpy array

+
+
Parameters:
+

data (dataframe) –

+
+
Return type:
+

numpy.array
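A comparable encoding can be done with pandas factorization (a sketch of the idea, not necessarily the exact implementation; the data is made up):

```python
import pandas as pd

df = pd.DataFrame({"edges": ["e2", "e1", "e2"], "nodes": ["b", "a", "a"]})

# Replace each label with an integer code, column by column; codes are
# assigned in order of first appearance, as with pd.factorize
encoded = df.apply(lambda col: pd.factorize(col)[0]).to_numpy()
```

The resulting integer array plays the role of self.data, with the factorized uniques playing the role of self.labels.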

+
+
+
+ +
+
+classes.helpers.merge_nested_dicts(a, b, path=None)[source]
+

Recursively merges dict b into dict a

+
+ +
+
+classes.helpers.remove_row_duplicates(df, data_cols, weights=1, weight_col='cell_weights', aggregateby=None)[source]
+

Removes and aggregates duplicate rows of a DataFrame using groupby

+
+
Parameters:
+
    +
  • df (pandas.DataFrame) – A DataFrame to remove or aggregate duplicate rows from

  • +
  • data_cols (list) – A list of column names in df to perform the groupby on / remove duplicates from

  • +
  • weights (array-like or Hashable, optional) – Argument passed to assign_weights

  • +
  • aggregateby (str, optional, default=None) – A valid aggregation method for pandas groupby. If None, drop duplicates without aggregating weights

  • +
+
+
Returns:
+

    +
  • df (pandas.DataFrame) – The DataFrame with duplicate rows removed or aggregated

  • +
  • weight_col (Hashable) – The name of the column holding aggregated weights, or None if aggregateby=None

  • +
+

+
+
+
+ +
+
+

classes.hypergraph module

+
+
+class classes.hypergraph.Hypergraph(setsystem: DataFrame | ndarray | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | Mapping[T, Mapping[T, Mapping[str, Any]]] | None = None, edge_col: str | int = 0, node_col: str | int = 1, cell_weight_col: str | int | None = 'cell_weights', cell_weights: Sequence[float] | float = 1.0, cell_properties: Sequence[str | int] | Mapping[T, Mapping[T, Mapping[str, Any]]] | None = None, misc_cell_properties_col: str | int | None = None, aggregateby: str | dict[str, str] = 'first', edge_properties: DataFrame | dict[T, dict[Any, Any]] | None = None, node_properties: DataFrame | dict[T, dict[Any, Any]] | None = None, properties: DataFrame | dict[T, dict[Any, Any]] | dict[T, dict[T, dict[Any, Any]]] | None = None, misc_properties_col: str | int | None = None, edge_weight_prop_col: str | int = 'weight', node_weight_prop_col: str | int = 'weight', weight_prop_col: str | int = 'weight', default_edge_weight: float | None = None, default_node_weight: float | None = None, default_weight: float = 1.0, name: str | None = None, **kwargs)[source]
+

Bases: object

+
+
Parameters:
+
    +
  • setsystem (dict of iterables, dict of dicts, iterable of iterables, pandas.DataFrame, or numpy.ndarray, optional, default = None) – See SetSystems below for additional setsystem requirements.

  • +
  • edge_col ((optional) str | int, default = 0) – column index (or name) in pandas.dataframe or numpy.ndarray, +used for (hyper)edge ids. Will be used to reference edgeids for +all set systems.

  • +
  • node_col ((optional) str | int, default = 1) – column index (or name) in pandas.dataframe or numpy.ndarray, +used for node ids. Will be used to reference nodeids for all set systems.

  • +
  • cell_weight_col ((optional) str | int, default = None) – column index (or name) in pandas.dataframe or numpy.ndarray used for +referencing cell weights. For a dict of dicts references key in cell +property dicts.

  • +
  • cell_weights ((optional) Sequence[float,int] | int | float, default = 1.0) – User-specified cell_weights or default cell weight. Sequential values are only used if setsystem is a dataframe or ndarray, in which case the sequence must have the same length and order as these objects. Sequential values are ignored for dataframes if cell_weight_col is already a column in the data frame. If cell_weights is assigned a single value, then it will be used as the default for missing values or when no cell_weight_col is given.

  • +
  • cell_properties ((optional) Sequence[int | str] | Mapping[T,Mapping[T,Mapping[str,Any]]], default = None) – Column names from pd.DataFrame to use as cell properties, or a dict assigning a cell_property to incidence pairs of edges and nodes. Will generate a misc_cell_properties column, which may have variable lengths per cell.

  • +
  • misc_cell_properties ((optional) str | int, default = None) – Column name of dataframe corresponding to a column of variable +length property dictionaries for the cell. Ignored for other setsystem +types.

  • +
  • aggregateby ((optional) str | dict, default = 'first') – By default, duplicate (edge, node) incidences will be dropped unless otherwise specified with aggregateby. See pandas.DataFrame.agg() methods for additional syntax and usage information.

  • +
  • edge_properties ((optional) pd.DataFrame | dict, default = None) – Properties associated with edge ids. +First column of dataframe or keys of dict link to edge ids in +setsystem.

  • +
  • node_properties ((optional) pd.DataFrame | dict, default = None) – Properties associated with node ids. +First column of dataframe or keys of dict link to node ids in +setsystem.

  • +
  • properties ((optional) pd.DataFrame | dict, default = None) – Concatenation/union of edge_properties and node_properties. +By default, the object id is used and should be the first column of +the dataframe, or key in the dict. If there are nodes and edges +with the same ids and different properties then use the edge_properties +and node_properties keywords.

  • +
  • misc_properties ((optional) int | str, default = None) – Column of property dataframes with dtype=dict. Intended for variable +length property dictionaries for the objects.

  • +
  • edge_weight_prop ((optional) str, default = None,) – Name of property in edge_properties to use for weight.

  • +
  • node_weight_prop ((optional) str, default = None,) – Name of property in node_properties to use for weight.

  • +
  • weight_prop ((optional) str, default = None) – Name of property in properties to use for ‘weight’

  • +
  • default_edge_weight ((optional) int | float, default = 1) – Used when edge weight property is missing or undefined.

  • +
  • default_node_weight ((optional) int | float, default = 1) – Used when node weight property is missing or undefined

  • +
  • name ((optional) str, default = None) – Name assigned to hypergraph

  • +
+
+
+
+

Hypergraphs in HNX 2.0

+

An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: +V = nodes (vertices) and E = (hyper)edges.

+

HNX allows for multi-edges by distinguishing edges by their identifiers instead of their contents. For example, if V = {1,2,3} and E = {e1,e2,e3}, where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, the edges e1 and e2 contain the same set of nodes and yet are distinct and are distinguishable within H = (V,E).

+

New as of version 2.0, HNX provides methods to easily store and access additional metadata such as cell, edge, and node weights. Metadata associated with (edge, node) incidences are referenced as cell_properties. Metadata associated with a single edge or node is referenced as its properties.

+

The fundamental object needed to create a hypergraph is a setsystem. The setsystem defines the many-to-many relationships between edges and nodes in the hypergraph. Cell properties for the incidence pairs can be defined within the setsystem or in a separate pandas.DataFrame or dict. Edge and node properties are defined with a pandas.DataFrame or dict.

+
+

SetSystems

+

There are five types of setsystems currently accepted by the library.

+
    +
  1. iterable of iterables : Barebones hypergraph uses Pandas default +indexing to generate hyperedge ids. Elements must be hashable.:

    +
    >>> H = Hypergraph([{1,2},{1,2},{1,2,3}])
    +
    +
    +
  2. +
  3. dictionary of iterables : the most basic way to express many-to-many +relationships providing edge ids. The elements of the iterables must be +hashable):

    +
    >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]})
    +
    +
    +
  4. +
  5. dictionary of dictionaries : allows cell properties to be assigned +to a specific (edge, node) incidence. This is particularly useful when +there are variable length dictionaries assigned to each pair:

    +
    >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'},
    +>>>             2: {'w':0.1, 'name': 'related_to',
    +>>>                 'startdate': '05.13.2020'}},
    +>>>      'e2':{ 1: {'w':0.52, 'name': 'owned_by'},
    +>>>             2: {'w':0.2}},
    +>>>      'e3':{ 1: {'w':0.5, 'name': 'related_to'},
    +>>>             2: {'w':0.2, 'name': 'owner_of'},
    +>>>             3: {'w':1, 'type': 'relationship'}}
    +
    +
    +
    >>> H = Hypergraph(d, cell_weight_col='w')
    +
    +
    +
  6. +
  4. pandas.DataFrame : For large datasets and for datasets with cell
     properties it is most efficient to construct a hypergraph directly from
     a pandas.DataFrame. Incidence pairs are in the first two columns.
     Cell properties shared by all incidence pairs can be placed in their own
     column of the dataframe. Variable length dictionaries of cell properties
     particular to only some of the incidence pairs may be placed in a single
     column of the dataframe. Representing the data above as a dataframe df:

     col1   col2   w      col3
     e1     1      0.5    {'name':'related_to'}
     e1     2      0.1    {'name':'related_to', 'startdate':'05.13.2020'}
     e2     1      0.52   {'name':'owned_by'}
     e2     2      0.2    {...}

     The column headers of the dataframe are used to reference each column.

     >>> H = Hypergraph(df,edge_col="col1",node_col="col2",
     >>>                 cell_weight_col="w",misc_cell_properties="col3")
  5. numpy.ndarray : For homogeneous datasets given in an ndarray a
     pandas dataframe is generated and column names are added from the
     edge_col and node_col arguments. Cell properties containing multiple
     data types are added with a separate dataframe or dict and passed
     through the cell_properties keyword.

     >>> arr = np.array([['e1','1'],['e1','2'],
     >>>                 ['e2','1'],['e2','2'],
     >>>                 ['e3','1'],['e3','2'],['e3','3']])
     >>> H = hnx.Hypergraph(arr, column_names=['col1','col2'])
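All five setsystem forms above carry the same many-to-many information. As an illustration (a pure-Python sketch of the relationship between the nested-dict form and the dataframe form, not HyperNetX code; `flatten_setsystem` is a hypothetical helper):

```python
# Flatten a dict-of-dicts setsystem into (edge, node, cell-properties)
# rows -- the same shape a dataframe setsystem would hold.
def flatten_setsystem(d):
    rows = []
    for edge, nodes in d.items():
        for node, props in nodes.items():
            rows.append((edge, node, props))
    return rows

d = {'e1': {1: {'w': 0.5}, 2: {'w': 0.1}},
     'e2': {1: {'w': 0.52}}}
rows = flatten_setsystem(d)
# rows holds one incidence pair per row, e.g. ('e1', 1, {'w': 0.5})
```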

Edge and Node Properties

+

Properties specific to a single edge or node are passed through the
keywords: edge_properties, node_properties, properties.
Properties may be passed as dataframes or dicts.
The first column or index of the dataframe, or the keys of the dict,
correspond to the edge and/or node identifiers.
Whether identifiers are shared among edges and nodes, or are distinct
for edges and nodes, properties may be combined into a single
object and passed to the properties keyword. For example:

id    weight   properties
e1    5.0      {'type':'event'}
e2    0.52     {'name':'owned_by'}
...   ...      {...}
1     1.2      {'color':'red'}
2     .003     {'name':'Fido', 'color':'brown'}
3     1.0      {}

A properties dictionary should have the format:

+
dp = {id1 : {prop1: val1, prop2: val2, ...}, id2 : ... }
+
+
+

A properties dataframe may be used for nodes and edges sharing ids
but differing in properties by adding a level index, using 0
for edges and 1 for nodes:

level   id    weight   properties
0       e1    5.0      {'type':'event'}
0       e2    0.52     {'name':'owned_by'}
...     ...   ...      {...}
1       1     1.2      {'color':'red'}
1       2     .003     {'name':'Fido', 'color':'brown'}
...     ...   ...      {...}
+
+

Weights

+

The default key for cell and object weights is "weight". The default value
is 1. Weights may be assigned and/or a new default prescribed in the
constructor using cell_weight_col and cell_weights for incidence pairs,
and using edge_weight_prop, node_weight_prop, weight_prop,
default_edge_weight, and default_node_weight for node and edge weights.

+
+
+adjacency_matrix(s=1, index=False, remove_empty_rows=False)[source]
+

The s-adjacency matrix for the hypergraph.

+
+
Parameters:
+
    +
  • s (int, optional, default = 1) –

  • +
  • index (boolean, optional, default = False) – if True, will return the index of ids for rows and columns

  • +
  • remove_empty_rows (boolean, optional, default = False) –

  • +
+
+
Returns:
+

    +
  • adjacency_matrix (scipy.sparse.csr.csr_matrix)

  • +
  • node_index (list) – index of ids for rows and columns

  • +
+

+
+
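The semantics of s-adjacency can be sketched in plain Python (an illustration of the definition, not the library's sparse-matrix implementation; `s_adjacent_pairs` is a hypothetical helper):

```python
from itertools import combinations

# Two nodes are s-adjacent if they appear together in at least s edges.
def s_adjacent_pairs(setsystem, s=1):
    nodes = sorted({n for e in setsystem.values() for n in e})
    pairs = set()
    for u, v in combinations(nodes, 2):
        shared = sum(1 for e in setsystem.values() if u in e and v in e)
        if shared >= s:
            pairs.add((u, v))
    return pairs

H = {'e1': {1, 2}, 'e2': {1, 2}, 'e3': {1, 2, 3}}
assert s_adjacent_pairs(H, s=2) == {(1, 2)}  # only 1 and 2 share >= 2 edges
```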
+
+ +
+
+auxiliary_matrix(s=1, node=True, index=False)[source]
+

The unweighted s-edge or s-node auxiliary matrix for the hypergraph

+
+
Parameters:
+
    +
  • s (int, optional, default = 1) –

  • +
  • node (bool, optional, default = True) – whether to return based on node or edge adjacencies

  • +
+
+
Returns:
+

    +
  • auxiliary_matrix (scipy.sparse.csr.csr_matrix) – Node/Edge adjacency matrix with empty rows and columns +removed

  • +
  • index (np.array) – row and column index of userids

  • +
+

+
+
+
+ +
+
+bipartite()[source]
+

Constructs the NetworkX bipartite graph associated with the hypergraph.

+
+
Returns:
+

bipartite

+
+
Return type:
+

nx.Graph()

+
+
+

Notes

+

Creates a bipartite networkx graph from hypergraph. +The nodes and (hyper)edges of hypergraph become the nodes of bipartite +graph. For every (hyper)edge e in the hypergraph and node n in e there +is an edge (n,e) in the graph.
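The construction in the note above can be sketched in plain Python (tuples standing in for nx.Graph edges; `bipartite_edges` is a hypothetical helper, not part of the library):

```python
# One graph edge (n, e) for every node n contained in hyperedge e.
def bipartite_edges(setsystem):
    return {(n, e) for e, nodes in setsystem.items() for n in nodes}

H = {'e1': {1, 2}, 'e2': {2, 3}}
assert bipartite_edges(H) == {(1, 'e1'), (2, 'e1'), (2, 'e2'), (3, 'e2')}
```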

+
+ +
+
+collapse_edges(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]
+

Constructs a new hypergraph obtained by identifying edges containing the
same nodes

+
+
Parameters:
+
    +
  • name (hashable, optional, default = None) –

  • +
  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of edge equivalence classes keyed by frozen +sets of nodes

  • +
+
+
Returns:
+

    +
  • new hypergraph (Hypergraph) – Equivalent edges are collapsed to a single edge named by a +representative of the equivalent edges followed by a colon and the +number of edges it represents.

  • +
  • equivalence_classes (dict) – A dictionary keyed by representative edge names with values equal +to the edges in its equivalence class

  • +
+

+
+
+

Notes

+

Two edges are identified if their respective elements are the same. +Using this as an equivalence relation, the uids of the edges are +partitioned into equivalence classes.

+

A single edge from the collapsed edges, followed by a colon and the
number of elements in its equivalence class, is used as the uid for the
new edge.
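The equivalence-class grouping can be sketched in plain Python (an illustration of the semantics, not HyperNetX's implementation; `edge_equivalence_classes` is a hypothetical helper):

```python
# Group edges by the frozenset of their nodes; each class is keyed by
# a representative edge uid plus a colon and the class size.
def edge_equivalence_classes(setsystem):
    classes = {}
    for uid, nodes in setsystem.items():
        classes.setdefault(frozenset(nodes), []).append(uid)
    return {f"{uids[0]}:{len(uids)}": uids for uids in classes.values()}

H = {'e1': {1, 2}, 'e2': {1, 2}, 'e3': {1, 2, 3}}
assert edge_equivalence_classes(H) == {'e1:2': ['e1', 'e2'], 'e3:1': ['e3']}
```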

+
+ +
+
+collapse_nodes(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]
+

Constructs a new hypergraph obtained by identifying nodes contained by
the same edges

+
+
Parameters:
+
    +
  • name (str, optional, default = None) –

  • +
  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of node equivalence classes keyed by frozen +sets of edges

  • +
  • use_reps (boolean, optional, default = False) – Deprecated; this no
    longer works and will be removed. Choose a single element from the
    collapsed nodes as uid for the new node, otherwise a frozen set of
    the uids of nodes in the equivalence class is used.

  • +
  • return_counts (boolean) – Deprecated; this no longer works and will
    be removed. If use_reps is True the new nodes have uids given by a
    tuple of the rep and the count.

  • +
+
+
Returns:
+

new hypergraph

+
+
Return type:
+

Hypergraph

+
+
+

Notes

+

Two nodes are identified if their respective memberships are the same. +Using this as an equivalence relation, the uids of the nodes are +partitioned into equivalence classes. A single member of the +equivalence class is chosen to represent the class followed by the +number of members of the class.

+

Example

+
>>> h = Hypergraph(EntitySet('example', elements=[Entity('E1',
...                          ['a','b']), Entity('E2', ['a','b'])]))
>>> h.incidence_dict
{'E1': {'a', 'b'}, 'E2': {'a', 'b'}}
>>> h.collapse_nodes().incidence_dict
{'E1': {frozenset({'a', 'b'})}, 'E2': {frozenset({'a', 'b'})}}
+
+
+
+ +
+
+collapse_nodes_and_edges(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]
+

Returns a new hypergraph by collapsing nodes and edges.

+
+
Parameters:
+
    +
  • name (str, optional, default = None) –

  • +
  • use_reps (boolean, optional, default = False) – Choose a single element from the collapsed elements as a +representative

  • +
  • return_counts (boolean, optional, default = True) – if use_reps is True the new elements are keyed by a tuple of the +rep and the count

  • +
  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of edge equivalence classes keyed by frozen +sets of nodes

  • +
+
+
Returns:
+

new hypergraph

+
+
Return type:
+

Hypergraph

+
+
+

Notes

+

Collapses the Nodes and Edges EntitySets. Two nodes(edges) are +duplicates if their respective memberships(elements) are the same. +Using this as an equivalence relation, the uids of the nodes(edges) +are partitioned into equivalence classes. A single member of the +equivalence class is chosen to represent the class followed by the +number of members of the class.

+

Example

+
>>> h = Hypergraph(EntitySet('example', elements=[Entity('E1',
...                          ['a','b']), Entity('E2', ['a','b'])]))
>>> h.incidence_dict
{'E1': {'a', 'b'}, 'E2': {'a', 'b'}}
>>> h.collapse_nodes_and_edges().incidence_dict
{('E1', 2): {('a', 2)}}
+
+
+
+ +
+
+component_subgraphs(return_singletons=False, name=None)[source]
+

Same as s_component_subgraphs() with s=1. Returns an iterator.

+
+

See also

+

s_component_subgraphs

+
+
+ +
+
+components(edges=False)[source]
+

Same as s_connected_components() with s=1, but nodes are returned
by default. Returns an iterator.

+ +
+ +
+
+connected_component_subgraphs(return_singletons=True, name=None)[source]
+

Same as s_component_subgraphs() with s=1. Returns iterator

+
+

See also

+

s_component_subgraphs

+
+
+ +
+
+connected_components(edges=False)[source]
+

Same as s_connected_components() with s=1, but nodes are returned
by default. Returns an iterator.

+ +
+ +
+
+property dataframe
+

Returns dataframe of incidence pairs and their properties.

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+degree(node, s=1, max_size=None)[source]
+

The number of edges of size s that contain node.

+
+
Parameters:
+
    +
  • node (hashable) – identifier for the node.

  • +
  • s (positive integer, optional, default 1) – smallest size of edge to consider in degree

  • +
  • max_size (positive integer or None, optional, default = None) – largest size of edge to consider in degree

  • +
+
+
Return type:
+

int

+
+
+
+ +
+
+diameter(s=1)[source]
+

Returns the length of the longest shortest s-walk between nodes in +hypergraph

+
+
Parameters:
+

s (int, optional, default 1) –

+
+
Returns:
+

diameter

+
+
Return type:
+

int

+
+
Raises:
+

HyperNetXError – If hypergraph is not s-edge-connected

+
+
+

Notes

+

Two nodes are s-adjacent if they share s edges. +Two nodes v_start and v_end are s-walk connected if there is a +sequence of nodes v_start, v_1, v_2, … v_n-1, v_end such that +consecutive nodes are s-adjacent. If the graph is not connected, +an error will be raised.

+
+ +
+
+dim(edge)[source]
+

Same as size(edge)-1.

+
+ +
+
+distance(source, target, s=1)[source]
+

Returns the shortest s-walk distance between two nodes in the +hypergraph.

+
+
Parameters:
+
    +
  • source (node.uid or node) – a node in the hypergraph

  • +
  • target (node.uid or node) – a node in the hypergraph

  • +
  • s (positive integer) – the number of edges

  • +
+
+
Returns:
+

s-walk distance

+
+
Return type:
+

int

+
+
+
+

See also

+

edge_distance

+
+

Notes

+

The s-distance is the shortest s-walk length between the nodes. +An s-walk between nodes is a sequence of nodes that pairwise share +at least s edges. The length of the shortest s-walk is 1 less than +the number of nodes in the path sequence.

+

Uses the networkx shortest_path_length method on the graph +generated by the s-adjacency matrix.
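The s-walk distance described above can be sketched as a breadth-first search over s-adjacent nodes (a pure-Python illustration of the definition, not the library's networkx-based implementation; `s_distance` is a hypothetical helper):

```python
from collections import deque

# BFS over the graph whose edges join nodes sharing at least s
# hyperedges. Returns float('inf') if target is unreachable.
def s_distance(setsystem, source, target, s=1):
    nodes = {n for e in setsystem.values() for n in e}
    def nbrs(u):
        return {v for v in nodes if v != u and
                sum(1 for e in setsystem.values()
                    if u in e and v in e) >= s}
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        u, d = queue.popleft()
        if u == target:
            return d
        for v in nbrs(u) - seen:
            seen.add(v)
            queue.append((v, d + 1))
    return float('inf')

H = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}}
assert s_distance(H, 1, 4) == 2               # e.g. 1 -> 2 -> 4
assert s_distance(H, 1, 5) == float('inf')    # 5 is in a different component
```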

+
+ +
+
+dual(name=None, switch_names=True)[source]
+

Constructs a new hypergraph with roles of edges and nodes of hypergraph +reversed.

+
+
Parameters:
+
    +
  • name (hashable, optional) –

  • +
  • switch_names (bool, optional, default = True) – reverses edge_col and node_col names +unless edge_col = ‘edges’ and node_col = ‘nodes’

  • +
+
+
Return type:
+

hypergraph

+
+
+
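Role reversal in the dual amounts to inverting the incidence dict, which can be sketched in plain Python (an illustration of the semantics, not the library's implementation; `dual` here is a hypothetical helper):

```python
# Invert the incidence dict so nodes become edges and vice versa.
def dual(setsystem):
    out = {}
    for e, nodes in setsystem.items():
        for n in nodes:
            out.setdefault(n, set()).add(e)
    return out

H = {'e1': {1, 2}, 'e2': {2, 3}}
assert dual(H) == {1: {'e1'}, 2: {'e1', 'e2'}, 3: {'e2'}}
assert dual(dual(H)) == H   # the dual of the dual recovers the original
```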
+ +
+
+edge_adjacency_matrix(s=1, index=False)[source]
+

The s-adjacency matrix for the dual hypergraph.

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • index (boolean, optional, default = False) – if True, will return the index of ids for rows and columns

  • +
+
+
Returns:
+

    +
  • edge_adjacency_matrix (scipy.sparse.csr.csr_matrix)

  • +
  • edge_index (list) – index of ids for rows and columns

  • +
+

+
+
+

Notes

+

This is also the adjacency matrix for the line graph. Two edges are
s-adjacent if they share at least s nodes. If remove_zeros is True,
the auxiliary matrix will be returned.

+
+ +
+
+edge_diameter(s=1)[source]
+

Returns the length of the longest shortest s-walk between edges in +hypergraph

+
+
Parameters:
+

s (int, optional, default 1) –

+
+
Returns:
+

edge_diameter

+
+
Return type:
+

int

+
+
Raises:
+

HyperNetXError – If hypergraph is not s-edge-connected

+
+
+

Notes

+

Two edges are s-adjacent if they share s nodes. +Two nodes e_start and e_end are s-walk connected if there is a +sequence of edges e_start, e_1, e_2, … e_n-1, e_end such that +consecutive edges are s-adjacent. If the graph is not connected, an +error will be raised.

+
+ +
+
+edge_diameters(s=1)[source]
+

Returns the edge diameters of the s_edge_connected component subgraphs +in hypergraph.

+
+
Parameters:
+

s (int, optional, default 1) –

+
+
Returns:
+

    +
  • maximum diameter (int)

  • +
  • list of diameters (list) – List of edge_diameters for s-edge component subgraphs in hypergraph

  • +
  • list of component (list) – List of the edge uids in the s-edge component subgraphs.

  • +
+

+
+
+
+ +
+
+edge_distance(source, target, s=1)[source]
+

Returns the shortest s-walk distance between two edges in the
hypergraph. (TODO: still need to return the path and translate it into
user-defined nodes and edges.)

+
+
Parameters:
+
    +
  • source (edge.uid or edge) – an edge in the hypergraph

  • +
  • target (edge.uid or edge) – an edge in the hypergraph

  • +
  • s (positive integer) – the number of intersections between pairwise consecutive edges

  • +
  • TODO (add edge weights) –

  • +
  • weight (None or string, optional, default = None) – if None then all edges have weight 1. If string then edge attribute +string is used if available.

  • +
+
+
Returns:
+

s- walk distance – A shortest s-walk is computed as a sequence of edges, +the s-walk distance is the number of edges in the sequence +minus 1. If no such path exists returns np.inf.

+
+
Return type:
+

the shortest s-walk edge distance

+
+
+
+

See also

+

distance

+
+

Notes

+

The s-distance is the shortest s-walk length between the edges. +An s-walk between edges is a sequence of edges such that +consecutive pairwise edges intersect in at least s nodes. The +length of the shortest s-walk is 1 less than the number of edges +in the path sequence.

+

Uses the networkx shortest_path_length method on the graph +generated by the s-edge_adjacency matrix.

+
+ +
+
+edge_neighbors(edge, s=1)[source]
+

The edges in the hypergraph which share s node(s) with edge.

+
+
Parameters:
+
    +
  • edge (hashable or Entity) – uid for an edge in hypergraph or the edge Entity

  • +
  • s (int, list, optional, default = 1) – Minimum number of nodes shared with edge by its neighbor edges.

  • +
+
+
Returns:
+

List of edge neighbors

+
+
Return type:
+

list

+
+
+
+ +
+
+property edge_props
+

Dataframe of edge properties +indexed on edge ids

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+edge_size_dist()[source]
+

Returns the size for each edge

+
+
Return type:
+

np.array

+
+
+
+ +
+
+property edges
+

Object associated with self._edges.

+
+
Return type:
+

EntitySet

+
+
+
+ +
+
+classmethod from_bipartite(B, set_names=('edges', 'nodes'), name=None, **kwargs)[source]
+

Class method that creates a Hypergraph from a bipartite graph.

+
+
Parameters:
+
    +
  • B (nx.Graph()) – A networkx bipartite graph. Each node in the graph has a property +‘bipartite’ taking the value of 0 or 1 indicating a 2-coloring of +the graph.

  • +
  • set_names (iterable of length 2, optional, default = ['edges','nodes']) – Category names assigned to the graph nodes associated to each +bipartite set

  • +
  • name (hashable, optional) –

  • +
+
+
Return type:
+

Hypergraph

+
+
+

Notes

+

A partition for the nodes in a bipartite graph generates a hypergraph.

+
>>> import networkx as nx
>>> B = nx.Graph()
>>> B.add_nodes_from([1, 2, 3, 4], bipartite=0)
>>> B.add_nodes_from(['a', 'b', 'c'], bipartite=1)
>>> B.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'),
...                   (3, 'c'), (4, 'a')])
>>> H = Hypergraph.from_bipartite(B)
>>> H.nodes, H.edges
# output: (EntitySet(_:Nodes,[1, 2, 3, 4],{}),
#          EntitySet(_:Edges,['b', 'c', 'a'],{}))
+
+
+
+ +
+
+classmethod from_incidence_dataframe(df, columns=None, rows=None, edge_col: str = 'edges', node_col: str = 'nodes', name=None, fillna=0, transpose=False, transforms=[], key=None, return_only_dataframe=False, **kwargs)[source]
+

Create a hypergraph from a Pandas Dataframe object, which has values equal +to the incidence matrix of a hypergraph. Its index will identify the nodes +and its columns will identify its edges.

+
+
Parameters:
+
    +
  • df (Pandas.Dataframe) – a real valued dataframe with a single index

  • +
  • columns ((optional) list, default = None) – restricts df to the columns with headers in this list.

  • +
  • rows ((optional) list, default = None) – restricts df to the rows indexed by the elements in this list.

  • +
  • name ((optional) string, default = None) –

  • +
  • fillna (float, default = 0) – a real value to place in empty cell, all-zero columns will not +generate an edge.

  • +
  • transpose ((optional) bool, default = False) – option to transpose the dataframe, in this case df.Index will +identify the edges and df.columns will identify the nodes, transpose is +applied before transforms and key

  • +
  • transforms ((optional) list, default = []) – optional list of transformations to apply to each column, +of the dataframe using pd.DataFrame.apply(). +Transformations are applied in the order they are +given (ex. abs). To apply transforms to rows or for additional +functionality, consider transforming df using pandas.DataFrame +methods prior to generating the hypergraph.

  • +
  • key ((optional) function, default = None) – boolean function to be applied to dataframe. will be applied to +entire dataframe.

  • +
  • return_only_dataframe ((optional) bool, default = False) – to use the incidence_dataframe with cell_properties or properties, set this +to true and use it as the setsystem in the Hypergraph constructor.

  • +
+
+
+
+

See also

+

from_numpy_array

+
+
+
Return type:
+

Hypergraph

+
+
+
+ +
+
+classmethod from_incidence_matrix(M, node_names=None, edge_names=None, node_label='nodes', edge_label='edges', name=None, key=None, **kwargs)[source]
+

Same as from_numpy_array.

+
+ +
+
+classmethod from_numpy_array(M, node_names=None, edge_names=None, node_label='nodes', edge_label='edges', name=None, key=None, **kwargs)[source]
+

Create a hypergraph from a real valued matrix represented as a 2-dimensional numpy array.
The matrix is converted to a matrix of 0's and 1's so that any truthy cells are converted
to 1's and all others to 0's.

+
+
Parameters:
+
    +
  • M (real valued array-like object, 2 dimensions) – representing a real valued matrix with rows corresponding to nodes and columns to edges

  • +
  • node_names (object, array-like, default=None) – List of node names must be the same length as M.shape[0]. +If None then the node names correspond to row indices with ‘v’ prepended.

  • +
  • edge_names (object, array-like, default=None) – List of edge names must have the same length as M.shape[1]. +If None then the edge names correspond to column indices with ‘e’ prepended.

  • +
  • name (hashable) –

  • +
  • key ((optional) function) – boolean function to be evaluated on each cell of the array, +must be applicable to numpy.array

  • +
+
+
Return type:
+

Hypergraph

+
+
+
+

Note

+

The constructor does not generate empty edges. +All zero columns in M are removed and the names corresponding to these +edges are discarded.

+
+
+ +
+
+get_cell_properties(edge: str, node: str, prop_name: str | None = None) Any | dict[str, Any][source]
+

Get cell properties on a specified edge and node

+
+
Parameters:
+
    +
  • edge (str) – edgeid

  • +
  • node (str) – nodeid

  • +
  • prop_name (str, optional) – name of a cell property; if None, all cell properties will be returned

  • +
+
+
Returns:
+

cell property value if prop_name is provided, otherwise dict of all +cell properties and values

+
+
Return type:
+

int or str or dict of {str: any}

+
+
+
+ +
+
+get_linegraph(s=1, edges=True)[source]
+

Creates an s-linegraph for the Hypergraph.
If edges=True (default), then the edges will be the vertices of the line
graph. Two vertices are connected by an s-line-graph edge if the
corresponding hypergraph edges intersect in at least s hypergraph nodes.
If edges=False, the hypergraph nodes will be the vertices of the line
graph. Two vertices are connected if the nodes they correspond to share
at least s incident hyperedges.

+
+
Parameters:
+
    +
  • s (int) – The width of the connections.

  • +
  • edges (bool, optional, default = True) – Determine if edges or nodes will be the vertices in the linegraph.

  • +
+
+
Returns:
+

A NetworkX graph.

+
+
Return type:
+

nx.Graph

+
+
+
+ +
+
+get_properties(id, level=None, prop_name=None)[source]
+

Returns an object’s specific property or all properties

+
+
Parameters:
+
    +
  • id (hashable) – edge or node id

  • +
  • level (int | None , optional, default = None) – if separate edge and node properties then enter 0 for edges +and 1 for nodes.

  • +
  • prop_name (str | None, optional, default = None) – if None then all properties associated with the object will be +returned.

  • +
+
+
Returns:
+

single property or dictionary of properties

+
+
Return type:
+

str or dict

+
+
+
+ +
+
+incidence_dataframe(sort_rows=False, sort_columns=False, cell_weights=True)[source]
+

Returns a pandas dataframe for hypergraph indexed by the nodes and +with column headers given by the edge names.

+
+
Parameters:
+
    +
  • sort_rows (bool, optional, default = False) – sort rows based on hashable node names

  • +
  • sort_columns (bool, optional, default = False) – sort columns based on hashable edge names

  • +
  • cell_weights (bool, optional, default =True) –

  • +
+
+
+
+ +
+
+property incidence_dict
+

Dictionary keyed by edge uids with values the uids of nodes in each +edge

+
+
Return type:
+

dict

+
+
+
+ +
+
+incidence_matrix(weights=False, index=False)[source]
+

An incidence matrix for the hypergraph indexed by nodes x edges.

+
+
Parameters:
+
    +
  • weights (bool, default =False) – If False all nonzero entries are 1. +If True and self.static all nonzero entries are filled by +self.edges.cell_weights dictionary values.

  • +
  • index (boolean, optional, default = False) – If True return will include a dictionary of node uid : row number +and edge uid : column number

  • +
+
+
Returns:
+

    +
  • incidence_matrix (scipy.sparse.csr.csr_matrix or np.ndarray)

  • +
  • row_index (list) – index of node ids for rows

  • +
  • col_index (list) – index of edge ids for columns

  • +
+

+
+
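The nodes x edges layout can be sketched with nested lists in plain Python (an illustration of the matrix's shape and indices, not the library's sparse implementation; `incidence_matrix` here is a hypothetical helper):

```python
# Build a dense nodes-x-edges incidence matrix, returning the row
# (node) and column (edge) indices alongside it.
def incidence_matrix(setsystem):
    rows = sorted({n for e in setsystem.values() for n in e})
    cols = sorted(setsystem)
    M = [[1 if n in setsystem[e] else 0 for e in cols] for n in rows]
    return M, rows, cols

H = {'e1': {1, 2}, 'e2': {2, 3}}
M, rows, cols = incidence_matrix(H)
assert rows == [1, 2, 3] and cols == ['e1', 'e2']
assert M == [[1, 0], [1, 1], [0, 1]]
```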
+
+ +
+
+is_connected(s=1, edges=False)[source]
+

Determines if hypergraph is s-connected.

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = False) – If True, will determine if s-edge-connected. +For s=1 s-edge-connected is the same as s-connected.

  • +
+
+
Returns:
+

is_connected

+
+
Return type:
+

boolean

+
+
+

Notes

+

A hypergraph is s node connected if for any two nodes v0,vn +there exists a sequence of nodes v0,v1,v2,…,v(n-1),vn +such that every consecutive pair of nodes v(i),v(i+1) +share at least s edges.

+

A hypergraph is s edge connected if for any two edges e0,en +there exists a sequence of edges e0,e1,e2,…,e(n-1),en +such that every consecutive pair of edges e(i),e(i+1) +share at least s nodes.
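The node-connectivity definition above can be checked with a simple search over s-adjacent nodes (a pure-Python sketch of the semantics, not the library's implementation; `is_s_connected` is a hypothetical helper):

```python
# The hypergraph is s-node-connected if a traversal over s-adjacent
# nodes reaches every node from an arbitrary starting node.
def is_s_connected(setsystem, s=1):
    nodes = {n for e in setsystem.values() for n in e}
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in nodes - seen:
            if sum(1 for e in setsystem.values()
                   if u in e and v in e) >= s:
                seen.add(v)
                stack.append(v)
    return seen == nodes

assert is_s_connected({'A': {1, 2}, 'B': {2, 3}}, s=1)
assert not is_s_connected({'A': {1, 2}, 'B': {3, 4}}, s=1)
```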

+
+ +
+
+neighbors(node, s=1)[source]
+

The nodes in hypergraph which share s edge(s) with node.

+
+
Parameters:
+
    +
  • node (hashable or Entity) – uid for a node in hypergraph or the node Entity

  • +
  • s (int, list, optional, default = 1) – Minimum number of edges shared by neighbors with node.

  • +
+
+
Returns:
+

neighbors – s-neighbors share at least s edges in the hypergraph

+
+
Return type:
+

list

+
+
+
+ +
+
+node_diameters(s=1)[source]
+

Returns the node diameters of the connected components in hypergraph.

+
+
Parameters:

s (int, optional, default 1) –

Returns:

  • maximum diameter (int)

  • list of diameters (list) – List of node diameters for the s-components in hypergraph

  • list of component nodes (list) – List of the node uids in the s-components.
+
+
+
+ +
+
+property node_props
+

Dataframe of node properties +indexed on node ids

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+property nodes
+

Object associated with self._nodes.

+
+
Return type:
+

EntitySet

+
+
+
+ +
+
+number_of_edges(edgeset=None)[source]
+

The number of edges in edgeset belonging to hypergraph.

+
+
Parameters:
+

edgeset (an iterable of Entities, optional, default = None) – If None, then return the number of edges in hypergraph.

+
+
Returns:
+

number_of_edges

+
+
Return type:
+

int

+
+
+
+ +
+
+number_of_nodes(nodeset=None)[source]
+

The number of nodes in nodeset belonging to hypergraph.

+
+
Parameters:
+

nodeset (an iterable of Entities, optional, default = None) – If None, then return the number of nodes in hypergraph.

+
+
Returns:
+

number_of_nodes

+
+
Return type:
+

int

+
+
+
+ +
+
+order()[source]
+

The number of nodes in hypergraph.

+
+
Returns:
+

order

+
+
Return type:
+

int

+
+
+
+ +
+
+property properties
+

Returns dataframe of edge and node properties.

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+remove(keys, level=None, name=None)[source]
+

Creates a new hypergraph with nodes and/or edges indexed by keys +removed. More efficient for creating a restricted hypergraph if the +restricted set is greater than what is being removed.

+
+
Parameters:
+
    +
  • keys (list | tuple | set | Hashable) – node and/or edge id(s) to restrict to

  • +
  • level (None, optional) – Enter 0 to remove edges with ids in keys. +Enter 1 to remove nodes with ids in keys. +If None then all objects in nodes and edges with the id will +be removed.

  • +
  • name (str, optional) – Name of new hypergraph

  • +
+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+remove_edges(keys, name=None)[source]
+
+ +
+
+remove_nodes(keys, name=None)[source]
+
+ +
+
+remove_singletons(name=None)[source]
+

Constructs clone of hypergraph with singleton edges removed.

+
+
Returns:
+

new hypergraph

+
+
Return type:
+

Hypergraph

+
+
+
+ +
+
+restrict_to_edges(edges, name=None)[source]
+

New hypergraph gotten by restricting to edges

+
+
Parameters:
+

edges (Iterable) – edgeids to restrict to

+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+restrict_to_nodes(nodes, name=None)[source]
+

New hypergraph gotten by restricting to nodes

+
+
Parameters:
+

nodes (Iterable) – nodeids to restrict to

+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+s_component_subgraphs(s=1, edges=True, return_singletons=False, name=None)[source]
+

Returns a generator for the induced subgraphs of s_connected +components. Removes singletons unless return_singletons is set to True. +Computed using s-linegraph generated either by the hypergraph +(edges=True) or its dual (edges = False)

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = True) – Determines if edge or node components are desired. Returns
    subgraphs equal to the hypergraph restricted to each set of
    nodes (edges) in the s-connected components or s-edge-connected
    components

  • +
  • return_singletons (bool, optional) –

  • +
+
+
Yields:
+

s_component_subgraphs (iterator) – Iterator returns subgraphs generated by the edges (or nodes) in the +s-edge(node) components of hypergraph.

+
+
+
+ +
+
+s_components(s=1, edges=True, return_singletons=True)[source]
+

Same as s_connected_components

+ +
+ +
+
+s_connected_components(s=1, edges=True, return_singletons=False)[source]
+

Returns a generator for the s-edge-connected components +or the s-node-connected components of the hypergraph.

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = True) – If True will return edge components, if False will return node +components

  • +
  • return_singletons (bool, optional, default = False) –

  • +
+
+
+

Notes

+

If edges=True, this method returns the s-edge-connected components as +lists of lists of edge uids. +An s-edge-component has the property that for any two edges e1 and e2 +there is a sequence of edges starting with e1 and ending with e2 +such that pairwise adjacent edges in the sequence intersect in at least +s nodes. If s=1 these are the path components of the hypergraph.

+

If edges=False this method returns s-node-connected components. +A list of sets of uids of the nodes which are s-walk connected. +Two nodes v1 and v2 are s-walk-connected if there is a +sequence of nodes starting with v1 and ending with v2 such that +pairwise adjacent nodes in the sequence share s edges. If s=1 these +are the path components of the hypergraph.

+

Example

+
>>> S = {'A':{1,2,3},'B':{2,3,4},'C':{5,6},'D':{6}}
+>>> H = Hypergraph(S)
+
+
+
>>> list(H.s_components(edges=True))
+[{'C', 'D'}, {'A', 'B'}]
+>>> list(H.s_components(edges=False))
+[{1, 2, 3, 4}, {5, 6}]
+
+
+
+
Yields:
+

s_connected_components (iterator) – Iterator returns sets of uids of the edges (or nodes) in the +s-edge(node) components of hypergraph.

+
+
+
+ +
+
+set_state(**kwargs)[source]
+

Allow state_dict updates from outside of class. Use with caution.

+
+
Parameters:
+

**kwargs – key=value pairs to save in state dictionary

+
+
+
+ +
+
+property shape
+

(number of nodes, number of edges)

+
+
Return type:
+

tuple

+
+
+
+ +
+
+singletons()[source]
+

Returns a list of singleton edges. A singleton edge is an edge of +size 1 with a node of degree 1.

+
+
Returns:
+

singles – A list of edge uids.

+
+
Return type:
+

list

+
+
+
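Singleton detection follows directly from the definition above (a pure-Python sketch of the semantics, not the library's implementation; `singletons` here is a hypothetical helper):

```python
# A singleton edge has size 1 and its single node has degree 1.
def singletons(setsystem):
    degree = {}
    for nodes in setsystem.values():
        for n in nodes:
            degree[n] = degree.get(n, 0) + 1
    return [e for e, nodes in setsystem.items()
            if len(nodes) == 1 and degree[next(iter(nodes))] == 1]

H = {'e1': {1, 2}, 'e2': {3}, 'e3': {1}}
assert singletons(H) == ['e2']  # e3 has size 1 but node 1 has degree 2
```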
+ +
+
+size(edge, nodeset=None)[source]
+

The number of nodes in nodeset that belong to edge. +If nodeset is None then returns the size of edge

+
+
Parameters:
+

edge (hashable) – The uid of an edge in the hypergraph

+
+
Returns:
+

size

+
+
Return type:
+

int

+
+
+
+ +
+
+toplexes(name=None)[source]
+

Returns a simple hypergraph corresponding to self.

+
+

Warning

+

Collapsing is no longer supported inside the toplexes method. Instead +generate a new collapsed hypergraph and compute the toplexes of the +new hypergraph.

+
+
+
Parameters:
+

name (str, optional, default = None) –

+
+
+
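A toplex is an edge not properly contained in any other edge, so toplex extraction reduces to a subset check (a pure-Python sketch of the idea, not the library's implementation; `toplexes` here is a hypothetical helper returning node sets rather than a Hypergraph):

```python
# Keep one representative per distinct maximal node set.
def toplexes(setsystem):
    sets = {frozenset(v) for v in setsystem.values()}
    return {s for s in sets if not any(s < t for t in sets)}

H = {'e1': {1, 2}, 'e2': {1, 2, 3}, 'e3': {4}}
assert toplexes(H) == {frozenset({1, 2, 3}), frozenset({4})}
```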
+ +
+
+
+ +
+
+

Module contents

+
+
+class classes.Entity(entity: DataFrame | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | None = None, data_cols: Sequence[T] = [0, 1], data: ndarray | None = None, static: bool = False, labels: OrderedDict[T, Sequence[T]] | None = None, uid: Hashable | None = None, weight_col: str | int | None = 'cell_weights', weights: Sequence[float] | float | int | str | None = 1, aggregateby: str | dict | None = 'sum', properties: DataFrame | dict[int, dict[T, dict[Any, Any]]] | None = None, misc_props_col: str = 'properties', level_col: str = 'level', id_col: str = 'id')[source]
+

Bases: object

+

Base class for handling N-dimensional data when building network-like models, +i.e., Hypergraph

+
+
Parameters:
+
    +
  • entity (pandas.DataFrame, dict of lists or sets, list of lists or sets, optional) – If a DataFrame with N columns, +represents N-dimensional entity data (data table). +Otherwise, represents 2-dimensional entity data (system of sets). +TODO: Test for compatibility with list of Entities and update docs

  • +
  • data (numpy.ndarray, optional) – 2D M x N ndarray of ints (data table); +sparse representation of an N-dimensional incidence tensor with M nonzero cells. +Ignored if entity is provided.

  • +
  • static (bool, default=False) – If True, entity data may not be altered,
    and the state_dict will never be cleared. Otherwise, rows may be added
    to and removed from the data table, and updates will clear the
    state_dict.

  • +
  • labels (collections.OrderedDict of lists, optional) – User-specified labels in corresponding order to ints in data. +Ignored if entity is provided or data is not provided.

  • +
  • uid (hashable, optional) – A unique identifier for the object

  • +
  • weights (str or sequence of float, optional) –

    User-specified cell weights corresponding to entity data. +If sequence of floats and entity or data defines a data table,

    +
    +

    length must equal the number of rows.

    +
    +
    +
    If sequence of floats and entity defines a system of sets,

    length must equal the total sum of the sizes of all sets.

    +
    +
    If str and entity is a DataFrame,

    must be the name of a column in entity.

    +
    +
    +

    Otherwise, weight for all cells is assumed to be 1.

    +

  • +
  • aggregateby ({'sum', 'last', count', 'mean','median', max', 'min', 'first', None}) – Name of function to use for aggregating cell weights of duplicate rows when +entity or data defines a data table, default is “sum”. +If None, duplicate rows will be dropped without aggregating cell weights. +Effectively ignored if entity defines a system of sets.

  • +
  • properties (pandas.DataFrame or doubly-nested dict, optional) – User-specified properties to be assigned to individual items in the data, i.e., +cell entries in a data table; sets or set elements in a system of sets. +See Notes for detailed explanation. +If DataFrame, each row gives +[optional item level, item label, optional named properties, +{property name: property value}] +(order of columns does not matter; see note for an example). +If doubly-nested dict, +{item level: {item label: {property name: property value}}}.

  • +
  • misc_props_col (str, default="properties", "level, "id") – Column names for miscellaneous properties, level index, and item name in +properties; see Notes for explanation.

  • +
  • level_col (str, default="properties", "level, "id") – Column names for miscellaneous properties, level index, and item name in +properties; see Notes for explanation.

  • +
  • id_col (str, default="properties", "level, "id") – Column names for miscellaneous properties, level index, and item name in +properties; see Notes for explanation.

  • +
+
+
+
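The aggregateby behavior for duplicate rows in a data table can be sketched with plain pandas (an illustration of the documented semantics, not the library's internal code; the column names are hypothetical):

```python
import pandas as pd

# Set-system data table with a duplicate (edge, node) pair in rows 0 and 2
df = pd.DataFrame({
    "edges": ["e1", "e1", "e1", "e2"],
    "nodes": ["a", "b", "a", "b"],
    "cell_weights": [1.0, 1.0, 2.0, 1.0],
})

# aggregateby='sum': duplicate rows are merged and their cell weights summed
agg = df.groupby(["edges", "nodes"], as_index=False)["cell_weights"].sum()

# aggregateby=None: duplicate rows are dropped without aggregating weights
dropped = df.drop_duplicates(subset=["edges", "nodes"])
```

Here the duplicated (e1, a) row ends up with weight 3.0 under 'sum', while with None only the first occurrence (weight 1.0) survives.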

Notes

A property is a named attribute assigned to a single item in the data.

You can pass a table of properties to properties as a DataFrame:

Level (optional) | ID           | [explicit property type] | [...] | misc. properties
-----------------|--------------|--------------------------|-------|--------------------------------
0                | level 0 item | property value           |       | {property name: property value}
1                | level 1 item | property value           |       | {property name: property value}
...              | ...          | ...                      |       | ...
N                | level N item | property value           |       | {property name: property value}

The Level column is optional. If not provided, properties will be assigned by ID (i.e., if an ID appears at multiple levels, the same properties will be assigned to all occurrences).

The names of the Level (if provided) and ID columns must be specified by level_col and id_col. misc_props_col can be used to specify the name of the column to be used for miscellaneous properties; if no column by that name is found, a new column will be created and populated with empty dicts. All other columns will be considered explicit property types. The order of the columns does not matter.

This method assumes that there are no rows with the same (Level, ID); if duplicates are found, all but the first occurrence will be dropped.
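A minimal properties table in the shape described above can be built with pandas. The column names match the documented defaults (level_col='level', id_col='id', misc_props_col='properties'); the item labels and the 'color' property are hypothetical:

```python
import pandas as pd

# One row per (level, item): one explicit property column ("color")
# plus a dict-valued column for miscellaneous properties
props = pd.DataFrame({
    "level": [0, 0, 1],
    "id": ["e1", "e2", "a"],
    "color": ["red", "blue", None],
    "properties": [{}, {"owner": "alice"}, {}],
})
```

Reordering the columns would not change the result, since the Level and ID columns are identified by name.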

add(*args)[source]

Updates the underlying data table with new entity data from multiple sources

Parameters:

*args – variable length argument list of Entity and/or representations of entity data

Returns:

self

Return type:

Entity

Warning

Adding an element directly to an Entity will not add the element to any Hypergraphs constructed from that Entity, and will cause an error. Use Hypergraph.add_edge or Hypergraph.add_node_to_edge instead.

See also

add_element

update from a single source

Hypergraph.add_edge, Hypergraph.add_node_to_edge
add_element(data)[source]

Updates the underlying data table with new entity data

Supports adding from either an existing Entity or a representation of entity (data table or labeled system of sets are both supported representations)

Parameters:

data (Entity, pandas.DataFrame, or dict of lists or sets) – new entity data

Returns:

self

Return type:

Entity

Warning

Adding an element directly to an Entity will not add the element to any Hypergraphs constructed from that Entity, and will cause an error. Use Hypergraph.add_edge or Hypergraph.add_node_to_edge instead.

See also

add

takes multiple sources of new entity data as variable length argument list

Hypergraph.add_edge, Hypergraph.add_node_to_edge
add_elements_from(arg_set)[source]

Adds arguments from an iterable to the data table one at a time

Deprecated since version 2.0.0: Duplicates add

Parameters:

arg_set (iterable) – list of Entity and/or representations of entity data

Returns:

self

Return type:

Entity
assign_properties(props: DataFrame | dict[int, dict[T, dict[Any, Any]]], misc_col: str | None = None, level_col=0, id_col=1) → None[source]

Assign new properties to items in the data table, update properties

Parameters:

  • props (pandas.DataFrame or doubly-nested dict) – See documentation of the properties parameter in Entity

  • level_col (str, optional) – column name corresponding to the levels; if None, defaults to _level_col

  • id_col (str, optional) – column name corresponding to the items; if None, defaults to _id_col

  • misc_col (str, optional) – column name corresponding to the misc. properties; if None, defaults to _misc_props_col

See also

properties
property cell_weights

Cell weights corresponding to each row of the underlying data table

Returns:

Keyed by row of the data table (as a tuple)

Return type:

dict of {tuple: int or float}
property children

Labels of all items in level 1 (second column) of the underlying data table

Return type:

frozenset

See also

uidset

Labels of all items in level 0 (first column)

uidset_by_level, uidset_by_column
property data

Sparse representation of the data table as an incidence tensor

This can also be thought of as an encoding of dataframe, where items in each column of the data table are translated to their int position in the self.labels[column] list

Returns:

2D array of ints representing rows of the underlying data table as indices in an incidence tensor

Return type:

numpy.ndarray

See also

labels, dataframe
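The integer encoding described here can be sketched in plain Python (an illustration of the documented relationship between labels and data, not the library's code; the labels are hypothetical):

```python
# A 2-column data table (edges, nodes) as parallel lists
edges = ["e1", "e1", "e2"]
nodes = ["a", "b", "a"]

# labels: {column name: [item labels]}; list position is the int code
labels = {"edges": sorted(set(edges)), "nodes": sorted(set(nodes))}

# data: each row of the table rewritten as indices into the label lists
data = [[labels["edges"].index(e), labels["nodes"].index(n)]
        for e, n in zip(edges, nodes)]
# data == [[0, 0], [0, 1], [1, 0]]
```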
property dataframe

The underlying data table stored by the Entity

Return type:

pandas.DataFrame
property dimensions

Dimensions of data, i.e., the number of distinct items in each level (column) of the underlying data table

Returns:

Length and order corresponds to columns of self.dataframe (excluding cell weight column)

Return type:

tuple of ints
property dimsize

Number of levels (columns) in the underlying data table

Returns:

Equal to length of self.dimensions

Return type:

int
property elements

System of sets representation of the first two levels (columns) of the underlying data table

Each item in level 0 (first column) defines a set containing all the level 1 (second column) items with which it appears in the same row of the underlying data table

Returns:

System of sets representation as dict of {level 0 item : AttrList(level 1 items)}

Return type:

dict of AttrList

See also

incidence_dict

same data as dict of list

memberships

dual of this representation, i.e., each item in level 1 (second column) defines a set

elements_by_level, elements_by_column
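The elements/memberships duality can be sketched with plain dicts (illustrating the documented semantics, not the library's internals; the edge and node labels are hypothetical):

```python
# elements: each level 0 item (edge) maps to its level 1 items (nodes)
elements = {"e1": ["a", "b"], "e2": ["a"]}

# memberships: the dual -- each node maps to the edges containing it
memberships = {}
for edge, nodes in elements.items():
    for node in nodes:
        memberships.setdefault(node, []).append(edge)
# memberships == {"a": ["e1", "e2"], "b": ["e1"]}
```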
elements_by_column(col1, col2)[source]

System of sets representation of two columns (levels) of the underlying data table

Each item in col1 defines a set containing all the col2 items with which it appears in the same row of the underlying data table

Properties can be accessed and assigned to items in col1

Parameters:

  • col1 (Hashable) – name of column whose items define sets

  • col2 (Hashable) – name of column whose items are elements in the system of sets

Returns:

System of sets representation as dict of {col1 item : AttrList(col2 items)}

Return type:

dict of AttrList

See also

elements, memberships

elements_by_level

same functionality, takes level indices instead of column names
elements_by_level(level1, level2)[source]

System of sets representation of two levels (columns) of the underlying data table

Each item in level1 defines a set containing all the level2 items with which it appears in the same row of the underlying data table

Properties can be accessed and assigned to items in level1

Parameters:

  • level1 (int) – index of level whose items define sets

  • level2 (int) – index of level whose items are elements in the system of sets

Returns:

System of sets representation as dict of {level1 item : AttrList(level2 items)}

Return type:

dict of AttrList

See also

elements, memberships

elements_by_column

same functionality, takes column names instead of level indices
property empty

Whether the underlying data table is empty or not

Return type:

bool

See also

is_empty

for checking whether a specified level (column) is empty

dimsize

0 if empty
encode(data)[source]

Encode dataframe to numpy array

Parameters:

data (dataframe) –

Return type:

numpy.array
get_properties(item: T, level: int | None = None) → dict[Any, Any][source]

Get all properties of an item

Parameters:

  • item (hashable) – name of an item

  • level (int, optional) – level index of the item

Returns:

prop_vals – {named property: property value, ..., misc. property column name: {property name: property value}}

Return type:

dict

Raises:

KeyError – if (level, item) is not in properties, or if level is not provided and item is not in properties

Warns:

UserWarning – If level is not provided and item appears in multiple levels, assumes the first (closest to 0)

See also

get_property, set_property
get_property(item: T, prop_name: Any, level: int | None = None) → Any[source]

Get a property of an item

Parameters:

  • item (hashable) – name of an item

  • prop_name (hashable) – name of the property to get

  • level (int, optional) – level index of the item

Returns:

prop_val – value of the property

Return type:

any

Raises:

KeyError – if (level, item) is not in properties, or if level is not provided and item is not in properties

Warns:

UserWarning – If level is not provided and item appears in multiple levels, assumes the first (closest to 0)
property incidence_dict: dict[T, list[T]]

System of sets representation of the first two levels (columns) of the underlying data table

Returns:

System of sets representation as dict of {level 0 item : [level 1 items]}

Return type:

dict of list

See also

elements

same data as dict of AttrList
incidence_matrix(level1=0, level2=1, weights=False, aggregateby=None, index=False) → csr_matrix | None[source]

Incidence matrix representation for two levels (columns) of the underlying data table

If level1 and level2 contain N and M distinct items, respectively, the incidence matrix will be M x N. In other words, the items in level1 and level2 correspond to the columns and rows of the incidence matrix, respectively, in the order in which they appear in self.labels[column1] and self.labels[column2] (column1 and column2 are the column labels of level1 and level2)

Parameters:

  • level1 (int, default=0) – index of first level (column)

  • level2 (int, default=1) – index of second level

  • weights (bool or dict, default=False) – If False, all nonzero entries are 1. If True, all nonzero entries are filled by self.cell_weight dictionary values; use aggregateby to specify how duplicate entries should have weights aggregated. If of dict of {(level1 item, level2 item): weight value} form, only nonzero cells in the incidence matrix will be updated by the dictionary, i.e., level1 item and level2 item must appear in the same row at least once in the underlying data table

  • aggregateby ({'sum', 'count', 'first', 'last', 'mean', 'median', 'max', 'min', None}, default='count') – Method to aggregate weights of duplicate rows in data table. If None, then all cell weights will be set to 1.

  • index (bool, optional) – Not used

Returns:

sparse representation of incidence matrix (i.e., Compressed Sparse Row matrix)

Return type:

scipy.sparse.csr.csr_matrix

Note

In the context of Hypergraphs, think level1 = edges, level2 = nodes
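A dense version of this incidence matrix can be sketched with plain Python lists (rows = level2 items, columns = level1 items, as described above; an illustration of the documented layout, not the library's implementation):

```python
# System of sets: level1 items (edges -> columns), level2 items (nodes -> rows)
elements = {"e1": ["a", "b"], "e2": ["b", "c"]}
cols = sorted(elements)                                     # ["e1", "e2"]
rows = sorted({n for ns in elements.values() for n in ns})  # ["a", "b", "c"]

# M x N dense incidence matrix: inc[i][j] = 1 if row item i is in column set j
inc = [[1 if r in elements[c] else 0 for c in cols] for r in rows]
# inc == [[1, 0], [1, 1], [0, 1]]
```

The library returns the same information as a scipy.sparse CSR matrix rather than nested lists.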
index(column, value=None)[source]

Get level index corresponding to a column and (optionally) the index of a value in that column

The index of value is its position in the list given by self.labels[column], which is used in the integer encoding of the data table self.data

Parameters:

  • column (str) – name of a column in self.dataframe

  • value (str, optional) – label of an item in the specified column

Returns:

level index corresponding to column, index of value if provided

Return type:

int or (int, int)

See also

indices

for finding indices of multiple values in a column

level

same functionality, search for the value without specifying column
indices(column, values)[source]

Get indices of one or more value(s) in a column

Parameters:

  • column (str) –

  • values (str or iterable of str) –

Returns:

indices of values

Return type:

list of int

See also

index

for finding level index of a column and index of a single value
is_empty(level=0)[source]

Whether a specified level (column) of the underlying data table is empty or not

Return type:

bool

See also

empty

for checking whether the underlying data table is empty

size

number of items in a level (column); 0 if the level is empty
property isstatic

Whether to treat the underlying data as static or not

If True, the underlying data may not be altered, and the state_dict will never be cleared. Otherwise, rows may be added to and removed from the data table, and updates will clear the state_dict.

Return type:

bool
property labels

Labels of all items in each column of the underlying data table

Returns:

dict of {column name: [item labels]}. The order of [item labels] corresponds to the int encoding of each item in self.data.

Return type:

dict of lists

See also

data, dataframe
level(item, min_level=0, max_level=None, return_index=True)[source]

First level containing the given item label

Order of levels corresponds to order of columns in self.dataframe

Parameters:

  • item (str) –

  • min_level (int, optional) – inclusive lower bound on the range of levels to search for item

  • max_level (int, optional) – inclusive upper bound on the range of levels to search for item

  • return_index (bool, default=True) – If True, return index of item within the level

Returns:

index of first level containing the item, index of item if return_index=True; returns None if item is not found

Return type:

int, (int, int), or None

See also

index, indices
property memberships

System of sets representation of the first two levels (columns) of the underlying data table

Each item in level 1 (second column) defines a set containing all the level 0 (first column) items with which it appears in the same row of the underlying data table

Returns:

System of sets representation as dict of {level 1 item : AttrList(level 0 items)}

Return type:

dict of AttrList

See also

elements

dual of this representation, i.e., each item in level 0 (first column) defines a set

elements_by_level, elements_by_column
property properties: DataFrame

Properties assigned to items in the underlying data table

Return type:

pandas.DataFrame
remove(*args)[source]

Removes all rows containing specified item(s) from the underlying data table

Parameters:

*args – variable length argument list of item labels

Returns:

self

Return type:

Entity

See also

remove_element

remove all rows containing a single specified item
remove_element(item)[source]

Removes all rows containing a specified item from the underlying data table

Parameters:

item – item label

Returns:

self

Return type:

Entity

See also

remove

same functionality, accepts variable length argument list of item labels
remove_elements_from(arg_set)[source]

Removes all rows containing specified item(s) from the underlying data table

Deprecated since version 2.0.0: Duplicates remove

Parameters:

arg_set (iterable) – list of item labels

Returns:

self

Return type:

Entity
restrict_to_indices(indices, level=0, **kwargs)[source]

Create a new Entity by restricting the data table to rows containing specific items in a given level

Parameters:

  • indices (int or iterable of int) – indices of item label(s) in level to restrict to

  • level (int, default=0) – level index

  • **kwargs – Extra arguments to Entity constructor

Return type:

Entity
restrict_to_levels(levels: int | Iterable[int], weights: bool = False, aggregateby: str | None = 'sum', **kwargs) → Entity[source]

Create a new Entity by restricting to a subset of levels (columns) in the underlying data table

Parameters:

  • levels (array-like of int) – indices of a subset of levels (columns) of data

  • weights (bool, default=False) – If True, aggregate existing cell weights to get new cell weights. Otherwise, all new cell weights will be 1.

  • aggregateby ({'sum', 'first', 'last', 'count', 'mean', 'median', 'max', 'min', None}, optional) – Method to aggregate weights of duplicate rows in data table. If None or `weights`=False, then all new cell weights will be 1.

  • **kwargs – Extra arguments to Entity constructor

Return type:

Entity

Raises:

KeyError – If levels contains any invalid values

See also

EntitySet
set_property(item: T, prop_name: Any, prop_val: Any, level: int | None = None) → None[source]

Set a property of an item

Parameters:

  • item (hashable) – name of an item

  • prop_name (hashable) – name of the property to set

  • prop_val (any) – value of the property to set

  • level (int, optional) – level index of the item; required if item is not already in properties

Raises:

ValueError – If level is not provided and item is not in properties

Warns:

UserWarning – If level is not provided and item appears in multiple levels, assumes the first (closest to 0)
size(level=0)[source]

The number of items in a level of the underlying data table

Equivalent to self.dimensions[level]

Parameters:

level (int, default=0) –

Return type:

int

See also

dimensions
translate(level, index)[source]

Given indices of a level and value(s), return the corresponding value label(s)

Parameters:

  • level (int) – level index

  • index (int or list of int) – value index or indices

Returns:

label(s) corresponding to value index or indices

Return type:

str or list of str

See also

translate_arr

translate a full row of value indices across all levels (columns)
translate_arr(coords)[source]

Translate a full encoded row of the data table, e.g., a row of self.data

Parameters:

coords (tuple of ints) – encoded value indices, with one value index for each level of the data

Returns:

full row of translated value labels

Return type:

list of str
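Decoding is the reverse lookup of the integer encoding: each value index is looked up in the label list for its level (a sketch of the documented behavior, with hypothetical labels):

```python
labels = {"edges": ["e1", "e2"], "nodes": ["a", "b"]}
levels = ["edges", "nodes"]  # order of levels = order of columns

def translate_row(coords):
    # one value index per level -> list of value labels
    return [labels[levels[lvl]][idx] for lvl, idx in enumerate(coords)]
```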
property uid

User-defined unique identifier for the Entity

Return type:

hashable
property uidset

Labels of all items in level 0 (first column) of the underlying data table

Return type:

frozenset

See also

children

Labels of all items in level 1 (second column)

uidset_by_level, uidset_by_column
uidset_by_column(column)[source]

Labels of all items in a particular column (level) of the underlying data table

Parameters:

column (Hashable) – Name of a column in self.dataframe

Return type:

frozenset

See also

uidset

Labels of all items in level 0 (first column)

children

Labels of all items in level 1 (second column)

uidset_by_level

Same functionality, takes the level index instead of column name
uidset_by_level(level)[source]

Labels of all items in a particular level (column) of the underlying data table

Parameters:

level (int) –

Return type:

frozenset

See also

uidset

Labels of all items in level 0 (first column)

children

Labels of all items in level 1 (second column)

uidset_by_column

Same functionality, takes the column name instead of level index
class classes.EntitySet(entity: pd.DataFrame | np.ndarray | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | Mapping[T, Mapping[T, Mapping[T, Any]]] | None = None, data: np.ndarray | None = None, labels: OrderedDict[T, Sequence[T]] | None = None, level1: str | int = 0, level2: str | int = 1, weight_col: str | int = 'cell_weights', weights: Sequence[float] | float | int | str = 1, cell_properties: Sequence[T] | pd.DataFrame | dict[T, dict[T, dict[Any, Any]]] | None = None, misc_cell_props_col: str = 'cell_properties', uid: Hashable | None = None, aggregateby: str | None = 'sum', properties: pd.DataFrame | dict[int, dict[T, dict[Any, Any]]] | None = None, misc_props_col: str = 'properties', **kwargs)[source]

Bases: Entity

Class for handling 2-dimensional (i.e., system of sets, bipartite) data when building network-like models, i.e., Hypergraph

Parameters:

  • entity (Entity, pandas.DataFrame, dict of lists or sets, or list of lists or sets, optional) – If an Entity with N levels or a DataFrame with N columns, represents N-dimensional entity data (data table). If N > 2, only considers levels (columns) level1 and level2. Otherwise, represents 2-dimensional entity data (system of sets).

  • data (numpy.ndarray, optional) – 2D M x N ndarray of ints (data table); sparse representation of an N-dimensional incidence tensor with M nonzero cells. If N > 2, only considers levels (columns) level1 and level2. Ignored if entity is provided.

  • labels (collections.OrderedDict of lists, optional) – User-specified labels in corresponding order to ints in data. For M x N data, N > 2, labels must contain either 2 or N keys. If N keys, only considers labels for levels (columns) level1 and level2. Ignored if entity is provided or data is not provided.

  • level1 (str or int, default=0) – Each item in level1 defines a set containing all the level2 items with which it appears in the same row of the underlying data table. If int, gives the index of a level; if str, gives the name of a column in entity. Ignored if entity, data (if entity not provided), and labels (if provided) all represent 1- or 2-dimensional data (set or system of sets).

  • level2 (str or int, default=1) – Each item in level2 is an element of the set defined by the level1 item with which it appears in the same row of the underlying data table. If int, gives the index of a level; if str, gives the name of a column in entity. Ignored if entity, data (if entity not provided), and labels (if provided) all represent 1- or 2-dimensional data (set or system of sets).

  • weights (str or sequence of float, optional) – User-specified cell weights corresponding to entity data. If a sequence of floats and entity or data defines a data table, its length must equal the number of rows. If a sequence of floats and entity defines a system of sets, its length must equal the total sum of the sizes of all sets. If a str and entity is a DataFrame, it must be the name of a column in entity. Otherwise, the weight for all cells is assumed to be 1. Ignored if entity is an Entity and `keep_weights`=True.

  • keep_weights (bool, default=True) – Whether to preserve any existing cell weights; ignored if entity is not an Entity.

  • cell_properties (str, list of str, pandas.DataFrame, or doubly-nested dict, optional) – User-specified properties to be assigned to cells of the incidence matrix, i.e., rows in a data table; pairs of (set, element of set) in a system of sets. See Notes for a detailed explanation. Ignored if underlying data is 1-dimensional (set). If a doubly-nested dict, {level1 item: {level2 item: {cell property name: cell property value}}}.

  • misc_cell_props_col (str, default='cell_properties') – Column name for miscellaneous cell properties; see Notes for explanation.

  • kwargs – Keyword arguments passed to the Entity constructor, e.g., static, uid, aggregateby, properties, etc. See Entity for documentation of these parameters.

Notes

A cell property is a named attribute assigned jointly to a set and one of its elements, i.e., a cell of the incidence matrix.

When an Entity or DataFrame is passed to the entity parameter of the constructor, it should represent a data table:

Column_1     | Column_2     | Column_3     | [...] | Column_N
-------------|--------------|--------------|-------|-------------
level 1 item | level 2 item | level 3 item |       | level N item

Assuming the default values for parameters level1, level2, the data table will be restricted to the set system defined by Column 1 and Column 2. Since each row of the data table represents an incidence or cell, values from other columns may contain data that should be converted to cell properties.

By passing a column name or list of column names as cell_properties, each given column will be preserved in the cell_properties as an explicit cell property type. An additional column in cell_properties will be created to store a dict of miscellaneous cell properties, which will store cell properties of types that have not been explicitly defined and do not have a dedicated column (which may be assigned after construction). The name of the miscellaneous column is determined by misc_cell_props_col.

You can also pass a pre-constructed table to cell_properties as a DataFrame:

Column_1     | Column_2     | [explicit cell prop. type] | [...] | misc. cell properties
-------------|--------------|----------------------------|-------|-------------------------------------------
level 1 item | level 2 item | cell property value        |       | {cell property name: cell property value}

Column 1 and Column 2 must have the same names as the corresponding columns in the entity data table, and misc_cell_props_col can be used to specify the name of the column to be used for miscellaneous cell properties. If no column by that name is found, a new column will be created and populated with empty dicts. All other columns will be considered explicit cell property types. The order of the columns does not matter.

Both of these methods assume that there are no row duplicates in the tables passed to entity and/or cell_properties; if duplicates are found, all but the first occurrence will be dropped.
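A minimal cell-properties table in the shape described above can be sketched with pandas. The column names 'edges' and 'nodes' are hypothetical and must match the entity data table's columns; 'strength' stands in for an explicit cell property type:

```python
import pandas as pd

# Entity data table: (edge, node) incidences
entity = pd.DataFrame({"edges": ["e1", "e1", "e2"],
                       "nodes": ["a", "b", "a"]})

# Cell properties: one row per (edge, node) cell, one explicit property
# column plus a dict-valued misc. column ("cell_properties", the default)
cell_props = pd.DataFrame({
    "edges": ["e1", "e2"],
    "nodes": ["a", "a"],
    "strength": [0.5, 0.9],
    "cell_properties": [{}, {"note": "observed twice"}],
})
```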

assign_cell_properties(cell_props: DataFrame | dict[T, dict[T, dict[Any, Any]]], misc_col: str | None = None, replace: bool = False) → None[source]

Assign new properties to cells of the incidence matrix and update properties

Parameters:

  • cell_props (pandas.DataFrame, dict of iterables, or doubly-nested dict, optional) – See documentation of the cell_properties parameter in EntitySet

  • misc_col (str, optional) – name of column to be used for miscellaneous cell property dicts

  • replace (bool, default=False) – If True, replace existing cell_properties with result; otherwise update with new values from result

Raises:

AttributeError – Not supported for dimsize=1
property cell_properties: DataFrame | None

Properties assigned to cells of the incidence matrix

Returns:

Returns None if dimsize < 2

Return type:

pandas.DataFrame, optional
collapse_identical_elements(return_equivalence_classes: bool = False, **kwargs) → EntitySet | tuple[hypernetx.classes.entityset.EntitySet, dict[str, list[str]]][source]

Create a new EntitySet by collapsing sets with the same set elements

Each item in level 0 (first column) defines a set containing all the level 1 (second column) items with which it appears in the same row of the underlying data table.

Parameters:

  • return_equivalence_classes (bool, default=False) – If True, return a dictionary of equivalence classes keyed by new edge names

  • **kwargs – Extra arguments to EntitySet constructor

Returns:

  • new_entity (EntitySet) – new EntitySet with identical sets collapsed; if all sets are unique, the system of sets will be the same as the original.

  • equivalence_classes (dict of lists, optional) – if `return_equivalence_classes`=True, {collapsed set label: [level 0 item labels]}.
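Collapsing identical sets can be sketched with plain dicts, grouping set labels by their frozenset of elements (an illustration of the described behavior; the library's generated edge names for collapsed sets may differ, and the labels here are hypothetical):

```python
# Three sets, two of them identical as sets ("e1" and "e3")
elements = {"e1": ["a", "b"], "e2": ["b"], "e3": ["b", "a"]}

# Group set labels by their frozenset of elements
classes = {}
for edge, nodes in elements.items():
    classes.setdefault(frozenset(nodes), []).append(edge)

# Keep one representative per class and record the equivalence classes
collapsed, equivalence_classes = {}, {}
for members, edges in classes.items():
    rep = edges[0]                      # first label acts as representative
    collapsed[rep] = sorted(members)
    equivalence_classes[rep] = edges
# collapsed == {"e1": ["a", "b"], "e2": ["b"]}
```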
get_cell_properties(item1: T, item2: T) → dict[Any, Any][source]

Get all properties of a cell, i.e., incidence between items of different levels

Parameters:

  • item1 (hashable) – name of an item in level 0

  • item2 (hashable) – name of an item in level 1

Returns:

{named cell property: cell property value, ..., misc. cell property column name: {cell property name: cell property value}}

Return type:

dict
get_cell_property(item1: T, item2: T, prop_name: Any) → Any[source]

Get a property of a cell, i.e., incidence between items of different levels

Parameters:

  • item1 (hashable) – name of an item in level 0

  • item2 (hashable) – name of an item in level 1

  • prop_name (hashable) – name of the cell property to get

Returns:

prop_val – value of the cell property

Return type:

any
property memberships: dict[str, hypernetx.classes.helpers.AttrList[str]]

Extends Entity.memberships

Each item in level 1 (second column) defines a set containing all the level 0 (first column) items with which it appears in the same row of the underlying data table.

Returns:

System of sets representation as dict of {level 1 item: AttrList(level 0 items)}.

Return type:

dict of AttrList

See also

elements

dual of this representation, i.e., each item in level 0 (first column) defines a set

restrict_to_levels

for more information on how memberships work for 1-dimensional (set) data
restrict_to(indices: int | Iterable[int], **kwargs) → EntitySet[source]

Alias of restrict_to_indices() with default parameter `level`=0

+
+
Parameters:
+
    +
  • indices (array_like of int) – indices of item label(s) in level to restrict to

  • +
  • **kwargs – Extra arguments to EntitySet constructor

  • +
+
+
Return type:
+

EntitySet

+
+
+
+

See also

+

restrict_to_indices

+
+
+ +
+
restrict_to_levels(levels: int | Iterable[int], weights: bool = False, aggregateby: str | None = 'sum', keep_memberships: bool = True, **kwargs) → EntitySet[source]

Extends Entity.restrict_to_levels()

Parameters:

  • levels (array-like of int) – indices of a subset of levels (columns) of data

  • weights (bool, default=False) – If True, aggregate existing cell weights to get new cell weights. Otherwise, all new cell weights will be 1.

  • aggregateby ({'sum', 'first', 'last', 'count', 'mean', 'median', 'max', 'min', None}, optional) – Method to aggregate weights of duplicate rows in the data table. If None or weights=False, then all new cell weights will be 1.

  • keep_memberships (bool, default=True) – Whether to preserve membership information for the discarded level when the new EntitySet is restricted to a single level

  • **kwargs – Extra arguments to EntitySet constructor

Return type:

    EntitySet

Raises:

    KeyError – If levels contains any invalid values
set_cell_property(item1: T, item2: T, prop_name: Any, prop_val: Any) → None[source]

Set a property of a cell, i.e., incidence between items of different levels.

Parameters:

  • item1 (hashable) – name of an item in level 0

  • item2 (hashable) – name of an item in level 1

  • prop_name (hashable) – name of the cell property to set

  • prop_val (any) – value of the cell property to set
class classes.Hypergraph(setsystem: DataFrame | ndarray | Mapping[T, Iterable[T]] | Iterable[Iterable[T]] | Mapping[T, Mapping[T, Mapping[str, Any]]] | None = None, edge_col: str | int = 0, node_col: str | int = 1, cell_weight_col: str | int | None = 'cell_weights', cell_weights: Sequence[float] | float = 1.0, cell_properties: Sequence[str | int] | Mapping[T, Mapping[T, Mapping[str, Any]]] | None = None, misc_cell_properties_col: str | int | None = None, aggregateby: str | dict[str, str] = 'first', edge_properties: DataFrame | dict[T, dict[Any, Any]] | None = None, node_properties: DataFrame | dict[T, dict[Any, Any]] | None = None, properties: DataFrame | dict[T, dict[Any, Any]] | dict[T, dict[T, dict[Any, Any]]] | None = None, misc_properties_col: str | int | None = None, edge_weight_prop_col: str | int = 'weight', node_weight_prop_col: str | int = 'weight', weight_prop_col: str | int = 'weight', default_edge_weight: float | None = None, default_node_weight: float | None = None, default_weight: float = 1.0, name: str | None = None, **kwargs)[source]

Bases: object

Parameters:

  • setsystem ((optional) dict of iterables, dict of dicts, iterable of iterables, pandas.DataFrame, numpy.ndarray, default = None) – See SetSystems below for additional setsystem requirements.

  • edge_col ((optional) str | int, default = 0) – column index (or name) in pandas.DataFrame or numpy.ndarray used for (hyper)edge ids. Will be used to reference edge ids for all set systems.

  • node_col ((optional) str | int, default = 1) – column index (or name) in pandas.DataFrame or numpy.ndarray used for node ids. Will be used to reference node ids for all set systems.

  • cell_weight_col ((optional) str | int, default = None) – column index (or name) in pandas.DataFrame or numpy.ndarray used for referencing cell weights. For a dict of dicts, references the key in the cell property dicts.

  • cell_weights ((optional) Sequence[float | int] | int | float, default = 1.0) – User-specified cell weights or default cell weight. Sequential values are only used if setsystem is a dataframe or ndarray, in which case the sequence must have the same length and order as these objects. Sequential values are ignored for dataframes if cell_weight_col is already a column in the dataframe. If cell_weights is assigned a single value, it will be used as the default for missing values or when no cell_weight_col is given.

  • cell_properties ((optional) Sequence[int | str] | Mapping[T, Mapping[T, Mapping[str, Any]]], default = None) – Column names from pd.DataFrame to use as cell properties, or a dict assigning cell properties to incidence pairs of edges and nodes. Will generate a misc_cell_properties column, which may have variable lengths per cell.

  • misc_cell_properties ((optional) str | int, default = None) – Column name of dataframe corresponding to a column of variable-length property dictionaries for the cell. Ignored for other setsystem types.

  • aggregateby ((optional) str | dict, default = 'first') – By default, duplicate (edge, node) incidences will be dropped unless specified with aggregateby. See pandas.DataFrame.agg() methods for additional syntax and usage information.

  • edge_properties ((optional) pd.DataFrame | dict, default = None) – Properties associated with edge ids. First column of dataframe or keys of dict link to edge ids in setsystem.

  • node_properties ((optional) pd.DataFrame | dict, default = None) – Properties associated with node ids. First column of dataframe or keys of dict link to node ids in setsystem.

  • properties ((optional) pd.DataFrame | dict, default = None) – Concatenation/union of edge_properties and node_properties. By default, the object id is used and should be the first column of the dataframe, or key in the dict. If there are nodes and edges with the same ids and different properties, use the edge_properties and node_properties keywords instead.

  • misc_properties ((optional) int | str, default = None) – Column of property dataframes with dtype=dict. Intended for variable-length property dictionaries for the objects.

  • edge_weight_prop ((optional) str, default = None) – Name of property in edge_properties to use for weight.

  • node_weight_prop ((optional) str, default = None) – Name of property in node_properties to use for weight.

  • weight_prop ((optional) str, default = None) – Name of property in properties to use for 'weight'.

  • default_edge_weight ((optional) int | float, default = 1) – Used when edge weight property is missing or undefined.

  • default_node_weight ((optional) int | float, default = 1) – Used when node weight property is missing or undefined.

  • name ((optional) str, default = None) – Name assigned to hypergraph.

Hypergraphs in HNX 2.0

+

An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: +V = nodes (vertices) and E = (hyper)edges.

+

HNX allows for multi-edges by distinguishing edges by +their identifiers instead of their contents. For example, if +V = {1,2,3} and E = {e1,e2,e3}, +where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, +the edges e1 and e2 contain the same set of nodes and yet +are distinct and are distinguishable within H = (V,E).

+

New as of version 2.0, HNX provides methods to easily store and +access additional metadata such as cell, edge, and node weights. +Metadata associated with (edge,node) incidences +are referenced as cell_properties. +Metadata associated with a single edge or node is referenced +as its properties.

+

The fundamental object needed to create a hypergraph is a setsystem. The +setsystem defines the many-to-many relationships between edges and nodes in +the hypergraph. Cell properties for the incidence pairs can be defined within +the setsystem or in a separate pandas.Dataframe or dict. +Edge and node properties are defined with a pandas.DataFrame or dict.

+
+

SetSystems

+

There are five types of setsystems currently accepted by the library.

+
  1. iterable of iterables : Barebones hypergraph uses Pandas default indexing to generate hyperedge ids. Elements must be hashable:

     >>> H = Hypergraph([{1,2},{1,2},{1,2,3}])

  2. dictionary of iterables : the most basic way to express many-to-many relationships providing edge ids. The elements of the iterables must be hashable:

     >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]})

  3. dictionary of dictionaries : allows cell properties to be assigned to a specific (edge, node) incidence. This is particularly useful when there are variable-length dictionaries assigned to each pair:

     >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'},
     ...             2: {'w':0.1, 'name': 'related_to',
     ...                 'startdate': '05.13.2020'}},
     ...      'e2':{ 1: {'w':0.52, 'name': 'owned_by'},
     ...             2: {'w':0.2}},
     ...      'e3':{ 1: {'w':0.5, 'name': 'related_to'},
     ...             2: {'w':0.2, 'name': 'owner_of'},
     ...             3: {'w':1, 'type': 'relationship'}}}

     >>> H = Hypergraph(d, cell_weight_col='w')

  4. pandas.DataFrame : For large datasets and for datasets with cell properties it is most efficient to construct a hypergraph directly from a pandas.DataFrame. Incidence pairs are in the first two columns. Cell properties shared by all incidence pairs can be placed in their own column of the dataframe. Variable-length dictionaries of cell properties particular to only some of the incidence pairs may be placed in a single column of the dataframe. Representing the data above as a dataframe df:

     col1   col2   w      col3
     e1     1      0.5    {'name':'related_to'}
     e1     2      0.1    {'name':'related_to', 'startdate':'05.13.2020'}
     e2     1      0.52   {'name':'owned_by'}
     e2     2      0.2    {...}

     The first row of the dataframe is used to reference each column.

     >>> H = Hypergraph(df, edge_col="col1", node_col="col2",
     ...                cell_weight_col="w", misc_cell_properties="col3")

  5. numpy.ndarray : For homogeneous datasets given in an ndarray, a pandas dataframe is generated and column names are added from the edge_col and node_col arguments. Cell properties containing multiple data types are added with a separate dataframe or dict and passed through the cell_properties keyword.

     >>> arr = np.array([['e1','1'],['e1','2'],
     ...                 ['e2','1'],['e2','2'],
     ...                 ['e3','1'],['e3','2'],['e3','3']])
     >>> H = hnx.Hypergraph(arr, column_names=['col1','col2'])

Edge and Node Properties

+

Properties specific to a single edge or node are passed through the +keywords: edge_properties, node_properties, properties. +Properties may be passed as dataframes or dicts. +The first column or index of the dataframe or keys of the dict keys +correspond to the edge and/or node identifiers. +If identifiers are shared among edges and nodes, or are distinct +for edges and nodes, properties may be combined into a single +object and passed to the properties keyword. For example:

id   weight   properties
e1   5.0      {'type':'event'}
e2   0.52     {'name':'owned_by'}
1    1.2      {'color':'red'}
2    .003     {'name':'Fido','color':'brown'}
3    1.0      {}
A properties dictionary should have the format:

    dp = {id1 : {prop1:val1, prop2:val2, ...}, id2 : ...}

A properties dataframe may be used for nodes and edges sharing ids but differing in cell properties by adding a level index, using 0 for edges and 1 for nodes:

level   id   weight   properties
0       e1   5.0      {'type':'event'}
0       e2   0.52     {'name':'owned_by'}
1       1    1.2      {'color':'red'}
1       2    .003     {'name':'Fido','color':'brown'}

Weights

The default key for cell and object weights is "weight". The default value is 1. Weights may be assigned and/or a new default prescribed in the constructor using cell_weight_col and cell_weights for incidence pairs, and using edge_weight_prop, node_weight_prop, weight_prop, default_edge_weight, and default_node_weight for node and edge weights.
adjacency_matrix(s=1, index=False, remove_empty_rows=False)[source]

The s-adjacency matrix for the hypergraph.

Parameters:

  • s (int, optional, default = 1) –

  • index (boolean, optional, default = False) – if True, will return the index of ids for rows and columns

  • remove_empty_rows (boolean, optional, default = False) –

Returns:

  • adjacency_matrix (scipy.sparse.csr.csr_matrix)

  • node_index (list) – index of ids for rows and columns
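What the s-adjacency matrix encodes can be sketched in plain Python, without the HNX API or scipy: two nodes are s-adjacent when they co-occur in at least s edges. The `edges` dict below is a hypothetical example, not data from the library.

```python
# Minimal sketch of s-adjacency (not the HNX implementation): count the
# edges each pair of nodes shares, and keep pairs meeting the threshold s.
from itertools import combinations

edges = {"e1": {1, 2}, "e2": {1, 2}, "e3": {1, 2, 3}}

def s_adjacency(edges, s=1):
    """Return {frozenset({u, v}): shared edge count} for pairs sharing >= s edges."""
    nodes = sorted(set().union(*edges.values()))
    adj = {}
    for u, v in combinations(nodes, 2):
        shared = sum(1 for members in edges.values() if {u, v} <= members)
        if shared >= s:
            adj[frozenset({u, v})] = shared
    return adj

print(s_adjacency(edges, s=2))  # only nodes 1 and 2 share two or more edges
```

HNX returns the same relation as a sparse matrix indexed by node ids rather than a dict of pairs.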
auxiliary_matrix(s=1, node=True, index=False)[source]

The unweighted s-edge or node auxiliary matrix for the hypergraph.

Parameters:

  • s (int, optional, default = 1) –

  • node (bool, optional, default = True) – whether to return based on node or edge adjacencies

Returns:

  • auxiliary_matrix (scipy.sparse.csr.csr_matrix) – Node/Edge adjacency matrix with empty rows and columns removed

  • index (np.array) – row and column index of ids
bipartite()[source]

Constructs the networkx bipartite graph associated to the hypergraph.

Returns:

    bipartite

Return type:

    nx.Graph()

Notes

Creates a bipartite networkx graph from the hypergraph. The nodes and (hyper)edges of the hypergraph become the nodes of the bipartite graph. For every (hyper)edge e in the hypergraph and node n in e there is an edge (n, e) in the graph.
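The bipartite construction described in the notes can be sketched without networkx: every (node, edge) incidence becomes one graph edge. The `incidences` dict is a made-up example.

```python
# Sketch of the bipartite graph edge list (plain Python, not the HNX/networkx
# code path): one (node, edge) pair per incidence in the hypergraph.
incidences = {"e1": {1, 2}, "e2": {2, 3}}

bipartite_edges = [(n, e) for e, members in incidences.items()
                   for n in sorted(members)]
print(bipartite_edges)  # [(1, 'e1'), (2, 'e1'), (2, 'e2'), (3, 'e2')]
```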
collapse_edges(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]

Constructs a new hypergraph obtained by identifying edges containing the same nodes.

Parameters:

  • name (hashable, optional, default = None) –

  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of edge equivalence classes keyed by frozen sets of nodes

Returns:

  • new hypergraph (Hypergraph) – Equivalent edges are collapsed to a single edge named by a representative of the equivalent edges followed by a colon and the number of edges it represents.

  • equivalence_classes (dict) – A dictionary keyed by representative edge names with values equal to the edges in its equivalence class

Notes

Two edges are identified if their respective elements are the same. Using this as an equivalence relation, the uids of the edges are partitioned into equivalence classes.

A single edge from the collapsed edges, followed by a colon and the number of elements in its equivalence class, is used as the uid for the new edge.
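The equivalence relation above can be sketched directly: group edge ids by the frozenset of their nodes, then name each class "representative:count". This is a plain-Python illustration of the idea, not the HNX implementation, and the `edges` dict is hypothetical.

```python
# Sketch of edge collapsing: edges with identical node sets fall into one
# equivalence class; the new edge is named "<rep>:<class size>".
from collections import defaultdict

edges = {"e1": {1, 2}, "e2": {1, 2}, "e3": {1, 2, 3}}

classes = defaultdict(list)
for eid, members in edges.items():
    classes[frozenset(members)].append(eid)

collapsed = {f"{eids[0]}:{len(eids)}": set(nodes)
             for nodes, eids in classes.items()}
print(collapsed)  # {'e1:2': {1, 2}, 'e3:1': {1, 2, 3}}
```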
collapse_nodes(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]

Constructs a new hypergraph obtained by identifying nodes contained by the same edges.

Parameters:

  • name (str, optional, default = None) –

  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of node equivalence classes keyed by frozen sets of edges

  • use_reps (boolean, optional, default = False) – Deprecated; this no longer works and will be removed. Choose a single element from the collapsed nodes as uid for the new node, otherwise use a frozen set of the uids of nodes in the equivalence class.

  • return_counts (boolean) – Deprecated; this no longer works and will be removed. If use_reps is True, the new nodes have uids given by a tuple of the rep and the count.

Returns:

    new hypergraph

Return type:

    Hypergraph

Notes

Two nodes are identified if their respective memberships are the same. Using this as an equivalence relation, the uids of the nodes are partitioned into equivalence classes. A single member of the equivalence class is chosen to represent the class, followed by the number of members of the class.

Example

>>> h = Hypergraph(EntitySet('example', elements=[Entity('E1', ['a', 'b']),
...                                               Entity('E2', ['a', 'b'])]))
>>> h.incidence_dict
{'E1': {'a', 'b'}, 'E2': {'a', 'b'}}
>>> h.collapse_nodes().incidence_dict
{'E1': {frozenset({'a', 'b'})}, 'E2': {frozenset({'a', 'b'})}}
collapse_nodes_and_edges(name=None, return_equivalence_classes=False, use_reps=None, return_counts=None)[source]

Returns a new hypergraph by collapsing nodes and edges.

Parameters:

  • name (str, optional, default = None) –

  • use_reps (boolean, optional, default = False) – Choose a single element from the collapsed elements as a representative

  • return_counts (boolean, optional, default = True) – if use_reps is True, the new elements are keyed by a tuple of the rep and the count

  • return_equivalence_classes (boolean, optional, default = False) – Returns a dictionary of edge equivalence classes keyed by frozen sets of nodes

Returns:

    new hypergraph

Return type:

    Hypergraph

Notes

Collapses the Nodes and Edges EntitySets. Two nodes (edges) are duplicates if their respective memberships (elements) are the same. Using this as an equivalence relation, the uids of the nodes (edges) are partitioned into equivalence classes. A single member of the equivalence class is chosen to represent the class, followed by the number of members of the class.

Example

>>> h = Hypergraph(EntitySet('example', elements=[Entity('E1', ['a', 'b']),
...                                               Entity('E2', ['a', 'b'])]))
>>> h.incidence_dict
{'E1': {'a', 'b'}, 'E2': {'a', 'b'}}
>>> h.collapse_nodes_and_edges().incidence_dict
{('E1', 2): {('a', 2)}}
component_subgraphs(return_singletons=False, name=None)[source]

Same as s_component_subgraphs() with s=1. Returns iterator.

See also

s_component_subgraphs

components(edges=False)[source]

Same as s_connected_components() with s=1, but nodes are returned by default. Returns iterator.

connected_component_subgraphs(return_singletons=True, name=None)[source]

Same as s_component_subgraphs() with s=1. Returns iterator.

See also

s_component_subgraphs

connected_components(edges=False)[source]

Same as s_connected_components() with s=1, but nodes are returned by default. Returns iterator.
property dataframe

Returns dataframe of incidence pairs and their properties.

Return type:

    pd.DataFrame
degree(node, s=1, max_size=None)[source]

The number of edges of size at least s (and at most max_size, if given) that contain the node.

Parameters:

  • node (hashable) – identifier for the node.

  • s (positive integer, optional, default 1) – smallest size of edge to consider in degree

  • max_size (positive integer or None, optional, default = None) – largest size of edge to consider in degree

Return type:

    int
diameter(s=1)[source]

Returns the length of the longest shortest s-walk between nodes in the hypergraph.

Parameters:

    s (int, optional, default 1) –

Returns:

    diameter

Return type:

    int

Raises:

    HyperNetXError – If hypergraph is not s-edge-connected

Notes

Two nodes are s-adjacent if they share at least s edges. Two nodes v_start and v_end are s-walk connected if there is a sequence of nodes v_start, v_1, v_2, ..., v_n-1, v_end such that consecutive nodes are s-adjacent. If the graph is not connected, an error will be raised.
dim(edge)[source]

Same as size(edge) - 1.
distance(source, target, s=1)[source]

Returns the shortest s-walk distance between two nodes in the hypergraph.

Parameters:

  • source (node.uid or node) – a node in the hypergraph

  • target (node.uid or node) – a node in the hypergraph

  • s (positive integer) – the number of edges

Returns:

    s-walk distance

Return type:

    int

See also

edge_distance

Notes

The s-distance is the shortest s-walk length between the nodes. An s-walk between nodes is a sequence of nodes that pairwise share at least s edges. The length of the shortest s-walk is 1 less than the number of nodes in the path sequence.

Uses the networkx shortest_path_length method on the graph generated by the s-adjacency matrix.
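The s-walk distance reduces to breadth-first search over the s-adjacency relation. A plain-Python sketch of that reduction (HNX itself delegates to networkx's shortest_path_length; the `edges` dict here is hypothetical):

```python
# Sketch of s-distance via BFS: neighbors are nodes sharing >= s edges
# with the current node; distance is the number of BFS hops.
from collections import deque

edges = {"e1": {1, 2}, "e2": {1, 2}, "e3": {2, 3}, "e4": {3, 4}}

def s_distance(edges, source, target, s=1):
    nodes = set().union(*edges.values())
    def nbrs(u):
        return {v for v in nodes if v != u
                and sum(1 for m in edges.values() if {u, v} <= m) >= s}
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        u, d = queue.popleft()
        if u == target:
            return d
        for v in nbrs(u) - seen:
            seen.add(v)
            queue.append((v, d + 1))
    return float("inf")  # no s-walk exists

print(s_distance(edges, 1, 4))       # 1-2-3-4 gives distance 3
print(s_distance(edges, 1, 4, s=2))  # inf: only nodes 1 and 2 share 2 edges
```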
dual(name=None, switch_names=True)[source]

Constructs a new hypergraph with the roles of edges and nodes of the hypergraph reversed.

Parameters:

  • name (hashable, optional) –

  • switch_names (bool, optional, default = True) – reverses edge_col and node_col names unless edge_col = 'edges' and node_col = 'nodes'

Return type:

    hypergraph
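At the setsystem level, the dual is just the inverted incidence dict: each node becomes an edge whose members are the edges it belonged to. A dependency-free sketch (not the HNX code; the `edges` dict is a made-up example):

```python
# Sketch of the dual construction: invert the edge -> nodes incidence dict
# into a node -> edges dict.
from collections import defaultdict

edges = {"e1": {1, 2}, "e2": {2, 3}}

dual = defaultdict(set)
for eid, members in edges.items():
    for n in members:
        dual[n].add(eid)

print(dict(dual))  # {1: {'e1'}, 2: {'e1', 'e2'}, 3: {'e2'}}
```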
edge_adjacency_matrix(s=1, index=False)[source]

The s-adjacency matrix for the dual hypergraph.

Parameters:

  • s (int, optional, default 1) –

  • index (boolean, optional, default = False) – if True, will return the index of ids for rows and columns

Returns:

  • edge_adjacency_matrix (scipy.sparse.csr.csr_matrix)

  • edge_index (list) – index of ids for rows and columns

Notes

This is also the adjacency matrix for the line graph. Two edges are s-adjacent if they share at least s nodes. If remove_zeros is True, the auxiliary matrix will be returned.
edge_diameter(s=1)[source]

Returns the length of the longest shortest s-walk between edges in the hypergraph.

Parameters:

    s (int, optional, default 1) –

Returns:

    edge_diameter

Return type:

    int

Raises:

    HyperNetXError – If hypergraph is not s-edge-connected

Notes

Two edges are s-adjacent if they share at least s nodes. Two edges e_start and e_end are s-walk connected if there is a sequence of edges e_start, e_1, e_2, ..., e_n-1, e_end such that consecutive edges are s-adjacent. If the graph is not connected, an error will be raised.
edge_diameters(s=1)[source]

Returns the edge diameters of the s-edge-connected component subgraphs of the hypergraph.

Parameters:

    s (int, optional, default 1) –

Returns:

  • maximum diameter (int)

  • list of diameters (list) – List of edge_diameters for s-edge component subgraphs in hypergraph

  • list of component (list) – List of the edge uids in the s-edge component subgraphs.
edge_distance(source, target, s=1)[source]

TODO: still need to return path and translate into user-defined nodes and edges. Returns the shortest s-walk distance between two edges in the hypergraph.

Parameters:

  • source (edge.uid or edge) – an edge in the hypergraph

  • target (edge.uid or edge) – an edge in the hypergraph

  • s (positive integer) – the number of intersections between pairwise consecutive edges

  • weight (None or string, optional, default = None) – TODO: add edge weights. If None, all edges have weight 1. If string, the edge attribute string is used if available.

Returns:

    s-walk distance – A shortest s-walk is computed as a sequence of edges; the s-walk distance is the number of edges in the sequence minus 1. If no such path exists, returns np.inf.

Return type:

    the shortest s-walk edge distance

See also

distance

Notes

The s-distance is the shortest s-walk length between the edges. An s-walk between edges is a sequence of edges such that consecutive pairwise edges intersect in at least s nodes. The length of the shortest s-walk is 1 less than the number of edges in the path sequence.

Uses the networkx shortest_path_length method on the graph generated by the s-edge-adjacency matrix.
edge_neighbors(edge, s=1)[source]

The edges in the hypergraph which share s node(s) with edge.

Parameters:

  • edge (hashable or Entity) – uid for an edge in the hypergraph, or the edge Entity

  • s (int, optional, default = 1) – Minimum number of nodes shared with the given edge by its neighbors.

Returns:

    List of edge neighbors

Return type:

    list
property edge_props

Dataframe of edge properties, indexed on edge ids.

Return type:

    pd.DataFrame

edge_size_dist()[source]

Returns the size of each edge.

Return type:

    np.array

property edges

Object associated with self._edges.

Return type:

    EntitySet
classmethod from_bipartite(B, set_names=('edges', 'nodes'), name=None, **kwargs)[source]

Static method creates a Hypergraph from a bipartite graph.

Parameters:

  • B (nx.Graph()) – A networkx bipartite graph. Each node in the graph has a property 'bipartite' taking the value 0 or 1, indicating a 2-coloring of the graph.

  • set_names (iterable of length 2, optional, default = ['edges','nodes']) – Category names assigned to the graph nodes associated to each bipartite set

  • name (hashable, optional) –

Return type:

    Hypergraph

Notes

A partition of the nodes in a bipartite graph generates a hypergraph.

>>> import networkx as nx
>>> B = nx.Graph()
>>> B.add_nodes_from([1, 2, 3, 4], bipartite=0)
>>> B.add_nodes_from(['a', 'b', 'c'], bipartite=1)
>>> B.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'),
...                   (3, 'c'), (4, 'a')])
>>> H = Hypergraph.from_bipartite(B)
>>> H.nodes, H.edges
# output: (EntitySet(_:Nodes,[1, 2, 3, 4],{}),
#          EntitySet(_:Edges,['b', 'c', 'a'],{}))
classmethod from_incidence_dataframe(df, columns=None, rows=None, edge_col: str = 'edges', node_col: str = 'nodes', name=None, fillna=0, transpose=False, transforms=[], key=None, return_only_dataframe=False, **kwargs)[source]

Create a hypergraph from a pandas DataFrame whose values equal the incidence matrix of a hypergraph. Its index will identify the nodes and its columns will identify its edges.

Parameters:

  • df (pandas.DataFrame) – a real-valued dataframe with a single index

  • columns ((optional) list, default = None) – restricts df to the columns with headers in this list.

  • rows ((optional) list, default = None) – restricts df to the rows indexed by the elements in this list.

  • name ((optional) string, default = None) –

  • fillna (float, default = 0) – a real value to place in empty cells; all-zero columns will not generate an edge.

  • transpose ((optional) bool, default = False) – option to transpose the dataframe; in this case df.index will identify the edges and df.columns will identify the nodes. Transpose is applied before transforms and key.

  • transforms ((optional) list, default = []) – optional list of transformations to apply to each column of the dataframe using pd.DataFrame.apply(). Transformations are applied in the order they are given (e.g. abs). To apply transforms to rows or for additional functionality, consider transforming df using pandas.DataFrame methods prior to generating the hypergraph.

  • key ((optional) function, default = None) – boolean function applied to the entire dataframe.

  • return_only_dataframe ((optional) bool, default = False) – to use the incidence dataframe with cell_properties or properties, set this to True and use it as the setsystem in the Hypergraph constructor.

See also

from_numpy_array

Return type:

    Hypergraph
classmethod from_incidence_matrix(M, node_names=None, edge_names=None, node_label='nodes', edge_label='edges', name=None, key=None, **kwargs)[source]

Same as from_numpy_array.
classmethod from_numpy_array(M, node_names=None, edge_names=None, node_label='nodes', edge_label='edges', name=None, key=None, **kwargs)[source]

Create a hypergraph from a real-valued matrix represented as a 2-dimensional numpy array. The matrix is converted to a matrix of 0's and 1's so that any truthy cells are converted to 1's and all others to 0's.

Parameters:

  • M (real-valued array-like object, 2 dimensions) – representing a real-valued matrix with rows corresponding to nodes and columns to edges

  • node_names (object, array-like, default=None) – List of node names; must be the same length as M.shape[0]. If None, the node names correspond to row indices with 'v' prepended.

  • edge_names (object, array-like, default=None) – List of edge names; must have the same length as M.shape[1]. If None, the edge names correspond to column indices with 'e' prepended.

  • name (hashable) –

  • key ((optional) function) – boolean function to be evaluated on each cell of the array; must be applicable to numpy.array

Return type:

    Hypergraph

Note

The constructor does not generate empty edges. All-zero columns in M are removed and the names corresponding to these edges are discarded.
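The preprocessing the method and note describe, truthy cells become 1 and all-zero columns (empty edges) are dropped, can be sketched with nested lists rather than numpy to stay dependency-free. The matrix `M` below is a hypothetical example.

```python
# Sketch of from_numpy_array's preprocessing (not the HNX code): binarize
# the matrix, then drop columns with no nonzero entries.
M = [[0.5, 0.0],
     [2.0, 0.0],
     [0.0, 0.0]]

binary = [[1 if cell else 0 for cell in row] for row in M]
keep = [j for j in range(len(binary[0])) if any(row[j] for row in binary)]
clean = [[row[j] for j in keep] for row in binary]
print(clean)  # [[1], [1], [0]] -- the all-zero second column is gone
```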
get_cell_properties(edge: str, node: str, prop_name: str | None = None) → Any | dict[str, Any][source]

Get cell properties on a specified edge and node.

Parameters:

  • edge (str) – edge id

  • node (str) – node id

  • prop_name (str, optional) – name of a cell property; if None, all cell properties will be returned

Returns:

    cell property value if prop_name is provided, otherwise dict of all cell properties and values

Return type:

    int or str or dict of {str: any}
get_linegraph(s=1, edges=True)[source]

Creates an s-linegraph for the hypergraph. If edges=True (default), the edges will be the vertices of the line graph; two vertices are connected by an s-line-graph edge if the corresponding hypergraph edges intersect in at least s hypergraph nodes. If edges=False, the hypergraph nodes will be the vertices of the line graph; two vertices are connected if the nodes they correspond to share at least s incident hyperedges.

Parameters:

  • s (int) – The width of the connections.

  • edges (bool, optional, default = True) – Determines whether edges or nodes will be the vertices in the linegraph.

Returns:

    A NetworkX graph.

Return type:

    nx.Graph
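The edges=True case reduces to a pairwise-intersection test, which can be sketched without networkx (the `edges` dict is a made-up example, not library data):

```python
# Sketch of the s-line graph on hyperedges: link two edges when their
# node sets intersect in at least s nodes.
from itertools import combinations

edges = {"e1": {1, 2}, "e2": {2, 3}, "e3": {4}}

def linegraph_edges(edges, s=1):
    return [(a, b) for a, b in combinations(edges, 2)
            if len(edges[a] & edges[b]) >= s]

print(linegraph_edges(edges))  # [('e1', 'e2')] -- e3 shares no nodes
```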
get_properties(id, level=None, prop_name=None)[source]

Returns an object's specific property or all properties.

Parameters:

  • id (hashable) – edge or node id

  • level (int | None, optional, default = None) – if separate edge and node properties, then enter 0 for edges and 1 for nodes.

  • prop_name (str | None, optional, default = None) – if None, then all properties associated with the object will be returned.

Returns:

    single property or dictionary of properties

Return type:

    str or dict
incidence_dataframe(sort_rows=False, sort_columns=False, cell_weights=True)[source]

Returns a pandas dataframe for the hypergraph indexed by the nodes and with column headers given by the edge names.

Parameters:

  • sort_rows (bool, optional, default = False) – sort rows based on hashable node names

  • sort_columns (bool, optional, default = False) – sort columns based on hashable edge names

  • cell_weights (bool, optional, default = True) –

property incidence_dict

Dictionary keyed by edge uids with values the uids of nodes in each edge.

Return type:

    dict
+incidence_matrix(weights=False, index=False)[source]
+

An incidence matrix for the hypergraph indexed by nodes x edges.

+
+
Parameters:
+
    +
  • weights (bool, default =False) – If False all nonzero entries are 1. +If True and self.static all nonzero entries are filled by +self.edges.cell_weights dictionary values.

  • +
  • index (boolean, optional, default = False) – If True return will include a dictionary of node uid : row number +and edge uid : column number

  • +
+
+
Returns:
+

    +
  • incidence_matrix (scipy.sparse.csr.csr_matrix or np.ndarray)

  • +
  • row_index (list) – index of node ids for rows

  • +
  • col_index (list) – index of edge ids for columns

  • +
+

+
+
+
+ +
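As a concrete illustration of the nodes x edges layout and the optional index dictionaries, here is a plain-Python sketch of the definition (HNX itself returns a scipy sparse matrix; the edge dictionary S is borrowed from the s_components example on this page):

```python
S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}

nodes = sorted({n for members in S.values() for n in members})
edges = sorted(S)
row_index = {n: i for i, n in enumerate(nodes)}   # node uid -> row number
col_index = {e: j for j, e in enumerate(edges)}   # edge uid -> column number

# Dense 0/1 incidence matrix: entry (n, e) is 1 iff node n belongs to edge e
M = [[1 if n in S[e] else 0 for e in edges] for n in nodes]

print(M[row_index[2]])  # [1, 1, 0, 0] — node 2 lies in edges A and B
```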
+
+is_connected(s=1, edges=False)[source]
+

Determines if hypergraph is s-connected.

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = False) – If True, determines if the hypergraph is s-edge-connected. For s=1, s-edge-connected is the same as s-connected.

  • +
+
+
Returns:
+

is_connected

+
+
Return type:
+

boolean

+
+
+

Notes

+

A hypergraph is s-node-connected if for any two nodes v0, vn there exists a sequence of nodes v0, v1, v2, …, v(n-1), vn such that every consecutive pair of nodes v(i), v(i+1) share at least s edges.

+

A hypergraph is s-edge-connected if for any two edges e0, en there exists a sequence of edges e0, e1, e2, …, e(n-1), en such that every consecutive pair of edges e(i), e(i+1) share at least s nodes.

+
+ +
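The node-connectivity notion above amounts to ordinary connectivity of the s-adjacency relation, which can be sketched with a breadth-first search. This is a conceptual sketch of the definition, not the HNX implementation; S is the edge dictionary from the s_components example on this page.

```python
from collections import deque

S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}

# node -> set of edges containing it
memberships = {}
for e, members in S.items():
    for n in members:
        memberships.setdefault(n, set()).add(e)

def is_s_connected(memberships, s=1):
    """BFS over pairs of nodes sharing at least s edges."""
    nodes = list(memberships)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in seen and len(memberships[u] & memberships[v]) >= s:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

print(is_s_connected(memberships))  # False — {1,2,3,4} and {5,6} are separate components
```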
+
+neighbors(node, s=1)[source]
+

The nodes in hypergraph which share s edge(s) with node.

+
+
Parameters:
+
    +
  • node (hashable or Entity) – uid for a node in hypergraph or the node Entity

  • +
  • s (int, list, optional, default = 1) – Minimum number of edges shared by neighbors with node.

  • +
+
+
Returns:
+

neighbors – s-neighbors share at least s edges in the hypergraph

+
+
Return type:
+

list

+
+
+
+ +
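The s-neighbor relation can be sketched directly from the definition (a conceptual sketch, not the HNX implementation; S is the edge dictionary from the s_components example on this page):

```python
S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}

# node -> set of edges containing it
memberships = {}
for e, members in S.items():
    for n in members:
        memberships.setdefault(n, set()).add(e)

def s_neighbors(memberships, node, s=1):
    """Nodes sharing at least s edges with `node`."""
    return sorted(
        v for v in memberships
        if v != node and len(memberships[node] & memberships[v]) >= s
    )

print(s_neighbors(memberships, 2))       # [1, 3, 4]
print(s_neighbors(memberships, 2, s=2))  # [3] — only node 3 shares both A and B with node 2
```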
+
+node_diameters(s=1)[source]
+

Returns the node diameters of the connected components in hypergraph.

+
+
Parameters:

s (int, optional, default = 1) –

Returns:

list of the diameters of the s-components and list of the s-component nodes
+
+
+
+ +
+
+property node_props
+

Dataframe of node properties +indexed on node ids

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+property nodes
+

Object associated with self._nodes.

+
+
Return type:
+

EntitySet

+
+
+
+ +
+
+number_of_edges(edgeset=None)[source]
+

The number of edges in edgeset belonging to hypergraph.

+
+
Parameters:
+

edgeset (an iterable of Entities, optional, default = None) – If None, then return the number of edges in hypergraph.

+
+
Returns:
+

number_of_edges

+
+
Return type:
+

int

+
+
+
+ +
+
+number_of_nodes(nodeset=None)[source]
+

The number of nodes in nodeset belonging to hypergraph.

+
+
Parameters:
+

nodeset (an iterable of Entities, optional, default = None) – If None, then return the number of nodes in hypergraph.

+
+
Returns:
+

number_of_nodes

+
+
Return type:
+

int

+
+
+
+ +
+
+order()[source]
+

The number of nodes in hypergraph.

+
+
Returns:
+

order

+
+
Return type:
+

int

+
+
+
+ +
+
+property properties
+

Returns dataframe of edge and node properties.

+
+
Return type:
+

pd.DataFrame

+
+
+
+ +
+
+remove(keys, level=None, name=None)[source]
+

Creates a new hypergraph with the nodes and/or edges indexed by keys removed. More efficient than restriction for creating a restricted hypergraph when the set being kept is larger than the set being removed.

+
+
Parameters:
+
    +
  • keys (list | tuple | set | Hashable) – node and/or edge id(s) to restrict to

  • +
  • level (int | None, optional) – Enter 0 to remove edges with ids in keys. Enter 1 to remove nodes with ids in keys. If None, all objects in nodes and edges with the id will be removed.

  • +
  • name (str, optional) – Name of new hypergraph

  • +
+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+remove_edges(keys, name=None)[source]
+
+ +
+
+remove_nodes(keys, name=None)[source]
+
+ +
+
+remove_singletons(name=None)[source]
+

Constructs clone of hypergraph with singleton edges removed.

+
+
Returns:
+

new hypergraph

+
+
Return type:
+

Hypergraph

+
+
+
+ +
+
+restrict_to_edges(edges, name=None)[source]
+

New hypergraph obtained by restricting to edges

+
+
Parameters:
+

edges (Iterable) – edgeids to restrict to

+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+restrict_to_nodes(nodes, name=None)[source]
+

New hypergraph obtained by restricting to nodes

+
+
Parameters:
+

nodes (Iterable) – nodeids to restrict to

+
+
Return type:
+

hnx.Hypergraph

+
+
+
+ +
+
+s_component_subgraphs(s=1, edges=True, return_singletons=False, name=None)[source]
+

Returns a generator for the induced subgraphs of s-connected components. Removes singletons unless return_singletons is set to True. Computed using the s-linegraph generated either by the hypergraph (edges=True) or its dual (edges=False).

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = True) – Determines if edge or node components are desired. Returns subgraphs equal to the hypergraph restricted to each set of nodes (edges) in the s-connected components or s-edge-connected components

  • +
  • return_singletons (bool, optional) –

  • +
+
+
Yields:
+

s_component_subgraphs (iterator) – Iterator returns subgraphs generated by the edges (or nodes) in the +s-edge(node) components of hypergraph.

+
+
+
+ +
+
+s_components(s=1, edges=True, return_singletons=True)[source]
+

Same as s_connected_components

+ +
+ +
+
+s_connected_components(s=1, edges=True, return_singletons=False)[source]
+

Returns a generator for the s-edge-connected components +or the s-node-connected components of the hypergraph.

+
+
Parameters:
+
    +
  • s (int, optional, default 1) –

  • +
  • edges (boolean, optional, default = True) – If True will return edge components, if False will return node +components

  • +
  • return_singletons (bool, optional, default = False) –

  • +
+
+
+

Notes

+

If edges=True, this method returns the s-edge-connected components as lists of lists of edge uids. An s-edge-component has the property that for any two edges e1 and e2 there is a sequence of edges starting with e1 and ending with e2 such that pairwise adjacent edges in the sequence intersect in at least s nodes. If s=1 these are the path components of the hypergraph.

+

If edges=False, this method returns the s-node-connected components: a list of sets of uids of the nodes which are s-walk connected. Two nodes v1 and v2 are s-walk-connected if there is a sequence of nodes starting with v1 and ending with v2 such that pairwise adjacent nodes in the sequence share s edges. If s=1 these are the path components of the hypergraph.

+

Example

+
>>> S = {'A':{1,2,3},'B':{2,3,4},'C':{5,6},'D':{6}}
>>> H = Hypergraph(S)
+
+
+
>>> list(H.s_components(edges=True))
[{'C', 'D'}, {'A', 'B'}]
>>> list(H.s_components(edges=False))
[{1, 2, 3, 4}, {5, 6}]
+
+
+
+
Yields:
+

s_connected_components (iterator) – Iterator returns sets of uids of the edges (or nodes) in the +s-edge(node) components of hypergraph.

+
+
+
+ +
+
+set_state(**kwargs)[source]
+

Allow state_dict updates from outside of class. Use with caution.

+
+
Parameters:
+

**kwargs – key=value pairs to save in state dictionary

+
+
+
+ +
+
+property shape
+

(number of nodes, number of edges)

+
+
Return type:
+

tuple

+
+
+
+ +
+
+singletons()[source]
+

Returns a list of singleton edges. A singleton edge is an edge of +size 1 with a node of degree 1.

+
+
Returns:
+

singles – A list of edge uids.

+
+
Return type:
+

list

+
+
+
+ +
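The two conditions (edge of size 1 whose single node has degree 1) can be checked directly. This is a conceptual sketch, not the HNX implementation; edge 'E' is an assumption added to the doc's running example so that a true singleton exists.

```python
# 'E' qualifies (size 1, its node 7 has degree 1); 'D' does not,
# because node 6 also belongs to edge 'C'.
S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}, 'E': {7}}

all_nodes = {n for members in S.values() for n in members}
degree = {n: sum(n in members for members in S.values()) for n in all_nodes}

singles = sorted(
    e for e, members in S.items()
    if len(members) == 1 and degree[next(iter(members))] == 1
)
print(singles)  # ['E']
```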
+
+size(edge, nodeset=None)[source]
+

The number of nodes in nodeset that belong to edge. +If nodeset is None then returns the size of edge

+
+
Parameters:
+

edge (hashable) – The uid of an edge in the hypergraph

+
+
Returns:
+

size

+
+
Return type:
+

int

+
+
+
+ +
+
+toplexes(name=None)[source]
+

Returns a simple hypergraph corresponding to self.

+
+

Warning

+

Collapsing is no longer supported inside the toplexes method. Instead +generate a new collapsed hypergraph and compute the toplexes of the +new hypergraph.

+
+
+
Parameters:
+

name (str, optional, default = None) –

+
+
+
+ +
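A toplex is an edge whose node set is not properly contained in any other edge's node set (see the glossary). A plain-Python sketch of that maximality test, not the HNX implementation, using the doc's running example:

```python
S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}

def toplexes(edge_dict):
    """Edges whose node set is not a proper subset of another edge's."""
    return sorted(
        e for e, members in edge_dict.items()
        if not any(members < other for other in edge_dict.values())
    )

print(toplexes(S))  # ['A', 'B', 'C'] — edge D = {6} sits inside C = {5, 6}
```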
+
+
+ +
+
+ + +
+
+ +
+
+
+
classes — HyperNetX 2.0.4 documentation
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

classes

+
+ +
+
+ + +
+
+ +
+
+
+
HyperNetX Packages — HyperNetX 2.0.4 documentation
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

HyperNetX Packages

+
+ +
+
+ + +
+
+ +
+
+
+
drawing package — HyperNetX 2.0.4 documentation
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

drawing package

+
+

Submodules

+
+
+

drawing.rubber_band module

+
+
+drawing.rubber_band.draw(H, pos=None, with_color=True, with_node_counts=False, with_edge_counts=False, layout=<function spring_layout>, layout_kwargs={}, ax=None, node_radius=None, edges_kwargs={}, nodes_kwargs={}, edge_labels={}, edge_labels_kwargs={}, node_labels={}, node_labels_kwargs={}, with_edge_labels=True, with_node_labels=True, label_alpha=0.35, return_pos=False)[source]
+

Draw a hypergraph as a Matplotlib figure

+

By default this will draw a colorful “rubber band” like hypergraph, where +convex hulls represent edges and are drawn around the nodes they contain.

+

This is a convenience function that wraps calls with sensible parameters to +the following lower-level drawing functions:

+
    +
  • draw_hyper_edges,

  • +
  • draw_hyper_edge_labels,

  • +
  • draw_hyper_labels, and

  • +
  • draw_hyper_nodes

  • +
+

The default layout algorithm is nx.spring_layout, but other layouts can be +passed in. The Hypergraph is converted to a bipartite graph, and the layout +algorithm is passed the bipartite graph.

+

If you have a pre-determined layout, you can pass in a “pos” dictionary. +This is a dictionary mapping from node id’s to x-y coordinates. For example:

+
>>> pos = {
>>>     'A': (0, 0),
>>>     'B': (1, 2),
>>>     'C': (5, -3)
>>> }
+
+
+

will position the nodes {A, B, C} manually at the locations specified. The +coordinate system is in Matplotlib “data coordinates”, and the figure will +be centered within the figure.

+

By default, this will draw in a new figure, but the axis to render in can be +specified using ax.

+

This approach works well for small hypergraphs, and does not guarantee +a rigorously “correct” drawing. Overlapping of sets in the drawing generally +implies that the sets intersect, but sometimes sets overlap if there is no +intersection. It is not possible, in general, to draw a “correct” hypergraph +this way for an arbitrary hypergraph, in the same way that not all graphs +have planar drawings.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • with_color (bool) – set to False to disable color cycling of edges

  • +
  • with_node_counts (bool) – set to True to replace the label for collapsed nodes with the number of elements

  • +
  • with_edge_counts (bool) – set to True to label collapsed edges with number of elements

  • +
  • layout (function) – layout algorithm to compute

  • +
  • layout_kwargs (dict) – keyword arguments passed to layout function

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • edges_kwargs (dict) – keyword arguments passed to matplotlib.collections.PolyCollection for edges

  • +
  • node_radius (None, int, float, or dict) – radius of all nodes, or dictionary of node:value; the default (None) calculates radius based on number of collapsed nodes; reasonable values range between 1 and 3

  • +
  • nodes_kwargs (dict) – keyword arguments passed to matplotlib.collections.PolyCollection for nodes

  • +
  • edge_labels_kwargs (dict) – keyword arguments passed to matplotlib.annotate for edge labels

  • +
  • node_labels_kwargs (dict) – keyword arguments passed to matplotlib.annotate for node labels

  • +
  • with_edge_labels (bool) – set to False to make edge labels invisible

  • +
  • with_node_labels (bool) – set to False to make node labels invisible

  • +
  • label_alpha (float) – the transparency (alpha) of the box behind text drawn in the figure

  • +
+
+
+
+ +
+
+drawing.rubber_band.draw_hyper_edge_labels(H, polys, labels={}, ax=None, **kwargs)[source]
+

Draws a label on the hyper edge boundary.

+

Should be passed Matplotlib PolyCollection representing the hyper-edges, see +the return value of draw_hyper_edges.

+

The label will be drawn on the least curvy part of the polygon, and will be aligned parallel to the orientation of the polygon where it is drawn.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • polys (PolyCollection) – collection of polygons returned by draw_hyper_edges

  • +
  • labels (dict) – mapping of node id to string label

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • kwargs (dict) – Keyword arguments are passed through to Matplotlib’s annotate function.

  • +
+
+
+
+ +
+
+drawing.rubber_band.draw_hyper_edges(H, pos, ax=None, node_radius={}, dr=None, **kwargs)[source]
+

Draws a convex hull around the nodes contained within each edge in H

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • node_radius (dict) – mapping of node to R^1 (radius of each node)

  • +
  • dr (float) – the spacing between concentric rings

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • kwargs (dict) – keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor

  • +
+
+
Returns:
+

a Matplotlib PolyCollection that can be further styled

+
+
Return type:
+

PolyCollection

+
+
+
+ +
+
+drawing.rubber_band.draw_hyper_labels(H, pos, node_radius={}, ax=None, labels={}, **kwargs)[source]
+

Draws text labels for the hypergraph nodes.

+

The label is drawn to the right of the node. The node radius is needed (see +draw_hyper_nodes) so the text can be offset appropriately as the node size +changes.

+

The text label can be customized by passing in a dictionary, labels, mapping +a node to its custom label. By default, the label is the string +representation of the node.

+

Keyword arguments are passed through to Matplotlib’s annotate function.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • node_radius (dict) – mapping of node to R^1 (radius of each node)

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • labels (dict) – mapping of node to text label

  • +
  • kwargs (dict) – keyword arguments passed to matplotlib.annotate

  • +
+
+
+
+ +
+
+drawing.rubber_band.draw_hyper_nodes(H, pos, node_radius={}, r0=None, ax=None, **kwargs)[source]
+

Draws a circle for each node in H.

+

The position of each node is specified by a dictionary/list-like, pos, where pos[v] is the xy-coordinate for the vertex. The radius of each node can be specified as a dictionary where node_radius[v] is the radius. If a node is missing from this dictionary, or the node_radius is not specified at all, a sensible default radius is chosen based on distances between nodes given by pos.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • node_radius (dict) – mapping of node to R^1 (radius of each node)

  • +
  • r0 (float) – minimum distance that concentric rings start from the node position

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • kwargs (dict) – keyword arguments, e.g., linewidth, facecolors, are passed through to the PolyCollection constructor

  • +
+
+
Returns:
+

a Matplotlib PolyCollection that can be further styled

+
+
Return type:
+

PolyCollection

+
+
+
+ +
+
+drawing.rubber_band.get_default_radius(H, pos)[source]
+

Calculate a reasonable default node radius

+

This function iterates over the hyper edges and finds the most distant pair of points given the positions provided. Then, the node radius is a fraction of the median of this distance taken across all hyper-edges.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
+
+
Returns:
+

the recommended radius

+
+
Return type:
+

float

+
+
+
+ +
+
+drawing.rubber_band.layout_hyper_edges(H, pos, node_radius={}, dr=None)[source]
+

Draws a convex hull for each edge in H.

+

Position of the nodes in the graph is specified by the position dictionary, +pos. Convex hulls are spaced out such that if one set contains another, the +convex hull will surround the contained set. The amount of spacing added +between hulls is specified by the parameter, dr.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • node_radius (dict) – mapping of node to R^1 (radius of each node)

  • +
  • dr (float) – the spacing between concentric rings

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
+
+
Returns:
+

A mapping from hyper edge ids to paths (Nx2 numpy matrices)

+
+
Return type:
+

dict

+
+
+
+ +
+ +

Helper function to use a NetworkX-like graph layout algorithm on a Hypergraph

+

The hypergraph is converted to a bipartite graph, allowing the usual graph layout +techniques to be applied.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • layout (function) – the layout algorithm which accepts a NetworkX graph and keyword arguments

  • +
  • kwargs (dict) – Keyword arguments are passed through to the layout algorithm

  • +
+
+
Returns:
+

mapping of node and edge positions to R^2

+
+
Return type:
+

dict

+
+
+
+ +
+
+

drawing.two_column module

+
+
+drawing.two_column.draw(H, with_node_labels=True, with_edge_labels=True, with_node_counts=False, with_edge_counts=False, with_color=True, edge_kwargs=None, ax=None)[source]
+

Draw a hypergraph using a two-column layout.

+

This is intended to reproduce an illustrative technique for bipartite graphs and hypergraphs that is typically used in papers and textbooks.

+

The left column is reserved for nodes and the right column is reserved for edges. A line is drawn between a node and an edge.

+

The order of nodes and edges is optimized to reduce line crossings between +the two columns. Spacing between disconnected components is adjusted to make +the diagram easier to read, by reducing the angle of the lines.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • with_node_labels (bool) – False to disable node labels

  • +
  • with_edge_labels (bool) – False to disable edge labels

  • +
  • with_node_counts (bool) – set to True to label collapsed nodes with number of elements

  • +
  • with_edge_counts (bool) – set to True to label collapsed edges with number of elements

  • +
  • with_color (bool) – set to False to disable color cycling of hyper edges

  • +
  • edge_kwargs (dict) – keyword arguments to pass to matplotlib.LineCollection

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
+
+
+
+ +
+
+drawing.two_column.draw_hyper_edges(H, pos, ax=None, **kwargs)[source]
+

Renders hyper edges for the two column layout.

+

Each node-hyper edge membership is rendered as a line connecting the node +in the left column to the edge in the right column.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • kwargs (dict) – keyword arguments passed to matplotlib.LineCollection

  • +
+
+
Returns:
+

the hyper edges

+
+
Return type:
+

LineCollection

+
+
+
+ +
+
+drawing.two_column.draw_hyper_labels(H, pos, labels={}, with_node_labels=True, with_edge_labels=True, ax=None)[source]
+

Renders hyper labels (nodes and edges) for the two column layout.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • labels (dict) – custom labels for nodes and edges can be supplied

  • +
  • with_node_labels (bool) – False to disable node labels

  • +
  • with_edge_labels (bool) – False to disable edge labels

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • kwargs (dict) – keyword arguments passed to matplotlib.LineCollection

  • +
+
+
+
+ +
+
+drawing.two_column.layout_two_column(H, spacing=2)[source]
+

Two column (bipartite) layout algorithm.

+

This algorithm first converts the hypergraph into a bipartite graph and then computes connected components. Disconnected components are handled independently and then stacked together.

+

Within a connected component, the spectral ordering of the bipartite graph +provides a quick and dirty ordering that minimizes edge crossings in the +diagram.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • spacing (float) – amount of whitespace between disconnected components

  • +
+
+
+
+ +
+
+

drawing.util module

+
+
+drawing.util.get_collapsed_size(v)[source]
+
+ +
+
+drawing.util.get_frozenset_label(S, count=False, override={})[source]
+

Helper function for rendering the labels of possibly collapsed nodes and edges

+
+
Parameters:
+
    +
  • S (iterable) – list of entities to be labeled

  • +
  • count (bool) – True if labels should be counts of entities instead of list

  • +
+
+
Returns:
+

mapping of entity to its string representation

+
+
Return type:
+

dict

+
+
+
+ +
+
+drawing.util.get_line_graph(H, collapse=True)[source]
+

Computes the line graph, a directed graph, where a directed edge (u, v) +exists if the edge u is a subset of the edge v in the hypergraph.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • collapse (bool) – True if edges should be added if hyper edges are identical

  • +
+
+
Returns:
+

A directed graph

+
+
Return type:
+

networkx.DiGraph

+
+
+
+ +
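The containment relation underlying this line graph can be sketched in plain Python (a conceptual sketch, not the HNX implementation, which returns a networkx.DiGraph; strict subset is assumed here, and the edge dictionary S is a hypothetical example chosen to show nesting):

```python
S = {'A': {1, 2, 3}, 'B': {2, 3}, 'C': {3}, 'D': {4, 5}}

# Arc (u, v) whenever edge u's node set is strictly contained in edge v's.
arcs = sorted(
    (u, v)
    for u in S for v in S
    if S[u] < S[v]
)
print(arcs)  # [('B', 'A'), ('C', 'A'), ('C', 'B')] — D is nested in nothing
```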
+
+drawing.util.get_set_layering(H, collapse=True)[source]
+

Computes a layering of the edges in the hyper graph.

+

In this layering, each edge is assigned a level. An edge u will be above +(e.g., have a smaller level value) another edge v if v is a subset of u.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • collapse (bool) – True if edges should be added if hyper edges are identical

  • +
+
+
Returns:
+

a mapping of vertices in H to integer levels

+
+
Return type:
+

dict

+
+
+
+ +
+
+drawing.util.inflate(items, v)[source]
+
+ +
+
+drawing.util.inflate_kwargs(items, kwargs)[source]
+

Helper function to expand keyword arguments.

+
+
Parameters:
+
    +
  • n (int) – length of resulting list if argument is expanded

  • +
  • kwargs (dict) – keyword arguments to be expanded

  • +
+
+
Returns:
+

dictionary with same keys as kwargs and whose values are lists of length n

+
+
Return type:
+

dict

+
+
+
+ +
+
+drawing.util.transpose_inflated_kwargs(inflated)[source]
+
+ +
+
+

Module contents

+
+
+drawing.draw(H, pos=None, with_color=True, with_node_counts=False, with_edge_counts=False, layout=<function spring_layout>, layout_kwargs={}, ax=None, node_radius=None, edges_kwargs={}, nodes_kwargs={}, edge_labels={}, edge_labels_kwargs={}, node_labels={}, node_labels_kwargs={}, with_edge_labels=True, with_node_labels=True, label_alpha=0.35, return_pos=False)[source]
+

Draw a hypergraph as a Matplotlib figure

+

By default this will draw a colorful “rubber band” like hypergraph, where +convex hulls represent edges and are drawn around the nodes they contain.

+

This is a convenience function that wraps calls with sensible parameters to +the following lower-level drawing functions:

+
    +
  • draw_hyper_edges,

  • +
  • draw_hyper_edge_labels,

  • +
  • draw_hyper_labels, and

  • +
  • draw_hyper_nodes

  • +
+

The default layout algorithm is nx.spring_layout, but other layouts can be +passed in. The Hypergraph is converted to a bipartite graph, and the layout +algorithm is passed the bipartite graph.

+

If you have a pre-determined layout, you can pass in a “pos” dictionary. +This is a dictionary mapping from node id’s to x-y coordinates. For example:

+
>>> pos = {
>>>     'A': (0, 0),
>>>     'B': (1, 2),
>>>     'C': (5, -3)
>>> }
+
+
+

will position the nodes {A, B, C} manually at the locations specified. The +coordinate system is in Matplotlib “data coordinates”, and the figure will +be centered within the figure.

+

By default, this will draw in a new figure, but the axis to render in can be +specified using ax.

+

This approach works well for small hypergraphs, and does not guarantee +a rigorously “correct” drawing. Overlapping of sets in the drawing generally +implies that the sets intersect, but sometimes sets overlap if there is no +intersection. It is not possible, in general, to draw a “correct” hypergraph +this way for an arbitrary hypergraph, in the same way that not all graphs +have planar drawings.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • pos (dict) – mapping of node and edge positions to R^2

  • +
  • with_color (bool) – set to False to disable color cycling of edges

  • +
  • with_node_counts (bool) – set to True to replace the label for collapsed nodes with the number of elements

  • +
  • with_edge_counts (bool) – set to True to label collapsed edges with number of elements

  • +
  • layout (function) – layout algorithm to compute

  • +
  • layout_kwargs (dict) – keyword arguments passed to layout function

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
  • edges_kwargs (dict) – keyword arguments passed to matplotlib.collections.PolyCollection for edges

  • +
  • node_radius (None, int, float, or dict) – radius of all nodes, or dictionary of node:value; the default (None) calculates radius based on number of collapsed nodes; reasonable values range between 1 and 3

  • +
  • nodes_kwargs (dict) – keyword arguments passed to matplotlib.collections.PolyCollection for nodes

  • +
  • edge_labels_kwargs (dict) – keyword arguments passed to matplotlib.annotate for edge labels

  • +
  • node_labels_kwargs (dict) – keyword arguments passed to matplotlib.annotate for node labels

  • +
  • with_edge_labels (bool) – set to False to make edge labels invisible

  • +
  • with_node_labels (bool) – set to False to make node labels invisible

  • +
  • label_alpha (float) – the transparency (alpha) of the box behind text drawn in the figure

  • +
+
+
+
+ +
+
+drawing.draw_two_column(H, with_node_labels=True, with_edge_labels=True, with_node_counts=False, with_edge_counts=False, with_color=True, edge_kwargs=None, ax=None)
+

Draw a hypergraph using a two-column layout.

+

This is intended to reproduce an illustrative technique for bipartite graphs and hypergraphs that is typically used in papers and textbooks.

+

The left column is reserved for nodes and the right column is reserved for edges. A line is drawn between a node and an edge.

+

The order of nodes and edges is optimized to reduce line crossings between +the two columns. Spacing between disconnected components is adjusted to make +the diagram easier to read, by reducing the angle of the lines.

+
+
Parameters:
+
    +
  • H (Hypergraph) – the entity to be drawn

  • +
  • with_node_labels (bool) – False to disable node labels

  • +
  • with_edge_labels (bool) – False to disable edge labels

  • +
  • with_node_counts (bool) – set to True to label collapsed nodes with number of elements

  • +
  • with_edge_counts (bool) – set to True to label collapsed edges with number of elements

  • +
  • with_color (bool) – set to False to disable color cycling of hyper edges

  • +
  • edge_kwargs (dict) – keyword arguments to pass to matplotlib.LineCollection

  • +
  • ax (Axis) – matplotlib axis on which the plot is rendered

  • +
+
+
+
+ +
+
+ + +
+
+ +
+
+
+
drawing — HyperNetX 2.0.4 documentation
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + +

Index

+ +
+
+
+ +
+ +
+

© Copyright 2023 Battelle Memorial Institute.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
Glossary of HNX terms — HyperNetX 2.0.4 documentation
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Glossary of HNX terms

+

The HNX library centers around the idea of a hypergraph. This glossary provides a few key terms and definitions.

+
+
degree

Given a hypergraph (Nodes, Edges, Incidence), the degree of a node in Nodes is the number of edges in Edges to which the node is incident. +See also: s-degree

+
+
dual

The dual of a hypergraph (Nodes, Edges, Incidence) switches the roles of Nodes and Edges. More precisely, it is the hypergraph (Edges, Nodes, Incidence’), where Incidence’ is the function that assigns Incidence(n,e) to each pair (e,n). The incidence matrix of the dual hypergraph is the transpose of the incidence matrix of (Nodes, Edges, Incidence).

+
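With edges stored as a dict of node sets, taking the dual amounts to inverting the dictionary; this is a conceptual sketch of the definition, not the HNX implementation, using the edge dictionary from the Hypergraph examples elsewhere in these docs:

```python
S = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {5, 6}, 'D': {6}}

# Invert: each node becomes a dual edge containing the edges incident to it.
dual = {}
for e, members in S.items():
    for n in members:
        dual.setdefault(n, set()).add(e)

print(dual[6])  # node 6 becomes a dual edge containing 'C' and 'D'
```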
+
edge nodes (aka edge elements)

The nodes (or elements) of an edge e in a hypergraph (Nodes, Edges, Incidence) are the nodes that are incident to e.

+
+
Entity and Entity set

Class in entity.py. HNX stores many of its data structures inside objects of type Entity. Entities help to ensure safe behavior, but their use is primarily technical, not mathematical.

+
+
hypergraph

The term hypergraph can have many different meanings. In HNX, it means a tuple (Nodes, Edges, Incidence), where Nodes and Edges are sets, and Incidence is a function that assigns a value of True or False to every pair (n,e) in the Cartesian product Nodes x Edges. We call
- Nodes the set of nodes
- Edges the set of edges
- Incidence the incidence function
Note: another term for this type of object is a multihypergraph. The ability to work with multihypergraphs efficiently is a distinguishing feature of HNX!

+
+
incidence

A node n is incident to an edge e in a hypergraph (Nodes, Edges, Incidence) if Incidence(n,e) = True.

+
+
incidence matrix

A rectangular matrix constructed from a hypergraph (Nodes, Edges, Incidence) where the elements of Nodes index the matrix rows, and the elements of Edges index the matrix columns. Entry (n,e) in the incidence matrix is 1 if n and e are incident, and is 0 otherwise.
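To make the definition concrete, here is a small plain-Python sketch — independent of the HNX API, with purely illustrative edge names — that builds the incidence matrix of a toy set system:

```python
# A small set system: edge ids mapped to the nodes they contain.
edges = {'e1': {1, 2}, 'e2': {1, 2}, 'e3': {1, 2, 3}}

nodes = sorted(set().union(*edges.values()))   # row labels: [1, 2, 3]
edge_ids = sorted(edges)                       # column labels: ['e1', 'e2', 'e3']

# Entry is 1 when the node is incident to the edge, 0 otherwise.
I = [[1 if n in edges[e] else 0 for e in edge_ids] for n in nodes]

for n, row in zip(nodes, I):
    print(n, row)
```

Node 3 belongs only to e3, so its row is [0, 0, 1], while nodes 1 and 2 belong to all three edges.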

+
+
simple hypergraph

A hypergraph for which no edge is completely contained in another.

+
+
subhypergraph

A subhypergraph of a hypergraph (Nodes, Edges, Incidence) is a hypergraph (Nodes’, Edges’, Incidence’) such that Nodes’ is a subset of Nodes, Edges’ is a subset of Edges, and every incident pair (n,e) in (Nodes’, Edges’, Incidence’) is also incident in (Nodes, Edges, Incidence)

+
+
subhypergraph induced by a set of nodes

An induced subhypergraph of a hypergraph (Nodes, Edges, Incidence) is a subhypergraph (Nodes’, Edges’, Incidence’) where a pair (n,e) is incident if and only if it is incident in (Nodes, Edges, Incidence)

+
+
toplex

A toplex in a hypergraph (Nodes, Edges, Incidence) is an edge e whose node set isn’t properly contained in the node set of any other edge. That is, if f is another edge and every node incident to e is also incident to f, then the node sets of e and f are identical.

+
+
+
+

S-line graphs

+

HNX offers a variety of tool sets for network analysis, including s-line graphs.

+
+
+
s-adjacency matrix

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a square matrix where the elements of Nodes index both rows and columns. The matrix can be weighted or unweighted. Entry (i,j) is nonzero if and only if node i and node j are incident to at least s edges in common. If it is nonzero, then it is equal to the number of shared edges (if weighted) or 1 (if unweighted).
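The definition can be checked on a toy set system in plain Python; this is a sketch of the mathematics, not the HNX implementation:

```python
from itertools import combinations

edges = {'e1': {1, 2}, 'e2': {1, 2}, 'e3': {1, 2, 3}}
nodes = sorted(set().union(*edges.values()))
s = 2

# Weighted s-adjacency: an entry is the number of shared edges when that
# number is at least s; pairs sharing fewer than s edges are omitted (zero).
adj = {}
for u, v in combinations(nodes, 2):
    shared = sum(1 for e in edges.values() if u in e and v in e)
    if shared >= s:
        adj[(u, v)] = shared

print(adj)
```

Nodes 1 and 2 co-occur in all three edges, so only that pair survives the s = 2 threshold, with weight 3.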

+
+
s-edge-adjacency matrix

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a square matrix where the elements of Edges index both rows and columns. The matrix can be weighted or unweighted. Entry (i,j) is nonzero if and only if edge i and edge j share at least s nodes, and is equal to the number of shared nodes (if weighted) or 1 (if unweighted).

+
+
s-auxiliary matrix

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the submatrix of the s-edge-adjacency matrix obtained by restricting to rows and columns corresponding to edges of size at least s.

+
+
s-node-walk

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a sequence of nodes in Nodes such that each successive pair of nodes share at least s edges in Edges.

+
+
s-edge-walk

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, a sequence of edges in Edges such that each successive pair of edges intersects in at least s nodes in Nodes.

+
+
s-walk

Either an s-node-walk or an s-edge-walk.

+
+
s-connected component, s-node-connected component

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-connected component is a subhypergraph induced by a subset of Nodes with the property that there exists an s-walk between every pair of nodes in this subset. An s-connected component is the maximal such subset in the sense that it is not properly contained in any other subset satisfying this property.

+
+
s-edge-connected component

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-edge-connected component is a subhypergraph induced by a subset of Edges with the property that there exists an s-edge-walk between every pair of edges in this subset. An s-edge-connected component is the maximal such subset in the sense that it is not properly contained in any other subset satisfying this property.

+
+
s-connected, s-node-connected

A hypergraph is s-connected if it has exactly one s-connected component.

+
+
s-edge-connected

A hypergraph is s-edge-connected if it has exactly one s-edge-connected component.

+
+
s-distance

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-distance between two nodes in Nodes is the length of the shortest s-node-walk between them. If no s-node-walk between the pair of nodes exists, the s-distance between them is infinite. The s-distance between edges is the length of the shortest s-edge-walk between them. If no s-edge-walk between the pair of edges exists, the s-distance between them is infinite.
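For example, s-distances between edges can be computed by breadth-first search over the edge s-linegraph; the following is a plain-Python sketch, not the HNX implementation:

```python
from collections import deque

# Toy hypergraph: edge ids mapped to node sets.
edges = {'a': {1, 2, 3}, 'b': {2, 3, 4}, 'c': {4, 5, 6}, 'd': {5, 6, 7}}

def s_distance(src, dst, edges, s):
    """Length of the shortest s-edge-walk from src to dst (inf if none)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        e = queue.popleft()
        if e == dst:
            return dist[e]
        # Neighbors in the edge s-linegraph: edges sharing >= s nodes with e.
        for f in edges:
            if f not in dist and len(edges[e] & edges[f]) >= s:
                dist[f] = dist[e] + 1
                queue.append(f)
    return float('inf')

print(s_distance('a', 'd', edges, 1))  # reachable via a-b-c-d
print(s_distance('a', 'd', edges, 2))  # b and c share only one node
```

Raising s from 1 to 2 disconnects the walk at the b–c step, so the 2-distance becomes infinite.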

+
+
s-diameter

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-diameter is the maximum s-distance over all pairs of nodes in Nodes.

+
+
s-degree

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, the s-degree of a node is the number of edges in Edges of size at least s to which the node belongs. See also: degree

+
+
s-edge

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-edge is any edge of size at least s.

+
+
s-linegraph

For a hypergraph (Nodes, Edges, Incidence) and positive integer s, an s-linegraph is a graph representing the node-to-node or edge-to-edge connections according to the width s of the connections. The node s-linegraph is a graph on the set Nodes. Two nodes in Nodes are incident in the node s-linegraph if they share at least s incident edges in Edges; that is, there are at least s elements of Edges to which they both belong. The edge s-linegraph is a graph on the set Edges. Two edges in Edges are incident in the edge s-linegraph if they share at least s incident nodes in Nodes; that is, the edges intersect in at least s nodes in Nodes.

+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/hypconstructors.html b/hypconstructors.html new file mode 100644 index 00000000..ee00a4c6 --- /dev/null +++ b/hypconstructors.html @@ -0,0 +1,342 @@ + + + + + + + Hypergraph Constructors — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Hypergraph Constructors

+

An hnx.Hypergraph H = (V,E) references a pair of disjoint sets: +V = nodes (vertices) and E = (hyper)edges.

+

HNX allows for multi-edges by distinguishing edges by +their identifiers instead of their contents. For example, if +V = {1,2,3} and E = {e1,e2,e3}, +where e1 = {1,2}, e2 = {1,2}, and e3 = {1,2,3}, +the edges e1 and e2 contain the same set of nodes and yet +are distinct and are distinguishable within H = (V,E).

+

HNX provides methods to easily store and +access additional metadata such as cell, edge, and node weights. +Metadata associated with (edge,node) incidences +are referenced as cell_properties. +Metadata associated with a single edge or node is referenced +as its properties.

+

The fundamental object needed to create a hypergraph is a setsystem. The setsystem defines the many-to-many relationships between edges and nodes in the hypergraph. Cell properties for the incidence pairs can be defined within the setsystem or in a separate pandas.DataFrame or dict. Edge and node properties are defined with a pandas.DataFrame or dict.

+
+

SetSystems

+

There are five types of setsystems currently accepted by the library.

+
    +
1. iterable of iterables : Barebones hypergraph, which uses Pandas default indexing to generate hyperedge ids. Elements must be hashable:

    +
    >>> H = Hypergraph([{1,2},{1,2},{1,2,3}])
    +
    +
    +
2. dictionary of iterables : The most basic way to express many-to-many relationships providing edge ids. The elements of the iterables must be hashable:

    +
    >>> H = Hypergraph({'e1':[1,2],'e2':[1,2],'e3':[1,2,3]})
    +
    +
    +
3. dictionary of dictionaries : allows cell properties to be assigned to a specific (edge, node) incidence. This is particularly useful when there are variable length dictionaries assigned to each pair:

    +
    >>> d = {'e1':{ 1: {'w':0.5, 'name': 'related_to'},
    +>>>             2: {'w':0.1, 'name': 'related_to',
    +>>>                 'startdate': '05.13.2020'}},
    +>>>      'e2':{ 1: {'w':0.52, 'name': 'owned_by'},
    +>>>             2: {'w':0.2}},
    +>>>      'e3':{ 1: {'w':0.5, 'name': 'related_to'},
    +>>>             2: {'w':0.2, 'name': 'owner_of'},
+>>>             3: {'w':1, 'type': 'relationship'}}}
    +
    +
    +
    >>> H = Hypergraph(d, cell_weight_col='w')
    +
    +
    +
4. pandas.DataFrame : For large datasets and for datasets with cell properties it is most efficient to construct a hypergraph directly from a pandas.DataFrame. Incidence pairs are in the first two columns. Cell properties shared by all incidence pairs can be placed in their own column of the dataframe. Variable length dictionaries of cell properties particular to only some of the incidence pairs may be placed in a single column of the dataframe. Representing the data above as a dataframe df:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

col1   col2   w      col3
e1     1      0.5    {'name':'related_to'}
e1     2      0.1    {'name':'related_to', 'startdate':'05.13.2020'}
e2     1      0.52   {'name':'owned_by'}
e2     2      0.2    {…}

    +

    The first row of the dataframe is used to reference each column.

    +
    >>> H = Hypergraph(df,edge_col="col1",node_col="col2",
    +>>>                 cell_weight_col="w",misc_cell_properties="col3")
    +
    +
    +
5. numpy.ndarray : For homogeneous datasets given in an n x 2 ndarray, a pandas dataframe is generated and column names are added from the edge_col and node_col arguments. Cell properties containing multiple data types are added with a separate dataframe or dict and passed through the cell_properties keyword.

    +
    >>> arr = np.array([['e1','1'],['e1','2'],
    +>>>                 ['e2','1'],['e2','2'],
    +>>>                 ['e3','1'],['e3','2'],['e3','3']])
    +>>> H = hnx.Hypergraph(arr, column_names=['col1','col2'])
    +
    +
    +
+
+
+

Edge and Node Properties

+

Properties specific to edges and/or nodes can be passed through the keywords: edge_properties, node_properties, properties. Properties may be passed as dataframes or dicts. The first column or index of the dataframe, or the keys of the dict, correspond to the edge and/or node identifiers. If properties are specific to an id, they may be stored in a single object and passed to the properties keyword. For example:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

id     weight   properties
e1     5.0      {'type':'event'}
e2     0.52     {'name':'owned_by'}
1      1.2      {'color':'red'}
2      .003     {'name':'Fido','color':'brown'}
3      1.0      {}

+

A properties dictionary should have the format:

+
dp = {id1 : {prop1: val1, prop2: val2, ...}, id2 : ... }
+
+
+

A properties dataframe may be used for nodes and edges sharing ids +but differing in cell properties by adding a level index using 0 +for edges and 1 for nodes:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

level   id     weight   properties
0       e1     5.0      {'type':'event'}
0       e2     0.52     {'name':'owned_by'}
1       1      1.2      {'color':'red'}
1       2      .003     {'name':'Fido','color':'brown'}

+
+
+

Weights

+

The default key for cell and object weights is “weight”. The default value +is 1. Weights may be assigned and/or a new default prescribed in the +constructor using cell_weight_col and cell_weights for incidence pairs, +and using edge_weight_prop, node_weight_prop, weight_prop, +default_edge_weight, and default_node_weight for node and edge weights.

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/hypergraph101.html b/hypergraph101.html new file mode 100644 index 00000000..91d4dc0e --- /dev/null +++ b/hypergraph101.html @@ -0,0 +1,558 @@ + + + + + + + A Gentle Introduction to Hypergraph Mathematics — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

A Gentle Introduction to Hypergraph Mathematics

+

Here we gently introduce some of the basic concepts in hypergraph modeling. We note that in order to maintain this “gentleness”, we will be mostly avoiding the very important and legitimate issues in the proper mathematical foundations of hypergraphs and closely related structures, which can be very complicated. Rather we will be focusing on only the most common cases used in most real modeling, and call a graph or hypergraph gentle when it is loopless, simple, finite, connected, and lacking empty hyperedges, isolated vertices, labels, weights, or attributes. Additionally, the deep connections between hypergraphs and other critical mathematical objects like partial orders, finite topologies, and topological complexes will also be treated elsewhere. When it comes up, below we will sometimes refer to the added complexities which would attend if we weren’t being so “gentle”. In general the reader is referred to [1,2] for a less gentle and more comprehensive treatment.

+
+

Graphs and Hypergraphs

+

Network science is based on the concept of a graph +\(G=\langle V,E\rangle\) as a system of connections between +entities. \(V\) is a (typically finite) set of elements, nodes, or +objects, which we formally call “vertices”, and \(E\) is a set +of pairs of vertices. Given that, then for two vertices +\(u,v \in V\), an edge is a set \(e=\{u,v\}\) in \(E\), +indicating that there is a connection between \(u\) and \(v\). +It is then common to represent \(G\) as either a Boolean adjacency +matrix \(A_{n \times n}\) where \(n=|V|\), where an +\(i,j\) entry in \(A\) is 1 if \(v_i,v_j\) are connected in +\(G\); or as an incidence matrix \(I_{n \times m}\), where +now also \(m=|E|\), and an \(i,j\) entry in \(I\) is now 1 +if the vertex \(v_i\) is in edge \(e_j\).

+
+_images/exgraph.png +
+

Fig. 1 An example graph, where the numbers are edge IDs.

+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1 Adjacency matrix \(A\) of a graph.

            Andrews   Bailey   Carter   Davis
Andrews        0         1        1       1
Bailey         1         0        1       0
Carter         1         1        0       1
Davis          1         0        1       0

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2 Incidence matrix \(I\) of a graph.

            1   2   3   4   5
Andrews     1   1   0   1   0
Bailey      0   0   0   1   1
Carter      0   1   1   0   1
Davis       1   0   1   0   0

+
+_images/biblio_hg.png +
+

Fig. 2 An example hypergraph, where similarly now the hyperedges are shown with numeric IDs.

+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 3 Incidence matrix I of a hypergraph.

            1   2   3   4   5
Andrews     1   1   0   1   0
Bailey      0   0   0   1   1
Carter      0   1   0   0   1
Davis       1   1   1   0   0

+

Notice that in the incidence matrix \(I\) of a gentle graph +\(G\), it is necessarily the case that every column must have +precisely two 1 entries, reflecting that every edge connects exactly two +vertices. The move to a hypergraph \(H=\langle V,E\rangle\) +relaxes this requirement, in that now a hyperedge (although we will +still say edge when clear from context) \(e \in E\) is a subset +\(e = \{ v_1, v_2, \ldots, v_k\} \subseteq V\) of vertices of +arbitrary size. We call \(e\) a \(k\)-edge when \(|e|=k\). +Note that thereby a 2-edge is a graph edge, while both a singleton +\(e=\{v\}\) and a 3-edge \(e=\{v_1,v_2,v_3\}\), 4-edge +\(e=\{v_1,v_2,v_3,v_4\}\), etc., are all hypergraph edges. In this +way, if every edge in a hypergraph \(H\) happens to be a 2-edge, +then \(H\) is a graph. We call such a hypergraph 2-uniform.

+

Our incidence matrix \(I\) is now very much like that for a graph, +but the requirement that each column have exactly two 1 entries is +relaxed: the column for edge \(e\) with size \(k\) will have +\(k\) 1’s. Thus \(I\) is now a general Boolean matrix (although +with some restrictions when \(H\) is gentle).

+

Notice also that in the examples we’re showing in the figures, the graph +is closely related to the hypergraph. In fact, this particular graph is +the 2-section or underlying graph of the hypergraph. It is the +graph \(G\) recorded when only the pairwise connections in the +hypergraph \(H\) are recognized. Note that while the 2-section is +always determined by the hypergraph, and is frequently used as a +simplified representation, it almost never has enough information to be +able to recover the hypergraph from it.
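The 2-section can be computed from a set system in a few lines of plain Python; this is an illustrative sketch, not the HNX method:

```python
from itertools import combinations

# Toy hypergraph as a dict of hyperedges (names are illustrative).
H = {'e1': {'A', 'D'}, 'e2': {'A', 'C', 'D'}, 'e3': {'C', 'D'}}

# The 2-section keeps every pairwise connection that occurs
# inside some hyperedge, forgetting which hyperedge it came from.
two_section = set()
for members in H.values():
    for u, v in combinations(sorted(members), 2):
        two_section.add((u, v))

print(sorted(two_section))
```

Note the information loss: e1 is swallowed by e2’s pairs, so the original hypergraph cannot be recovered from the 2-section.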

+
+
+

Important Things About Hypergraphs

+

While all graphs \(G\) are (2-uniform) hypergraphs \(H\), since +they’re very special cases, general hypergraphs have some important +properties which really stand out in distinction, especially to those +already conversant with graphs. The following issues are critical for +hypergraphs, but “disappear” when considering the special case of +2-uniform hypergraphs which are graphs.

+
+

All Hypergraphs Come in Dual Pairs

+

If our incidence matrix \(I\) is a general \(n \times m\) +Boolean matrix, then its transpose \(I^T\) is an \(m \times n\) +Boolean matrix. In fact, \(I^T\) is also the incidence matrix of a +different hypergraph called the dual hypergraph \(H^*\) of +\(H\). In the dual \(H^*\), it’s just that vertices and edges +are swapped: we now have \(H^* = \langle E, V \rangle\) where it’s +\(E\) that is a set of vertices, and the now edges +\(v \in V, v \subseteq E\) are subsets of those vertices.
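Concretely, taking the dual amounts to “transposing” the incidence relation. A plain-Python sketch on a toy edge dict:

```python
# Dual: each node of H becomes an edge of H*, containing the
# hyperedges of H to which that node belonged.
H = {'e1': {1, 2}, 'e2': {2, 3}}

dual = {}
for e, members in H.items():
    for v in members:
        dual.setdefault(v, set()).add(e)

print(dual)
```

Applying the same transposition to `dual` recovers `H`, reflecting that \((H^*)^* = H\).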

+
+_images/dual.png +
+

Fig. 3 The dual hypergraph \(H^*\).

+
+
+

Just like the “primal” hypergraph \(H\) has a 2-section, so does the +dual. This is called the line graph, and it is an important +structure which records all of the incident hyperedges. Line graphs are +also used extensively in graph theory.

+

Note that it follows that since every graph \(G\) is a (2-uniform) +hypergraph \(H\), so therefore we can form the dual hypergraph +\(G^*\) of \(G\). If a graph \(G\) is a 2-uniform +hypergraph, is its dual \(G^*\) also a 2-uniform hypergraph? In +general, no, only in the case where \(G\) is a single cycle or a +union of cycles would that be true. Also note that in order to calculate +the line graph of a graph \(G\), one needs to work through its dual +hypergraph \(G^*\).

+
+_images/dual2.png +
+

Fig. 4 The line graph of \(H\), which is the 2-section of the dual \(H^*\).

+
+
+
+
+

Edge Intersections Have Size

+

As we’ve already seen, in a graph all the edges are size 2, whereas in a hypergraph edges can be arbitrary size \(1, 2, \ldots, n\). Our example shows a singleton, three “graph edge” pairs, and a 3-edge.

+

In a gentle graph \(G\) consider two edges +\(e = \{ u, v \},f=\{w,z\} \in E\) and their intersection +\(g = e \cap f\). If \(g \neq \emptyset\) then \(e\) and +\(f\) are non-disjoint, and we call them incident. Let +\(s(e,f)=|g|\) be the size of that intersection. If \(G\) is +gentle and \(e\) and \(f\) are incident, then \(s(e,f)=1\), +in that one of \(u,v\) must be equal to one of \(w,z\), and +\(g\) will be that singleton. But in a hypergraph, the intersection +\(g=e \cap f\) of two incident edges can be any size +\(s(e,f) \in [1,\min(|e|,|f|)]\). This aspect, the size of the +intersection of two incident edges, is critical to understanding +hypergraph structure and properties.

+
+
+

Edges Can Be Nested

+

While in a gentle graph \(G\) two edges \(e\) and \(f\) can be incident or not, in a hypergraph \(H\) there’s another case: two edges \(e\) and \(f\) may be nested or included, in that \(e \subseteq f\) or \(f \subseteq e\). That’s exactly the condition above where \(s(e,f) = \min(|e|,|f|)\), which is the size of the edge included within the including edge. In our example, we have that edge 3 is included in edge 1, which is included in edge 2.

+
+
+

Walks Have Length and Width

+

A walk is a sequence \(W = \langle { e_0, e_1, \ldots, e_N } \rangle\) of edges where each pair \(e_i,e_{i+1}, 0 \le i \le N-1\) in the sequence are incident. We call \(N\) the length of the walk. Walks are the raison d’être of both graphs and hypergraphs, in that in a graph \(G\) a walk \(W\) establishes the connectivity of all the \(e_i\) to each other, and a way to “travel” between the ends \(e_0\) and \(e_N\). Naturally in a walk for each such pair we can also measure the size of the intersection \(s_i=s(e_i,e_{i+1}), 0 \le i \le N-1\). While in a gentle graph \(G\), all the \(s_i=1\), as we’ve seen in a hypergraph \(H\) all these \(s_i\) can vary widely. So for any walk \(W\) we can not only talk about its length \(N\), but also define its width \(s(W) = \min_{0 \le i \le N-1} s_i\) as the size of the smallest such intersection. When a walk \(W\) has width \(s\), we call it an \(s\)-walk. It follows that all walks in a graph are 1-walks with width 1. In Fig. 5 we see two walks in a hypergraph. While both have length 2 (counting edgewise, and recalling origin zero), the one on the left has width 1, and that on the right width 3.
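The length and width of a walk fall straight out of the definition; a minimal plain-Python sketch:

```python
# A walk is a sequence of hyperedges in which successive edges intersect.
walk = [{1, 2, 3, 4}, {3, 4, 5, 6}, {5, 6, 7}]

length = len(walk) - 1  # counted edgewise, origin zero
widths = [len(e & f) for e, f in zip(walk, walk[1:])]
width = min(widths)     # the walk is an s-walk for every s up to this value

print(length, widths, width)
```

Here both successive intersections have size 2, so this length-2 walk is a 2-walk (and also a 1-walk, but not a 3-walk).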

+
+_images/swalks.png +
+

Fig. 5 Two hypergraph walks of length 2: (Left) A 1-walk. (Right) A 3-walk.

+
+
+
+
+
+

Towards Less Gentle Things

+

We close with just brief mentions of more advanced issues.

+
+

\(s\)-Walks and Hypernetwork Science

+

Network science has become a dominant force in data analytics in recent years, including a range of methods measuring distance, connectivity, reachability, centrality, modularity, and related things. Almost all of these concepts generalize to hypergraphs using “\(s\)-versions” of them. For example, the \(s\)-distance between two vertices or hyperedges is the length of the shortest \(s\)-walk between them. As \(s\) goes up, requiring wider connections, distances tend to grow, until ultimately some vertices may not be \(s\)-reachable at all. See [2] for more details.

+
+
+

Hypergraphs in Mathematics

+

Hypergraphs are very general objects mathematically, and are deeply +connected to a range of other essential objects and structures mostly in +discrete science.

+

Most obviously, perhaps, is that there is a one-to-one relationship +between a hypergraph \(H = \langle V, E \rangle\) and a +corresponding bipartite graph \(B=\langle V \sqcup E, I \rangle\). +\(B\) is a new graph (not a hypergraph) with vertices being both the +vertices and the hyperedges from the hypergraph \(H\), and a +connection being a pair \(\{ v, e \} \in I\) if and only if +\(v \in e\) in \(H\). That you can go the other way to define a +hypergraph \(H\) for every bipartite graph \(G\) is evident, but +not all operations carry over unambiguously between hypergraphs and +their bipartite versions.
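The bipartite graph \(B\) is just the incidence relation of \(H\) read as a graph edge list; a plain-Python sketch:

```python
H = {'e1': {1, 2}, 'e2': {2, 3}}

# Vertices of B are the disjoint union of nodes and hyperedges of H;
# its edges are the incidence pairs (v, e) with v in e.
left = sorted(set().union(*H.values()))    # hypergraph nodes
right = sorted(H)                          # hypergraph edges
incidences = sorted((v, e) for e, members in H.items() for v in members)

print(incidences)
```

Every edge of `incidences` joins one element of `left` to one of `right`, which is precisely the bipartite condition.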

+
+_images/bicolored1.png +
+

Fig. 6 Bipartite graph.

+
+
+

Even more generally, the Boolean incidence matrix \(I\) of a +hypergraph \(H\) can be taken as the characteristic matrix of a +binary relation. When \(H\) is gentle this is somewhat restricted, +but in general we can see that there are one-to-one relations now +between hypergraphs, binary relations, as well as bipartite graphs from +above.

+

Additionally, we know that every hypergraph implies a hierarchical structure via the fact that for every pair of incident hyperedges either one is included in the other, or their intersection is included in both. This creates a partial order, establishing a further one-to-one mapping to a variety of lattice structures and dual lattice structures relating how groups of vertices are included in groups of edges, and vice versa. Fig. 7 shows the concept lattice [3], perhaps the most important of these structures, determined by our example.

+
+_images/ex.png +
+

Fig. 7 The concept lattice of the example hypergraph \(H\).

+
+
+

Finally, the strength of hypergraphs is their ability to model multi-way +interactions. Similarly, mathematical topology is concerned with how +multi-dimensional objects can be attached to each other, not only in +continuous spaces but also with discrete objects. In fact, a finite +topological space is a special kind of gentle hypergraph closed under +both union and intersection, and there are deep connections between +these structures and the lattices referred to above.

+

In this context also an abstract simplicial complex (ASC) is a kind +of hypergraph where all possible included edges are present. Each +hypergraph determines such an ASC by “closing it down” by subset. ASCs +have a natural topological structure which can reveal hidden structures +measurable by homology, and are used extensively as the workhorse of +topological methods such as persistent homology. In this way hypergraphs +form a perfect bridge from network science to computational topology in +general.
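“Closing down” a hypergraph into its ASC means taking every nonempty subset of every edge; a plain-Python sketch:

```python
from itertools import combinations

# Toy hypergraph as a list of hyperedges.
H = [{1, 2, 3}, {3, 4}]

# The ASC contains every nonempty subset of every hyperedge,
# stored here as sorted tuples so they are hashable.
asc = set()
for edge in H:
    for k in range(1, len(edge) + 1):
        for sub in combinations(sorted(edge), k):
            asc.add(sub)

print(sorted(asc, key=lambda s: (len(s), s)))
```

The two top-level edges generate 7 + 3 faces, one of which — the singleton {3} — is shared, so the ASC has 9 faces in all.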

+
+_images/simplicial.png +
+

Fig. 8 A diagram of the ASC implied by our example. Numbers here indicate the actual hyper-edges in the original hypergraph \(H\), where now additionally all sub-edges, including singletons, are in the ASC.

+
+
+
+
+

Non-Gentle Graphs and Hypergraphs

+

Above we described our use of “gentle” graphs and hypergraphs as finite, +loopless, simple, connected, and lacking empty hyperedges, isolated +vertices, labels, weights, or attributes. But at a higher level of +generality we can also have:

+
+
Empty Hyperedges:

If a column of \(I\) has all zero entries.

+
+
Isolated Vertices:

If a row of \(I\) has all zero entries.

+
+
Multihypergraphs:

We may choose to allow duplicated hyperedges, resulting in duplicate +columns in the incidence matrix \(I\).

+
+
Self-Loops:

In a graph allowing an edge to connect to itself.

+
+
Direction:

In an edge, where some vertices are recognized as “inputs” which +point to others recognized as “outputs”.

+
+
Order:

In a hyperedge, where the vertices carry a particular (total) order. +In a graph, this is equivalent to being directed, but not in a +hypergraph.

+
+
Attributes:

In general we use graphs and hypergraphs to model data, and thus carrying attributes of different types, including weights, labels, identifiers, types, strings, or really in principle any data object. These attributes could be on vertices (rows of \(I\)), edges (columns of \(I\)) or what we call “incidences”, related to a particular appearance of a particular vertex in a particular edge (cells of \(I\)).

+
+
+

[1] Joslyn, Cliff A; Aksoy, Sinan; Callahan, Tiffany J; Hunter, LE; +Jefferson, Brett; Praggastis, Brenda; Purvine, Emilie AH; Tripodi, +Ignacio J: (2021) “Hypernetwork Science: From Multidimensional +Networks to Computational Topology”, in: Unifying Themes in Complex +systems X: Proc. 10th Int. Conf. Complex Systems, ed. D. Braha et +al., pp. 377-392, Springer, +https://doi.org/10.1007/978-3-030-67318-5_25

+

[2] Aksoy, Sinan G; Joslyn, Cliff A; Marrero, Carlos O; Praggastis, B; +Purvine, Emilie AH: (2020) “Hypernetwork Science via High-Order +Hypergraph Walks”, EPJ Data Science, v. 9:16, +https://doi.org/10.1140/epjds/s13688-020-00231-0

+

[3] Ganter, Bernhard and Wille, Rudolf: (1999) Formal Concept +Analysis, Springer-Verlag

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 00000000..7e8badde --- /dev/null +++ b/index.html @@ -0,0 +1,177 @@ + + + + + + + HyperNetX (HNX) — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

HyperNetX (HNX)

+_images/hnxbasics.png +

HNX is a Python library for hypergraphs, the natural models for multi-dimensional network data.

+

To get started, try the interactive COLAB tutorials. For a primer on hypergraphs, try this gentle introduction. To see hypergraphs at work in cutting-edge research, see our list of recent publications.

+
+

Why hypergraphs?

+

Like graphs, hypergraphs capture important information about networks and relationships. But hypergraphs do more – they model multi-way relationships, where ordinary graphs only capture two-way relationships. This library serves as a repository of methods and algorithms that have proven useful over years of exploration into what hypergraphs can tell us.

+

As both vertex adjacency and edge incidence are generalized to be quantities, hypergraph paths and walks have both length and width because of these multiway connections. Most graph metrics have natural generalizations to hypergraphs, but since hypergraphs are basically set systems, they also admit the powerful tools of algebraic topology, including simplicial complexes and simplicial homology, to study their structure.

+
+
+

Our community

+

We have a growing community of users and contributors. For the latest software updates, and to learn about the development team, see the library overview. Have ideas to share? We’d love to hear from you! Our orientation for contributors can help you get started.

+
+
+

Our values

+

Our shared values as software developers guide us in our day-to-day interactions and decision-making. Our open source projects are no exception. Trust, respect, collaboration and transparency are core values we believe should live and breathe within our projects. Our community welcomes participants from around the world with different experiences, unique perspectives, and great ideas to share. See our code of conduct to learn more.

+
+
+

Contact us

+
+
Questions and comments are welcome! Contact us at

hypernetx@pnnl.gov

+
+
+
+
+

Contents

+ +
+

Indices and tables

+ +
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/install.html b/install.html new file mode 100644 index 00000000..7d41abd5 --- /dev/null +++ b/install.html @@ -0,0 +1,262 @@ + + + + + + + Installing HyperNetX — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Installing HyperNetX

+
+

Installation

+

The recommended installation method for most users is to create a virtual environment +and install HyperNetX from PyPi.

+

HyperNetX may be cloned or forked from Github.

+
+
+

Prerequisites

+

HyperNetX officially supports Python 3.8, 3.9, 3.10 and 3.11.

+
+
+

Create a virtual environment

+
+

Using Anaconda

+
>>> conda create -n env-hnx python=3.8 -y
+>>> conda activate env-hnx
+
+
+
+
+

Using venv

+
>>> python -m venv venv-hnx
+>>> source venv-hnx/bin/activate
+
+
+
+
+

Using virtualenv

+
>>> virtualenv env-hnx
+>>> source env-hnx/bin/activate
+
+
+
+
+

For Windows Users

+

On both Windows PowerShell or Command Prompt, you can use the following command to activate your virtual environment:

+
>>> .\env-hnx\Scripts\activate
+
+
+

To deactivate your environment, use:

+
>>> .\env-hnx\Scripts\deactivate
+
+
+
+
+
+

Installing HyperNetX

+

Regardless of how you install HyperNetX, ensure that your environment is activated and that you are running Python >=3.8.

+
+

Installing from PyPi

+
>>> pip install hypernetx
+
+
+
+
+

Installing from Source

+

Ensure that you have git installed.

+
>>> git clone https://github.com/pnnl/HyperNetX.git
+>>> cd HyperNetX
+>>> pip install -e .['all']
+
+
+

If you are using zsh as your shell, ensure that the single quotation marks are placed outside the square brackets:

+
>>> pip install -e .'[all]'
+
+
+
+
+
+

Post-Installation Actions

+
+

Running Tests

+
>>> python -m pytest
+
+
+
+
+

Interact with HyperNetX in a REPL

+

Ensure that your environment is activated and that you run python on your terminal to open a REPL:

+
>>> import hypernetx as hnx
+>>> data = { 0: ('A', 'B'), 1: ('B', 'C'), 2: ('D', 'A', 'E'), 3: ('F', 'G', 'H', 'D') }
+>>> H = hnx.Hypergraph(data)
+>>> list(H.nodes)
+['G', 'F', 'D', 'A', 'B', 'H', 'C', 'E']
+>>> list(H.edges)
+[0, 1, 2, 3]
+>>> H.shape
+(8, 4)
+
+
+
+
+

Other Actions if installed from source

+

Ensure that you are at the root of the source directory before running any of the following commands:

+
+

Viewing jupyter notebooks

+

The following command will automatically open the notebooks in a browser.

+
>>> jupyter-notebook tutorials
+
+
+
+
+

Building documentation

+

The following commands will build and open a local version of the documentation in a browser:

+
>>> make build-docs
+>>> open docs/build/index.html
+
+
+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/license.html b/license.html new file mode 100644 index 00000000..392ecbf6 --- /dev/null +++ b/license.html @@ -0,0 +1,147 @@ + + + + + + + License — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

License

+

HyperNetX

+

Copyright © 2023, Battelle Memorial Institute

+

Battelle Memorial Institute (hereinafter Battelle) hereby grants permission +to any person or entity lawfully obtaining a copy of this software and associated +documentation files (hereinafter “the Software”) to redistribute and use the +Software in source and binary forms, with or without modification. Such person +or entity may use, copy, modify, merge, publish, distribute, sublicense, and/or +sell copies of the Software, and may permit others to do so, subject to the +following conditions:

+
    +
  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers.

  • +
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • +
  • Other than as used herein, neither the name Battelle Memorial Institute or Battelle may be used in any form whatsoever without the express written consent of Battelle.

  • +
+

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +IN NO EVENT SHALL BATTELLE OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, +INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, +WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE.

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/modularity.html b/modularity.html new file mode 100644 index 00000000..1f7b386b --- /dev/null +++ b/modularity.html @@ -0,0 +1,231 @@ + + + + + + + Modularity and Clustering — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Modularity and Clustering

+_images/ModularityScreenShot.png +
+

Overview

+

The hypergraph_modularity submodule in HNX provides functions to compute hypergraph modularity for a +given partition of the vertices in a hypergraph. In general, higher modularity indicates a better +partitioning of the vertices into dense communities.

+

Two functions to generate such hypergraph +partitions are provided: Kumar’s algorithm, and the simple Last-Step refinement algorithm.

+

The submodule also provides a function to generate the two-section graph for a given hypergraph which can then be used to find +vertex partitions via graph-based algorithms.

+
+
+

Installation

+

Since it is part of HNX, no extra installation is required. +The submodule can be imported as follows:

+
import hypernetx.algorithms.hypergraph_modularity as hmod
+
+
+
+
+

Using the Tool

+
+

Precomputation

+

In order to make the computation of hypergraph modularity more efficient, some quantities need to be pre-computed. +Given hypergraph H, calling:

+
HG = hmod.precompute_attributes(H)
+
+
+

will pre-compute quantities such as node strength (weighted degree), d-weights (total weight for each edge cardinality) and binomial coefficients.

+
+
+

Modularity

+

Given hypergraph HG and a partition A of its vertices, hypergraph modularity is a measure of the quality of this partition. +Random partitions typically yield modularity near zero (it can be negative) while positive modularity is indicative of the presence +of dense communities, or modules. There are several variations for the definition of hypergraph modularity, and the main difference lies in the +weight given to different edges. Modularity is computed via:

+
q = hmod.modularity(HG, A, wdc=linear)
+
+
+

In a graph, an edge only links 2 nodes, so given partition A, an edge is either within a community (which increases the modularity) +or between communities.

+

With hypergraphs, we consider edges of size d=2 or more. Given some vertex partition A and some d-edge e, let c be the number of nodes +that belong to the most represented part in e; if c > d/2, we consider this edge to be within the part. +Hyper-parameters 0 <= w(d,c) <= 1 control the weight +given to such edges. Three functions are supplied in this submodule, namely:

+
+
linear

\(w(d,c) = c/d\) if \(c > d/2\), else \(0\).

+
+
majority

\(w(d,c) = 1\) if \(c > d/2\), else \(0\).

+
+
strict

\(w(d,c) = 1\) if \(c == d\), else \(0\).

+
+
+

The ‘linear’ function is used by default. More details in [2].
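The three weight functions above can be written directly from their definitions. The following is an illustrative sketch of those formulas in plain Python, not the internal hmod implementations:

```python
# Sketch of the three edge-weight functions w(d, c) described above, where
# d is the edge size and c is the size of its most-represented part.
def linear(d, c):
    return c / d if c > d / 2 else 0

def majority(d, c):
    return 1 if c > d / 2 else 0

def strict(d, c):
    return 1 if c == d else 0

print(linear(4, 3), majority(4, 3), strict(4, 3))  # -> 0.75 1 0
```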

+
+
+

Two-section graph

+

There are several good partitioning algorithms for graphs such as the Louvain algorithm and ECG, a consensus clustering algorithm. +One way to obtain a partition for hypergraph HG is to build its corresponding two-section graph G and run a graph clustering algorithm. +Code is provided to build such a graph via:

+
G = hmod.two_section(HG)
+
+
+

which returns an igraph.Graph object.
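Conceptually, the two-section graph connects every pair of vertices that co-occur in at least one hyperedge. The following is a minimal sketch of that idea using plain Python sets (it ignores the edge weighting that hmod applies, and uses a toy incidence dictionary rather than an HNX hypergraph):

```python
from itertools import combinations

# Toy incidence dictionary {edge_id: members}.
H = {0: ('A', 'B'), 1: ('B', 'C'), 2: ('D', 'A', 'E'), 3: ('F', 'G', 'H', 'D')}

# Two-section edges: all unordered vertex pairs inside each hyperedge.
pairs = {frozenset(p) for members in H.values() for p in combinations(members, 2)}
print(len(pairs))  # -> 11 distinct graph edges
```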

+
+
+

Clustering Algorithms

+

Two clustering (vertex partitioning) algorithms are supplied. The first one is a hybrid method proposed by Kumar et al. (see [1]) +that uses the Louvain algorithm on the two-section graph, but re-weights the edges according to the distribution of vertices +from each part inside each edge. Given hypergraph HG, this is called as:

+
K = hmod.kumar(HG)
+
+
+

The other supplied algorithm is a simple method to improve hypergraph modularity directly. Given some +initial partition of the vertices (for example via Louvain on the two-section graph), move vertices between parts in order +to improve hypergraph modularity. Given hypergraph HG and initial partition A, this is called as:

+
L = hmod.last_step(HG, A, wdc=linear)
+
+
+

where the ‘wdc’ parameter is the same as in the modularity function.

+
+
+

Other Features

+

We represent a vertex partition A as a list of sets, but another convenient representation is via a dictionary. +We provide two utility functions to switch representations, namely A = dict2part(D) and D = part2dict(A).

+
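Conceptually, the two conversions look like the following sketch (the actual hmod utilities may differ in ordering details):

```python
# Sketch of the two partition representations:
#   list of sets A  <->  dict D mapping each vertex to its part index.
def part2dict(A):
    return {v: i for i, part in enumerate(A) for v in part}

def dict2part(D):
    parts = {}
    for v, i in D.items():
        parts.setdefault(i, set()).add(v)
    return [parts[i] for i in sorted(parts)]

A = [{'A', 'B'}, {'C', 'D', 'E'}]
D = part2dict(A)
assert dict2part(D) == A  # the round trip preserves the partition
```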
+
+

References

+

[1] Kumar T., Vaidyanathan S., Ananthapadmanabhan H., Parthasarathy S. and Ravindran B. “A New Measure of Modularity in Hypergraphs: Theoretical Insights and Implications for Effective Clustering”. In: Cherifi H., Gaito S., Mendes J., Moro E., Rocha L. (eds) Complex Networks and Their Applications VIII. COMPLEX NETWORKS 2019. Studies in Computational Intelligence, vol 881. Springer, Cham. https://doi.org/10.1007/978-3-030-36687-2_24

+

[2] Kamiński B., Prałat P. and Théberge F. “Community Detection Algorithm Using Hypergraph Modularity”. In: Benito R.M., Cherifi C., Cherifi H., Moro E., Rocha L.M., Sales-Pardo M. (eds) Complex Networks & Their Applications IX. COMPLEX NETWORKS 2020. Studies in Computational Intelligence, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-65347-7_13

+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/objects.inv b/objects.inv new file mode 100644 index 00000000..80891f44 Binary files /dev/null and b/objects.inv differ diff --git a/overview/index.html b/overview/index.html new file mode 100644 index 00000000..094b4d00 --- /dev/null +++ b/overview/index.html @@ -0,0 +1,269 @@ + + + + + + + Overview — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Overview

+../_images/harrypotter_basic_hyp.png +
+

HyperNetX

+

The HyperNetX library provides classes and methods for the analysis +and visualization of complex network data modeled as hypergraphs. +The library generalizes traditional graph metrics.

+

HyperNetX was developed by the Pacific Northwest National Laboratory for the +Hypernets project as part of its High Performance Data Analytics (HPDA) program. +PNNL is operated by Battelle Memorial Institute under Contract DE-AC05-76RL01830.

+
    +
  • Principal Developer and Designer: Brenda Praggastis

  • +
  • Development Team: Madelyn Shapiro, Mark Bonicillo

  • +
  • Visualization: Dustin Arendt, Ji Young Yun

  • +
  • Principal Investigator: Cliff Joslyn

  • +
  • Program Manager: Brian Kritzstein

  • +
  • Principal Contributors (Design, Theory, Code): Sinan Aksoy, Dustin Arendt, Mark Bonicillo, Helen Jenne, Cliff Joslyn, Nicholas Landry, Audun Myers, Christopher Potvin, Brenda Praggastis, Emilie Purvine, Greg Roek, Madelyn Shapiro, Mirah Shi, Francois Theberge, Ji Young Yun

  • +
+

The code in this repository is intended to support researchers modeling data +as hypergraphs. We have a growing community of users and contributors. +Documentation is available at: https://pnnl.github.io/HyperNetX

+

For questions and comments contact the developers directly at: hypernetx@pnnl.gov

+
+

New Features in Version 2.0

+

HNX 2.0 now accepts metadata as core attributes of the edges and nodes of a +hypergraph. While the library continues to accept lists, dictionaries and +dataframes as basic inputs for hypergraph constructions, both cell +properties and edge and node properties can now be easily added for +retrieval as object attributes.

+

The core library has been rebuilt to take advantage of the flexibility and speed of Pandas DataFrames. +DataFrames offer the ability to store and easily access hypergraph metadata, which can be used to filter objects and to characterize their +distributions by attribute.

+

Version 2.0 is not backwards compatible. Objects constructed using version +1.x can be imported from their incidence dictionaries.

+
+

What’s New

+
    +
  1. The Hypergraph constructor now accepts nested dictionaries with incidence cell properties, pandas.DataFrames, and 2-column NumPy arrays.

  2. +
  3. Additional constructors accept incidence matrices and incidence dataframes.

  4. +
  5. Hypergraph constructors accept cell, edge, and node metadata.

  6. +
  7. Metadata available as attributes on the cells, edges, and nodes.

  8. +
  9. User-defined cell weights and default weights available to incidence matrix.

  10. +
  11. Metadata persists with restrictions and removals.

  12. +
  13. Metadata persists onto s-linegraphs as node attributes of NetworkX graphs.

  14. +
  15. New hnxwidget available using pip install hnxwidget.

  16. +
+
+
+

What’s Changed

+
    +
  1. The static and dynamic distinctions no longer exist. All hypergraphs use the same underlying data structure, supported by Pandas DataFrames. All hypergraphs maintain a state_dict to avoid repeating computations.

  2. +
  3. Methods for adding nodes and hyperedges are currently not supported.

  4. +
  5. The nwhy optimizations are no longer supported.

  6. +
  7. Entity and EntitySet classes are being moved to the background. The Hypergraph constructor does not accept either.

  8. +
+
+
+
+
+

COLAB Tutorials

+

The following tutorials may be run in your browser using Google Colab. Additional tutorials are +available on GitHub.

+
+
+

Notice

+

This material was prepared as an account of work sponsored by an agency of the United States Government. +Neither the United States Government nor the United States Department of Energy, nor Battelle, +nor any of their employees, nor any jurisdiction or organization that has cooperated in the development of +these materials, makes any warranty, express or implied, or assumes any legal liability or responsibility +for the accuracy, completeness, or usefulness of any information, apparatus, product, software, or process +disclosed, or represents that its use would not infringe privately owned rights. +Reference herein to any specific commercial product, process, or service by trade name, +trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, +or favoring by the United States Government or any agency thereof, or Battelle Memorial Institute. +The views and opinions of authors expressed herein do not necessarily state or reflect +those of the United States Government or any agency thereof.

+
+
+      PACIFIC NORTHWEST NATIONAL LABORATORY
+      operated by
+      BATTELLE
+      for the
+      UNITED STATES DEPARTMENT OF ENERGY
+      under Contract DE-AC05-76RL01830
+   
+
+
+

License

+

HyperNetX is released under the 3-Clause BSD license (see License)

+
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/publications.html b/publications.html new file mode 100644 index 00000000..ccac57a6 --- /dev/null +++ b/publications.html @@ -0,0 +1,144 @@ + + + + + + + Publications — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Publications

+

Joslyn, Cliff A; Aksoy, Sinan; Callahan, Tiffany J; Hunter, LE; Jefferson, Brett; Praggastis, Brenda; Purvine, Emilie AH; Tripodi, Ignacio J: (2021) “Hypernetwork Science: From Multidimensional Networks to Computational Topology”, in: Unifying Themes in Complex Systems X: Proc. 10th Int. Conf. Complex Systems, ed. D. Braha et al., pp. 377-392, Springer, https://doi.org/10.1007/978-3-030-67318-5_25

+

Aksoy, Sinan G; Joslyn, Cliff A; Marrero, Carlos O; Praggastis, B; Purvine, Emilie AH: (2020) “Hypernetwork Science via High-Order Hypergraph Walks” , EPJ Data Science, v. 9:16, +https://doi.org/10.1140/epjds/s13688-020-00231-0

+

Aksoy, Sinan G; Hagberg, Aric; Joslyn, Cliff A; Kay, Bill; Purvine, Emilie; Young, Stephen J: (2022) “Models and Methods for Sparse (Hyper)Network Science in Business, Industry, and Government”, Notices of the AMS, v. 69:2, pp. 287-291, +https://doi.org/10.1090/noti2424

+

Feng, Song; Heath, Emily; Jefferson, Brett; Joslyn, CA; Kvinge, Henry; McDermott, Jason E; Mitchell, Hugh D; Praggastis, Brenda; Eisfeld, Amie J; Sims, Amy C; Thackray, Larissa B; Fan, Shufang; Walters, Kevin B; Halfmann, Peter J; Westhoff-Smith, Danielle; Tan, Qing; Menachery, Vineet D; Sheahan, Timothy P; Cockrell, Adam S; Kocher, Jacob F; Stratton, Kelly G; Heller, Natalie C; Bramer, Lisa M; Diamond, Michael S; Baric, Ralph S; Waters, Katrina M; Kawaoka, Yoshihiro; Purvine, Emilie: (2021) “Hypergraph Models of Biological Networks to Identify Genes Critical to Pathogenic Viral Response”, in: BMC Bioinformatics, v. 22:287, +https://doi.org/10.1186/s12859-021-04197-2

+

Myers, Audun; Joslyn, Cliff A; Kay, Bill; Purvine, EAH; Roek, Gregory; Shapiro, Madelyn: (2023) “Topological Analysis of Temporal Hypergraphs”, in: Proc. Wshop. on Analysis of the Web Graph (WAW 2023) https://arxiv.org/abs/2302.02857 and +2022 SIAM Conf. on Mathematics of Data Science, https://www.siam.org/Portals/0/Conferences/MDS/MDS22/MDS22_ABSTRACTS.pdf

+

Joslyn, Cliff A; Aksoy, Sinan; Arendt, Dustin; Firoz, J; Jenkins, Louis; Praggastis, Brenda; Purvine, Emilie AH; Zalewski, Marcin: (2020) “Hypergraph Analytics of Domain Name System Relationships”, in: 17th Wshop. on Algorithms and Models for the Web Graph (WAW 2020), Lecture Notes in Computer Science, v. 12901, ed. Kaminski, B et al., pp. 1-15, Springer, +https://doi.org/10.1007/978-3-030-48478-1_1

+

Hayashi, Koby; Aksoy, Sinan G; Park, CH; and Park, Haesun: (2020) “Hypergraph Random Walks, Laplacians, and Clustering”, in: Proc. 29th ACM Int. Conf. Information and Knowledge Management (CIKM 2020), pp. 495-504, ACM, New York, +https://doi.org/10.1145/3340531.3412034

+

Kay, WW; Aksoy, Sinan G; Baird, Molly; Best, DM; Jenne, Helen; Joslyn, CA; Potvin, CD; Roek, Greg; Seppala, Garrett; Young, Stephen; Purvine, Emilie: (2022) “Hypergraph Topological Features for Autoencoder-Based Intrusion Detection for Cybersecurity Data”, ML4Cyber Wshop., Int. Conf. Machine Learning 2022, +https://icml.cc/Conferences/2022/ScheduleMultitrack?event=13458#collapse20252

+

Liu, Xu T; Firoz, Jesun; Lumsdaine, Andrew; Joslyn, CA; Aksoy, Sinan; Amburg, Ilya; Praggastis, Brenda; Gebremedhin, Assefaw: (2022) “High-Order Line Graphs of Non-Uniform Hypergraphs: Algorithms, Applications, and Experimental Analysis”, 36th IEEE Int. Parallel and Distributed Processing Symp. (IPDPS 22), +https://ieeexplore.ieee.org/document/9820632

+

Liu, Xu T; Firoz, Jesun; Lumsdaine, Andrew; Joslyn, CA; Aksoy, Sinan; Praggastis, Brenda; Gebremedhin, Assefaw: (2021) “Parallel Algorithms for Efficient Computation of High-Order Line Graphs of Hypergraphs”, in: 2021 IEEE 28th International Conference on High Performance Computing, Data, and Analytics (HiPC 2021), +https://doi.ieeecomputersociety.org/10.1109/HiPC53243.2021.00045

+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/py-modindex.html b/py-modindex.html new file mode 100644 index 00000000..f1fe132a --- /dev/null +++ b/py-modindex.html @@ -0,0 +1,236 @@ + + + + + + Python Module Index — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + +

Python Module Index

+ +
+ a | + c | + d | + r +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 
+ a
+ algorithms +
    + algorithms.contagion +
    + algorithms.generative_models +
    + algorithms.homology_mod2 +
    + algorithms.hypergraph_modularity +
    + algorithms.laplacians_clustering +
    + algorithms.s_centrality_measures +
 
+ c
+ classes +
    + classes.entity +
    + classes.entityset +
    + classes.helpers +
    + classes.hypergraph +
 
+ d
+ drawing +
    + drawing.rubber_band +
    + drawing.two_column +
    + drawing.util +
 
+ r
+ reports +
    + reports.descriptive_stats +
+ + +
+
+
+ +
+ +
+

© Copyright 2023 Battelle Memorial Institute.

+
+ + Built with Sphinx using a + theme + provided by Read the Docs. + + +
+
+
+
+
+ + + + \ No newline at end of file diff --git a/reports/modules.html b/reports/modules.html new file mode 100644 index 00000000..3725c5d1 --- /dev/null +++ b/reports/modules.html @@ -0,0 +1,171 @@ + + + + + + + reports — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/reports/reports.html b/reports/reports.html new file mode 100644 index 00000000..912689ba --- /dev/null +++ b/reports/reports.html @@ -0,0 +1,681 @@ + + + + + + + reports package — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

reports package

+
+

Submodules

+
+
+

reports.descriptive_stats module

+
+
This module contains methods which compute various distributions for hypergraphs:
    +
  • Edge size distribution

  • +
  • Node degree distribution

  • +
  • Component size distribution

  • +
  • Toplex size distribution

  • +
  • Diameter

  • +
+
+
+

Also computes general hypergraph information: number of nodes, edges, cells, aspect ratio, incidence matrix density
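For intuition, several of these distributions can be computed by hand from an incidence dictionary. The following is a hedged sketch in plain Python (not the reports API), using a toy hypergraph:

```python
from collections import Counter

# Toy incidence dictionary {edge_id: members}.
H = {0: ('A', 'B'), 1: ('B', 'C'), 2: ('D', 'A', 'E'), 3: ('F', 'G', 'H', 'D')}

edge_sizes = [len(members) for members in H.values()]            # edge size distribution
degrees = Counter(v for members in H.values() for v in members)  # node degree distribution

nrows, ncols = len(degrees), len(H)  # nodes x edges of the incidence matrix
ncells = sum(edge_sizes)             # filled cells in the incidence matrix
density = ncells / (nrows * ncols)

print(sorted(edge_sizes), degrees['D'], round(density, 3))  # -> [2, 2, 3, 4] 2 0.344
```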

+
+
+reports.descriptive_stats.centrality_stats(X)[source]
+

Computes basic centrality statistics for X

+
+
Parameters:
+

X – an iterable of numbers

+
+
Returns:
+

[min, max, mean, median, standard deviation] – List of centrality statistics for X

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.descriptive_stats.comp_dist(H, aggregated=False)[source]
+

Computes component sizes, number of nodes.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +component sizes (number of nodes) and counts. If aggregated +is False, returns a list of component sizes in H.

  • +
+
+
Returns:
+

comp_dist – List of component sizes or dictionary of component size distribution

+
+
Return type:
+

list or dictionary

+
+
+
+

See also

+

s_comp_dist

+
+
+ +
+
+reports.descriptive_stats.degree_dist(H, aggregated=False)[source]
+

Computes degrees of nodes of a hypergraph.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +degrees and counts. If aggregated is False, returns a +list of degrees in H.

  • +
+
+
Returns:
+

degree_dist – List of degrees or dictionary of degree distribution

+
+
Return type:
+

list or dict

+
+
+
+ +
+
+reports.descriptive_stats.dist_stats(H)[source]
+

Computes many basic hypergraph stats and puts them all into a single dictionary object

+
+
    +
  • nrows = number of nodes (rows in the incidence matrix)

  • +
  • ncols = number of edges (columns in the incidence matrix)

  • +
  • aspect ratio = nrows/ncols

  • +
  • ncells = number of filled cells in incidence matrix

  • +
  • density = ncells/(nrows*ncols)

  • +
  • node degree list = degree_dist(H)

  • +
  • node degree dist = centrality_stats(degree_dist(H))

  • +
  • node degree hist = Counter(degree_dist(H))

  • +
  • max node degree = max(degree_dist(H))

  • +
  • edge size list = edge_size_dist(H)

  • +
  • edge size dist = centrality_stats(edge_size_dist(H))

  • +
  • edge size hist = Counter(edge_size_dist(H))

  • +
  • max edge size = max(edge_size_dist(H))

  • +
  • comp nodes list = s_comp_dist(H, s=1, edges=False)

  • +
  • comp nodes dist = centrality_stats(s_comp_dist(H, s=1, edges=False))

  • +
  • comp nodes hist = Counter(s_comp_dist(H, s=1, edges=False))

  • +
  • comp edges list = s_comp_dist(H, s=1, edges=True)

  • +
  • comp edges dist = centrality_stats(s_comp_dist(H, s=1, edges=True))

  • +
  • comp edges hist = Counter(s_comp_dist(H, s=1, edges=True))

  • +
  • num comps = len(s_comp_dist(H))

  • +
+
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

dist_stats – Dictionary which keeps track of each of the above items (e.g., basic[‘nrows’] = the number of nodes in H)

+
+
Return type:
+

dict

+
+
+
+ +
+
+reports.descriptive_stats.edge_size_dist(H, aggregated=False)[source]
+

Computes edge sizes of a hypergraph.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +edge sizes and counts. If aggregated is False, returns a +list of edge sizes in H.

  • +
+
+
Returns:
+

edge_size_dist – List of edge sizes or dictionary of edge size distribution.

+
+
Return type:
+

list or dict

+
+
+
+ +
+
+reports.descriptive_stats.info(H, node=None, edge=None)[source]
+

Print a summary of simple statistics for H

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • obj (optional) – either a node or edge uid from the hypergraph

  • +
  • dictionary (optional) – If True then returns the info as a dictionary rather +than a string +If False (default) returns the info as a string

  • +
+
+
Returns:
+

info – Returns a string of statistics of the size, +aspect ratio, and density of the hypergraph. +Print the string to see it formatted.

+
+
Return type:
+

string

+
+
+
+ +
+
+reports.descriptive_stats.info_dict(H, node=None, edge=None)[source]
+

Create a summary of simple statistics for H

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • obj (optional) – either a node or edge uid from the hypergraph

  • +
+
+
Returns:
+

info_dict – Returns a dictionary of statistics of the size, +aspect ratio, and density of the hypergraph.

+
+
Return type:
+

dict

+
+
+
+ +
+
+reports.descriptive_stats.s_comp_dist(H, s=1, aggregated=False, edges=True, return_singletons=True)[source]
+

Computes s-component sizes, counting nodes or edges.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • s (positive integer, default is 1) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +s-component sizes and counts in H. If aggregated is +False, returns a list of s-component sizes in H.

  • +
  • edges – If edges is True, the component size is number of edges. +If edges is False, the component size is number of nodes.

  • +
  • return_singletons (bool, optional, default=True) –

  • +
+
+
Returns:
+

s_comp_dist – List of component sizes or dictionary of component size distribution in H

+
+
Return type:
+

list or dictionary

+
+
+
+

See also

+

comp_dist

+
+
+ +
+
+reports.descriptive_stats.s_edge_diameter_dist(H)[source]
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

s_edge_diameter_dist – List of s-edge-diameters for hypergraph H starting with s=1 +and going up as long as the hypergraph is s-edge-connected

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.descriptive_stats.s_node_diameter_dist(H)[source]
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

s_node_diameter_dist – List of s-node-diameters for hypergraph H starting with s=1 +and going up as long as the hypergraph is s-node-connected

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.descriptive_stats.toplex_dist(H, aggregated=False)[source]
+

Computes toplex sizes for hypergraph H.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +toplex sizes and counts in H. If aggregated +is False, returns a list of toplex sizes in H.

  • +
+
+
Returns:
+

toplex_dist – List of toplex sizes or dictionary of toplex size distribution in H

+
+
Return type:
+

list or dictionary

+
+
+
+ +
+
+

Module contents

+
+
+reports.centrality_stats(X)[source]
+

Computes basic centrality statistics for X

+
+
Parameters:
+

X – an iterable of numbers

+
+
Returns:
+

[min, max, mean, median, standard deviation] – List of centrality statistics for X

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.comp_dist(H, aggregated=False)[source]
+

Computes component sizes, number of nodes.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +component sizes (number of nodes) and counts. If aggregated +is False, returns a list of component sizes in H.

  • +
+
+
Returns:
+

comp_dist – List of component sizes or dictionary of component size distribution

+
+
Return type:
+

list or dictionary

+
+
+
+

See also

+

s_comp_dist

+
+
+ +
+
+reports.degree_dist(H, aggregated=False)[source]
+

Computes degrees of nodes of a hypergraph.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +degrees and counts. If aggregated is False, returns a +list of degrees in H.

  • +
+
+
Returns:
+

degree_dist – List of degrees or dictionary of degree distribution

+
+
Return type:
+

list or dict

+
+
+
+ +
+
+reports.dist_stats(H)[source]
+

Computes many basic hypergraph stats and puts them all into a single dictionary object

+
+
    +
  • nrows = number of nodes (rows in the incidence matrix)

  • +
  • ncols = number of edges (columns in the incidence matrix)

  • +
  • aspect ratio = nrows/ncols

  • +
  • ncells = number of filled cells in incidence matrix

  • +
  • density = ncells/(nrows*ncols)

  • +
  • node degree list = degree_dist(H)

  • +
  • node degree dist = centrality_stats(degree_dist(H))

  • +
  • node degree hist = Counter(degree_dist(H))

  • +
  • max node degree = max(degree_dist(H))

  • +
  • edge size list = edge_size_dist(H)

  • +
  • edge size dist = centrality_stats(edge_size_dist(H))

  • +
  • edge size hist = Counter(edge_size_dist(H))

  • +
  • max edge size = max(edge_size_dist(H))

  • +
  • comp nodes list = s_comp_dist(H, s=1, edges=False)

  • +
  • comp nodes dist = centrality_stats(s_comp_dist(H, s=1, edges=False))

  • +
  • comp nodes hist = Counter(s_comp_dist(H, s=1, edges=False))

  • +
  • comp edges list = s_comp_dist(H, s=1, edges=True)

  • +
  • comp edges dist = centrality_stats(s_comp_dist(H, s=1, edges=True))

  • +
  • comp edges hist = Counter(s_comp_dist(H, s=1, edges=True))

  • +
  • num comps = len(s_comp_dist(H))

  • +
+
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

dist_stats – Dictionary which keeps track of each of the above items (e.g., basic[‘nrows’] = the number of nodes in H)

+
+
Return type:
+

dict

+
+
+
+ +
+
+reports.edge_size_dist(H, aggregated=False)[source]
+

Computes edge sizes of a hypergraph.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +edge sizes and counts. If aggregated is False, returns a +list of edge sizes in H.

  • +
+
+
Returns:
+

edge_size_dist – List of edge sizes or dictionary of edge size distribution.

+
+
Return type:
+

list or dict

+
+
+
+ +
+
+reports.info(H, node=None, edge=None)[source]
+

Print a summary of simple statistics for H

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • obj (optional) – either a node or edge uid from the hypergraph

  • +
  • dictionary (optional) – If True then returns the info as a dictionary rather +than a string +If False (default) returns the info as a string

  • +
+
+
Returns:
+

info – Returns a string of statistics of the size, +aspect ratio, and density of the hypergraph. +Print the string to see it formatted.

+
+
Return type:
+

string

+
+
+
+ +
+
+reports.info_dict(H, node=None, edge=None)[source]
+

Create a summary of simple statistics for H

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • obj (optional) – either a node or edge uid from the hypergraph

  • +
+
+
Returns:
+

info_dict – Returns a dictionary of statistics of the size, +aspect ratio, and density of the hypergraph.

+
+
Return type:
+

dict

+
+
+
+ +
+
+reports.s_comp_dist(H, s=1, aggregated=False, edges=True, return_singletons=True)[source]
+

Computes s-component sizes, counting nodes or edges.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • s (positive integer, default is 1) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +s-component sizes and counts in H. If aggregated is +False, returns a list of s-component sizes in H.

  • +
  • edges – If edges is True, the component size is number of edges. +If edges is False, the component size is number of nodes.

  • +
  • return_singletons (bool, optional, default=True) –

  • +
+
+
Returns:
+

s_comp_dist – List of component sizes or dictionary of component size distribution in H

+
+
Return type:
+

list or dictionary

+
+
+
+

See also

+

comp_dist

+
+
+ +
+
+reports.s_edge_diameter_dist(H)[source]
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

s_edge_diameter_dist – List of s-edge-diameters for hypergraph H starting with s=1 +and going up as long as the hypergraph is s-edge-connected

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.s_node_diameter_dist(H)[source]
+
+
Parameters:
+

H (Hypergraph) –

+
+
Returns:
+

s_node_diameter_dist – List of s-node-diameters for hypergraph H starting with s=1 +and going up as long as the hypergraph is s-node-connected

+
+
Return type:
+

list

+
+
+
+ +
+
+reports.toplex_dist(H, aggregated=False)[source]
+

Computes toplex sizes for hypergraph H.

+
+
Parameters:
+
    +
  • H (Hypergraph) –

  • +
  • aggregated – If aggregated is True, returns a dictionary of +toplex sizes and counts in H. If aggregated +is False, returns a list of toplex sizes in H.

  • +
+
+
Returns:
+

toplex_dist – List of toplex sizes or dictionary of toplex size distribution in H

+
+
Return type:
+

list or dictionary

+
+
+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/search.html b/search.html new file mode 100644 index 00000000..8e318e33 --- /dev/null +++ b/search.html @@ -0,0 +1,136 @@ + + + + + + Search — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ + + + +
+ +
+ +
+
+
+ +
+ +
+

+
+
+
+
+ + + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 00000000..e26c472e --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"docnames": ["algorithms/algorithms", "algorithms/modules", "classes/classes", "classes/modules", "core", "drawing/drawing", "drawing/modules", "glossary", "hypconstructors", "hypergraph101", "index", "install", "license", "modularity", "overview/index", "publications", "reports/modules", "reports/reports", "widget"], "filenames": ["algorithms/algorithms.rst", "algorithms/modules.rst", "classes/classes.rst", "classes/modules.rst", "core.rst", "drawing/drawing.rst", "drawing/modules.rst", "glossary.rst", "hypconstructors.rst", "hypergraph101.rst", "index.rst", "install.rst", "license.rst", "modularity.rst", "overview/index.rst", "publications.rst", "reports/modules.rst", "reports/reports.rst", "widget.rst"], "titles": ["algorithms package", "algorithms", "classes package", "classes", "HyperNetX Packages", "drawing package", "drawing", "Glossary of HNX terms", "Hypergraph Constructors", "A Gentle Introduction to Hypergraph Mathematics", "HyperNetX (HNX)", "Installing HyperNetX", "License", "Modularity and Clustering", "Overview", "Publications", "reports", "reports package", "Hypernetx-Widget"], "terms": {"gillespie_sir": [0, 1, 4], "h": [0, 2, 5, 8, 9, 11, 13, 17], "tau": 0, "gamma": 0, "transmission_funct": 0, "function": [0, 2, 5, 7, 13], "threshold": [0, 1, 4], "initial_infect": 0, "none": [0, 2, 5, 17], "initial_recov": 0, "rho": 0, "tmin": 0, "0": [0, 5, 7, 8, 9, 11, 13, 15], "tmax": 0, "inf": [0, 2], "arg": [0, 2], "sourc": [0, 2, 5, 10, 12, 17], "A": [0, 2, 5, 7, 8, 10, 11, 12, 13, 15], "continu": [0, 9, 14], "time": [0, 2, 9, 18], "sir": 0, "model": [0, 2, 9, 10, 14, 15], "similar": [0, 18], "The": [0, 2, 5, 7, 8, 9, 11, 13, 14, 18], "effect": [0, 2, 13], "heterogen": 0, "landri": [0, 14], "restrepo": 0, "http": [0, 9, 11, 13, 14, 15], "doi": [0, 9, 13, 15], 
"org": [0, 9, 13, 15], "10": [0, 9, 11, 13, 15], "1063": 0, "5": [0, 2, 5, 8, 9, 14], "0020034": 0, "implement": 0, "network": [0, 2, 7, 9, 10, 13, 14, 15], "eon": 0, "joel": 0, "c": [0, 2, 5, 11, 13, 15], "miller": 0, "epidemicsonnetwork": 0, "readthedoc": 0, "io": [0, 14], "en": [0, 2], "latest": [0, 10], "paramet": [0, 2, 5, 13, 17], "hypernetx": [0, 2, 12, 13], "object": [0, 2, 7, 8, 9, 13, 14, 17], "dictionari": [0, 2, 5, 8, 13, 14, 17], "edg": [0, 3, 4, 5, 7, 10, 11, 13, 14, 17, 18], "size": [0, 2, 3, 4, 5, 7, 13, 17, 18], "kei": [0, 2, 5, 7, 8], "must": [0, 2, 8, 9, 12], "account": [0, 14], "all": [0, 2, 5, 7, 8, 11, 14, 17, 18], "present": [0, 9], "rate": 0, "infect": 0, "each": [0, 2, 5, 7, 8, 9, 13, 17, 18], "float": [0, 2, 5], "heal": 0, "lambda": 0, "default": [0, 2, 5, 8, 13, 14, 17], "ha": [0, 2, 7, 9, 14, 18], "requir": [0, 2, 9, 13], "argument": [0, 2, 5, 8], "node": [0, 3, 4, 5, 7, 9, 11, 13, 14, 17, 18], "statu": 0, "option": [0, 2, 17], "list": [0, 2, 5, 10, 11, 12, 13, 14, 17], "numpi": [0, 2, 5, 8, 14], "arrai": [0, 2, 8, 14], "iter": [0, 2, 5, 8, 17], "initi": [0, 2, 13], "uid": [0, 2, 3, 4, 17], "an": [0, 2, 5, 7, 8, 9, 13, 14, 17, 18], "recov": [0, 9], "from": [0, 2, 5, 7, 8, 9, 10, 13, 14, 15, 17, 18], "1": [0, 2, 5, 7, 8, 9, 11, 13, 14, 15, 17], "fraction": [0, 5], "individu": [0, 2], "both": [0, 2, 7, 9, 10, 11, 14, 18], "cannot": 0, "specifi": [0, 2, 5, 18], "start": [0, 2, 5, 10, 17, 18], "simul": 0, "which": [0, 2, 5, 7, 8, 9, 13, 17, 18], "should": [0, 2, 5, 8, 10], "termin": [0, 11], "hasn": 0, "t": [0, 2, 7, 9, 13, 15], "alreadi": [0, 2, 9, 18], "return_full_data": 0, "bool": [0, 2, 5, 17], "fals": [0, 2, 5, 7, 17], "thi": [0, 2, 5, 7, 8, 9, 10, 12, 13, 14, 17, 18], "return": [0, 2, 5, 13, 17], "recoveri": 0, "event": [0, 2, 8, 12, 15], "true": [0, 2, 5, 7, 9, 17], "transmiss": 0, "allow": [0, 2, 5, 7, 8, 9, 18], "user": [0, 2, 10, 14, 18], "defin": [0, 2, 8, 9, 14], "extra": [0, 2, 13], "i": [0, 2, 5, 7, 8, 10, 11, 12, 13, 14, 17, 
18], "r": [0, 5, 13], "number": [0, 2, 5, 7, 9, 13, 17], "suscept": 0, "type": [0, 2, 5, 7, 8, 9, 17], "note": [0, 2, 7, 9, 15], "exampl": [0, 2, 5, 8, 9, 13, 14, 18], "import": [0, 2, 10, 11, 13, 14], "random": [0, 13, 15], "hnx": [0, 8, 11, 13, 14, 18], "n": [0, 2, 5, 7, 8, 9, 11], "1000": 0, "m": [0, 2, 9, 11, 13, 15], "10000": 0, "hyperedgelist": 0, "sampl": 0, "rang": [0, 2, 5, 9], "k": [0, 9, 13], "choic": 0, "2": [0, 5, 8, 9, 11, 13, 15], "3": [0, 2, 5, 8, 9, 11, 13, 14, 15], "100": 0, "gillespie_si": [0, 1, 4], "sim_kwarg": 0, "si": 0, "collective_contagion": [0, 1, 4], "collect": [0, 2, 5], "mechan": 0, "describ": [0, 9], "hashabl": [0, 2, 8], "If": [0, 2, 5, 7, 8, 9, 11, 17], "doesn": 0, "have": [0, 2, 5, 7, 8, 10, 11, 14, 18], "automat": [0, 11], "ar": [0, 2, 5, 7, 8, 9, 10, 11, 12, 13, 14, 18], "valu": [0, 2, 5, 7, 8], "status": 0, "state": [0, 2, 14, 18], "denot": 0, "id": [0, 2, 5, 8, 9], "potenti": 0, "4": [0, 2, 9, 11, 14], "contagion_anim": [0, 1, 4], "fig": [0, 9], "transition_ev": 0, "node_state_color_dict": 0, "edge_state_color_dict": 0, "node_radiu": [0, 5], "fp": 0, "anim": 0, "discret": [0, 9], "current": [0, 2, 8, 14], "onli": [0, 2, 7, 8, 9, 10, 13], "support": [0, 2, 11, 14], "circular": 0, "layout": [0, 5], "matplotlib": [0, 5], "figur": [0, 5, 9], "output": [0, 2, 9], "discrete_si": [0, 1, 4], "discrete_sir": [0, 1, 4], "color": [0, 2, 5, 8, 18], "can": [0, 2, 5, 7, 8, 10, 11, 13, 14, 18], "alpha": [0, 5], "depend": 0, "most": [0, 2, 5, 8, 9, 10, 11, 13], "common": [0, 7, 9], "off": 0, "set": [0, 2, 5, 7, 8, 9, 10, 13, 18], "radiu": [0, 5], "draw": [0, 4], "int": [0, 2, 5, 9, 15], "frame": [0, 2], "per": [0, 2], "second": [0, 2], "pyplot": 0, "plt": 0, "ipython": 0, "displai": 0, "html": [0, 11], "dt": 0, "green": 0, "red": [0, 2, 8], "blue": 0, "to_jshtml": 0, "construct": [0, 2, 7, 8, 14], "simplici": [0, 9, 10], "social": 0, "iacopini": 0, "et": [0, 9, 13, 15], "al": [0, 9, 13, 15], "1038": 0, "s41467": 0, "019": 0, "10431": 0, "6": 
[0, 2, 14], "step": [0, 13], "forward": 0, "take": [0, 2, 5, 14], "happen": [0, 9], "els": [0, 13], "individual_contagion": [0, 1, 4], "majority_vot": [0, 1, 4], "major": [0, 1, 4, 13], "vote": 0, "neighbor": [0, 2, 3, 4], "contagi": 0, "possibl": [0, 5, 9, 12, 18], "chang": [0, 5, 18], "opinion": [0, 14], "divid": 0, "equal": [0, 2, 7, 9], "choos": [0, 2, 9], "randomli": 0, "between": [0, 2, 5, 7, 8, 9, 13, 18], "abl": [0, 9], "transmit": 0, "chung_lu_hypergraph": [0, 1, 4], "k1": 0, "k2": 0, "gener": [0, 2, 5, 8, 9, 10, 13, 14, 17], "extens": [0, 9], "chung": 0, "lu": 0, "mirah": [0, 14], "shi": [0, 14], "bipartit": [0, 2, 3, 4, 5, 9, 18], "aksoi": [0, 9, 14, 15], "1093": 0, "comnet": 0, "cnx001": 0, "where": [0, 2, 5, 7, 8, 9, 10, 13], "degre": [0, 2, 3, 4, 7, 13, 17, 18], "also": [0, 2, 7, 9, 10, 13, 17, 18], "known": 0, "sum": [0, 2], "roughli": 0, "same": [0, 2, 5, 8, 13, 14], "thei": [0, 2, 5, 7, 8, 9, 10, 18], "warn": [0, 2], "still": [0, 2, 9], "run": [0, 13, 14], "static": [0, 2, 14], "dynam": [0, 14], "gm": 0, "randint": 0, "sort": [0, 2], "dcsbm_hypergraph": [0, 1, 4], "g1": 0, "g2": 0, "omega": 0, "dcsbm": 0, "larremor": 0, "1103": 0, "physrev": 0, "90": 0, "012805": 0, "group": [0, 9], "belong": [0, 2, 7, 13], "match": 0, "2d": [0, 2], "matrix": [0, 2, 7, 14, 17], "entri": [0, 2, 7, 9], "given": [0, 2, 5, 7, 8, 9, 13], "commun": [0, 13, 14], "row": [0, 2, 7, 8, 9, 17], "column": [0, 2, 5, 7, 8, 9, 14, 17], "incid": [0, 2, 7, 8, 10, 14, 17], "determin": [0, 2, 5, 9], "np": [0, 2, 8], "erdos_renyi_hypergraph": [0, 1, 4], "p": [0, 13, 15], "node_label": [0, 2, 5], "edge_label": [0, 2, 5], "erdo": 0, "renyi": 0, "creat": [0, 2, 8, 9, 17], "vertex": [0, 5, 9, 10, 13], "label": [0, 2, 3, 4, 5, 9], "hyperedg": [0, 2, 8, 9, 14], "01": 0, "purpos": [0, 12], "comput": [0, 2, 5, 9, 13, 14, 15, 17], "data": [0, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 15], "identifi": [0, 2, 8, 9, 15], "correspond": [0, 2, 7, 8, 9, 13], "interest": 0, "featur": [0, 7, 15], 
"topologi": [0, 9, 10, 15], "element": [0, 2, 3, 4, 5, 7, 8, 9], "one": [0, 2, 5, 7, 9, 13], "dimension": [0, 2, 9, 10], "cycl": [0, 5, 9], "relationship": [0, 2, 8, 9, 10, 15], "origin": [0, 2, 9], "bound": [0, 2], "togeth": [0, 5], "higher": [0, 9, 13], "order": [0, 2, 3, 4, 5, 9, 13, 15], "ideal": 0, "we": [0, 7, 9, 10, 13, 14], "want": [0, 18], "briefest": 0, "descript": 0, "minim": [0, 5, 18], "exhibit": 0, "cyclic": 0, "behavior": [0, 7], "base": [0, 2, 5, 9, 13, 15, 18], "discov": 0, "us": [0, 2, 5, 7, 8, 9, 10, 12, 14], "boundari": [0, 5], "map": [0, 2, 5, 9], "repres": [0, 2, 5, 7, 8, 9, 13, 14], "To": [0, 2, 10, 11], "abstract": [0, 9], "complex": [0, 9, 10, 13, 14, 15], "chain": 0, "c_k": 0, "z_2": 0, "addit": [0, 2, 8, 14], "rectangular": [0, 7], "over": [0, 5, 7, 9, 10], "These": [0, 9, 18], "diagon": 0, "kernel": 0, "imag": 0, "betti": [0, 1, 4], "method": [0, 2, 8, 9, 10, 11, 13, 14, 15, 17], "obtain": [0, 7, 12, 13], "snf": 0, "z": [0, 9], "2z": 0, "ferrario": 0, "work": [0, 2, 5, 7, 9, 10, 14], "www": [0, 15], "dlfer": 0, "xyz": 0, "post": 0, "2016": 0, "27": 0, "add_to_column": [0, 1, 4], "j": [0, 7, 9, 13, 15], "replac": [0, 2, 5], "logic": [0, 2], "xor": 0, "index": [0, 2, 3, 4, 7, 8, 10, 11], "being": [0, 2, 9, 14], "alter": [0, 2], "ad": [0, 2, 5, 8, 9, 14], "add_to_row": [0, 1, 4], "bd": 0, "kth": 0, "dict": [0, 2, 5, 8, 17], "dimens": [0, 2, 3, 4], "domain": [0, 15], "tupl": [0, 2, 7], "min": [0, 2, 9, 17], "max": [0, 2, 17], "inclus": [0, 2], "exist": [0, 2, 5, 7, 14], "cell": [0, 2, 8, 9, 14, 17], "betti_numb": [0, 1, 4], "asc": [0, 9], "associ": [0, 2, 8, 12], "bkmatrix": [0, 1, 4], "km1basi": 0, "kbasi": 0, "c_": 0, "basi": 0, "respect": [0, 2, 10], "bk": 0, "store": [0, 2, 7, 8, 14], "boolean": [0, 2, 9], "boundary_group": [0, 1, 4], "image_basi": 0, "csr_matrix": [0, 2], "mathbb": 0, "_2": 0, "ndarrai": [0, 2, 8], "scipi": [0, 2], "spars": [0, 2, 15], "chain_complex": [0, 1, 4], "length": [0, 2, 5, 7, 8, 10], "integ": [0, 2, 5, 7, 17], 
"greater": [0, 2], "than": [0, 2, 12, 17], "indic": [0, 2, 3, 4, 9, 13], "eg": 0, "homology_basi": [0, 1, 4], "kwarg": [0, 2, 5], "h_k": 0, "partial_k": 0, "krang": 0, "posit": [0, 2, 5, 7, 13, 17, 18], "avail": [0, 2, 14, 18], "need": [0, 2, 5, 8, 9, 13], "shortest": [0, 2, 7, 9], "dim": [0, 2, 3, 4], "been": [0, 2, 14], "provid": [0, 2, 5, 7, 8, 12, 13, 14], "im": 0, "hypergraph_homology_basi": [0, 1, 4], "interpret": [0, 1, 4], "mod": 0, "look": 0, "coset": 0, "good": [0, 12, 13], "rel": [0, 18], "small": [0, 5], "explicit": [0, 2], "term": [0, 2], "vector": 0, "interpreted_basi": 0, "kchain": 0, "ck": 0, "arr": [0, 2, 8], "referenc": [0, 2, 8], "kchainbasi": [0, 1, 4], "toplex": [0, 2, 3, 4, 7, 17], "best": [0, 15], "simpl": [0, 2, 7, 9, 13, 17], "berg": 0, "e": [0, 2, 5, 7, 8, 9, 11, 13, 15, 17, 18], "contain": [0, 2, 5, 7, 8, 17, 18], "anoth": [0, 5, 7, 9, 13], "duplic": [0, 2, 9], "sortabl": 0, "logical_dot": [0, 1, 4], "ar1": 0, "ar2": 0, "equival": [0, 2, 9], "dot": 0, "product": [0, 7, 14], "two": [0, 2, 5, 7, 8, 9, 10, 18], "d": [0, 2, 8, 9, 10, 11, 13, 15], "rais": [0, 2], "hypernetxerror": [0, 2], "error": [0, 2], "logical_matadd": [0, 1, 4], "mat1": 0, "mat2": 0, "binari": [0, 9, 12], "mat": 0, "logical_matmul": [0, 1, 4], "multipl": [0, 2, 8, 18], "inner": 0, "matmulreduc": [0, 1, 4], "revers": [0, 2, 18], "recurs": 0, "appli": [0, 2, 5], "For": [0, 2, 5, 7, 8, 9, 10, 14, 18], "nxm": 0, "multipli": 0, "reduced_row_echelon_form_mod2": [0, 1, 4], "invert": 0, "transform": [0, 2], "reduc": [0, 5], "echelon": 0, "modulo": 0, "l": [0, 13], "linv": 0, "lm": 0, "smith_normal_form_mod2": [0, 1, 4], "track": [0, 17], "print": [0, 17], "out": [0, 5, 9, 12], "lmr": 0, "mxn": 0, "equat": 0, "i_m": 0, "i_n": 0, "ident": [0, 2, 5, 7, 18], "repeatedli": 0, "action": 0, "left": [0, 5, 9], "right": [0, 5, 9, 14], "side": 0, "its": [0, 2, 5, 7, 8, 9, 13, 14, 18], "invers": 0, "final": [0, 9], "verifi": 0, "llinv": 0, "swap_column": [0, 1, 4], "swap": [0, 9], "ith": 0, 
"jth": 0, "new": [0, 2, 5, 8, 9, 13, 15], "copi": [0, 12], "swap_row": [0, 1, 4], "modular": [0, 1, 4, 9, 10], "adapt": 0, "f": [0, 7, 9, 11, 13, 15], "th\u00e9berg": [0, 13], "github": [0, 11, 14, 18], "repositori": [0, 10, 14], "see": [0, 2, 5, 7, 9, 10, 13, 14, 17], "tutori": [0, 10, 11], "13": [0, 2, 8], "folder": [0, 14], "librari": [0, 2, 7, 8, 10, 14], "usag": [0, 2], "refer": [0, 2, 8, 9, 14], "kumar": [0, 1, 4, 13], "vaidyanathan": [0, 13], "ananthapadmanabhan": [0, 13], "parthasarathi": [0, 13], "ravindran": [0, 13], "b": [0, 2, 5, 9, 11, 13, 15], "theoret": [0, 13], "insight": [0, 13], "implic": [0, 13], "In": [0, 2, 5, 7, 9, 13], "cherifi": [0, 13], "gaito": [0, 13], "mend": [0, 13], "moro": [0, 13], "rocha": [0, 13], "ed": [0, 9, 13, 15], "Their": [0, 13], "applic": [0, 2, 13, 15], "viii": [0, 13], "2019": [0, 13], "studi": [0, 10, 13, 14], "intellig": [0, 13], "vol": [0, 13], "881": [0, 13], "springer": [0, 9, 13, 15], "cham": [0, 13], "1007": [0, 9, 13, 15], "978": [0, 9, 13, 15], "030": [0, 9, 13, 15], "36687": [0, 13], "2_24": [0, 13], "kami\u0144ski": [0, 13], "pra\u0142at": [0, 13], "detect": [0, 13, 15], "benito": [0, 13], "sale": [0, 13], "pardo": [0, 13], "ix": [0, 13], "2020": [0, 2, 8, 9, 13, 15], "943": [0, 13], "65347": [0, 13], "7_13": [0, 13], "poulin": 0, "v": [0, 2, 5, 8, 9, 15], "szufel": 0, "via": [0, 9, 13, 15], "plo": 0, "ONE": 0, "1371": 0, "journal": 0, "pone": 0, "0224307": 0, "dict2part": [0, 1, 4, 13], "part": [0, 5, 13, 14], "partit": [0, 2, 13], "part2dict": [0, 1, 4, 13], "vertic": [0, 2, 5, 8, 9, 13], "hg": [0, 13], "delta": 0, "converg": 0, "stop": 0, "criterion": 0, "last_step": [0, 1, 4, 13], "wdc": [0, 13], "linear": [0, 1, 4, 13], "some": [0, 2, 8, 9, 13], "last": [0, 2, 13], "veri": [0, 9], "tri": 0, "move": [0, 2, 9, 13, 14], "improv": [0, 13, 18], "It": [0, 5, 9], "non": [0, 15], "trivial": 0, "graph": [0, 2, 5, 10, 14, 15, 18], "section": [0, 9], "func": 0, "hyperparamet": 0, "class": [0, 4, 7, 14], "rule": 0, 
"precomput": 0, "attribut": [0, 2, 9, 14], "precompute_attribut": [0, 1, 4, 13], "ani": [0, 2, 7, 9, 11, 12, 14, 18], "format": [0, 2, 8, 17], "w": [0, 2, 8, 9, 13], "when": [0, 2, 8, 9], "otherwis": [0, 2, 7, 12, 14], "other": [0, 2, 5, 7, 9, 12], "suppli": [0, 5, 13], "strict": [0, 1, 4, 12, 13], "faster": 0, "befor": [0, 2, 11], "call": [0, 5, 7, 9, 13], "either": [0, 2, 7, 9, 13, 14, 17], "unweight": [0, 2, 7], "weight": [0, 7, 9, 13, 14], "strength": [0, 9, 13], "total": [0, 2, 9, 13], "weigth": 0, "cardin": [0, 13], "d_weight": 0, "binomi": [0, 13], "coeffici": [0, 13], "speed": [0, 14], "up": [0, 9, 17], "bin_coef": 0, "isol": [0, 9], "found": [0, 2, 14], "drop": [0, 2], "two_sect": [0, 1, 4, 13], "walk": [0, 2, 7, 10, 15], "igraph": [0, 13], "built": [0, 18], "contruct": 0, "util": [0, 4, 6, 13], "pair": [0, 2, 5, 7, 8], "That": [0, 7, 9], "serv": [0, 10], "input": [0, 9, 14], "spectral": [0, 5], "well": [0, 5, 9, 18], "detail": [0, 2, 9, 13, 18], "methodologi": 0, "hayashi": [0, 15], "park": [0, 15], "proceed": 0, "29th": [0, 15], "acm": [0, 15], "intern": [0, 15], "confer": [0, 15], "inform": [0, 2, 9, 10, 14, 15, 17], "knowledg": [0, 15], "manag": [0, 14, 15], "1145": [0, 15], "3340531": [0, 15], "3412034": [0, 15], "pleas": 0, "direct": [0, 5, 9, 12, 13, 18], "inquiri": 0, "concern": [0, 9], "sinan": [0, 9, 14, 15], "pnnl": [0, 10, 11, 14], "gov": [0, 10, 14], "get_pi": [0, 1, 4], "eigenvector": 0, "largest": [0, 2], "eigenvalu": 0, "magnitud": 0, "so": [0, 2, 5, 9, 12, 13], "intend": [0, 2, 5, 14], "connect": [0, 2, 5, 7, 9, 10, 17], "case": [0, 2, 9, 14], "stationari": 0, "distribut": [0, 12, 14, 15, 17], "csr": [0, 2], "pi": 0, "norm_lap": [0, 1, 4], "symmetr": 0, "digraph": [0, 5], "fan": [0, 15], "cheeger": 0, "inequ": 0, "annal": 0, "combinator": 0, "9": [0, 9, 11, 15], "2005": 0, "19": 0, "context": [0, 2, 9], "g": [0, 2, 5, 9, 11, 13, 15, 17], "cikm": [0, 15], "495": [0, 15], "504": [0, 15], "mean": [0, 2, 7, 17], "path": [0, 2, 5, 10], "link": 
[0, 2, 13, 18], "cell_weight": [0, 2, 3, 4, 8], "uniform": [0, 9, 15], "whether": [0, 2, 12], "prob_tran": [0, 1, 4], "check_connect": 0, "At": 0, "next": 0, "chosen": [0, 2, 5], "select": 0, "proport": 0, "within": [0, 2, 5, 8, 9, 10, 13, 18], "spec_clu": [0, 1, 4], "existing_lap": 0, "disjoint": [0, 2, 8, 9], "rdc": 0, "spec": 0, "metric": [0, 10, 14], "compon": [0, 2, 3, 4, 5, 7, 17], "accomplish": 0, "adjac": [0, 2, 7, 10], "represent": [0, 2, 5, 9, 13], "essenc": 0, "line": [0, 2, 5, 9, 15], "our": [0, 9], "discuss": 0, "depth": 0, "joslyn": [0, 9, 14, 15], "ortiz": 0, "marrero": [0, 9, 15], "hypernetwork": [0, 15], "scienc": [0, 15], "high": [0, 9, 14, 15], "epj": [0, 9, 15], "sci": 0, "16": [0, 9, 15], "1140": [0, 9, 15], "epjd": [0, 9, 15], "s13688": [0, 9, 15], "020": [0, 9, 15], "00231": [0, 9, 15], "s_betweenness_centr": [0, 1, 4], "return_singleton": [0, 2, 17], "subgraph": [0, 2], "linegraph": [0, 2, 7, 14], "ratio": [0, 17], "pass": [0, 2, 5, 8], "through": [0, 2, 5, 8, 9], "sigma": 0, "those": [0, 9, 14], "c_b": 0, "sum_": 0, "neq": [0, 9], "frac": 0, "connected": 0, "ignor": [0, 2], "singleton": [0, 2, 3, 4, 9], "s_closeness_centr": [0, 1, 4], "reciproc": 0, "distanc": [0, 2, 3, 4, 5, 7, 9], "minu": [0, 2], "u": [0, 5, 9], "str": [0, 2], "close": [0, 9], "singl": [0, 2, 8, 9, 11, 17], "s_eccentr": [0, 1, 4], "longest": [0, 2], "everi": [0, 2, 7, 9, 18], "text": [0, 5], "ecc": 0, "eccentr": 0, "disconnect": [0, 5], "s_harmonic_centr": [0, 1, 4], "intersect": [0, 2, 5, 7], "less": [0, 2], "denorm": 0, "harmon": 0, "becom": [0, 2, 9], "cdotfrac": 0, "s_harmonic_closeness_centr": [0, 1, 4], "packag": [1, 3, 6, 10, 16], "submodul": [1, 3, 4, 6, 13, 16], "contagion": [1, 4], "modul": [1, 3, 4, 6, 10, 13, 16], "generative_model": [1, 4], "homology_mod2": [1, 4], "homologi": [1, 4, 9, 10, 14], "smith": [1, 4, 15], "normal": [1, 4], "form": [1, 2, 4, 9, 12], "mod2": [1, 4, 14], "hypergraph_modular": [1, 4, 13], "laplacians_clust": [1, 4], "hypergraph": [1, 
3, 4, 5, 7, 11, 13, 14, 15, 17, 18], "probabl": [1, 4], "transit": [1, 4], "matric": [1, 4, 5, 14], "laplacian": [1, 4, 15], "cluster": [1, 4, 10, 15], "s_centrality_measur": [1, 4], "": [1, 2, 4, 5, 13, 15, 17], "central": [1, 4, 9, 14, 17], "measur": [1, 4, 9, 13], "content": [1, 3, 4, 6, 8, 16], "datafram": [2, 3, 4, 8, 14], "data_col": 2, "sequenc": [2, 7, 9], "ordereddict": 2, "weight_col": 2, "aggregatebi": 2, "misc_props_col": 2, "level_col": 2, "level": [2, 3, 4, 5, 8, 9], "id_col": 2, "handl": [2, 5], "build": [2, 13], "like": [2, 5, 9, 10], "panda": [2, 8, 14], "tabl": [2, 18], "system": [2, 5, 9, 10, 15], "todo": 2, "test": 2, "compat": [2, 14], "updat": [2, 10], "doc": [2, 11], "x": [2, 5, 7, 8, 9, 14, 15, 17], "tensor": 2, "nonzero": [2, 7], "mai": [2, 8, 9, 11, 12, 14, 18], "state_dict": [2, 14], "never": [2, 9], "clear": [2, 9], "remov": [2, 3, 4, 14, 18], "uniqu": [2, 10], "name": [2, 8, 12, 13, 14, 15, 18], "assum": [2, 14], "count": [2, 5, 9, 17], "median": [2, 5, 17], "first": [2, 5, 8, 13], "aggreg": [2, 17], "without": [2, 12, 18], "doubli": 2, "nest": [2, 14], "assign": [2, 5, 7, 8], "item": [2, 5, 17], "explan": 2, "give": [2, 7, 18], "doe": [2, 5, 9, 14], "matter": 2, "miscellan": 2, "you": [2, 5, 7, 9, 10, 11, 18], "misc": 2, "appear": [2, 18], "occurr": 2, "popul": 2, "empti": [2, 3, 4, 9], "consid": [2, 9, 13], "add": [2, 3, 4], "underli": [2, 9, 14], "variabl": [2, 8], "self": [2, 9], "directli": [2, 8, 14, 18], "caus": [2, 12, 18], "add_edg": 2, "add_node_to_edg": 2, "instead": [2, 5, 8], "add_el": [2, 3, 4], "add_elements_from": [2, 3, 4], "arg_set": 2, "deprec": 2, "assign_properti": [2, 3, 4], "prop": 2, "misc_col": 2, "document": [2, 12, 14, 15], "_level_col": 2, "_id_col": 2, "_misc_props_col": 2, "children": [2, 3, 4], "frozenset": 2, "uidset": [2, 3, 4], "uidset_by_level": [2, 3, 4], "uidset_by_column": [2, 3, 4], "thought": 2, "encod": [2, 3, 4], "translat": [2, 3, 4], "rtype": 2, "distinct": [2, 8, 9, 14], "exclud": 2, 
"dimsiz": [2, 3, 4], "attrlist": [2, 3, 4], "incidence_dict": [2, 3, 4], "membership": [2, 3, 4, 5, 18], "dual": [2, 3, 4, 7], "elements_by_level": [2, 3, 4], "elements_by_column": [2, 3, 4], "col1": [2, 8], "col2": [2, 8], "access": [2, 8, 14], "whose": [2, 5, 7], "level1": 2, "level2": 2, "is_empti": [2, 3, 4], "check": 2, "get_properti": [2, 3, 4], "get": [2, 10], "prop_val": 2, "keyerror": 2, "userwarn": 2, "closest": 2, "set_properti": [2, 3, 4], "prop_nam": 2, "incidence_matrix": [2, 3, 4], "word": 2, "column1": 2, "column2": 2, "fill": [2, 17], "how": [2, 9, 11], "least": [2, 5, 7], "onc": 2, "Not": 2, "compress": 2, "think": 2, "find": [2, 5, 13], "search": [2, 10], "more": [2, 7, 9, 10, 13], "isstat": [2, 3, 4], "treat": [2, 9], "min_level": 2, "max_level": 2, "return_index": 2, "remove_el": [2, 3, 4], "accept": [2, 5, 8, 14], "remove_elements_from": [2, 3, 4], "restrict_to_indic": [2, 3, 4], "restrict": [2, 7, 9, 14], "specif": [2, 8, 14], "constructor": [2, 5, 10, 14], "restrict_to_level": [2, 3, 4], "subset": [2, 5, 7, 9], "invalid": 2, "valueerror": 2, "translate_arr": [2, 3, 4], "full": 2, "across": [2, 5], "coord": 2, "particular": [2, 8, 9, 12], "pd": 2, "cell_properti": [2, 3, 4, 8], "misc_cell_props_col": 2, "keep_weight": 2, "preserv": [2, 18], "keyword": [2, 5, 8], "etc": [2, 9], "jointli": 2, "column_1": 2, "column_2": 2, "column_3": 2, "column_n": 2, "sinc": [2, 9, 10, 13], "convert": [2, 5], "By": [2, 5], "explicitli": 2, "do": [2, 10, 12, 14], "dedic": 2, "after": 2, "pre": [2, 5, 13], "assign_cell_properti": [2, 3, 4], "cell_prop": 2, "result": [2, 5, 9, 18], "attributeerror": 2, "attr": 2, "seri": 2, "collapse_identical_el": [2, 3, 4], "return_equivalence_class": 2, "collaps": [2, 5, 18], "new_ent": 2, "equivalence_class": 2, "get_cell_properti": [2, 3, 4], "item1": 2, "item2": 2, "differ": [2, 7, 8, 9, 10, 13], "set_cell_properti": [2, 3, 4], "extend": [2, 18], "restrict_to": [2, 3, 4], "alia": 2, "array_lik": 2, "keep_membership": 2, 
"discard": 2, "initlist": 2, "userlist": 2, "custom": [2, 5], "wrapper": 2, "integr": 2, "storag": 2, "assign_weight": [2, 3, 4], "df": [2, 8], "hold": [2, 18], "insid": [2, 7, 13], "create_properti": [2, 3, 4], "abc": 2, "index_col": 2, "multiindex": 2, "dict_depth": [2, 3, 4], "dic": 2, "merge_nested_dict": [2, 3, 4], "merg": [2, 12], "remove_row_dupl": [2, 3, 4], "groupbi": 2, "perform": [2, 14, 15, 18], "valid": 2, "edge_col": [2, 8], "node_col": [2, 8], "cell_weight_col": [2, 8], "misc_cell_properties_col": 2, "edge_properti": [2, 8], "node_properti": [2, 8], "misc_properties_col": 2, "edge_weight_prop_col": 2, "node_weight_prop_col": 2, "weight_prop_col": 2, "default_edge_weight": [2, 8], "default_node_weight": [2, 8], "default_weight": 2, "abov": [2, 5, 8, 9, 12, 17], "hyper": [2, 5, 8, 9, 13, 15, 18], "Will": 2, "edgeid": 2, "nodeid": 2, "sequenti": 2, "miss": [2, 5], "misc_cell_properti": [2, 8], "unless": 2, "agg": 2, "syntax": 2, "concaten": 2, "union": [2, 9], "misc_properti": 2, "dtype": 2, "edge_weight_prop": [2, 8], "node_weight_prop": [2, 8], "weight_prop": [2, 8], "undefin": 2, "multi": [2, 8, 9, 10], "distinguish": [2, 7, 8], "e1": [2, 8], "e2": [2, 8], "e3": [2, 8], "yet": [2, 8], "version": [2, 9, 11], "easili": [2, 8, 14], "metadata": [2, 8, 14], "fundament": [2, 8], "mani": [2, 7, 8, 17], "separ": [2, 8], "There": [2, 8, 13], "five": [2, 8], "barebon": [2, 8], "basic": [2, 8, 9, 10, 14, 17], "wai": [2, 5, 8, 9, 10, 12, 13], "express": [2, 8, 12, 14], "particularli": [2, 8], "related_to": [2, 8], "startdat": [2, 8], "05": [2, 8], "52": [2, 8], "owned_bi": [2, 8], "owner_of": [2, 8], "larg": [2, 8], "dataset": [2, 8], "effici": [2, 7, 8, 13, 15], "share": [2, 7, 8, 10], "place": [2, 8, 11], "own": [2, 8, 14], "col3": [2, 8], "homogen": [2, 8], "column_nam": [2, 8], "among": 2, "combin": 2, "003": [2, 8], "fido": [2, 8], "brown": [2, 8], "dp": [2, 8], "id1": [2, 8], "prop1": [2, 8], "val1": [2, 8], "prop2": [2, 8], "val2": [2, 8], "id2": [2, 8], 
"prescrib": [2, 8], "adjacency_matrix": [2, 3, 4], "remove_empty_row": 2, "node_index": 2, "auxiliary_matrix": [2, 3, 4], "auxiliari": [2, 7], "userid": 2, "networkx": [2, 5, 14], "nx": [2, 5], "collapse_edg": [2, 3, 4], "use_rep": 2, "return_count": 2, "gotten": 2, "frozen": 2, "follow": [2, 5, 9, 11, 12, 13, 14], "colon": 2, "relat": [2, 9], "collapse_nod": [2, 3, 4], "longer": [2, 14], "rep": 2, "member": 2, "fix": 2, "collapse_nodes_and_edg": [2, 3, 4], "component_subgraph": [2, 3, 4], "s_components_subgraph": 2, "s_component_subgraph": [2, 3, 4], "s_connected_compon": [2, 3, 4], "connected_component_subgraph": [2, 3, 4], "connected_compon": [2, 3, 4], "max_siz": 2, "smallest": [2, 9], "diamet": [2, 3, 4, 7, 17], "v_start": 2, "v_end": 2, "v_1": [2, 9], "v_2": [2, 9], "v_n": 2, "consecut": 2, "target": 2, "edge_dist": [2, 3, 4], "pairwis": [2, 9], "shortest_path_length": 2, "switch_nam": 2, "role": [2, 7], "edge_adjacency_matrix": [2, 3, 4], "edge_index": 2, "remove_zero": 2, "auxillari": 2, "edge_diamet": [2, 3, 4], "e_start": 2, "e_end": 2, "e_1": [2, 9], "e_2": 2, "e_n": [2, 9], "s_edge_connect": 2, "maximum": [2, 7], "xx": 2, "string": [2, 5, 9, 17], "edge_adjac": 2, "edge_neighbor": [2, 3, 4], "minimum": [2, 5], "edge_prop": [2, 3, 4], "edge_size_dist": [2, 3, 4, 16, 17], "_edg": 2, "classmethod": 2, "from_bipartit": [2, 3, 4], "set_nam": 2, "categori": 2, "add_nodes_from": 2, "add_edges_from": 2, "_": 2, "from_incidence_datafram": [2, 3, 4], "fillna": 2, "transpos": [2, 7, 9], "return_only_datafram": 2, "Its": 2, "real": [2, 9], "header": 2, "zero": [2, 9, 13], "ex": 2, "ab": [2, 15], "prior": 2, "entir": [2, 18], "incidence_datafram": [2, 3, 4], "from_numpy_arrai": [2, 3, 4], "from_incidence_matrix": [2, 3, 4], "node_nam": 2, "edge_nam": 2, "dimensionsl": 2, "truthi": 2, "shape": [2, 3, 4, 11], "prepend": 2, "evalu": [2, 7], "get_linegraph": [2, 3, 4], "width": [2, 7, 10], "enter": 2, "sort_row": 2, "sort_column": 2, "includ": [2, 7, 9, 10, 12], 
"row_index": 2, "col_index": 2, "is_connect": [2, 3, 4], "v0": 2, "vn": 2, "v1": 2, "v2": 2, "e0": 2, "node_diamet": [2, 3, 4], "node_prop": [2, 3, 4], "_node": 2, "number_of_edg": [2, 3, 4], "edgeset": 2, "number_of_nod": [2, 3, 4], "nodeset": 2, "inter": 2, "what": [2, 9, 10], "remove_edg": [2, 3, 4], "remove_nod": [2, 3, 4], "remove_singleton": [2, 3, 4], "clone": [2, 11], "restrict_to_edg": [2, 3, 4], "restrict_to_nod": [2, 3, 4], "induc": [2, 7], "s_connect": 2, "desir": 2, "yield": [2, 13], "s_compon": [2, 3, 4], "end": [2, 9], "set_stat": [2, 3, 4], "outsid": [2, 11], "caution": 2, "save": 2, "entiti": [3, 4, 5, 7, 9, 12, 14], "properti": [3, 4, 7, 9, 14, 18], "entityset": [3, 4, 14], "helper": [3, 4, 5], "algorithm": [4, 5, 10, 15, 18], "rubber_band": [4, 6], "draw_hyper_edge_label": [4, 5, 6], "draw_hyper_edg": [4, 5, 6], "draw_hyper_label": [4, 5, 6], "draw_hyper_nod": [4, 5, 6], "get_default_radiu": [4, 5, 6], "layout_hyper_edg": [4, 5, 6], "layout_node_link": [4, 5, 6], "two_column": [4, 6], "layout_two_column": [4, 5, 6], "get_collapsed_s": [4, 5, 6], "get_frozenset_label": [4, 5, 6], "get_line_graph": [4, 5, 6], "get_set_lay": [4, 5, 6], "inflat": [4, 5, 6], "inflate_kwarg": [4, 5, 6], "transpose_inflated_kwarg": [4, 5, 6], "draw_two_column": [4, 5, 6], "report": 4, "descriptive_stat": [4, 16], "centrality_stat": [4, 16, 17], "comp_dist": [4, 16, 17], "degree_dist": [4, 16, 17], "dist_stat": [4, 16, 17], "info": [4, 16, 17], "info_dict": [4, 16, 17], "s_comp_dist": [4, 16, 17], "s_edge_diameter_dist": [4, 16, 17], "s_node_diameter_dist": [4, 16, 17], "toplex_dist": [4, 16, 17], "po": 5, "with_color": 5, "with_node_count": 5, "with_edge_count": 5, "spring_layout": 5, "layout_kwarg": 5, "ax": 5, "edges_kwarg": 5, "nodes_kwarg": 5, "edge_labels_kwarg": 5, "node_labels_kwarg": 5, "with_edge_label": 5, "with_node_label": 5, "label_alpha": 5, "35": 5, "return_po": 5, "rubber": 5, "band": 5, "convex": 5, "hull": 5, "drawn": 5, "around": [5, 7, 10], 
"conveni": 5, "wrap": 5, "sensibl": 5, "lower": 5, "y": [5, 11], "coordin": 5, "manual": 5, "locat": [5, 18], "center": [5, 7], "axi": 5, "render": 5, "approach": 5, "guarante": 5, "rigor": 5, "correct": 5, "overlap": 5, "impli": [5, 9, 12, 14], "sometim": [5, 9, 18], "arbitrari": [5, 9], "planar": 5, "disabl": 5, "plot": 5, "polycollect": 5, "calcul": [5, 9], "reason": 5, "annot": 5, "argumetn": 5, "make": [5, 10, 11, 13, 14], "invis": 5, "transpar": [5, 10], "box": 5, "behind": 5, "poli": 5, "curvi": 5, "polygon": 5, "align": 5, "parallel": [5, 15], "orient": [5, 10], "dr": 5, "space": [5, 9], "concentr": 5, "ring": 5, "linewidth": 5, "facecolor": 5, "further": [5, 9], "style": 5, "offset": 5, "appropri": 5, "r0": 5, "circl": [5, 18], "xy": 5, "distant": 5, "point": [5, 9], "Then": 5, "recommend": [5, 11, 14], "surround": 5, "amount": 5, "nx2": 5, "netwrokx": 5, "usual": 5, "techniqu": 5, "edge_kwarg": 5, "collumn": 5, "reproduc": [5, 12], "illustr": 5, "typic": [5, 9, 13], "paper": 5, "textbook": 5, "reserv": 5, "optim": [5, 14, 18], "cross": 5, "adjust": 5, "diagram": [5, 9, 18], "easier": 5, "read": 5, "angl": 5, "linecollect": 5, "disonnecct": 5, "independ": [5, 18], "stack": 5, "quick": 5, "dirti": 5, "whitespac": 5, "overrid": 5, "possibli": 5, "layer": 5, "smaller": 5, "expand": [5, 18], "idea": [7, 10], "few": 7, "definit": [7, 13], "switch": [7, 13], "precis": [7, 9], "aka": 7, "py": 7, "structur": [7, 9, 10, 14], "help": [7, 10, 18], "insur": 7, "safe": 7, "primarili": 7, "technic": 7, "mathemat": [7, 10, 15], "cartesian": 7, "multihypergraph": [7, 9], "abil": [7, 9, 14], "code": [7, 10, 12, 13, 14], "would": [7, 9, 14], "complet": [7, 14, 18], "subhypergraph": 7, "isn": 7, "properli": 7, "ever": 7, "offer": [7, 14], "varieti": [7, 9], "tool": [7, 10], "analysi": [7, 9, 14, 15], "squar": [7, 11], "submatrix": 7, "success": 7, "maxim": 7, "sens": 7, "satisfi": 7, "them": [7, 9, 17, 18], "infinit": 7, "accord": [7, 13], "leas": 7, "here": [9, 18], 
"gentli": 9, "introduc": 9, "concept": 9, "maintain": [9, 14], "mostli": 9, "avoid": [9, 14], "legitim": 9, "issu": 9, "proper": 9, "foundat": 9, "complic": 9, "rather": [9, 17], "focus": 9, "loopless": 9, "finit": 9, "lack": 9, "addition": 9, "deep": 9, "critic": [9, 15], "partial": 9, "topolog": [9, 15], "elsewher": 9, "below": 9, "attend": 9, "weren": 9, "reader": 9, "comprehens": 9, "treatment": 9, "langl": 9, "rangl": 9, "formal": 9, "a_": 9, "v_i": 9, "v_j": 9, "i_": 9, "now": [9, 14], "e_j": 9, "andrew": [9, 15], "bailei": 9, "carter": 9, "davi": 9, "similarli": 9, "shown": 9, "numer": 9, "notic": [9, 12, 15], "necessarili": [9, 14], "reflect": [9, 14], "exactli": 9, "relax": 9, "although": 9, "sai": 9, "ldot": 9, "v_k": 9, "subseteq": 9, "therebi": 9, "while": [9, 13, 14, 18], "v_3": 9, "v_4": 9, "much": 9, "thu": 9, "re": [9, 13, 18], "show": [9, 18], "fact": 9, "record": 9, "recogn": 9, "alwai": 9, "frequent": 9, "simplifi": 9, "almost": 9, "enough": 9, "special": [9, 12], "realli": 9, "stand": 9, "especi": 9, "convers": 9, "disappear": 9, "just": 9, "primal": 9, "theori": [9, 12, 14], "therefor": 9, "As": [9, 10], "ve": 9, "seen": 9, "wherea": 9, "hypergarph": 9, "three": [9, 13], "cap": 9, "emptyset": 9, "let": [9, 13], "But": [9, 10], "aspect": [9, 17], "understand": 9, "condit": [9, 12], "e_0": 9, "e_i": 9, "e_": 9, "le": [9, 15], "raison": 9, "\u00eatre": 9, "establish": 9, "travel": 9, "natur": [9, 10], "s_i": 9, "vari": 9, "wide": 9, "talk": 9, "min_": 9, "math": 9, "edgewis": 9, "recal": 9, "brief": 9, "mention": 9, "advanc": [9, 18], "domin": 9, "forc": [9, 18], "analyt": [9, 14, 15], "recent": [9, 10], "year": [9, 10], "reachabl": 9, "goe": 9, "wider": 9, "tend": 9, "grow": [9, 10, 14], "ultim": 9, "perhap": 9, "deepli": 9, "essenti": 9, "obvious": 9, "sqcup": 9, "go": [9, 17], "evid": 9, "oper": [9, 14], "carri": 9, "unambigu": 9, "even": [9, 12], "taken": 9, "characterist": 9, "somewhat": 9, "know": 9, "hierarch": 9, "lattic": 9, "vice": 9, 
"versa": 9, "refex": 9, "interact": [9, 10, 18], "attach": 9, "kind": 9, "under": [9, 14], "down": [9, 18], "reveal": 9, "hidden": [9, 18], "workhors": 9, "persist": [9, 14], "perfect": [9, 18], "bridg": 9, "actual": 9, "sub": 9, "loop": 9, "itself": 9, "principl": 9, "could": 9, "appearnac": 9, "cliff": [9, 14, 15], "callahan": [9, 15], "tiffani": [9, 15], "hunter": [9, 15], "jefferson": [9, 15], "brett": [9, 15], "praggasti": [9, 14, 15], "brenda": [9, 14, 15], "purvin": [9, 14, 15], "emili": [9, 14, 15], "ah": [9, 15], "tripodi": [9, 15], "ignacio": [9, 15], "2021": [9, 15], "multidimension": [9, 15], "unifi": [9, 15], "theme": [9, 15], "proc": [9, 15], "10th": [9, 15], "conf": [9, 15], "braha": [9, 15], "pp": [9, 15], "377": [9, 15], "392": [9, 15], "67318": [9, 15], "5_25": [9, 15], "carlo": [9, 15], "o": [9, 15], "ganter": 9, "bernhard": 9, "Wille": 9, "rudolf": 9, "1999": 9, "verlag": 9, "python": [10, 11], "try": 10, "colab": 10, "primer": 10, "gentl": 10, "introduct": 10, "cut": 10, "research": [10, 14], "public": 10, "captur": 10, "about": 10, "ordinari": 10, "proven": 10, "explor": 10, "tell": 10, "quantiti": [10, 13], "becaus": 10, "multiwai": 10, "admit": 10, "power": 10, "algebra": 10, "contributor": [10, 12, 14], "softwar": [10, 12, 14], "learn": [10, 15], "develop": [10, 14], "team": [10, 14], "overview": 10, "love": 10, "hear": 10, "guid": 10, "dai": 10, "decis": 10, "open": [10, 11], "project": [10, 14], "except": 10, "trust": 10, "collabor": 10, "core": [10, 14], "believ": 10, "live": 10, "breath": 10, "welcom": 10, "particip": 10, "world": 10, "experi": 10, "perspect": 10, "great": 10, "conduct": 10, "question": [10, 14], "comment": [10, 14], "home": 10, "instal": [10, 14], "glossari": 10, "visual": [10, 14, 18], "widget": 10, "licens": 10, "page": 10, "fork": 11, "offici": 11, "8": 11, "11": 11, "conda": 11, "env": 11, "activ": [11, 18], "bin": 11, "On": 11, "powershel": 11, "command": [11, 18], "prompt": 11, "your": [11, 14], "script": 11, 
"deactiv": 11, "regardless": 11, "ensur": 11, "pip": [11, 14, 18], "git": 11, "com": 11, "cd": [11, 15], "zsh": 11, "shell": 11, "quotat": 11, "mark": [11, 14], "bracket": 11, "pytest": 11, "root": 11, "directori": 11, "browser": [11, 14], "local": 11, "copyright": 12, "2023": [12, 15], "battel": [12, 14], "memori": [12, 14], "institut": [12, 14], "hereinaft": 12, "herebi": 12, "grant": 12, "permiss": 12, "person": 12, "lawfulli": 12, "file": 12, "redistribut": 12, "modif": 12, "Such": 12, "modifi": 12, "publish": 12, "sublicens": 12, "sell": 12, "permit": 12, "subject": 12, "retain": 12, "disclaim": 12, "herein": [12, 14], "neither": [12, 14], "whatsoev": 12, "written": 12, "consent": 12, "BY": 12, "THE": 12, "holder": 12, "AND": 12, "AS": 12, "OR": 12, "warranti": [12, 14], "BUT": 12, "NOT": 12, "limit": 12, "TO": 12, "OF": [12, 14], "merchant": 12, "fit": 12, "FOR": 12, "IN": 12, "NO": 12, "shall": 12, "BE": 12, "liabl": 12, "indirect": 12, "incident": 12, "exemplari": 12, "consequenti": 12, "damag": 12, "procur": 12, "substitut": 12, "servic": [12, 14], "loss": 12, "profit": 12, "busi": [12, 15], "interrupt": 12, "howev": 12, "ON": 12, "liabil": [12, 14], "contract": [12, 14], "tort": 12, "neglig": 12, "aris": 12, "IF": 12, "advis": 12, "SUCH": 12, "better": 13, "dens": 13, "refin": 13, "hmod": 13, "qualiti": 13, "neg": 13, "presenc": 13, "sever": 13, "variat": 13, "main": [13, 18], "li": 13, "q": 13, "increas": 13, "With": 13, "control": [13, 18], "louvain": 13, "ecg": 13, "consensu": 13, "One": 13, "hybrid": 13, "propos": 13, "distibut": 13, "convenin": 13, "tradit": [14, 18], "wa": 14, "pacif": 14, "northwest": 14, "nation": 14, "laboratori": 14, "hypernet": 14, "hpda": 14, "program": 14, "de": [14, 18], "aco5": 14, "76rl01830": 14, "princip": 14, "design": 14, "madelyn": [14, 15], "shapiro": [14, 15], "bonicillo": 14, "dustin": [14, 15], "arendt": [14, 15], "ji": 14, "young": [14, 15], "yun": 14, "investig": 14, "brian": 14, "kritzstein": 14, "helen": [14, 
15], "jenn": [14, 15], "nichola": 14, "audun": [14, 15], "myer": [14, 15], "christoph": 14, "potvin": [14, 15], "greg": [14, 15], "roek": [14, 15], "francoi": 14, "theberg": 14, "contact": 14, "retriev": 14, "rebuilt": 14, "advantag": 14, "flexibl": 14, "filter": 14, "character": 14, "backward": 14, "meta": 14, "onto": 14, "hnxwidget": [14, 18], "repeat": 14, "nwhy": 14, "background": [14, 18], "googl": 14, "lesmi": 14, "book": 14, "tour": 14, "triloop": 14, "materi": 14, "prepar": 14, "sponsor": 14, "agenc": 14, "unit": 14, "govern": [14, 15], "nor": 14, "depart": 14, "energi": 14, "employe": 14, "jurisdict": 14, "organ": 14, "cooper": 14, "legal": 14, "respons": [14, 15], "accuraci": 14, "apparatu": 14, "process": [14, 15], "disclos": 14, "infring": 14, "privat": 14, "commerci": 14, "trade": 14, "trademark": 14, "manufactur": 14, "constitut": 14, "endors": 14, "favor": 14, "thereof": 14, "view": 14, "author": 14, "ac05": 14, "releas": [14, 18], "claus": 14, "bsd": 14, "hagberg": 15, "aric": 15, "kai": 15, "bill": 15, "stephen": 15, "2022": 15, "industri": 15, "am": 15, "69": 15, "287": 15, "291": 15, "1090": 15, "noti2424": 15, "feng": 15, "song": 15, "heath": 15, "ca": 15, "kving": 15, "henri": 15, "mcdermott": 15, "jason": 15, "mitchel": 15, "hugh": 15, "eisfeld": 15, "ami": 15, "sim": 15, "thackrai": 15, "larissa": 15, "shufang": 15, "walter": 15, "kevin": 15, "halfmann": 15, "peter": 15, "westhoff": 15, "daniel": 15, "tan": 15, "qing": 15, "menacheri": 15, "vineet": 15, "sheahan": 15, "timothi": 15, "cockrel": 15, "adam": 15, "kocher": 15, "jacob": 15, "stratton": 15, "kelli": 15, "heller": 15, "natali": 15, "bramer": 15, "lisa": 15, "diamond": 15, "michael": 15, "baric": 15, "ralph": 15, "water": 15, "katrina": 15, "kawaoka": 15, "yoshihiro": 15, "biolog": 15, "gene": 15, "pathogen": 15, "viral": 15, "bmc": 15, "bioinformat": 15, "22": 15, "1186": 15, "s12859": 15, "021": 15, "04197": 15, "eah": 15, "gregori": 15, "tempor": 15, "wshop": 15, "web": 15, "waw": 
15, "arxiv": 15, "2302": 15, "02857": 15, "siam": 15, "portal": 15, "md": 15, "mds22": 15, "mds22_abstract": 15, "pdf": 15, "firoz": 15, "jenkin": 15, "loui": 15, "zalewski": 15, "marcin": 15, "17th": 15, "lectur": 15, "12901": 15, "kaminski": 15, "15": 15, "48478": 15, "1_1": 15, "kobi": 15, "ch": 15, "haesun": 15, "york": 15, "ww": 15, "baird": 15, "molli": 15, "dm": 15, "seppala": 15, "garrett": 15, "autoencod": 15, "intrus": 15, "cybersecur": 15, "ml4cyber": 15, "machin": 15, "icml": 15, "cc": 15, "schedulemultitrack": 15, "13458": 15, "collapse20252": 15, "liu": 15, "xu": 15, "jesun": 15, "lumsdain": 15, "amburg": 15, "ilya": 15, "gebremedhin": 15, "assefaw": 15, "experiment": 15, "36th": 15, "ieee": 15, "symp": 15, "ipdp": 15, "ieeexplor": 15, "9820632": 15, "28th": 15, "hipc": 15, "ieeecomputersocieti": 15, "1109": 15, "hipc53243": 15, "00045": 15, "variou": 17, "densiti": 17, "statist": 17, "standard": 17, "deviat": 17, "stat": 17, "put": 17, "nrow": 17, "ncol": 17, "ncell": 17, "dist": 17, "hist": 17, "counter": 17, "comp": 17, "num": 17, "len": 17, "keep": [17, 18], "summari": 17, "obj": 17, "long": 17, "hypernetxwidget": 18, "addon": 18, "capabl": 18, "javascript": 18, "interfac": 18, "demo": 18, "euler": 18, "outlin": 18, "might": 18, "upon": 18, "drag": 18, "ctrl": 18, "window": 18, "mac": 18, "click": 18, "pin": 18, "shift": 18, "placement": 18, "hide": 18, "whera": 18, "button": 18, "toolbar": 18, "un": 18, "travers": 18, "altern": 18, "everyth": 18, "hit": 18, "slightli": 18, "visibl": 18, "bulk": 18, "super": 18, "larger": 18, "toggl": 18}, "objects": {"": [[0, 0, 0, "-", "algorithms"], [2, 0, 0, "-", "classes"], [5, 0, 0, "-", "drawing"], [17, 0, 0, "-", "reports"]], "algorithms": [[0, 1, 1, "", "Gillespie_SIR"], [0, 1, 1, "", "Gillespie_SIS"], [0, 1, 1, "", "add_to_column"], [0, 1, 1, "", "add_to_row"], [0, 1, 1, "", "betti"], [0, 1, 1, "", "betti_numbers"], [0, 1, 1, "", "bkMatrix"], [0, 1, 1, "", "boundary_group"], [0, 1, 1, "", 
"chain_complex"], [0, 1, 1, "", "chung_lu_hypergraph"], [0, 1, 1, "", "collective_contagion"], [0, 0, 0, "-", "contagion"], [0, 1, 1, "", "contagion_animation"], [0, 1, 1, "", "dcsbm_hypergraph"], [0, 1, 1, "", "dict2part"], [0, 1, 1, "", "discrete_SIR"], [0, 1, 1, "", "discrete_SIS"], [0, 1, 1, "", "erdos_renyi_hypergraph"], [0, 0, 0, "-", "generative_models"], [0, 1, 1, "", "get_pi"], [0, 1, 1, "", "homology_basis"], [0, 0, 0, "-", "homology_mod2"], [0, 1, 1, "", "hypergraph_homology_basis"], [0, 0, 0, "-", "hypergraph_modularity"], [0, 1, 1, "", "individual_contagion"], [0, 1, 1, "", "interpret"], [0, 1, 1, "", "kchainbasis"], [0, 1, 1, "", "kumar"], [0, 0, 0, "-", "laplacians_clustering"], [0, 1, 1, "", "last_step"], [0, 1, 1, "", "linear"], [0, 1, 1, "", "logical_dot"], [0, 1, 1, "", "logical_matadd"], [0, 1, 1, "", "logical_matmul"], [0, 1, 1, "", "majority"], [0, 1, 1, "", "majority_vote"], [0, 1, 1, "", "matmulreduce"], [0, 1, 1, "", "modularity"], [0, 1, 1, "", "norm_lap"], [0, 1, 1, "", "part2dict"], [0, 1, 1, "", "precompute_attributes"], [0, 1, 1, "", "prob_trans"], [0, 1, 1, "", "reduced_row_echelon_form_mod2"], [0, 1, 1, "", "s_betweenness_centrality"], [0, 0, 0, "-", "s_centrality_measures"], [0, 1, 1, "", "s_closeness_centrality"], [0, 1, 1, "", "s_eccentricity"], [0, 1, 1, "", "s_harmonic_centrality"], [0, 1, 1, "", "s_harmonic_closeness_centrality"], [0, 1, 1, "", "smith_normal_form_mod2"], [0, 1, 1, "", "spec_clus"], [0, 1, 1, "", "strict"], [0, 1, 1, "", "swap_columns"], [0, 1, 1, "", "swap_rows"], [0, 1, 1, "", "threshold"], [0, 1, 1, "", "two_section"]], "algorithms.contagion": [[0, 1, 1, "", "Gillespie_SIR"], [0, 1, 1, "", "Gillespie_SIS"], [0, 1, 1, "", "collective_contagion"], [0, 1, 1, "", "contagion_animation"], [0, 1, 1, "", "discrete_SIR"], [0, 1, 1, "", "discrete_SIS"], [0, 1, 1, "", "individual_contagion"], [0, 1, 1, "", "majority_vote"], [0, 1, 1, "", "threshold"]], "algorithms.generative_models": [[0, 1, 1, "", 
"chung_lu_hypergraph"], [0, 1, 1, "", "dcsbm_hypergraph"], [0, 1, 1, "", "erdos_renyi_hypergraph"]], "algorithms.homology_mod2": [[0, 1, 1, "", "add_to_column"], [0, 1, 1, "", "add_to_row"], [0, 1, 1, "", "betti"], [0, 1, 1, "", "betti_numbers"], [0, 1, 1, "", "bkMatrix"], [0, 1, 1, "", "boundary_group"], [0, 1, 1, "", "chain_complex"], [0, 1, 1, "", "homology_basis"], [0, 1, 1, "", "hypergraph_homology_basis"], [0, 1, 1, "", "interpret"], [0, 1, 1, "", "kchainbasis"], [0, 1, 1, "", "logical_dot"], [0, 1, 1, "", "logical_matadd"], [0, 1, 1, "", "logical_matmul"], [0, 1, 1, "", "matmulreduce"], [0, 1, 1, "", "reduced_row_echelon_form_mod2"], [0, 1, 1, "", "smith_normal_form_mod2"], [0, 1, 1, "", "swap_columns"], [0, 1, 1, "", "swap_rows"]], "algorithms.hypergraph_modularity": [[0, 1, 1, "", "dict2part"], [0, 1, 1, "", "kumar"], [0, 1, 1, "", "last_step"], [0, 1, 1, "", "linear"], [0, 1, 1, "", "majority"], [0, 1, 1, "", "modularity"], [0, 1, 1, "", "part2dict"], [0, 1, 1, "", "precompute_attributes"], [0, 1, 1, "", "strict"], [0, 1, 1, "", "two_section"]], "algorithms.laplacians_clustering": [[0, 1, 1, "", "get_pi"], [0, 1, 1, "", "norm_lap"], [0, 1, 1, "", "prob_trans"], [0, 1, 1, "", "spec_clus"]], "algorithms.s_centrality_measures": [[0, 1, 1, "", "s_betweenness_centrality"], [0, 1, 1, "", "s_closeness_centrality"], [0, 1, 1, "", "s_eccentricity"], [0, 1, 1, "", "s_harmonic_centrality"], [0, 1, 1, "", "s_harmonic_closeness_centrality"]], "classes": [[2, 2, 1, "", "Entity"], [2, 2, 1, "", "EntitySet"], [2, 2, 1, "", "Hypergraph"], [2, 0, 0, "-", "entity"], [2, 0, 0, "-", "entityset"], [2, 0, 0, "-", "helpers"], [2, 0, 0, "-", "hypergraph"]], "classes.Entity": [[2, 3, 1, "", "add"], [2, 3, 1, "", "add_element"], [2, 3, 1, "", "add_elements_from"], [2, 3, 1, "", "assign_properties"], [2, 4, 1, "", "cell_weights"], [2, 4, 1, "", "children"], [2, 4, 1, "", "data"], [2, 4, 1, "", "dataframe"], [2, 4, 1, "", "dimensions"], [2, 4, 1, "", "dimsize"], [2, 4, 1, "", 
"elements"], [2, 3, 1, "", "elements_by_column"], [2, 3, 1, "", "elements_by_level"], [2, 4, 1, "", "empty"], [2, 3, 1, "", "encode"], [2, 3, 1, "", "get_properties"], [2, 3, 1, "", "get_property"], [2, 4, 1, "", "incidence_dict"], [2, 3, 1, "", "incidence_matrix"], [2, 3, 1, "", "index"], [2, 3, 1, "", "indices"], [2, 3, 1, "", "is_empty"], [2, 4, 1, "", "isstatic"], [2, 4, 1, "", "labels"], [2, 3, 1, "", "level"], [2, 4, 1, "", "memberships"], [2, 4, 1, "", "properties"], [2, 3, 1, "", "remove"], [2, 3, 1, "", "remove_element"], [2, 3, 1, "", "remove_elements_from"], [2, 3, 1, "", "restrict_to_indices"], [2, 3, 1, "", "restrict_to_levels"], [2, 3, 1, "", "set_property"], [2, 3, 1, "", "size"], [2, 3, 1, "", "translate"], [2, 3, 1, "", "translate_arr"], [2, 4, 1, "", "uid"], [2, 4, 1, "", "uidset"], [2, 3, 1, "", "uidset_by_column"], [2, 3, 1, "", "uidset_by_level"]], "classes.EntitySet": [[2, 3, 1, "", "assign_cell_properties"], [2, 4, 1, "", "cell_properties"], [2, 3, 1, "", "collapse_identical_elements"], [2, 3, 1, "", "get_cell_properties"], [2, 3, 1, "", "get_cell_property"], [2, 4, 1, "", "memberships"], [2, 3, 1, "", "restrict_to"], [2, 3, 1, "", "restrict_to_levels"], [2, 3, 1, "", "set_cell_property"]], "classes.Hypergraph": [[2, 3, 1, "", "adjacency_matrix"], [2, 3, 1, "", "auxiliary_matrix"], [2, 3, 1, "", "bipartite"], [2, 3, 1, "", "collapse_edges"], [2, 3, 1, "", "collapse_nodes"], [2, 3, 1, "", "collapse_nodes_and_edges"], [2, 3, 1, "", "component_subgraphs"], [2, 3, 1, "", "components"], [2, 3, 1, "", "connected_component_subgraphs"], [2, 3, 1, "", "connected_components"], [2, 4, 1, "", "dataframe"], [2, 3, 1, "", "degree"], [2, 3, 1, "", "diameter"], [2, 3, 1, "", "dim"], [2, 3, 1, "", "distance"], [2, 3, 1, "", "dual"], [2, 3, 1, "", "edge_adjacency_matrix"], [2, 3, 1, "", "edge_diameter"], [2, 3, 1, "", "edge_diameters"], [2, 3, 1, "", "edge_distance"], [2, 3, 1, "", "edge_neighbors"], [2, 4, 1, "", "edge_props"], [2, 3, 1, "", 
"edge_size_dist"], [2, 4, 1, "", "edges"], [2, 3, 1, "", "from_bipartite"], [2, 3, 1, "", "from_incidence_dataframe"], [2, 3, 1, "", "from_incidence_matrix"], [2, 3, 1, "", "from_numpy_array"], [2, 3, 1, "", "get_cell_properties"], [2, 3, 1, "", "get_linegraph"], [2, 3, 1, "", "get_properties"], [2, 3, 1, "", "incidence_dataframe"], [2, 4, 1, "", "incidence_dict"], [2, 3, 1, "", "incidence_matrix"], [2, 3, 1, "", "is_connected"], [2, 3, 1, "", "neighbors"], [2, 3, 1, "", "node_diameters"], [2, 4, 1, "", "node_props"], [2, 4, 1, "", "nodes"], [2, 3, 1, "", "number_of_edges"], [2, 3, 1, "", "number_of_nodes"], [2, 3, 1, "", "order"], [2, 4, 1, "", "properties"], [2, 3, 1, "", "remove"], [2, 3, 1, "", "remove_edges"], [2, 3, 1, "", "remove_nodes"], [2, 3, 1, "", "remove_singletons"], [2, 3, 1, "", "restrict_to_edges"], [2, 3, 1, "", "restrict_to_nodes"], [2, 3, 1, "", "s_component_subgraphs"], [2, 3, 1, "", "s_components"], [2, 3, 1, "", "s_connected_components"], [2, 3, 1, "", "set_state"], [2, 4, 1, "", "shape"], [2, 3, 1, "", "singletons"], [2, 3, 1, "", "size"], [2, 3, 1, "", "toplexes"]], "classes.entity": [[2, 2, 1, "", "Entity"]], "classes.entity.Entity": [[2, 3, 1, "", "add"], [2, 3, 1, "", "add_element"], [2, 3, 1, "", "add_elements_from"], [2, 3, 1, "", "assign_properties"], [2, 4, 1, "", "cell_weights"], [2, 4, 1, "", "children"], [2, 4, 1, "", "data"], [2, 4, 1, "", "dataframe"], [2, 4, 1, "", "dimensions"], [2, 4, 1, "", "dimsize"], [2, 4, 1, "", "elements"], [2, 3, 1, "", "elements_by_column"], [2, 3, 1, "", "elements_by_level"], [2, 4, 1, "", "empty"], [2, 3, 1, "", "encode"], [2, 3, 1, "", "get_properties"], [2, 3, 1, "", "get_property"], [2, 4, 1, "", "incidence_dict"], [2, 3, 1, "", "incidence_matrix"], [2, 3, 1, "", "index"], [2, 3, 1, "", "indices"], [2, 3, 1, "", "is_empty"], [2, 4, 1, "", "isstatic"], [2, 4, 1, "", "labels"], [2, 3, 1, "", "level"], [2, 4, 1, "", "memberships"], [2, 4, 1, "", "properties"], [2, 3, 1, "", "remove"], [2, 3, 1, "", 
"remove_element"], [2, 3, 1, "", "remove_elements_from"], [2, 3, 1, "", "restrict_to_indices"], [2, 3, 1, "", "restrict_to_levels"], [2, 3, 1, "", "set_property"], [2, 3, 1, "", "size"], [2, 3, 1, "", "translate"], [2, 3, 1, "", "translate_arr"], [2, 4, 1, "", "uid"], [2, 4, 1, "", "uidset"], [2, 3, 1, "", "uidset_by_column"], [2, 3, 1, "", "uidset_by_level"]], "classes.entityset": [[2, 2, 1, "", "EntitySet"]], "classes.entityset.EntitySet": [[2, 3, 1, "", "assign_cell_properties"], [2, 4, 1, "", "cell_properties"], [2, 3, 1, "", "collapse_identical_elements"], [2, 3, 1, "", "get_cell_properties"], [2, 3, 1, "", "get_cell_property"], [2, 4, 1, "", "memberships"], [2, 3, 1, "", "restrict_to"], [2, 3, 1, "", "restrict_to_levels"], [2, 3, 1, "", "set_cell_property"]], "classes.helpers": [[2, 2, 1, "", "AttrList"], [2, 1, 1, "", "assign_weights"], [2, 1, 1, "", "create_properties"], [2, 1, 1, "", "dict_depth"], [2, 1, 1, "", "encode"], [2, 1, 1, "", "merge_nested_dicts"], [2, 1, 1, "", "remove_row_duplicates"]], "classes.hypergraph": [[2, 2, 1, "", "Hypergraph"]], "classes.hypergraph.Hypergraph": [[2, 3, 1, "", "adjacency_matrix"], [2, 3, 1, "", "auxiliary_matrix"], [2, 3, 1, "", "bipartite"], [2, 3, 1, "", "collapse_edges"], [2, 3, 1, "", "collapse_nodes"], [2, 3, 1, "", "collapse_nodes_and_edges"], [2, 3, 1, "", "component_subgraphs"], [2, 3, 1, "", "components"], [2, 3, 1, "", "connected_component_subgraphs"], [2, 3, 1, "", "connected_components"], [2, 4, 1, "", "dataframe"], [2, 3, 1, "", "degree"], [2, 3, 1, "", "diameter"], [2, 3, 1, "", "dim"], [2, 3, 1, "", "distance"], [2, 3, 1, "", "dual"], [2, 3, 1, "", "edge_adjacency_matrix"], [2, 3, 1, "", "edge_diameter"], [2, 3, 1, "", "edge_diameters"], [2, 3, 1, "", "edge_distance"], [2, 3, 1, "", "edge_neighbors"], [2, 4, 1, "", "edge_props"], [2, 3, 1, "", "edge_size_dist"], [2, 4, 1, "", "edges"], [2, 3, 1, "", "from_bipartite"], [2, 3, 1, "", "from_incidence_dataframe"], [2, 3, 1, "", "from_incidence_matrix"], [2, 
3, 1, "", "from_numpy_array"], [2, 3, 1, "", "get_cell_properties"], [2, 3, 1, "", "get_linegraph"], [2, 3, 1, "", "get_properties"], [2, 3, 1, "", "incidence_dataframe"], [2, 4, 1, "", "incidence_dict"], [2, 3, 1, "", "incidence_matrix"], [2, 3, 1, "", "is_connected"], [2, 3, 1, "", "neighbors"], [2, 3, 1, "", "node_diameters"], [2, 4, 1, "", "node_props"], [2, 4, 1, "", "nodes"], [2, 3, 1, "", "number_of_edges"], [2, 3, 1, "", "number_of_nodes"], [2, 3, 1, "", "order"], [2, 4, 1, "", "properties"], [2, 3, 1, "", "remove"], [2, 3, 1, "", "remove_edges"], [2, 3, 1, "", "remove_nodes"], [2, 3, 1, "", "remove_singletons"], [2, 3, 1, "", "restrict_to_edges"], [2, 3, 1, "", "restrict_to_nodes"], [2, 3, 1, "", "s_component_subgraphs"], [2, 3, 1, "", "s_components"], [2, 3, 1, "", "s_connected_components"], [2, 3, 1, "", "set_state"], [2, 4, 1, "", "shape"], [2, 3, 1, "", "singletons"], [2, 3, 1, "", "size"], [2, 3, 1, "", "toplexes"]], "drawing": [[5, 1, 1, "", "draw"], [5, 1, 1, "", "draw_two_column"], [5, 0, 0, "-", "rubber_band"], [5, 0, 0, "-", "two_column"], [5, 0, 0, "-", "util"]], "drawing.rubber_band": [[5, 1, 1, "", "draw"], [5, 1, 1, "", "draw_hyper_edge_labels"], [5, 1, 1, "", "draw_hyper_edges"], [5, 1, 1, "", "draw_hyper_labels"], [5, 1, 1, "", "draw_hyper_nodes"], [5, 1, 1, "", "get_default_radius"], [5, 1, 1, "", "layout_hyper_edges"], [5, 1, 1, "", "layout_node_link"]], "drawing.two_column": [[5, 1, 1, "", "draw"], [5, 1, 1, "", "draw_hyper_edges"], [5, 1, 1, "", "draw_hyper_labels"], [5, 1, 1, "", "layout_two_column"]], "drawing.util": [[5, 1, 1, "", "get_collapsed_size"], [5, 1, 1, "", "get_frozenset_label"], [5, 1, 1, "", "get_line_graph"], [5, 1, 1, "", "get_set_layering"], [5, 1, 1, "", "inflate"], [5, 1, 1, "", "inflate_kwargs"], [5, 1, 1, "", "transpose_inflated_kwargs"]], "reports": [[17, 1, 1, "", "centrality_stats"], [17, 1, 1, "", "comp_dist"], [17, 1, 1, "", "degree_dist"], [17, 0, 0, "-", "descriptive_stats"], [17, 1, 1, "", "dist_stats"], 
[17, 1, 1, "", "edge_size_dist"], [17, 1, 1, "", "info"], [17, 1, 1, "", "info_dict"], [17, 1, 1, "", "s_comp_dist"], [17, 1, 1, "", "s_edge_diameter_dist"], [17, 1, 1, "", "s_node_diameter_dist"], [17, 1, 1, "", "toplex_dist"]], "reports.descriptive_stats": [[17, 1, 1, "", "centrality_stats"], [17, 1, 1, "", "comp_dist"], [17, 1, 1, "", "degree_dist"], [17, 1, 1, "", "dist_stats"], [17, 1, 1, "", "edge_size_dist"], [17, 1, 1, "", "info"], [17, 1, 1, "", "info_dict"], [17, 1, 1, "", "s_comp_dist"], [17, 1, 1, "", "s_edge_diameter_dist"], [17, 1, 1, "", "s_node_diameter_dist"], [17, 1, 1, "", "toplex_dist"]]}, "objtypes": {"0": "py:module", "1": "py:function", "2": "py:class", "3": "py:method", "4": "py:property"}, "objnames": {"0": ["py", "module", "Python module"], "1": ["py", "function", "Python function"], "2": ["py", "class", "Python class"], "3": ["py", "method", "Python method"], "4": ["py", "property", "Python property"]}, "titleterms": {"algorithm": [0, 1, 13], "packag": [0, 2, 4, 5, 17], "submodul": [0, 2, 5, 17], "contagion": 0, "modul": [0, 2, 5, 17], "generative_model": 0, "homology_mod2": 0, "homologi": 0, "smith": 0, "normal": 0, "form": 0, "mod2": 0, "hypergraph_modular": 0, "laplacians_clust": 0, "hypergraph": [0, 2, 8, 9, 10], "probabl": 0, "transit": 0, "matric": 0, "laplacian": 0, "cluster": [0, 13], "s_centrality_measur": 0, "": [0, 7, 9, 14], "central": 0, "measur": 0, "content": [0, 2, 5, 10, 17], "class": [2, 3], "entiti": 2, "entityset": 2, "helper": 2, "hnx": [2, 7, 10], "2": [2, 14], "0": [2, 14], "setsystem": [2, 8], "edg": [2, 8, 9], "node": [2, 8], "properti": [2, 8], "weight": [2, 8], "hypernetx": [4, 10, 11, 14, 18], "draw": [5, 6], "rubber_band": 5, "two_column": 5, "util": 5, "glossari": 7, "term": 7, "line": 7, "graph": [7, 9, 13], "constructor": 8, "A": 9, "gentl": 9, "introduct": 9, "mathemat": 9, "adjac": 9, "matrix": 9, "incid": 9, "i": 9, "import": 9, "thing": 9, "about": 9, "all": 9, "come": 9, "dual": 9, "pair": 9, 
"intersect": 9, "have": 9, "size": 9, "can": 9, "Be": 9, "nest": 9, "walk": 9, "length": 9, "width": 9, "toward": 9, "less": 9, "hypernetwork": 9, "scienc": 9, "non": 9, "why": 10, "our": 10, "commun": 10, "valu": 10, "contact": 10, "u": 10, "indic": 10, "tabl": 10, "instal": [11, 13, 18], "prerequisit": 11, "creat": 11, "virtual": 11, "environ": 11, "us": [11, 13, 18], "anaconda": 11, "venv": 11, "virtualenv": 11, "For": 11, "window": 11, "user": 11, "from": 11, "pypi": 11, "sourc": 11, "post": 11, "action": 11, "run": 11, "test": 11, "interact": 11, "repl": 11, "other": [11, 13, 18], "view": 11, "jupyt": 11, "notebook": 11, "build": 11, "document": 11, "licens": [12, 14], "modular": 13, "overview": [13, 14, 18], "tool": [13, 18], "precomput": 13, "two": 13, "section": 13, "featur": [13, 14, 18], "refer": 13, "new": 14, "version": 14, "what": 14, "chang": 14, "colab": 14, "tutori": 14, "notic": 14, "public": 15, "report": [16, 17], "descriptive_stat": 17, "widget": 18, "layout": 18, "select": 18, "side": 18, "panel": 18}, "envversion": {"sphinx.domains.c": 2, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 8, "sphinx.domains.index": 1, "sphinx.domains.javascript": 2, "sphinx.domains.math": 2, "sphinx.domains.python": 3, "sphinx.domains.rst": 2, "sphinx.domains.std": 2, "sphinx.ext.intersphinx": 1, "sphinx.ext.todo": 2, "sphinx.ext.viewcode": 1, "sphinx": 57}, "alltitles": {"algorithms package": [[0, "algorithms-package"]], "Submodules": [[0, "submodules"], [2, "submodules"], [5, "submodules"], [17, "submodules"]], "algorithms.contagion module": [[0, "module-algorithms.contagion"]], "algorithms.generative_models module": [[0, "module-algorithms.generative_models"]], "algorithms.homology_mod2 module": [[0, "module-algorithms.homology_mod2"]], "Homology and Smith Normal Form": [[0, "homology-and-smith-normal-form"]], "Homology Mod2": [[0, "homology-mod2"]], "algorithms.hypergraph_modularity module": [[0, 
"module-algorithms.hypergraph_modularity"]], "Hypergraph_Modularity": [[0, "hypergraph-modularity"]], "algorithms.laplacians_clustering module": [[0, "module-algorithms.laplacians_clustering"]], "Hypergraph Probability Transition Matrices, Laplacians, and Clustering": [[0, "hypergraph-probability-transition-matrices-laplacians-and-clustering"]], "algorithms.s_centrality_measures module": [[0, "module-algorithms.s_centrality_measures"]], "S-Centrality Measures": [[0, "s-centrality-measures"]], "Module contents": [[0, "module-algorithms"], [2, "module-classes"], [5, "module-drawing"], [17, "module-reports"]], "algorithms": [[1, "algorithms"]], "classes package": [[2, "classes-package"]], "classes.entity module": [[2, "module-classes.entity"]], "classes.entityset module": [[2, "module-classes.entityset"]], "classes.helpers module": [[2, "module-classes.helpers"]], "classes.hypergraph module": [[2, "module-classes.hypergraph"]], "Hypergraphs in HNX 2.0": [[2, "hypergraphs-in-hnx-2-0"], [2, "id21"]], "SetSystems": [[2, "setsystems"], [2, "id22"], [8, "setsystems"]], "Edge and Node Properties": [[2, "edge-and-node-properties"], [2, "id23"], [8, "edge-and-node-properties"]], "Weights": [[2, "weights"], [2, "id24"], [8, "weights"]], "classes": [[3, "classes"]], "HyperNetX Packages": [[4, "hypernetx-packages"]], "drawing package": [[5, "drawing-package"]], "drawing.rubber_band module": [[5, "module-drawing.rubber_band"]], "drawing.two_column module": [[5, "module-drawing.two_column"]], "drawing.util module": [[5, "module-drawing.util"]], "drawing": [[6, "drawing"]], "Glossary of HNX terms": [[7, "glossary-of-hnx-terms"]], "S-line graphs": [[7, "s-line-graphs"]], "Hypergraph Constructors": [[8, "hypergraph-constructors"]], "A Gentle Introduction to Hypergraph Mathematics": [[9, "a-gentle-introduction-to-hypergraph-mathematics"]], "Graphs and Hypergraphs": [[9, "graphs-and-hypergraphs"]], "Adjacency matrix A of a graph.": [[9, "id2"]], "Incidence matrix I of a graph.": [[9, 
"id3"]], "Incidence matrix I of a hypergraph.": [[9, "id5"]], "Important Things About Hypergraphs": [[9, "important-things-about-hypergraphs"]], "All Hypergraphs Come in Dual Pairs": [[9, "all-hypergraphs-come-in-dual-pairs"]], "Edge Intersections Have Size": [[9, "edge-intersections-have-size"]], "Edges Can Be Nested": [[9, "edges-can-be-nested"]], "Walks Have Length and Width": [[9, "walks-have-length-and-width"]], "Towards Less Gentle Things": [[9, "towards-less-gentle-things"]], "s-Walks and Hypernetwork Science": [[9, "s-walks-and-hypernetwork-science"]], "Hypergraphs in Mathematics": [[9, "hypergraphs-in-mathematics"]], "Non-Gentle Graphs and Hypergraphs": [[9, "non-gentle-graphs-and-hypergraphs"]], "HyperNetX (HNX)": [[10, "hypernetx-hnx"]], "Why hypergraphs?": [[10, "why-hypergraphs"]], "Our community": [[10, "our-community"]], "Our values": [[10, "our-values"]], "Contact us": [[10, "contact-us"]], "Contents": [[10, "contents"]], "Indices and tables": [[10, "indices-and-tables"]], "Installing HyperNetX": [[11, "installing-hypernetx"]], "Installation": [[11, "installation"], [13, "installation"], [18, "installation"]], "Prerequisites": [[11, "prerequisites"]], "Create a virtual environment": [[11, "create-a-virtual-environment"]], "Using Anaconda": [[11, "using-anaconda"]], "Using venv": [[11, "using-venv"]], "Using virtualenv": [[11, "using-virtualenv"]], "For Windows Users": [[11, "for-windows-users"]], "Installing Hypernetx": [[11, "id1"]], "Installing from PyPi": [[11, "installing-from-pypi"]], "Installing from Source": [[11, "installing-from-source"]], "Post-Installation Actions": [[11, "post-installation-actions"]], "Running Tests": [[11, "running-tests"]], "Interact with HyperNetX in a REPL": [[11, "interact-with-hypernetx-in-a-repl"]], "Other Actions if installed from source": [[11, "other-actions-if-installed-from-source"]], "Viewing jupyter notebooks": [[11, "viewing-jupyter-notebooks"]], "Building documentation": [[11, "building-documentation"]], 
"License": [[12, "license"], [14, "license"]], "Modularity and Clustering": [[13, "modularity-and-clustering"]], "Overview": [[13, "overview"], [14, "overview"], [18, "overview"]], "Using the Tool": [[13, "using-the-tool"], [18, "using-the-tool"]], "Precomputation": [[13, "precomputation"]], "Modularity": [[13, "id1"]], "Two-section graph": [[13, "two-section-graph"]], "Clustering Algorithms": [[13, "clustering-algorithms"]], "Other Features": [[13, "other-features"], [18, "other-features"]], "References": [[13, "references"]], "HyperNetX": [[14, "hypernetx"]], "New Features in Version 2.0": [[14, "new-features-in-version-2-0"]], "What\u2019s New": [[14, "what-s-new"]], "What\u2019s Changed": [[14, "what-s-changed"]], "COLAB Tutorials": [[14, "colab-tutorials"]], "Notice": [[14, "notice"]], "Publications": [[15, "publications"]], "reports": [[16, "reports"]], "reports package": [[17, "reports-package"]], "reports.descriptive_stats module": [[17, "module-reports.descriptive_stats"]], "Hypernetx-Widget": [[18, "hypernetx-widget"]], "Layout": [[18, "layout"]], "Selection": [[18, "selection"]], "Side Panel": [[18, "side-panel"]]}, "indexentries": {"gillespie_sir() (in module algorithms)": [[0, "algorithms.Gillespie_SIR"]], "gillespie_sir() (in module algorithms.contagion)": [[0, "algorithms.contagion.Gillespie_SIR"]], "gillespie_sis() (in module algorithms)": [[0, "algorithms.Gillespie_SIS"]], "gillespie_sis() (in module algorithms.contagion)": [[0, "algorithms.contagion.Gillespie_SIS"]], "add_to_column() (in module algorithms)": [[0, "algorithms.add_to_column"]], "add_to_column() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.add_to_column"]], "add_to_row() (in module algorithms)": [[0, "algorithms.add_to_row"]], "add_to_row() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.add_to_row"]], "algorithms": [[0, "module-algorithms"]], "algorithms.contagion": [[0, "module-algorithms.contagion"]], "algorithms.generative_models": 
[[0, "module-algorithms.generative_models"]], "algorithms.homology_mod2": [[0, "module-algorithms.homology_mod2"]], "algorithms.hypergraph_modularity": [[0, "module-algorithms.hypergraph_modularity"]], "algorithms.laplacians_clustering": [[0, "module-algorithms.laplacians_clustering"]], "algorithms.s_centrality_measures": [[0, "module-algorithms.s_centrality_measures"]], "betti() (in module algorithms)": [[0, "algorithms.betti"]], "betti() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.betti"]], "betti_numbers() (in module algorithms)": [[0, "algorithms.betti_numbers"]], "betti_numbers() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.betti_numbers"]], "bkmatrix() (in module algorithms)": [[0, "algorithms.bkMatrix"]], "bkmatrix() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.bkMatrix"]], "boundary_group() (in module algorithms)": [[0, "algorithms.boundary_group"]], "boundary_group() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.boundary_group"]], "chain_complex() (in module algorithms)": [[0, "algorithms.chain_complex"]], "chain_complex() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.chain_complex"]], "chung_lu_hypergraph() (in module algorithms)": [[0, "algorithms.chung_lu_hypergraph"]], "chung_lu_hypergraph() (in module algorithms.generative_models)": [[0, "algorithms.generative_models.chung_lu_hypergraph"]], "collective_contagion() (in module algorithms)": [[0, "algorithms.collective_contagion"]], "collective_contagion() (in module algorithms.contagion)": [[0, "algorithms.contagion.collective_contagion"]], "contagion_animation() (in module algorithms)": [[0, "algorithms.contagion_animation"]], "contagion_animation() (in module algorithms.contagion)": [[0, "algorithms.contagion.contagion_animation"]], "dcsbm_hypergraph() (in module algorithms)": [[0, "algorithms.dcsbm_hypergraph"]], "dcsbm_hypergraph() (in module algorithms.generative_models)": 
[[0, "algorithms.generative_models.dcsbm_hypergraph"]], "dict2part() (in module algorithms)": [[0, "algorithms.dict2part"]], "dict2part() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.dict2part"]], "discrete_sir() (in module algorithms)": [[0, "algorithms.discrete_SIR"]], "discrete_sir() (in module algorithms.contagion)": [[0, "algorithms.contagion.discrete_SIR"]], "discrete_sis() (in module algorithms)": [[0, "algorithms.discrete_SIS"]], "discrete_sis() (in module algorithms.contagion)": [[0, "algorithms.contagion.discrete_SIS"]], "erdos_renyi_hypergraph() (in module algorithms)": [[0, "algorithms.erdos_renyi_hypergraph"]], "erdos_renyi_hypergraph() (in module algorithms.generative_models)": [[0, "algorithms.generative_models.erdos_renyi_hypergraph"]], "get_pi() (in module algorithms)": [[0, "algorithms.get_pi"]], "get_pi() (in module algorithms.laplacians_clustering)": [[0, "algorithms.laplacians_clustering.get_pi"]], "homology_basis() (in module algorithms)": [[0, "algorithms.homology_basis"]], "homology_basis() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.homology_basis"]], "hypergraph_homology_basis() (in module algorithms)": [[0, "algorithms.hypergraph_homology_basis"]], "hypergraph_homology_basis() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.hypergraph_homology_basis"]], "individual_contagion() (in module algorithms)": [[0, "algorithms.individual_contagion"]], "individual_contagion() (in module algorithms.contagion)": [[0, "algorithms.contagion.individual_contagion"]], "interpret() (in module algorithms)": [[0, "algorithms.interpret"]], "interpret() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.interpret"]], "kchainbasis() (in module algorithms)": [[0, "algorithms.kchainbasis"]], "kchainbasis() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.kchainbasis"]], "kumar() (in module algorithms)": [[0, "algorithms.kumar"]], 
"kumar() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.kumar"]], "last_step() (in module algorithms)": [[0, "algorithms.last_step"]], "last_step() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.last_step"]], "linear() (in module algorithms)": [[0, "algorithms.linear"]], "linear() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.linear"]], "logical_dot() (in module algorithms)": [[0, "algorithms.logical_dot"]], "logical_dot() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.logical_dot"]], "logical_matadd() (in module algorithms)": [[0, "algorithms.logical_matadd"]], "logical_matadd() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.logical_matadd"]], "logical_matmul() (in module algorithms)": [[0, "algorithms.logical_matmul"]], "logical_matmul() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.logical_matmul"]], "majority() (in module algorithms)": [[0, "algorithms.majority"]], "majority() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.majority"]], "majority_vote() (in module algorithms)": [[0, "algorithms.majority_vote"]], "majority_vote() (in module algorithms.contagion)": [[0, "algorithms.contagion.majority_vote"]], "matmulreduce() (in module algorithms)": [[0, "algorithms.matmulreduce"]], "matmulreduce() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.matmulreduce"]], "modularity() (in module algorithms)": [[0, "algorithms.modularity"]], "modularity() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.modularity"]], "module": [[0, "module-algorithms"], [0, "module-algorithms.contagion"], [0, "module-algorithms.generative_models"], [0, "module-algorithms.homology_mod2"], [0, "module-algorithms.hypergraph_modularity"], [0, "module-algorithms.laplacians_clustering"], [0, 
"module-algorithms.s_centrality_measures"], [2, "module-classes"], [2, "module-classes.entity"], [2, "module-classes.entityset"], [2, "module-classes.helpers"], [2, "module-classes.hypergraph"], [5, "module-drawing"], [5, "module-drawing.rubber_band"], [5, "module-drawing.two_column"], [5, "module-drawing.util"], [17, "module-reports"], [17, "module-reports.descriptive_stats"]], "norm_lap() (in module algorithms)": [[0, "algorithms.norm_lap"]], "norm_lap() (in module algorithms.laplacians_clustering)": [[0, "algorithms.laplacians_clustering.norm_lap"]], "part2dict() (in module algorithms)": [[0, "algorithms.part2dict"]], "part2dict() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.part2dict"]], "precompute_attributes() (in module algorithms)": [[0, "algorithms.precompute_attributes"]], "precompute_attributes() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.precompute_attributes"]], "prob_trans() (in module algorithms)": [[0, "algorithms.prob_trans"]], "prob_trans() (in module algorithms.laplacians_clustering)": [[0, "algorithms.laplacians_clustering.prob_trans"]], "reduced_row_echelon_form_mod2() (in module algorithms)": [[0, "algorithms.reduced_row_echelon_form_mod2"]], "reduced_row_echelon_form_mod2() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.reduced_row_echelon_form_mod2"]], "s_betweenness_centrality() (in module algorithms)": [[0, "algorithms.s_betweenness_centrality"]], "s_betweenness_centrality() (in module algorithms.s_centrality_measures)": [[0, "algorithms.s_centrality_measures.s_betweenness_centrality"]], "s_closeness_centrality() (in module algorithms)": [[0, "algorithms.s_closeness_centrality"]], "s_closeness_centrality() (in module algorithms.s_centrality_measures)": [[0, "algorithms.s_centrality_measures.s_closeness_centrality"]], "s_eccentricity() (in module algorithms)": [[0, "algorithms.s_eccentricity"]], "s_eccentricity() (in module 
algorithms.s_centrality_measures)": [[0, "algorithms.s_centrality_measures.s_eccentricity"]], "s_harmonic_centrality() (in module algorithms)": [[0, "algorithms.s_harmonic_centrality"]], "s_harmonic_centrality() (in module algorithms.s_centrality_measures)": [[0, "algorithms.s_centrality_measures.s_harmonic_centrality"]], "s_harmonic_closeness_centrality() (in module algorithms)": [[0, "algorithms.s_harmonic_closeness_centrality"]], "s_harmonic_closeness_centrality() (in module algorithms.s_centrality_measures)": [[0, "algorithms.s_centrality_measures.s_harmonic_closeness_centrality"]], "smith_normal_form_mod2() (in module algorithms)": [[0, "algorithms.smith_normal_form_mod2"]], "smith_normal_form_mod2() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.smith_normal_form_mod2"]], "spec_clus() (in module algorithms)": [[0, "algorithms.spec_clus"]], "spec_clus() (in module algorithms.laplacians_clustering)": [[0, "algorithms.laplacians_clustering.spec_clus"]], "strict() (in module algorithms)": [[0, "algorithms.strict"]], "strict() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.strict"]], "swap_columns() (in module algorithms)": [[0, "algorithms.swap_columns"]], "swap_columns() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.swap_columns"]], "swap_rows() (in module algorithms)": [[0, "algorithms.swap_rows"]], "swap_rows() (in module algorithms.homology_mod2)": [[0, "algorithms.homology_mod2.swap_rows"]], "threshold() (in module algorithms)": [[0, "algorithms.threshold"]], "threshold() (in module algorithms.contagion)": [[0, "algorithms.contagion.threshold"]], "two_section() (in module algorithms)": [[0, "algorithms.two_section"]], "two_section() (in module algorithms.hypergraph_modularity)": [[0, "algorithms.hypergraph_modularity.two_section"]], "attrlist (class in classes.helpers)": [[2, "classes.helpers.AttrList"]], "entity (class in classes)": [[2, "classes.Entity"]], "entity (class 
in classes.entity)": [[2, "classes.entity.Entity"]], "entityset (class in classes)": [[2, "classes.EntitySet"]], "entityset (class in classes.entityset)": [[2, "classes.entityset.EntitySet"]], "hypergraph (class in classes)": [[2, "classes.Hypergraph"]], "hypergraph (class in classes.hypergraph)": [[2, "classes.hypergraph.Hypergraph"]], "add() (classes.entity method)": [[2, "classes.Entity.add"]], "add() (classes.entity.entity method)": [[2, "classes.entity.Entity.add"]], "add_element() (classes.entity method)": [[2, "classes.Entity.add_element"]], "add_element() (classes.entity.entity method)": [[2, "classes.entity.Entity.add_element"]], "add_elements_from() (classes.entity method)": [[2, "classes.Entity.add_elements_from"]], "add_elements_from() (classes.entity.entity method)": [[2, "classes.entity.Entity.add_elements_from"]], "adjacency_matrix() (classes.hypergraph method)": [[2, "classes.Hypergraph.adjacency_matrix"]], "adjacency_matrix() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.adjacency_matrix"]], "assign_cell_properties() (classes.entityset method)": [[2, "classes.EntitySet.assign_cell_properties"]], "assign_cell_properties() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.assign_cell_properties"]], "assign_properties() (classes.entity method)": [[2, "classes.Entity.assign_properties"]], "assign_properties() (classes.entity.entity method)": [[2, "classes.entity.Entity.assign_properties"]], "assign_weights() (in module classes.helpers)": [[2, "classes.helpers.assign_weights"]], "auxiliary_matrix() (classes.hypergraph method)": [[2, "classes.Hypergraph.auxiliary_matrix"]], "auxiliary_matrix() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.auxiliary_matrix"]], "bipartite() (classes.hypergraph method)": [[2, "classes.Hypergraph.bipartite"]], "bipartite() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.bipartite"]], "cell_properties 
(classes.entityset property)": [[2, "classes.EntitySet.cell_properties"]], "cell_properties (classes.entityset.entityset property)": [[2, "classes.entityset.EntitySet.cell_properties"]], "cell_weights (classes.entity property)": [[2, "classes.Entity.cell_weights"]], "cell_weights (classes.entity.entity property)": [[2, "classes.entity.Entity.cell_weights"]], "children (classes.entity property)": [[2, "classes.Entity.children"]], "children (classes.entity.entity property)": [[2, "classes.entity.Entity.children"]], "classes": [[2, "module-classes"]], "classes.entity": [[2, "module-classes.entity"]], "classes.entityset": [[2, "module-classes.entityset"]], "classes.helpers": [[2, "module-classes.helpers"]], "classes.hypergraph": [[2, "module-classes.hypergraph"]], "collapse_edges() (classes.hypergraph method)": [[2, "classes.Hypergraph.collapse_edges"]], "collapse_edges() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.collapse_edges"]], "collapse_identical_elements() (classes.entityset method)": [[2, "classes.EntitySet.collapse_identical_elements"]], "collapse_identical_elements() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.collapse_identical_elements"]], "collapse_nodes() (classes.hypergraph method)": [[2, "classes.Hypergraph.collapse_nodes"]], "collapse_nodes() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.collapse_nodes"]], "collapse_nodes_and_edges() (classes.hypergraph method)": [[2, "classes.Hypergraph.collapse_nodes_and_edges"]], "collapse_nodes_and_edges() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.collapse_nodes_and_edges"]], "component_subgraphs() (classes.hypergraph method)": [[2, "classes.Hypergraph.component_subgraphs"]], "component_subgraphs() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.component_subgraphs"]], "components() (classes.hypergraph method)": [[2, "classes.Hypergraph.components"]], 
"components() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.components"]], "connected_component_subgraphs() (classes.hypergraph method)": [[2, "classes.Hypergraph.connected_component_subgraphs"]], "connected_component_subgraphs() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.connected_component_subgraphs"]], "connected_components() (classes.hypergraph method)": [[2, "classes.Hypergraph.connected_components"]], "connected_components() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.connected_components"]], "create_properties() (in module classes.helpers)": [[2, "classes.helpers.create_properties"]], "data (classes.entity property)": [[2, "classes.Entity.data"]], "data (classes.entity.entity property)": [[2, "classes.entity.Entity.data"]], "dataframe (classes.entity property)": [[2, "classes.Entity.dataframe"]], "dataframe (classes.hypergraph property)": [[2, "classes.Hypergraph.dataframe"]], "dataframe (classes.entity.entity property)": [[2, "classes.entity.Entity.dataframe"]], "dataframe (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.dataframe"]], "degree() (classes.hypergraph method)": [[2, "classes.Hypergraph.degree"]], "degree() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.degree"]], "diameter() (classes.hypergraph method)": [[2, "classes.Hypergraph.diameter"]], "diameter() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.diameter"]], "dict_depth() (in module classes.helpers)": [[2, "classes.helpers.dict_depth"]], "dim() (classes.hypergraph method)": [[2, "classes.Hypergraph.dim"]], "dim() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.dim"]], "dimensions (classes.entity property)": [[2, "classes.Entity.dimensions"]], "dimensions (classes.entity.entity property)": [[2, "classes.entity.Entity.dimensions"]], "dimsize (classes.entity property)": [[2, 
"classes.Entity.dimsize"]], "dimsize (classes.entity.entity property)": [[2, "classes.entity.Entity.dimsize"]], "distance() (classes.hypergraph method)": [[2, "classes.Hypergraph.distance"]], "distance() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.distance"]], "dual() (classes.hypergraph method)": [[2, "classes.Hypergraph.dual"]], "dual() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.dual"]], "edge_adjacency_matrix() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_adjacency_matrix"]], "edge_adjacency_matrix() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_adjacency_matrix"]], "edge_diameter() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_diameter"]], "edge_diameter() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_diameter"]], "edge_diameters() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_diameters"]], "edge_diameters() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_diameters"]], "edge_distance() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_distance"]], "edge_distance() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_distance"]], "edge_neighbors() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_neighbors"]], "edge_neighbors() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_neighbors"]], "edge_props (classes.hypergraph property)": [[2, "classes.Hypergraph.edge_props"]], "edge_props (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.edge_props"]], "edge_size_dist() (classes.hypergraph method)": [[2, "classes.Hypergraph.edge_size_dist"]], "edge_size_dist() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.edge_size_dist"]], "edges (classes.hypergraph property)": [[2, "classes.Hypergraph.edges"]], "edges 
(classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.edges"]], "elements (classes.entity property)": [[2, "classes.Entity.elements"]], "elements (classes.entity.entity property)": [[2, "classes.entity.Entity.elements"]], "elements_by_column() (classes.entity method)": [[2, "classes.Entity.elements_by_column"]], "elements_by_column() (classes.entity.entity method)": [[2, "classes.entity.Entity.elements_by_column"]], "elements_by_level() (classes.entity method)": [[2, "classes.Entity.elements_by_level"]], "elements_by_level() (classes.entity.entity method)": [[2, "classes.entity.Entity.elements_by_level"]], "empty (classes.entity property)": [[2, "classes.Entity.empty"]], "empty (classes.entity.entity property)": [[2, "classes.entity.Entity.empty"]], "encode() (classes.entity method)": [[2, "classes.Entity.encode"]], "encode() (classes.entity.entity method)": [[2, "classes.entity.Entity.encode"]], "encode() (in module classes.helpers)": [[2, "classes.helpers.encode"]], "from_bipartite() (classes.hypergraph class method)": [[2, "classes.Hypergraph.from_bipartite"]], "from_bipartite() (classes.hypergraph.hypergraph class method)": [[2, "classes.hypergraph.Hypergraph.from_bipartite"]], "from_incidence_dataframe() (classes.hypergraph class method)": [[2, "classes.Hypergraph.from_incidence_dataframe"]], "from_incidence_dataframe() (classes.hypergraph.hypergraph class method)": [[2, "classes.hypergraph.Hypergraph.from_incidence_dataframe"]], "from_incidence_matrix() (classes.hypergraph class method)": [[2, "classes.Hypergraph.from_incidence_matrix"]], "from_incidence_matrix() (classes.hypergraph.hypergraph class method)": [[2, "classes.hypergraph.Hypergraph.from_incidence_matrix"]], "from_numpy_array() (classes.hypergraph class method)": [[2, "classes.Hypergraph.from_numpy_array"]], "from_numpy_array() (classes.hypergraph.hypergraph class method)": [[2, "classes.hypergraph.Hypergraph.from_numpy_array"]], "get_cell_properties() (classes.entityset 
method)": [[2, "classes.EntitySet.get_cell_properties"]], "get_cell_properties() (classes.hypergraph method)": [[2, "classes.Hypergraph.get_cell_properties"]], "get_cell_properties() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.get_cell_properties"]], "get_cell_properties() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.get_cell_properties"]], "get_cell_property() (classes.entityset method)": [[2, "classes.EntitySet.get_cell_property"]], "get_cell_property() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.get_cell_property"]], "get_linegraph() (classes.hypergraph method)": [[2, "classes.Hypergraph.get_linegraph"]], "get_linegraph() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.get_linegraph"]], "get_properties() (classes.entity method)": [[2, "classes.Entity.get_properties"]], "get_properties() (classes.hypergraph method)": [[2, "classes.Hypergraph.get_properties"]], "get_properties() (classes.entity.entity method)": [[2, "classes.entity.Entity.get_properties"]], "get_properties() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.get_properties"]], "get_property() (classes.entity method)": [[2, "classes.Entity.get_property"]], "get_property() (classes.entity.entity method)": [[2, "classes.entity.Entity.get_property"]], "incidence_dataframe() (classes.hypergraph method)": [[2, "classes.Hypergraph.incidence_dataframe"]], "incidence_dataframe() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.incidence_dataframe"]], "incidence_dict (classes.entity property)": [[2, "classes.Entity.incidence_dict"]], "incidence_dict (classes.hypergraph property)": [[2, "classes.Hypergraph.incidence_dict"]], "incidence_dict (classes.entity.entity property)": [[2, "classes.entity.Entity.incidence_dict"]], "incidence_dict (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.incidence_dict"]], 
"incidence_matrix() (classes.entity method)": [[2, "classes.Entity.incidence_matrix"]], "incidence_matrix() (classes.hypergraph method)": [[2, "classes.Hypergraph.incidence_matrix"]], "incidence_matrix() (classes.entity.entity method)": [[2, "classes.entity.Entity.incidence_matrix"]], "incidence_matrix() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.incidence_matrix"]], "index() (classes.entity method)": [[2, "classes.Entity.index"]], "index() (classes.entity.entity method)": [[2, "classes.entity.Entity.index"]], "indices() (classes.entity method)": [[2, "classes.Entity.indices"]], "indices() (classes.entity.entity method)": [[2, "classes.entity.Entity.indices"]], "is_connected() (classes.hypergraph method)": [[2, "classes.Hypergraph.is_connected"]], "is_connected() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.is_connected"]], "is_empty() (classes.entity method)": [[2, "classes.Entity.is_empty"]], "is_empty() (classes.entity.entity method)": [[2, "classes.entity.Entity.is_empty"]], "isstatic (classes.entity property)": [[2, "classes.Entity.isstatic"]], "isstatic (classes.entity.entity property)": [[2, "classes.entity.Entity.isstatic"]], "labels (classes.entity property)": [[2, "classes.Entity.labels"]], "labels (classes.entity.entity property)": [[2, "classes.entity.Entity.labels"]], "level() (classes.entity method)": [[2, "classes.Entity.level"]], "level() (classes.entity.entity method)": [[2, "classes.entity.Entity.level"]], "memberships (classes.entity property)": [[2, "classes.Entity.memberships"]], "memberships (classes.entityset property)": [[2, "classes.EntitySet.memberships"]], "memberships (classes.entity.entity property)": [[2, "classes.entity.Entity.memberships"]], "memberships (classes.entityset.entityset property)": [[2, "classes.entityset.EntitySet.memberships"]], "merge_nested_dicts() (in module classes.helpers)": [[2, "classes.helpers.merge_nested_dicts"]], "neighbors() 
(classes.hypergraph method)": [[2, "classes.Hypergraph.neighbors"]], "neighbors() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.neighbors"]], "node_diameters() (classes.hypergraph method)": [[2, "classes.Hypergraph.node_diameters"]], "node_diameters() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.node_diameters"]], "node_props (classes.hypergraph property)": [[2, "classes.Hypergraph.node_props"]], "node_props (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.node_props"]], "nodes (classes.hypergraph property)": [[2, "classes.Hypergraph.nodes"]], "nodes (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.nodes"]], "number_of_edges() (classes.hypergraph method)": [[2, "classes.Hypergraph.number_of_edges"]], "number_of_edges() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.number_of_edges"]], "number_of_nodes() (classes.hypergraph method)": [[2, "classes.Hypergraph.number_of_nodes"]], "number_of_nodes() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.number_of_nodes"]], "order() (classes.hypergraph method)": [[2, "classes.Hypergraph.order"]], "order() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.order"]], "properties (classes.entity property)": [[2, "classes.Entity.properties"]], "properties (classes.hypergraph property)": [[2, "classes.Hypergraph.properties"]], "properties (classes.entity.entity property)": [[2, "classes.entity.Entity.properties"]], "properties (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.properties"]], "remove() (classes.entity method)": [[2, "classes.Entity.remove"]], "remove() (classes.hypergraph method)": [[2, "classes.Hypergraph.remove"]], "remove() (classes.entity.entity method)": [[2, "classes.entity.Entity.remove"]], "remove() (classes.hypergraph.hypergraph method)": [[2, 
"classes.hypergraph.Hypergraph.remove"]], "remove_edges() (classes.hypergraph method)": [[2, "classes.Hypergraph.remove_edges"]], "remove_edges() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.remove_edges"]], "remove_element() (classes.entity method)": [[2, "classes.Entity.remove_element"]], "remove_element() (classes.entity.entity method)": [[2, "classes.entity.Entity.remove_element"]], "remove_elements_from() (classes.entity method)": [[2, "classes.Entity.remove_elements_from"]], "remove_elements_from() (classes.entity.entity method)": [[2, "classes.entity.Entity.remove_elements_from"]], "remove_nodes() (classes.hypergraph method)": [[2, "classes.Hypergraph.remove_nodes"]], "remove_nodes() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.remove_nodes"]], "remove_row_duplicates() (in module classes.helpers)": [[2, "classes.helpers.remove_row_duplicates"]], "remove_singletons() (classes.hypergraph method)": [[2, "classes.Hypergraph.remove_singletons"]], "remove_singletons() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.remove_singletons"]], "restrict_to() (classes.entityset method)": [[2, "classes.EntitySet.restrict_to"]], "restrict_to() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.restrict_to"]], "restrict_to_edges() (classes.hypergraph method)": [[2, "classes.Hypergraph.restrict_to_edges"]], "restrict_to_edges() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.restrict_to_edges"]], "restrict_to_indices() (classes.entity method)": [[2, "classes.Entity.restrict_to_indices"]], "restrict_to_indices() (classes.entity.entity method)": [[2, "classes.entity.Entity.restrict_to_indices"]], "restrict_to_levels() (classes.entity method)": [[2, "classes.Entity.restrict_to_levels"]], "restrict_to_levels() (classes.entityset method)": [[2, "classes.EntitySet.restrict_to_levels"]], "restrict_to_levels() (classes.entity.entity method)": 
[[2, "classes.entity.Entity.restrict_to_levels"]], "restrict_to_levels() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.restrict_to_levels"]], "restrict_to_nodes() (classes.hypergraph method)": [[2, "classes.Hypergraph.restrict_to_nodes"]], "restrict_to_nodes() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.restrict_to_nodes"]], "s_component_subgraphs() (classes.hypergraph method)": [[2, "classes.Hypergraph.s_component_subgraphs"]], "s_component_subgraphs() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.s_component_subgraphs"]], "s_components() (classes.hypergraph method)": [[2, "classes.Hypergraph.s_components"]], "s_components() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.s_components"]], "s_connected_components() (classes.hypergraph method)": [[2, "classes.Hypergraph.s_connected_components"]], "s_connected_components() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.s_connected_components"]], "set_cell_property() (classes.entityset method)": [[2, "classes.EntitySet.set_cell_property"]], "set_cell_property() (classes.entityset.entityset method)": [[2, "classes.entityset.EntitySet.set_cell_property"]], "set_property() (classes.entity method)": [[2, "classes.Entity.set_property"]], "set_property() (classes.entity.entity method)": [[2, "classes.entity.Entity.set_property"]], "set_state() (classes.hypergraph method)": [[2, "classes.Hypergraph.set_state"]], "set_state() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.set_state"]], "shape (classes.hypergraph property)": [[2, "classes.Hypergraph.shape"]], "shape (classes.hypergraph.hypergraph property)": [[2, "classes.hypergraph.Hypergraph.shape"]], "singletons() (classes.hypergraph method)": [[2, "classes.Hypergraph.singletons"]], "singletons() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.singletons"]], "size() 
(classes.entity method)": [[2, "classes.Entity.size"]], "size() (classes.hypergraph method)": [[2, "classes.Hypergraph.size"]], "size() (classes.entity.entity method)": [[2, "classes.entity.Entity.size"]], "size() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.size"]], "toplexes() (classes.hypergraph method)": [[2, "classes.Hypergraph.toplexes"]], "toplexes() (classes.hypergraph.hypergraph method)": [[2, "classes.hypergraph.Hypergraph.toplexes"]], "translate() (classes.entity method)": [[2, "classes.Entity.translate"]], "translate() (classes.entity.entity method)": [[2, "classes.entity.Entity.translate"]], "translate_arr() (classes.entity method)": [[2, "classes.Entity.translate_arr"]], "translate_arr() (classes.entity.entity method)": [[2, "classes.entity.Entity.translate_arr"]], "uid (classes.entity property)": [[2, "classes.Entity.uid"]], "uid (classes.entity.entity property)": [[2, "classes.entity.Entity.uid"]], "uidset (classes.entity property)": [[2, "classes.Entity.uidset"]], "uidset (classes.entity.entity property)": [[2, "classes.entity.Entity.uidset"]], "uidset_by_column() (classes.entity method)": [[2, "classes.Entity.uidset_by_column"]], "uidset_by_column() (classes.entity.entity method)": [[2, "classes.entity.Entity.uidset_by_column"]], "uidset_by_level() (classes.entity method)": [[2, "classes.Entity.uidset_by_level"]], "uidset_by_level() (classes.entity.entity method)": [[2, "classes.entity.Entity.uidset_by_level"]], "draw() (in module drawing)": [[5, "drawing.draw"]], "draw() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.draw"]], "draw() (in module drawing.two_column)": [[5, "drawing.two_column.draw"]], "draw_hyper_edge_labels() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.draw_hyper_edge_labels"]], "draw_hyper_edges() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.draw_hyper_edges"]], "draw_hyper_edges() (in module drawing.two_column)": [[5, 
"drawing.two_column.draw_hyper_edges"]], "draw_hyper_labels() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.draw_hyper_labels"]], "draw_hyper_labels() (in module drawing.two_column)": [[5, "drawing.two_column.draw_hyper_labels"]], "draw_hyper_nodes() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.draw_hyper_nodes"]], "draw_two_column() (in module drawing)": [[5, "drawing.draw_two_column"]], "drawing": [[5, "module-drawing"]], "drawing.rubber_band": [[5, "module-drawing.rubber_band"]], "drawing.two_column": [[5, "module-drawing.two_column"]], "drawing.util": [[5, "module-drawing.util"]], "get_collapsed_size() (in module drawing.util)": [[5, "drawing.util.get_collapsed_size"]], "get_default_radius() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.get_default_radius"]], "get_frozenset_label() (in module drawing.util)": [[5, "drawing.util.get_frozenset_label"]], "get_line_graph() (in module drawing.util)": [[5, "drawing.util.get_line_graph"]], "get_set_layering() (in module drawing.util)": [[5, "drawing.util.get_set_layering"]], "inflate() (in module drawing.util)": [[5, "drawing.util.inflate"]], "inflate_kwargs() (in module drawing.util)": [[5, "drawing.util.inflate_kwargs"]], "layout_hyper_edges() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.layout_hyper_edges"]], "layout_node_link() (in module drawing.rubber_band)": [[5, "drawing.rubber_band.layout_node_link"]], "layout_two_column() (in module drawing.two_column)": [[5, "drawing.two_column.layout_two_column"]], "transpose_inflated_kwargs() (in module drawing.util)": [[5, "drawing.util.transpose_inflated_kwargs"]], "entity and entity set": [[7, "term-Entity-and-Entity-set"]], "degree": [[7, "term-degree"]], "dual": [[7, "term-dual"]], "edge nodes (aka edge elements)": [[7, "term-edge-nodes-aka-edge-elements"]], "hypergraph": [[7, "term-hypergraph"]], "incidence": [[7, "term-incidence"]], "incidence matrix": [[7, "term-incidence-matrix"]], "simple hypergraph": [[7, 
"term-simple-hypergraph"]], "subhypergraph": [[7, "term-subhypergraph"]], "subhypergraph induced by a set of nodes": [[7, "term-subhypergraph-induced-by-a-set-of-nodes"]], "toplex": [[7, "term-toplex"]], "centrality_stats() (in module reports)": [[17, "reports.centrality_stats"]], "centrality_stats() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.centrality_stats"]], "comp_dist() (in module reports)": [[17, "reports.comp_dist"]], "comp_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.comp_dist"]], "degree_dist() (in module reports)": [[17, "reports.degree_dist"]], "degree_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.degree_dist"]], "dist_stats() (in module reports)": [[17, "reports.dist_stats"]], "dist_stats() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.dist_stats"]], "edge_size_dist() (in module reports)": [[17, "reports.edge_size_dist"]], "edge_size_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.edge_size_dist"]], "info() (in module reports)": [[17, "reports.info"]], "info() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.info"]], "info_dict() (in module reports)": [[17, "reports.info_dict"]], "info_dict() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.info_dict"]], "reports": [[17, "module-reports"]], "reports.descriptive_stats": [[17, "module-reports.descriptive_stats"]], "s_comp_dist() (in module reports)": [[17, "reports.s_comp_dist"]], "s_comp_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.s_comp_dist"]], "s_edge_diameter_dist() (in module reports)": [[17, "reports.s_edge_diameter_dist"]], "s_edge_diameter_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.s_edge_diameter_dist"]], "s_node_diameter_dist() (in module reports)": [[17, "reports.s_node_diameter_dist"]], "s_node_diameter_dist() (in 
module reports.descriptive_stats)": [[17, "reports.descriptive_stats.s_node_diameter_dist"]], "toplex_dist() (in module reports)": [[17, "reports.toplex_dist"]], "toplex_dist() (in module reports.descriptive_stats)": [[17, "reports.descriptive_stats.toplex_dist"]]}}) \ No newline at end of file diff --git a/widget.html b/widget.html new file mode 100644 index 00000000..2606c6b3 --- /dev/null +++ b/widget.html @@ -0,0 +1,192 @@ + + + + + + + Hypernetx-Widget — HyperNetX 2.0.4 documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Hypernetx-Widget

+_images/WidgetScreenShot.png +
+

Overview

+

The HyperNetXWidget is an add-on for HNX that extends the built-in visualization capabilities of HNX to a JavaScript-based interactive visualization. The tool has two main interfaces: the hypergraph visualization and the nodes & edges panel. You may demo the widget here

+
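As a rough sketch of how the widget might be invoked from a Jupyter notebook: the `HypernetxWidget` class name and `hnx.Hypergraph` constructor below are taken from the hnxwidget and HyperNetX packages and should be treated as assumptions; the edge data is illustrative.

```python
# Hyperedges given as a dict mapping edge names to node collections
# (illustrative data, not from the widget's documentation).
scenes = {
    "e0": ("A", "B"),
    "e1": ("B", "C"),
    "e2": ("A", "C", "D"),
}

# In a notebook one would then run (assumed API, commented out here):
#   import hypernetx as hnx
#   from hnxwidget import HypernetxWidget
#   H = hnx.Hypergraph(scenes)
#   HypernetxWidget(H)

# The node set of this hypergraph is the union of all edge memberships.
nodes = sorted(set().union(*scenes.values()))
print(nodes)
```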
+
+

Installation

+

The HypernetxWidget is available on GitHub and may be installed using pip:

+
pip install hnxwidget
+
+
+
+
+

Using the Tool

+
+

Layout

+

The hypergraph visualization is an Euler diagram that shows nodes as circles and hyperedges as outlines enclosing the nodes they contain. The visualization uses a force-directed optimization to perform the layout. This algorithm is not perfect and sometimes gives results that the user might want to improve upon. The visualization allows the user to drag nodes and position them directly at any time. The algorithm will re-position any nodes that have not been positioned by the user. Ctrl (Windows) or Command (Mac) clicking a pinned node will release it to be re-positioned by the algorithm.

+
+
+

Selection

+

Nodes and edges can be selected by clicking them. Nodes and edges can be selected independently of each other, i.e., it is possible to select an edge without selecting the nodes it contains. Multiple nodes and edges can be selected by holding down Shift while clicking. Shift-clicking an already selected node will de-select it. Clicking the background will de-select all nodes and edges. Dragging a selected node will drag all selected nodes, keeping their relative placement. Selected nodes can be hidden (having their appearance minimized) or removed completely from the visualization. Hiding a node or edge will not cause a change in the layout, whereas removing a node or edge will. The selection can also be expanded. Buttons in the toolbar allow for selecting all nodes contained within selected edges, and selecting all edges containing any selected nodes. The toolbar also contains buttons to select all nodes (or edges), un-select all nodes (or edges), or reverse the selected nodes (or edges). An advanced user might:

+
    +
• Select all nodes not in an edge by selecting an edge, selecting all nodes in that edge, then reversing the selected nodes to select every node not in that edge.

  • +
• Traverse the graph by selecting a start node, then alternating between selecting all edges containing selected nodes and selecting all nodes within selected edges.

  • +
• Pin everything by hitting the button to select all nodes, then dragging any node slightly to activate pinning for all nodes.

  • +
+
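The two selection-expansion operations above reduce to simple set operations. The sketch below models a hypergraph as a plain dict of edge names to node sets (illustrative data, not the widget's internal representation) and performs one traversal step:

```python
# Hypergraph as edge name -> set of member nodes (illustrative data).
edges = {
    "e0": {"A", "B"},
    "e1": {"B", "C"},
    "e2": {"C", "D", "E"},
}

def nodes_in_selected_edges(edges, selected_edges):
    """All nodes contained within any selected edge."""
    if not selected_edges:
        return set()
    return set().union(*(edges[e] for e in selected_edges))

def edges_containing_selected_nodes(edges, selected_nodes):
    """All edges containing at least one selected node."""
    return {e for e, members in edges.items() if members & selected_nodes}

# One traversal step starting from node "A": expand to edges, then to nodes.
step_edges = edges_containing_selected_nodes(edges, {"A"})
step_nodes = nodes_in_selected_edges(edges, step_edges)
print(step_edges, step_nodes)
```

Alternating these two calls, as described above, walks outward across the connected component containing the start node.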
+
+

Side Panel

+

Details on nodes and edges are visible in the side panel. For both nodes and edges, a table shows the name, the degree (or size, for edges), the selection state, the removed state, and the color. These properties can also be controlled directly from this panel. The color of nodes and edges can be set in bulk here as well, for example, coloring by degree.

+
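The "degree" and "size" columns in the panel follow the standard hypergraph definitions: a node's degree is the number of edges containing it, and an edge's size is the number of nodes it contains. A minimal sketch, using illustrative dict-based data:

```python
# Hypergraph as edge name -> set of member nodes (illustrative data).
edges = {"e0": {"A", "B"}, "e1": {"B", "C"}, "e2": {"A", "C", "D"}}

all_nodes = set().union(*edges.values())

# Node degree: number of edges that contain the node.
degree = {n: sum(n in members for members in edges.values())
          for n in sorted(all_nodes)}

# Edge size: number of nodes in the edge.
size = {e: len(members) for e, members in edges.items()}

print(degree)
print(size)
```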
+
+

Other Features

+

Nodes with identical edge membership can be collapsed into a super node, which can be helpful for larger hypergraphs. Dragging any node in a super node will drag the entire super node. This feature is available as a toggle in the nodes panel.

+
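"Identical edge membership" means two nodes belong to exactly the same set of edges. A sketch of the grouping that collapsing performs, on illustrative dict-based data (this mirrors the idea, not the widget's implementation):

```python
from collections import defaultdict

# Hypergraph as edge name -> set of member nodes; "B" and "X" belong to
# exactly the same edges, so they would collapse into one super node.
edges = {"e0": {"A", "B", "X"}, "e1": {"B", "X", "C"}}

# Membership signature: the set of edges containing each node.
membership = defaultdict(set)
for e, members in edges.items():
    for n in members:
        membership[n].add(e)

# Group nodes sharing a signature; groups of size > 1 become super nodes.
groups = defaultdict(list)
for n, sig in membership.items():
    groups[frozenset(sig)].append(n)
super_nodes = [sorted(g) for g in groups.values() if len(g) > 1]
print(super_nodes)
```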

The hypergraph can also be visualized as a bipartite graph (similar to a traditional node-link diagram). Toggling this feature will preserve the locations of the nodes between the bipartite and the Euler diagrams.

+
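The bipartite view corresponds to the hypergraph's incidence relation: one link per (edge, node) membership, with edges and nodes forming the two vertex classes. A sketch on illustrative dict-based data:

```python
# Hypergraph as edge name -> set of member nodes (illustrative data).
edges = {"e0": {"A", "B"}, "e1": {"B", "C"}}

# One bipartite link per membership pair; sorted for a stable ordering.
bipartite_links = sorted((e, n) for e, members in edges.items()
                         for n in members)
print(bipartite_links)
```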
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file