Hello there! I downloaded one of the tutorials and ran it as a .py file. I'll share the exact output I get below. Is it normal that it keeps restarting itself over and over? Thanks!
(graphvite) batuhanm@etiyayz:~/graphvite$ python node_representation_learning_on_large_graphs.py
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0460918 s
[time] GraphApplication.build: 0.166983 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 10.2 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387883
Batch id: 2000 / 6548
loss = 0.382675
Batch id: 3000 / 6548
loss = 0.379259
Batch id: 4000 / 6548
loss = 0.376252
Batch id: 5000 / 6548
loss = 0.372444
Batch id: 6000 / 6548
loss = 0.370453
[time] GraphApplication.train: 4.45633 s
effective labels: 14472 / 14476
1
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0466001 s
[time] GraphApplication.build: 0.145495 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387772
Batch id: 2000 / 6548
loss = 0.382389
Batch id: 3000 / 6548
loss = 0.378922
Batch id: 4000 / 6548
loss = 0.376096
Batch id: 5000 / 6548
loss = 0.372452
Batch id: 6000 / 6548
loss = 0.370365
[time] GraphApplication.train: 4.49339 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0464778 s
[time] GraphApplication.build: 0.141464 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387672
Batch id: 2000 / 6548
loss = 0.382468
Batch id: 3000 / 6548
loss = 0.379075
Batch id: 4000 / 6548
loss = 0.376327
Batch id: 5000 / 6548
loss = 0.372425
Batch id: 6000 / 6548
loss = 0.37032
[time] GraphApplication.train: 4.50981 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0471516 s
[time] GraphApplication.build: 0.14229 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.388005
Batch id: 2000 / 6548
loss = 0.382484
Batch id: 3000 / 6548
loss = 0.379027
Batch id: 4000 / 6548
loss = 0.376399
Batch id: 5000 / 6548
loss = 0.3726
Batch id: 6000 / 6548
loss = 0.370475
[time] GraphApplication.train: 4.42358 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0460806 s
[time] GraphApplication.build: 0.140951 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387827
Batch id: 2000 / 6548
loss = 0.382371
Batch id: 3000 / 6548
loss = 0.378984
Batch id: 4000 / 6548
loss = 0.376317
Batch id: 5000 / 6548
loss = 0.372333
Batch id: 6000 / 6548
loss = 0.370397
[time] GraphApplication.train: 4.44952 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0463161 s
[time] GraphApplication.build: 0.141157 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387918
Batch id: 2000 / 6548
loss = 0.382566
Batch id: 3000 / 6548
loss = 0.379185
Batch id: 4000 / 6548
loss = 0.37623
Batch id: 5000 / 6548
loss = 0.372305
Batch id: 6000 / 6548
loss = 0.37036
[time] GraphApplication.train: 4.48253 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0465286 s
[time] GraphApplication.build: 0.141467 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387701
Batch id: 2000 / 6548
loss = 0.382437
Batch id: 3000 / 6548
loss = 0.378942
Batch id: 4000 / 6548
loss = 0.376202
Batch id: 5000 / 6548
loss = 0.37252
Batch id: 6000 / 6548
loss = 0.3704
[time] GraphApplication.train: 4.50553 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0460839 s
[time] GraphApplication.build: 0.141356 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387748
Batch id: 2000 / 6548
loss = 0.382556
Batch id: 3000 / 6548
loss = 0.37917
Batch id: 4000 / 6548
loss = 0.376153
Batch id: 5000 / 6548
loss = 0.372358
Batch id: 6000 / 6548
loss = 0.37046
[time] GraphApplication.train: 4.4739 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.0463991 s
[time] GraphApplication.build: 0.141246 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.387929
Batch id: 2000 / 6548
loss = 0.382644
Batch id: 3000 / 6548
loss = 0.379018
Batch id: 4000 / 6548
loss = 0.376168
Batch id: 5000 / 6548
loss = 0.372515
Batch id: 6000 / 6548
loss = 0.370471
[time] GraphApplication.train: 4.48483 s
effective labels: 14472 / 14476
1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 114, in _main
prepare(preparation_data)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/batuhanm/graphvite/node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 234, in gpu_map
pool = multiprocessing.Pool(len(gpus))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 119, in Pool
context=self.get_context())
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 176, in __init__
self._repopulate_pool()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
w.start()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/process.py", line 112, in start
self._popen = self._Popen(self)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
self._launch(process_obj)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
loading graph from /home/batuhanm/.graphvite/dataset/blogcatalog/blogcatalog_train.txt
0.00018755%
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Graph<uint32>
------------------ Graph -------------------
#vertex: 10308, #edge: 327429
as undirected: yes, normalization: no
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[time] GraphApplication.load: 0.046684 s
[time] GraphApplication.build: 0.141742 s
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
GraphSolver<128, float32, uint32>
----------------- Resource -----------------
#worker: 1, #sampler: 7, #partition: 1
tied weights: no, episode size: 200
gpu memory limit: 9.99 GiB
gpu memory cost: 51.5 MiB
----------------- Sampling -----------------
augmentation step: 2, shuffle base: 2
random walk length: 40
random walk batch size: 100
#negative: 1, negative sample exponent: 0.75
----------------- Training -----------------
model: LINE
optimizer: SGD
learning rate: 0.025, lr schedule: linear
weight decay: 0.005
#epoch: 2000, batch size: 100000
resume: no
positive reuse: 1, negative weight: 5
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Batch id: 0 / 6548
loss = 0
Batch id: 1000 / 6548
loss = 0.388031
Batch id: 2000 / 6548
loss = 0.382297
Batch id: 3000 / 6548
loss = 0.379089
Batch id: 4000 / 6548
loss = 0.376302
^CTraceback (most recent call last):
File "node_representation_learning_on_large_graphs.py", line 65, in <module>
app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 346, in node_classification
results = self.gpu_map(linear_classification, settings)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/site-packages/graphvite/application/application.py", line 235, in gpu_map
results = pool.map(func, settings, chunksize=1)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 651, in get
self.wait(timeout)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/multiprocessing/pool.py", line 648, in wait
self._event.wait(timeout)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/threading.py", line 552, in wait
signaled = self._cond.wait(timeout)
File "/home/batuhanm/miniconda3/envs/graphvite/lib/python3.7/threading.py", line 296, in wait
waiter.acquire()
KeyboardInterrupt
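From the RuntimeError above, my reading is that the multiprocessing spawn start method re-imports the script as a fresh __main__, so the top-level load/build/train code runs again every time node_classification creates its GPU worker pool. The usual fix the error message points to is to move all top-level work behind the main-module guard. Below is a minimal sketch of how I understand the tutorial script would be restructured; only the node_classification call and the GraphApplication.load/build/train steps are taken from the traceback and the timing log, while the module alias and constructor arguments are assumptions on my part:

import graphvite as gv
import graphvite.application as gap  # module alias is an assumption

def main():
    # load / build / train, matching the GraphApplication.load/build/train timings above
    app = gap.GraphApplication(dim=128)  # constructor arguments are an assumption
    app.load(file_name=gv.dataset.blogcatalog.train)
    app.build()
    app.train()
    # this call spawns a worker pool, which re-imports __main__ under the spawn method
    app.node_classification(file_name=gv.dataset.blogcatalog.label, portions=(0.2,))

if __name__ == "__main__":
    # the guard keeps spawned children from re-running the training code at import time
    main()

Would wrapping the script like this be the expected way to run the tutorial outside a notebook?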