input is too complicated to be hashable #714

Open
yibeichan opened this issue Oct 16, 2023 · 0 comments
While debugging the first-level GLM for the Pydra tutorial, we (@yibeichan @djarecka) encountered a hashing error for a specific node.

Node:

# get glm report
# (pydra, ty (typing), os, make_glm_report, and workflow_out_dir are
# defined in earlier cells of the tutorial notebook)
@pydra.mark.task
@pydra.mark.annotate(
    {'model': ty.Any, 'contrasts': str, 'return': {'output_file': str}}
)
def glm_report(model, contrasts):
    output_file = os.path.join(workflow_out_dir, 'glm_report.html')
    report = make_glm_report(model, contrasts)
    report.save_as_html(output_file)
    return output_file

Error:

Task exception was never retrieved
future: <Task finished name='Task-7' coro=<ConcurrentFuturesWorker.exec_as_coro() done, defined at /Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/engine/workers.py:173> exception=UnhashableError('Cannot hash object {\'model\': FirstLevelModel(smoothing_fwhm=5.0, subject_label=\'10159\', t_r=2), \'contrasts\': \'StopSuccess - Go\', \'_func\': b\'\\x80\\x05\\x95\\x04\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x8c\\x17cloudpickle.cloudpickle\\x94\\x8c\\x0e_make_function\\x94\\x93\\x94(h\\x00\\x8c\\r_builtin_type\\x94\\x93\\x94\\x8c\\x08CodeType\\x94\\x85\\x94R\\x94(K\\x02K\\x00K\\x00K\\x04K\\x04K\\x03C\\x9a\\x97\\x00t\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00j\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xa0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00t\\x06\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00d\\x01\\xa6\\x02\\x00\\x00\\xab\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00}\\x02t\\t\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00|\\x00|\\x01\\xa6\\x02\\x00\\x00\\xab\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00}\\x03|\\x03\\xa0\\x05\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00|\\x02\\xa6\\x01\\x00\\x00\\xab\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00|\\x02S\\x00\\x94N\\x8c\\x0fglm_report.html\\x94\\x86\\x94(\\x8c\\x02os\\x94\\x8c\\x04path\\x94\\x8c\\x04join\\x94\\x8c\\x10workflow_out_dir\\x94\\x8c\\x0fmake_glm_report\\x94\\x8c\\x0csave_as_html\\x94t\\x94(\\x8c\\x05model\\x94\\x8c\\tcontrasts\\x94\\x8c\\x0boutput_file\\x94\\x8c\\x06report\\x94t\\x94\\x8cN/var/folders/wr/x5xt_yqs2cvc_gb3sf147lvc0000gn/T/ipykernel_55279/3311347549.py\\x94\\x8c\\nglm_report\\x94h\\x18K\\x0eCD\\x80\\x00\\xf5\\n\\x00\\x13\\x15\\x94\\\'\\x97,\\x92,\\xd5\\x1f/\\xd01B\\xd1\\x12C\\xd4\\x12C\\x80K\\xdd\\r\\x1c\\x98U\\xa0I\\xd1\\r.\\xd4\\r.\\x80F\\xd8\\x04\\n\\xd7\\x04\\x17\\xd2\\x04\\x17\\x98\\x0b\\xd1\\x04$\\xd4\\x04$\\
xd0\\x04$\\xd8\\x0b\\x16\\xd0\\x04\\x16\\x94C\\x00\\x94))t\\x94R\\x94}\\x94(\\x8c\\x0b__package__\\x94N\\x8c\\x08__name__\\x94\\x8c\\x08__main__\\x94uNNNt\\x94R\\x94\\x8c\\x1ccloudpickle.cloudpickle_fast\\x94\\x8c\\x12_function_setstate\\x94\\x93\\x94h"}\\x94}\\x94(h\\x1fh\\x18\\x8c\\x0c__qualname__\\x94h\\x18\\x8c\\x0f__annotations__\\x94}\\x94(h\\x12\\x8c\\x06typing\\x94\\x8c\\x03Any\\x94\\x93\\x94h\\x13\\x8c\\x08builtins\\x94\\x8c\\x03str\\x94\\x93\\x94\\x8c\\x06return\\x94}\\x94h\\x14h0su\\x8c\\x0e__kwdefaults__\\x94N\\x8c\\x0c__defaults__\\x94N\\x8c\\n__module__\\x94h \\x8c\\x07__doc__\\x94N\\x8c\\x0b__closure__\\x94N\\x8c\\x17_cloudpickle_submodules\\x94]\\x94\\x8c\\x0b__globals__\\x94}\\x94(h\\x0bh\\x00\\x8c\\tsubimport\\x94\\x93\\x94h\\x0b\\x85\\x94R\\x94h\\x0e\\x8c\\x07pathlib\\x94\\x8c\\tPosixPath\\x94\\x93\\x94(\\x8c\\x01/\\x94\\x8c\\x03tmp\\x94\\x8c\\x07outputs\\x94\\x8c\\x056_glm\\x94t\\x94R\\x94h\\x0f\\x8c\\x1enilearn.reporting.glm_reporter\\x94h\\x0f\\x93\\x94uu\\x86\\x94\\x86R0.\'}')>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 106, in bytes_repr
    dct = obj.__dict__
          ^^^^^^^^^^^^
AttributeError: 'weakref.ReferenceType' object has no attribute '__dict__'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 73, in hash_object
    return hash_single(obj, Cache({}))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 89, in hash_single
    for chunk in bytes_repr(obj, cache):
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 223, in bytes_repr_dict
    yield from bytes_repr_mapping_contents(obj, cache)
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 264, in bytes_repr_mapping_contents
    yield bytes(hash_single(mapping[key], cache))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 89, in hash_single
    for chunk in bytes_repr(obj, cache):
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 114, in bytes_repr
    yield from bytes_repr_mapping_contents(dct, cache)
  File "/Users/yibeichen/miniconda3/envs/pydra-tutorial/lib/python3.11/site-packages/pydra/utils/hash.py", line 264, in bytes_repr_mapping_contents
    yield bytes(hash_single(mapping[key], cache))
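To see why one such attribute poisons the whole hash, here is a simplified stand-in (not Pydra's actual implementation) for the recursive `__dict__`-based hashing visible in the traceback; `Estimator` and `Results` are made-up classes standing in for the fitted model's nested state:

```python
import weakref

def naive_bytes_repr(obj):
    """Simplified analogue of the generic bytes_repr fallback seen in the
    traceback: primitives are rendered directly, everything else by
    recursing into obj.__dict__."""
    if isinstance(obj, (str, bytes, int, float, bool, type(None))):
        return repr(obj).encode()
    parts = []
    for key, value in obj.__dict__.items():  # AttributeError on weakrefs
        parts.append(key.encode() + b"=" + naive_bytes_repr(value))
    return b"{" + b",".join(parts) + b"}"

class Results:
    def __init__(self, owner):
        # back-reference via weakref, a common pattern in fitted estimators
        self.owner = weakref.ref(owner)

class Estimator:
    def __init__(self):
        self.t_r = 2
        self.results_ = Results(self)

naive_bytes_repr("StopSuccess - Go")  # a simple input hashes fine
try:
    naive_bytes_repr(Estimator())     # one nested weakref breaks the whole hash
except AttributeError as err:
    print(err)
```

Plain `weakref.ReferenceType` objects define no `__dict__`, so as soon as the recursion reaches one anywhere in the object graph, the whole input becomes unhashable, which matches the innermost `AttributeError` above.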

We concluded that the error is caused by the input model being too complex to hash: model is the first-level GLM model after model.fit(), and its fitted state evidently contains attributes (such as weakrefs) that Pydra's hasher cannot serialize.
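Since the node's inputs are hashed before execution, anything annotated `ty.Any` goes through the generic hasher in full. A quick way to check a value ahead of time is to probe whether Python can even enumerate its nested state; the helper below is our own sketch, not a Pydra API:

```python
def find_unhashable_attrs(obj, path="model", seen=None):
    """Hypothetical debugging helper: walk obj.__dict__ recursively and
    report attribute paths whose objects expose no __dict__ and are not
    simple primitives -- the shape of object that trips a __dict__-based
    hasher."""
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return []
    seen.add(id(obj))
    if isinstance(obj, (str, bytes, int, float, bool, type(None))):
        return []
    if not hasattr(obj, "__dict__"):
        return [path]  # e.g. a weakref buried in fitted state
    bad = []
    for key, value in vars(obj).items():
        bad.extend(find_unhashable_attrs(value, f"{path}.{key}", seen))
    return bad
```

Running this on the fitted model before handing it to a node would point at the offending attribute paths instead of failing deep inside the worker.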

We solved this problem by integrating glm_report into the model_fitting (l1estimation) node, but we want to record the issue here for future reference, since we will probably run into similar problems again.
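Another workaround (a sketch, untested against the tutorial; the function names and the pickling step are our assumptions, not tutorial code) is to keep the fitted model out of the node interface altogether: persist it to disk inside the fitting task and pass only the file path downstream, since a plain str hashes trivially:

```python
import os
import pickle

# In the tutorial these would be wrapped with @pydra.mark.task; they are
# plain functions here so the sketch stands alone.

def fit_and_save(model, out_dir):
    """Hypothetical fitting task: after fitting, pickle the model and
    return only its path, so downstream nodes hash a str, not the model."""
    model_path = os.path.join(out_dir, 'fitted_model.pkl')
    with open(model_path, 'wb') as f:
        pickle.dump(model, f)
    return model_path

def glm_report_from_path(model_path, contrasts, out_dir):
    """Hypothetical report task: reload the model inside the task body,
    so the complex object never crosses a node boundary."""
    with open(model_path, 'rb') as f:
        model = pickle.load(f)
    output_file = os.path.join(out_dir, 'glm_report.html')
    # report = make_glm_report(model, contrasts)   # nilearn call from the issue
    # report.save_as_html(output_file)
    return output_file
```

This assumes the fitted FirstLevelModel is picklable; if it is not, merging the two tasks as we did remains the simpler fix.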
