
Ep/qmml #666

Closed
wants to merge 57 commits into from

Conversation

epens94 (Collaborator) commented Oct 24, 2024

Contains changes to work with QCML Dataset

  • First draft of the QCML dataclass
  • Inclusion of an adaptive loss function
  • Inclusion of Bernstein and PhysNet radial bases (a sketch of the Bernstein basis follows below)
  • New splitting class based on atom type
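For illustration, here is a minimal sketch of a Bernstein-polynomial radial basis of the kind listed above, assuming distances are first squashed into [0, 1] via exp(-gamma * r); the function name, parameter names, and the squashing choice are assumptions for illustration, not necessarily what this PR implements:

import math
import torch

def bernstein_rbf(r: torch.Tensor, n_rbf: int = 20, gamma: float = 0.5) -> torch.Tensor:
    # Squash distances into [0, 1]: x -> 1 as r -> 0 and x -> 0 as r -> inf.
    x = torch.exp(-gamma * r).unsqueeze(-1)
    n = n_rbf - 1
    k = torch.arange(n_rbf, dtype=r.dtype)
    # Binomial coefficients C(n, k) for the Bernstein polynomials B_{k,n}(x).
    binom = torch.tensor([math.comb(n, i) for i in range(n_rbf)], dtype=r.dtype)
    # B_{k,n}(x) = C(n, k) * x^k * (1 - x)^(n - k), one basis function per k.
    return binom * x**k * (1.0 - x) ** (n - k)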

epens94 and others added 30 commits February 7, 2024 18:49
  • The molecular charge and molecular spin embedding, taken from SpookyNet and slightly modified for the SchNet representation (a rough sketch follows below)
  • testing the electronic_embedding module
  • databases for carbene and ag3 ions
  • train_V01.py is the main file to run the training
  • train.sh is the bash script to run the training on the cluster
  • add electronic embedding to SO3net and bugfix PaiNN and SchNet representations
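For context, a minimal sketch of what a SpookyNet-style electronic embedding could look like: a global molecular property (total charge or spin) is distributed over atoms with attention-style weights. This is a simplified illustration with hypothetical names, not the exact SpookyNet formulation or the code in this PR:

import torch
import torch.nn as nn

class ElectronicEmbedding(nn.Module):
    # Simplified sketch: distribute a molecular charge/spin over atoms.

    def __init__(self, n_features: int):
        super().__init__()
        self.key = nn.Linear(n_features, n_features)
        self.value = nn.Linear(n_features, n_features)
        # Learned query vector shared across atoms.
        self.query = nn.Parameter(torch.randn(n_features))

    def forward(self, x: torch.Tensor, charge: torch.Tensor) -> torch.Tensor:
        # x: (n_atoms, n_features) atomwise features of a single molecule
        # charge: scalar tensor holding the total molecular charge (or spin)
        k = self.key(x)
        w = torch.softmax(k @ self.query / x.shape[-1] ** 0.5, dim=0)
        # Each atom receives a share of the global charge as a feature update,
        # which would typically be added to the atomwise features afterwards.
        return w.unsqueeze(-1) * charge * self.value(x)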
Collaborator:

I think this should be ignored

@@ -0,0 +1 @@
include src/schnetpack/train/ressources/partition_spline_for_robust_loss.npz
Collaborator:

What is this?

Collaborator (Author):

Values are loaded from this file and used to approximate the partition function for the adaptive loss fn.
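For context: a Barron-style adaptive robust loss used as a negative log-likelihood needs the log-partition function log Z(alpha), which has no closed form and is therefore commonly approximated from precomputed spline knots. A minimal sketch of how such a .npz file might be consumed; the array key names are assumptions for illustration:

import numpy as np

# Load precomputed knots of the log-partition function log Z(alpha).
data = np.load("partition_spline_for_robust_loss.npz")
alphas, log_z = data["x"], data["values"]  # hypothetical key names

def log_partition(alpha: float) -> float:
    # Piecewise-linear interpolation between the precomputed knots.
    return float(np.interp(alpha, alphas, log_z))

# The adaptive negative log-likelihood then adds log Z(alpha)
# to the robust loss term.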

Collaborator:

I would keep this private/local and not upload to schnetpack

Collaborator (Author):

Do you mean because of the data, or because of the attached comments?

batch_size: 50
num_train: 0.90
num_val: 0.05
load_properties: [formation_energy,forces,charge,multiplicity]
Collaborator:

I don't think it is possible to pass a list like this. If I remember correctly, it should work like:

load_properties:
    - formation_energy
    - forces
    - ...

Collaborator (Author):

It is possible to pass a list like this.
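For reference, plain YAML parses the flow style and the block style to the same list, which is easy to verify with PyYAML (standalone check, independent of this config):

import yaml

flow = yaml.safe_load("load_properties: [formation_energy,forces,charge,multiplicity]")
block = yaml.safe_load(
    "load_properties:\n"
    "  - formation_energy\n"
    "  - forces\n"
    "  - charge\n"
    "  - multiplicity\n"
)
# Both styles yield the same dict with a list value.
assert flow == block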

Collaborator:

remove

@@ -57,5 +57,8 @@ script-files = [
"src/scripts/spkdeploy",
]

# Ensure package data such as resources are included
package-data = { "schnetpack.train" = ["ressources/partition_spline_for_robust_loss.npz"] }
Collaborator:

same here

Collaborator (Author):

this is needed to include the file when building the package

Collaborator:

I would maybe add this directly in the experiment yaml file rather than as a separate config file, because it is barely used at the moment.

@@ -1,5 +1,6 @@
defaults:
- radial_basis: gaussian
- nuclear_embedding: complex
Collaborator:

remove this and leave the default value as it is

Collaborator (Author):

I agree, done in 0f6fc8b

scheduler_cls: torch.optim.lr_scheduler.MultiStepLR
scheduler_monitor: val_loss
scheduler_args:
milestones: [3,9,15,18,24,30,36]
Collaborator:

the milestones are very specific to the experiment. I would either set them to ??? or just define the scheduler in the experiment.yaml
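For reference, a minimal sketch of what these milestones do in torch.optim.lr_scheduler.MultiStepLR; the model and optimizer here are placeholders, and gamma falls back to its default of 0.1 since the config above does not set it:

import torch

model = torch.nn.Linear(8, 1)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[3, 9, 15, 18, 24, 30, 36], gamma=0.1
)
for epoch in range(40):
    # ... one training epoch ...
    scheduler.step()  # lr is multiplied by gamma at each listed epoch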

Collaborator (Author):

I agree, done in ca4acd7

@@ -344,8 +344,10 @@ def _get_properties(
properties = {}
properties[structure.idx] = torch.tensor([idx])
for pname in load_properties:

Collaborator:

remove blank lines

Collaborator (Author):

done

epens94 closed this Oct 24, 2024