
Feature/integrate tsam no multiple timegrid storage #1058

Open
wants to merge 2 commits into base: feature/integrate_tsam

Conversation

@Maxhi77 commented Feb 28, 2024

Added the possibility to have diverse storages, with and without inter_storage_content, in tsam mode.

For this purpose, multiple_time_grid has been added to generic_storage. The naming can be improved in the future.
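A hedged usage sketch of the new parameter, assuming it is exposed as a plain keyword argument named multiple_time_grid on GenericStorage (the PR itself notes that the name may still change); everything else is standard solph usage:

from oemof import solph

el_bus = solph.buses.Bus(label="electricity")

# Hypothetical usage: multiple_time_grid is the flag added by this PR;
# all other arguments are ordinary GenericStorage parameters.
storage = solph.components.GenericStorage(
    label="storage_with_inter_content",
    inputs={el_bus: solph.flows.Flow()},
    outputs={el_bus: solph.flows.Flow()},
    nominal_storage_capacity=100,
    multiple_time_grid=True,  # keep inter-period storage content in tsam mode
)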

@Maxhi77 requested a review from henhuy February 28, 2024 13:57
@pep8speaks

Hello @Maxhi77! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 511:9: E306 expected 1 blank line before a nested definition, found 0
Line 539:80: E501 line too long (81 > 79 characters)
Line 546:80: E501 line too long (115 > 79 characters)
Line 557:80: E501 line too long (96 > 79 characters)
Line 558:70: E126 continuation line over-indented for hanging indent
Line 558:80: E501 line too long (95 > 79 characters)
Line 560:80: E501 line too long (84 > 79 characters)
Line 561:25: E117 over-indented
Line 562:80: E501 line too long (80 > 79 characters)
Line 772:21: E122 continuation line missing indentation or outdented

@henhuy (Contributor) commented Feb 28, 2024

Thanks @Maxhi77 - will have a look.

@Maxhi77 (Author) commented Feb 28, 2024

I just saw that I forgot to add the possibility for investment; I am going to do that now.

@Maxhi77 (Author) commented Feb 28, 2024

I see a logical fault in my approach that I do not know how to solve at the moment.

For a storage with multiple_tsam_timegrid, the variables would currently be declared like this:

for n in group:
    if n.multiple_tsam_timegrid:
        self.storage_content_inter = Var(
            self.STORAGES, m.CLUSTERS_OFFSET, within=NonNegativeReals
        )
        self.storage_content_intra = Var(
            self.STORAGES, m.TIMEINDEX_TYPICAL_CLUSTER_OFFSET
        )
    else:
        self.storage_content_intra = Var(
            self.STORAGES, m.TIMEINDEX_TYPICAL_CLUSTER_OFFSET,
            bounds=_storage_content_bound_intra_rule,
        )

If I understand oemof correctly, the GenericStorageBlock is the same for all storages. Right now, the last storage in the group would determine which branch of the if statement is taken, and that choice would affect all storages. In my case I want to define the variables differently when multiple_tsam_timegrid is True. All solutions I can think of involve a lot of changes.

Do you have an idea?

@henhuy (Contributor) commented Feb 28, 2024

You are right, it's not that simple.
I only see two options:

  1. Split storages internally, like it is done for balanced storages with
     self.STORAGES_BALANCED = Set(...)
     In this case, every variable would have to be changed depending on whether it is indexed over your newly generated Set or the original one.
  2. Create a new block class which inherits from GenericStorageBlock and override some of the initialization functions to your needs. Then, in GenericStorage at https://github.com/oemof/oemof-solph/blob/b00fd5f975dc72646e431d65bc07c650ace2bfd3/src/oemof/solph/components/_generic_storage.py#L328, you have to define which block shall be used, depending on your boolean variable multiple_tsam_timegrid (see the sketch below). This seems a bit easier IMO.
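For illustration, here is a minimal sketch of the second option. It assumes the flag ends up as a boolean attribute named multiple_tsam_timegrid on the storage and that constraint_group() is the hook that selects the block class; the class names and the _create body below are hypothetical, not code from this PR (shown as a subclass for brevity, whereas the PR would put the check into GenericStorage.constraint_group() itself):

from pyomo.environ import NonNegativeReals, Set, Var

from oemof.solph.components._generic_storage import (
    GenericStorage,
    GenericStorageBlock,
)


class MultipleTimegridStorageBlock(GenericStorageBlock):
    """Hypothetical block declaring inter- and intra-period storage content."""

    def _create(self, group=None):
        if group is None:
            return None
        m = self.parent_block()
        self.STORAGES = Set(initialize=[n for n in group])
        # Additional inter-period variable, only for this block variant.
        self.storage_content_inter = Var(
            self.STORAGES, m.CLUSTERS_OFFSET, within=NonNegativeReals
        )
        self.storage_content_intra = Var(
            self.STORAGES, m.TIMEINDEX_TYPICAL_CLUSTER_OFFSET
        )
        # ... constraints coupling inter- and intra-period content go here ...


class MultipleTimegridStorage(GenericStorage):
    """Hypothetical storage that selects its block via the new flag."""

    def constraint_group(self):
        if getattr(self, "multiple_tsam_timegrid", False):
            return MultipleTimegridStorageBlock
        return super().constraint_group()

This way, storages with and without the flag end up in different blocks, so their variables can be declared independently.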

@Maxhi77 (Author) commented Feb 28, 2024

Thanks for the answer. I had the first option in mind, but thought it would be quite complex.

The second one sounds easier, but might not be clean and might not be favoured by reviewers. Therefore, I am going to ping them (@gnn @p-snft) for their opinion.

@Maxhi77 assigned gnn and p-snft Mar 4, 2024