diff --git a/dev/advanced_concepts/Lossless_DC_power_flow/index.html b/dev/advanced_concepts/Lossless_DC_power_flow/index.html

Lossless nodal DC power flows

Currently, there are two different methods to represent lossless DC power flows. In the following, the implementation of the nodal model, based on node voltage angles, is presented.

Key concepts

In the following, it is described how to set up a connection in order to represent a nodal lossless DC power flow network. To that end, the key object and relationship classes as well as the relevant parameters are introduced.

  1. connection: A connection represents the electricity line being modelled. A physical property of a connection is its connection_reactance, which is defined on the connection object. Furthermore, if the reactance is given in a p.u. different from the standard unit used (e.g. a p.u. base of 100 MVA), the parameter connection_reactance_base can be used to perform this conversion.
  2. node: In a lossless DC power flow model, nodes correspond to buses. To use voltage angles for the representation of a lossless DC model, the has_voltage_angle parameter needs to be true for these nodes (which will trigger the generation of the node_voltage_angle variable). Limits on the voltage angle can be enforced through the max_voltage_angle and min_voltage_angle parameters. The reference node of the system should have a voltage angle equal to zero, assigned through the parameter fix_node_voltage_angle.
  3. connection__to_node and connection__from_node : These relationships need to be introduced between the connection and each node, in order to allow power flows (i.e. connection_flow). Furthermore, a capacity limit on the connection line can be introduced on these relationships through the parameter connection_capacity.
  4. connection__node__node: To ensure energy conservation across the power line, a fixed ratio between incoming and outgoing flows should be given. The fix_ratio_out_in_connection_flow parameter enforces a fixed ratio between outgoing flows (i.e. to_node) and incoming flows (i.e. from_node). This parameter should be defined for both flow directions.

The mathematical formulation of the lossless DC power flow model using voltage angles is fully described here.
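As a quick numeric illustration of the relationships above (plain Python with invented three-node data, not SpineOpt itself): the node_voltage_angle values follow from the susceptance matrix and the net injections, with the reference angle fixed to zero, and each connection_flow is the angle difference divided by the connection_reactance.

```python
# Lossless DC power flow sketch: solve B * theta = P for the non-reference
# voltage angles, then recover each flow as (theta_from - theta_to) / x.
# All network data below is illustrative.

lines = {(1, 2): 0.10, (1, 3): 0.25, (2, 3): 0.20}  # reactances x (p.u.)
injection = {1: 100.0, 2: -60.0, 3: -40.0}          # net injections (MW)
ref = 1                                              # fix_node_voltage_angle = 0 here

nodes = sorted(injection)
free = [n for n in nodes if n != ref]
idx = {n: i for i, n in enumerate(free)}

# Build the reduced susceptance matrix B (reference row/column removed).
B = [[0.0] * len(free) for _ in free]
for (i, j), x in lines.items():
    b = 1.0 / x
    for a, o in ((i, j), (j, i)):
        if a == ref:
            continue
        B[idx[a]][idx[a]] += b
        if o != ref:
            B[idx[a]][idx[o]] -= b

# Solve B * theta = P by Gaussian elimination (tiny system, no numpy needed).
P = [injection[n] for n in free]
m = len(free)
for k in range(m):
    for r in range(k + 1, m):
        f = B[r][k] / B[k][k]
        for col in range(k, m):
            B[r][col] -= f * B[k][col]
        P[r] -= f * P[k]
theta = [0.0] * m
for k in reversed(range(m)):
    theta[k] = (P[k] - sum(B[k][col] * theta[col] for col in range(k + 1, m))) / B[k][k]

angle = {ref: 0.0, **{nd: theta[idx[nd]] for nd in free}}
flow = {(i, j): (angle[i] - angle[j]) / x for (i, j), x in lines.items()}
print({k: round(v, 2) for k, v in flow.items()})
```

Note how the flows balance at every bus: the 60 MW load at node 2 is served by the flow on line (1, 2) minus what leaves on line (2, 3), which is exactly what the fixed in/out flow ratio on connection__node__node enforces.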

diff --git a/dev/advanced_concepts/decomposition/index.html b/dev/advanced_concepts/decomposition/index.html

Decomposition

Decomposition approaches take advantage of certain problem structures to separate them into multiple related problems, each of which is more easily solved. Decomposition also allows us to do the inverse: combine independent problems into a single problem, where each is solved separately but with communication between them (e.g. investment and operations problems).

Decomposition thus allows us to do a number of things:

  • Solve larger problems which are otherwise intractable
  • Include more detail in problems which otherwise need to be simplified
  • Combine related problems (e.g. investments/operations) in a more scientific way (rather than ad-hoc).
  • Employ parallel computing methods to solve multiple problems simultaneously.

High-level Decomposition Algorithm

The high-level algorithm is described below. For a more detailed description, please see Benders decomposition.

  • Model initialisation (preprocess the data structure, generate temporal structures, etc.)
  • For each benders_iteration
    • Solve master problem
    • Process master-problem solution:
      • set units_invested_bi(unit=u) equal to the investment variables solution from the master problem
    • Solve operations problem loop
    • Process operations sub-problem
      • set units_on_mv(unit=u) equal to the marginal value of the units_on bound constraint
    • Test for convergence
    • Update master problem
    • Rewind operations problem
    • Next benders iteration
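The loop above can be sketched on a toy problem (all names and numbers below are illustrative, not SpineOpt's internals): a master problem chooses an integer number of invested units, an operations subproblem dispatches that capacity against expensive imports and returns its cost together with the marginal value of extra capacity, and the resulting cuts tighten the master problem until the Benders gap closes.

```python
# Toy Benders loop mirroring the steps above. Illustrative data only.
INV_COST = 3.0      # investment cost per unit and period
UNIT_CAP = 10.0     # capacity per invested unit (MW)
OWN_COST = 1.0      # dispatch cost of invested capacity (per MWh)
IMPORT_COST = 5.0   # cost of energy not served by invested units (per MWh)
DEMAND = 25.0       # demand to cover (MWh)
CANDIDATES = 5      # candidate_units analogue: at most 5 units

def solve_subproblem(n_units):
    """Operations problem for a fixed investment; returns its cost and the
    marginal value of one extra unit (the slope of the Benders cut)."""
    cap = n_units * UNIT_CAP
    own = min(cap, DEMAND)
    cost = OWN_COST * own + IMPORT_COST * (DEMAND - own)
    # If capacity is binding, one more unit displaces imports by UNIT_CAP:
    slope = (OWN_COST - IMPORT_COST) * UNIT_CAP if cap < DEMAND else 0.0
    return cost, slope

cuts = []                 # list of (sub_cost_k, slope_k, y_k)
upper = float("inf")
for iteration in range(20):
    # Master problem: enumerate integer investments against current cuts.
    def master_obj(y):
        theta = max([0.0] + [c + s * (y - yk) for c, s, yk in cuts])
        return INV_COST * y + theta
    y_star = min(range(CANDIDATES + 1), key=master_obj)
    lower = master_obj(y_star)                     # lower bound
    sub_cost, slope = solve_subproblem(y_star)
    upper = min(upper, INV_COST * y_star + sub_cost)  # upper bound
    if upper - lower <= 1e-9:                      # Benders gap closed
        break
    cuts.append((sub_cost, slope, y_star))

print(y_star, upper)  # optimal plan: 3 units, total cost 34.0
```

The subproblem's marginal value plays the same role as the units_on_mv quantity above: it tells the master problem how much an additional invested unit would have been worth in operations.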

Duals and reduced costs calculation for decomposition

The marginal values above are computed as the reduced costs of the relevant optimisation variables. However, the dual solution to a MIP problem is not well defined. The standard approach to obtaining marginal values from a MIP model is to relax the integer variables, fix them to their last solution value and re-solve the problem as an LP. This is the standard approach in energy system modelling for obtaining energy prices, but it needs to be used with caution: the main hazard of inferring duals in this way is that the impact of an investment on costs may be overstated. However, since these duals are used in Benders decomposition to obtain a lower bound on costs (i.e. the maximum potential value from an investment), this is acceptable and is "corrected" in the next iteration. Finally, the Benders gap tells us how close the decomposed problem is to the global optimal solution.
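A minimal sketch of this relax-fix-resolve procedure, using scipy rather than SpineOpt's internals (the two-generator data and variable names are invented for illustration): solve a small commitment MIP, fix the integer decision to its MIP value, re-solve the relaxation as an LP, and read the dual of the balance constraint as the energy price.

```python
from scipy.optimize import linprog

# Variables: [u, g_cheap, g_expensive]; u is a binary commitment decision.
c = [0.0, 10.0, 50.0]                       # dispatch costs
A_eq = [[0.0, 1.0, 1.0]]; b_eq = [80.0]     # energy balance: supply = demand
A_ub = [[-60.0, 1.0, 0.0]]; b_ub = [0.0]    # g_cheap <= 60 * u
bounds = [(0, 1), (0, None), (0, 100)]

# 1. Solve the MIP (duals are not well defined here).
mip = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, integrality=[1, 0, 0], method="highs")

# 2. Fix the integer variable to its MIP value and re-solve as an LP.
u_fix = round(mip.x[0])
bounds[0] = (u_fix, u_fix)
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
             bounds=bounds, method="highs")

# 3. The dual of the balance constraint in the fixed LP is the marginal
# price: the expensive generator is marginal, so |dual| = 50.
print(lp.fun, lp.eqlin.marginals[0])
```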

Reporting dual values and reduced costs

To report the dual of a constraint, one can add an output item with the corresponding constraint name (e.g. constraint_nodal_balance) and add that to a report. This will cause the corresponding constraint's relaxed-problem marginal value to be reported in the output DB. When adding a constraint name as an output, we need to preface the actual constraint name with constraint_ to avoid ambiguity with variable names (e.g. units_available). So, to report the marginal value of units_available, we add an output object called constraint_units_available.

To report the reduced cost of a variable, which is the marginal value of the associated active bound or fix constraints on that variable, one can add an output object with the variable name prepended by bound_. So, to report the units_on reduced cost, one would create an output item called bound_units_on. If added to a report, this will cause the reduced cost of units_on in the final fixed LP to be written to the output DB.

Using Decomposition

Assuming one has set up a conventional investments problem as described in Investment Optimization, the following additional steps are required to utilise the decomposition framework:

  • Set the model_type parameter for your model to spineopt_benders.
  • Specify the max_gap parameter for your model - this determines the master-problem convergence criterion for the relative Benders gap. A value of 0.05 represents a relative Benders gap of 5%.
  • Specify the max_iterations parameter for your model - this determines the master-problem convergence criterion for the number of iterations. A value of 10 could be appropriate, but this is highly dependent on the size and nature of the problem.
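As a sketch, the two stopping rules above interact like this (a hypothetical helper, not SpineOpt code):

```python
# Either criterion ends the Benders loop: a small enough relative gap
# between the bounds, or the iteration cap. Defaults mirror the example
# values discussed above.
def benders_converged(lower, upper, iteration, max_gap=0.05, max_iterations=10):
    gap = (upper - lower) / abs(upper) if upper else 0.0
    return gap <= max_gap or iteration >= max_iterations

print(benders_converged(95.0, 100.0, 3))  # gap is exactly 5% -> True
```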

Once the above is set, all investment decisions in the model are automatically decomposed and optimised in a Benders master problem. This behaviour may change in the future to allow some investment decisions to be optimised in the operations problem and some optimised in the master problem as desired.

diff --git a/dev/advanced_concepts/investment_optimization/index.html b/dev/advanced_concepts/investment_optimization/index.html

Investment Optimization

SpineOpt offers numerous ways to optimise investment decisions in energy system models and, in particular, offers a number of methodologies for capturing increased detail in investment models while containing the impact on run time. The basic principles of investments are discussed first, followed by more advanced approaches.

Key concepts for investments

Investment Decisions

These are the investment decisions that SpineOpt currently supports. At a high level, this means that the activity of the entities in question is controlled by an investment decision variable. The current implementation supports investments in:

Investment Variable Types

In all cases, the capacity of the unit or connection, or the maximum node state of a node, is multiplied by the investment variable, which may be either continuous or integer. For units, this is determined by setting the unit_investment_variable_type parameter accordingly. Similarly, for connections and node storages, the connection_investment_variable_type and storage_investment_variable_type parameters are specified.
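The multiplication above can be made concrete with a bit of arithmetic (available_capacity is a hypothetical helper for illustration, not a SpineOpt function):

```python
# The three modelling patterns described below, reduced to their arithmetic.
def available_capacity(unit_capacity, units_invested_available):
    # In every case, installed capacity is the product of the two quantities.
    return unit_capacity * units_invested_available

# 1. Integer variable, unit_capacity = 400: invest in whole 400 MW units.
whole_units = available_capacity(400.0, 2)        # 800.0 MW
# 2. Continuous variable, unit_capacity = 1: the variable *is* the capacity,
#    bounded above by candidate_units.
continuous = available_capacity(1.0, 356.7)       # 356.7 MW
# 3. Integer variable, unit_capacity = 50: invest in 50 MW capacity blocks.
blocks = available_capacity(50.0, 3)              # 150.0 MW
print(whole_units, continuous, blocks)
```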

Identifying Investment Candidate Units, Connections and Storages

The parameter candidate_units represents the number of units of this type that may be invested in. candidate_units determines the upper bound of the investment variable, and setting it to a value greater than 0 identifies the unit as an investment candidate in the optimisation. If unit_investment_variable_type is set to :unit_investment_variable_type_integer, the investment variable can be interpreted as the number of discrete units that may be invested in. However, if unit_investment_variable_type is :unit_investment_variable_type_continuous and unit_capacity is set to unity, the investment decision variable can instead be interpreted as the capacity of the unit rather than the number of units, with candidate_units being the maximum capacity that can be invested in. Finally, we can invest in discrete blocks of capacity by setting unit_capacity to the size of the investment capacity blocks and unit_investment_variable_type to :unit_investment_variable_type_integer, with candidate_units representing the maximum number of capacity blocks that may be invested in. The key points here are:

Investment Costs

Investment costs are specified by setting the appropriate *_investment_cost parameter. The investment cost for units is specified by setting the unit_investment_cost parameter. This is currently interpreted as the full cost over the investment period for the unit. See the section below on the investment temporal structure for setting the investment period. If the investment period is 1 year, then the corresponding unit_investment_cost is the annualised investment cost. For connections and storages, the investment cost parameters are connection_investment_cost and storage_investment_cost, respectively.
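Since the parameter is interpreted as the full cost over the investment period, a common preprocessing step (done outside SpineOpt) is to annualise an overnight cost with the standard annuity formula; the discount rate and lifetime below are illustrative:

```python
# Convert an overnight investment cost into an equivalent annual cost.
def annualised_cost(overnight_cost, rate, lifetime_years):
    if rate == 0:
        return overnight_cost / lifetime_years
    annuity_factor = rate / (1 - (1 + rate) ** -lifetime_years)
    return overnight_cost * annuity_factor

# E.g. 1 MEUR overnight, 5% discount rate, 20-year lifetime:
print(round(annualised_cost(1_000_000, 0.05, 20), 2))  # ~80242.59 per year
```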

Temporal and Stochastic Structure of Investment Decisions

SpineOpt's flexible stochastic and temporal structures extend to investments, where individual investment decisions can have their own temporal and stochastic structure, independent of other investment decisions and other model variables. A global temporal resolution for all investment decisions can be defined by specifying the relationship model__default_investment_temporal_block. If a specific temporal resolution is required for specific investment decisions, one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_temporal_block.

Similarly, a global stochastic structure can be defined for all investment decisions by specifying the relationship model__default_investment_stochastic_structure. If a specific stochastic structure is required for specific investment decisions, one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_stochastic_structure.

Impact of connection investments on network characteristics

The model parameter use_connection_intact_flow is available to control whether or not the impact of connection investments on the network characteristics should be captured. If set to true, the model will use line outage distribution factors (LODF) to compute the impact of each connection investment on the flows across the network. Note that this introduces another variable, connection_intact_flow, representing the hypothetical flow on a connection in case all connection investments were in place. Also note that the impact of each connection is captured individually.

Creating an Investment Candidate Unit Example

If we have a model that is not currently set up for investments and we wish to create an investment candidate unit, we can take the following steps.

Model Reference

Variables for investments

Variable Name | Indices | Description
units_invested_available | unit, s, t | The number of invested-in units that are available at a given (s, t)
units_invested | unit, s, t | The point-in-time investment decision corresponding to the number of units invested in at (s, t)
units_mothballed | unit, s, t | "Instantaneous" decision variable to mothball a unit
connections_invested_available | connection, s, t | The number of invested-in connections that are available at a given (s, t)
connections_invested | connection, s, t | The point-in-time investment decision corresponding to the number of connections invested in at (s, t)
connections_decommissioned | connection, s, t | "Instantaneous" decision variable to decommission a connection
storages_invested_available | node, s, t | The number of invested-in storages that are available at a given (s, t)
storages_invested | node, s, t | The point-in-time investment decision corresponding to the number of storages invested in at (s, t)
storages_decommissioned | node, s, t | "Instantaneous" decision variable to decommission a storage

Relationships for investments

Relationship Name | Related Object Class List | Description
model__default_investment_temporal_block | model, temporal_block | Default temporal resolution for investment decisions, effective if unit__investment_temporal_block is not specified
model__default_investment_stochastic_structure | model, stochastic_structure | Default stochastic structure for investment decisions, effective if unit__investment_stochastic_structure is not specified
unit__investment_temporal_block | unit, temporal_block | Sets the temporal resolution of investment decisions - overrides model__default_investment_temporal_block
unit__investment_stochastic_structure | unit, stochastic_structure | Sets the stochastic structure for investment decisions - overrides model__default_investment_stochastic_structure

Parameters for investments

Parameter Name | Object Class List | Description
candidate_units | unit | The number of additional units of this type that can be invested in
unit_investment_cost | unit | The total overnight investment cost per candidate unit over the model horizon
unit_investment_tech_lifetime | unit | The investment lifetime of the unit - once invested-in, a unit must exist for at least this amount of time
unit_investment_variable_type | unit | Whether the units_invested_available variable is continuous, integer or binary
fix_units_invested | unit | Fix the value of units_invested
fix_units_invested_available | unit | Fix the value of units_invested_available
candidate_connections | connection | The number of additional connections of this type that can be invested in
connection_investment_cost | connection | The total overnight investment cost per candidate connection over the model horizon
connection_investment_tech_lifetime | connection | The investment lifetime of the connection - once invested-in, a connection must exist for at least this amount of time
connection_investment_variable_type | connection | Whether the connections_invested_available variable is continuous, integer or binary
fix_connections_invested | connection | Fix the value of connections_invested
fix_connections_invested_available | connection | Fix the value of connections_invested_available
candidate_storages | node | The number of additional storages of this type that can be invested in at node
storage_investment_cost | node | The total overnight investment cost per candidate storage over the model horizon
storage_investment_tech_lifetime | node | The investment lifetime of the storage - once invested-in, a storage must exist for at least this amount of time
storage_investment_variable_type | node | Whether the storages_invested_available variable is continuous, integer or binary
fix_storages_invested | node | Fix the value of storages_invested
fix_storages_invested_available | node | Fix the value of storages_invested_available
Filename | Relative Path | Description
constraint_units_invested_available.jl | \constraints | Constrains units_invested_available to be less than candidate_units
constraint_units_invested_transition.jl | \constraints | Defines the relationship between units_invested_available, units_invested and units_mothballed. Analogous to units_on, units_started and units_shutdown
constraint_unit_lifetime.jl | \constraints | Once a unit is invested-in, it must remain in existence for at least unit_investment_tech_lifetime - analogous to min_up_time
constraint_units_available.jl | \constraints | Enforces that units_available is the sum of number_of_units and units_invested_available
constraint_connections_invested_available.jl | \constraints | Constrains connections_invested_available to be less than candidate_connections
constraint_connections_invested_transition.jl | \constraints | Defines the relationship between connections_invested_available, connections_invested and connections_decommissioned. Analogous to units_on, units_started and units_shutdown
constraint_connection_lifetime.jl | \constraints | Once a connection is invested-in, it must remain in existence for at least connection_investment_tech_lifetime - analogous to min_up_time
constraint_storages_invested_available.jl | \constraints | Constrains storages_invested_available to be less than candidate_storages
constraint_storages_invested_transition.jl | \constraints | Defines the relationship between storages_invested_available, storages_invested and storages_decommissioned. Analogous to units_on, units_started and units_shutdown
constraint_storage_lifetime.jl | \constraints | Once a storage is invested-in, it must remain in existence for at least storage_investment_tech_lifetime - analogous to min_up_time
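The three lifetime constraints in the table share one pattern, sketched here with assumed notation (see the constraint files for the exact formulation): the investments available at t must cover every investment made within the technical lifetime ending at t.

```latex
v^{\text{units\_invested\_available}}_{(u,s,t)}
  \;\geq\;
  \sum_{\substack{t' \le t \\ t - t' \,<\, \text{unit\_investment\_tech\_lifetime}(u)}}
  v^{\text{units\_invested}}_{(u,s,t')}
```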
+Investment Optimization · SpineOpt.jl

Investment Optimization

SpineOpt offers numerous ways to optimise investment decisions energy system models and in particular, offers a number of methologogies for capturing increased detail in investment models while containing the impact on run time. The basic principles of investments will be discussed first and this will be followed by more advanced approaches.

Key concepts for investments

Investment Decisions

These are the investment decisions that SpineOpt currently supports. At a high level, this means that the activity of the entities in question is controlled by an investment decision variable. The current implementation supports investments in:

Investment Variable Types

In all cases the capacity of the unit or connection or the maximum node state of a node is multiplied by the investment variable which may either be continuous or integer. This is determined, for units, by setting the unit_investment_variable_type parameter accordingly. Similary, for connections and node storages the connection_investment_variable_type and storage_investment_variable_type are specified.

Identiying Investment Candidate Units, Connections and Storages

The parameter candidate_units represents the number of units of this type that may be invested in. candidate_units determines the upper bound of the investment variable and setting this to a value greater than 0 identifies the unit as an investment candidate unit in the optimisation. If the unit_investment_variable_type is set to :unit_investment_variable_type_integer, the investment variable can be interpreted as the number of discrete units that may be invested in. However, if unit_investment_variable_type is :unit_investment_variable_type_continuous and the unit_capacity is set to unity, the investment decision variable can then be interpreted as the capacity of the unit rather than the number of units with candidate_units being the maximum capacity that can be invested in. Finally, we can invest in discrete blocks of capacity by setting unit_capacity to the size of the investment capacity blocks and have unit_investment_variable_type set to :unit_investment_variable_type_integer with candidate_units representing the maximum number of capacity blocks that may be invested in. The key points here are:

Investment Costs

Investment costs are specified by setting the appropriate *_investment\_cost parameter. The investment cost for units are specified by setting the unit_investment_cost parameter. This is currently interpreted as the full cost over the investment period for the unit. See the section below on investment temporal structure for setting the investment period. If the investment period is 1 year, then the corresponding unit_investment_cost is the annualised investment cost. For connections and storages, the investment cost parameters are connection_investment_cost and storage_investment_cost, respectively.

Temporal and Stochastic Structure of Investment Decisions

SpineOpt's flexible stochastic and temporal structure extend to investments where individual investment decisions can have their own temporal and stochastic structure independent of other investment decisions and other model variables. A global temporal resolution for all investment decisions can be defined by specifying the relationship model__default_investment_temporal_block. If a specific temporal resolution is required for specific investment decisions, then one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_temporal_block.

Similarly, a global stochastic structure can be defined for all investment decisions by specifying the relationship model__default_investment_stochastic_structure. If a specific stochastic structure is required for specific investment decisions, then one can specifying the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_stochastic_structure.

Impact of connection investments on network characteristics

The model parameter use_connection_intact_flow is available to control whether or not the impact of connection investments on the network characteristics should be captured. If set to true, then the model will use line outage distribution factors (LODF) to compute the impact of each connection investment over the flow across the network. Note that this introduces another variable, connection_intact_flow, representing the hypothetical flow on a connection in case all connection investments were in place. Also note that the impact of each connection is captured individually.

Creating an Investment Candidate Unit Example

If we have model that is not currently set up for investments and we wish to create an investment candidate unit, we can take the following steps.

Model Reference

Variables for investments

Variable NameIndicesDescription
units_invested_availableunit, s, tThe number of invested in units that are available at a given (s, t)
units_investedunit, s, tThe point-in-time investment decision corresponding to the number of units invested in at (s,t)
units_mothballedunit, s, t"Instantaneous" decision variable to mothball a unit
connections_invested_availableconnection, s, tThe number of invested-in connectionss that are available at a given (s, t)
connections_investedconnection, s, tThe point-in-time investment decision corresponding to the number of connectionss invested in at (s,t)
connections_decommissionedconnection, s, t"Instantaneous" decision variable to decommission a connection
storages_invested_availablenode, s, tThe number of invested-in storages that are available at a given (s, t)
storages_investednode, s, tThe point-in-time investment decision corresponding to the number of storages invested in at (s,t)
storages_decommissionednode, s, t"instantaneous" decision variable to decommission a storage

Relationships for investments

Relationship NameRelated Object Class ListDescription
model__default_investment_temporal_blockmodel, temporal_blockDefault temporal resolution for investment decisions effective if unit__investmenttemporalblock is not specified
model__default_investment_stochastic_structuremodel, stochastic_structureDefault stochastic structure for investment decisions effective if unit__investmentstochasticstructure is not specified
unit__investment_temporal_blockunit, temporal_blockSet temporal resolution of investment decisions - overrides model__defaultinvestmenttemporal_block
unit__investment_stochastic_structureunit, stochastic_structureSet stochastic structure for investment decisions - overrides model__defaultinvestmentstochastic_structure

Parameters for investments

| Parameter Name | Object Class List | Description |
| --- | --- | --- |
| candidate_units | unit | The number of additional units of this type that can be invested in |
| unit_investment_cost | unit | The total overnight investment cost per candidate unit over the model horizon |
| unit_investment_tech_lifetime | unit | The investment lifetime of the unit - once invested-in, a unit must exist for at least this amount of time |
| unit_investment_variable_type | unit | Whether the units_invested_available variable is continuous, integer or binary |
| fix_units_invested | unit | Fix the value of units_invested |
| fix_units_invested_available | unit | Fix the value of units_invested_available |
| candidate_connections | connection | The number of additional connections of this type that can be invested in |
| connection_investment_cost | connection | The total overnight investment cost per candidate connection over the model horizon |
| connection_investment_tech_lifetime | connection | The investment lifetime of the connection - once invested-in, a connection must exist for at least this amount of time |
| connection_investment_variable_type | connection | Whether the connections_invested_available variable is continuous, integer or binary |
| fix_connections_invested | connection | Fix the value of connections_invested |
| fix_connections_invested_available | connection | Fix the value of connections_invested_available |
| candidate_storages | node | The number of additional storages of this type that can be invested in at the node |
| storage_investment_cost | node | The total overnight investment cost per candidate storage over the model horizon |
| storage_investment_tech_lifetime | node | The investment lifetime of the storage - once invested-in, a storage must exist for at least this amount of time |
| storage_investment_variable_type | node | Whether the storages_invested_available variable is continuous, integer or binary |
| fix_storages_invested | node | Fix the value of storages_invested |
| fix_storages_invested_available | node | Fix the value of storages_invested_available |
Constraints for investments

| Filename | Relative Path | Description |
| --- | --- | --- |
| constraint_units_invested_available.jl | \constraints | Constrains units_invested_available to be less than candidate_units |
| constraint_units_invested_transition.jl | \constraints | Defines the relationship between units_invested_available, units_invested and units_mothballed. Analogous to units_on, units_started_up and units_shut_down |
| constraint_unit_lifetime.jl | \constraints | Once a unit is invested-in, it must remain in existence for at least unit_investment_tech_lifetime - analogous to min_up_time |
| constraint_units_available.jl | \constraints | Enforces that units_available is the sum of number_of_units and units_invested_available |
| constraint_connections_invested_available.jl | \constraints | Constrains connections_invested_available to be less than candidate_connections |
| constraint_connections_invested_transition.jl | \constraints | Defines the relationship between connections_invested_available, connections_invested and connections_decommissioned. Analogous to units_on, units_started_up and units_shut_down |
| constraint_connection_lifetime.jl | \constraints | Once a connection is invested-in, it must remain in existence for at least connection_investment_tech_lifetime - analogous to min_up_time |
| constraint_storages_invested_available.jl | \constraints | Constrains storages_invested_available to be less than candidate_storages |
| constraint_storages_invested_transition.jl | \constraints | Defines the relationship between storages_invested_available, storages_invested and storages_decommissioned. Analogous to units_on, units_started_up and units_shut_down |
| constraint_storage_lifetime.jl | \constraints | Once a storage is invested-in, it must remain in existence for at least storage_investment_tech_lifetime - analogous to min_up_time |

Modelling to generate alternatives

Through modelling to generate alternatives (MGA for short), near-optimal solutions can be explored under certain conditions. Currently, SpineOpt supports two methods for MGA.

Modelling to generate alternatives: Maximally different portfolios

The idea is that an original problem is solved first, and then solved again under the condition that the realization of selected variables should be maximally different from the previous iteration(s), while keeping the objective function within a certain threshold (defined by max_mga_slack).

In SpineOpt, we choose units_invested_available, connections_invested_available, and storages_invested_available as variables that can be considered for the maximum-difference-problem. The implementation is based on Modelling to generate alternatives: A technique to explore uncertainty in energy-environment-economy models.

How to set up an MGA problem

  • model: In order to explore an MGA model, you will need one model of type spineopt_mga. You should also define the number of iterations (max_mga_iterations) and the maximum allowed deviation from the original objective function (max_mga_slack).
  • at least one investment candidate of type unit, connection, or node. For more details on how to set up an investment problem please see: Investment Optimization.
  • To include the investment decisions in the MGA difference maximization, the parameter units_invested_mga, connections_invested_mga, or storages_invested_mga needs to be set to true for the corresponding object.
  • The original MGA formulation is non-convex (the maximization of an absolute-value function), but it has been linearized via the big-M method. For this purpose, one should always make sure that units_invested_big_m_mga, connections_invested_big_m_mga, or storages_invested_big_m_mga, respectively, is sufficiently large to always exceed the maximum possible difference per MGA iteration. (Typically the number of candidates can suffice.)
  • As outputs are used to intermediately store solutions from different MGA runs, it is important that units_invested, connections_invested, or storages_invested, respectively, are defined as output objects in your database.
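The big-M linearization mentioned above can be illustrated with a small numeric sketch (plain Python for illustration only; the function and values are hypothetical, not SpineOpt code). The absolute difference from the previous iteration is replaced by an auxiliary variable d bounded by two linear constraints, with a binary variable y selecting the active side; d can only reach the true absolute difference when the big-M value is large enough:

```python
# Hypothetical sketch of the big-M linearization of maximizing |x - x_prev|:
#   maximize d  subject to
#   d <= (x - x_prev) + M * y
#   d <= (x_prev - x) + M * (1 - y),  y binary, d >= 0
def bigm_abs_diff(x, x_prev, M):
    """Largest feasible d over both choices of the binary y."""
    best = max(
        min((x - x_prev) + M * y, (x_prev - x) + M * (1 - y))
        for y in (0, 1)
    )
    return float(max(0.0, best))

print(bigm_abs_diff(3, 0, M=10))  # 3.0 - recovers the true difference |3 - 0|
print(bigm_abs_diff(3, 0, M=2))   # 0.0 - M too small, the true difference is cut off
```

This is why the big-M parameters must exceed the maximum possible difference per iteration: too small a value silently truncates the difference that the MGA objective is supposed to maximize.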

Modelling to generate alternatives: Trade-offs between technology investments

The idea of this approach is to explore near-optimal solutions that maximize/minimize investments in a certain technology (or multiple technologies simultaneously).

How to set up an MGA problem

  • model: In order to explore an MGA model, you will need one model of type spineopt_mga. The maximum allowed deviation from the original objective function should be defined via max_mga_slack. Note that for this method, we don't define an explicit number of iterations via the max_mga_iterations parameter (see also below).
  • at least one investment candidate of type unit, connection, or node. For more details on how to set up an investment problem please see: Investment Optimization.
  • To include the investment decisions in the MGA minimization/maximization, the parameter units_invested_mga, connections_invested_mga, or storages_invested_mga needs to be set to true for the corresponding object.
  • To explore near-optimal solutions using this methodology, the units_invested_mga_weight, connections_invested_mga_weight, and storages_invested_mga_weight parameters are used. These parameters are defined as Arrays, giving the weight of the technology per iteration. Note that the length of these Arrays should be the same for all technologies, as it corresponds to the number of MGA iterations, i.e., the number of near-optimal solutions. To analyze the trade-off between two technology types, we can, e.g., define units_invested_mga_weight for unit group 1 as [-1,-0.5,0], while using the weights [0,-0.5,-1] for storage group 1. A negative sign corresponds to a minimization of investments in the corresponding technology type, while a positive sign corresponds to a maximization. In the given example, we would hence first minimize the investments in unit group 1, then minimize investments in both technologies simultaneously, and finally minimize investments in storage group 1 only.
  • As outputs are used to intermediately store solutions from different MGA runs, it is important that units_invested, connections_invested, or storages_invested, respectively, are defined as output objects in your database.
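The weight Arrays in the example above can be read as one objective per MGA iteration. A minimal sketch (plain Python; the group names and numbers are the hypothetical ones from the example, not SpineOpt code):

```python
# Iteration k maximizes: sum over groups of weight[group][k] * invested[group].
# Negative weights effectively minimize investments in that group,
# positive weights maximize them.
weights = {
    "unit_group_1":    [-1, -0.5, 0],
    "storage_group_1": [0, -0.5, -1],
}

def mga_objective(invested, iteration):
    """Objective term that is maximized in the given MGA iteration."""
    return sum(w[iteration] * invested[g] for g, w in weights.items())

invested = {"unit_group_1": 4, "storage_group_1": 2}
print(mga_objective(invested, 0))  # -4: only unit_group_1 investments are penalized
print(mga_objective(invested, 2))  # -2: only storage_group_1 investments are penalized
```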

Multi-year investments

Multi-year investments refer to making investment decisions at different points in time, such that a pathway of investments can be modeled. This is particularly useful when long-term scenarios are modeled but modeling each year is not practical, or in a business case where investment decisions are made in different years, which has an impact on the cash flow.

There are two tutorials related to multi-year investments: Capacity planning Tutorial and Multi-year investments. This section covers the concepts of multi-year investments in SpineOpt, but we highly recommend checking out these tutorials for a more thorough understanding of how the model is set up.

Basic idea

SpineOpt offers flexibility to the users, so that different things can be modeled given specific set-ups and inputs of the model. This flexibility is well illustrated by multi-year investment modeling. We apply the same mathematical formulation for any capacity planning exercise, as shown in the Capacity planning Tutorial. For a multi-year model, what you need beyond a single-year model is mainly the specification of the temporal structure, i.e., the investment periods and operational periods; the rest works very much like a single-year model.

Economic representation

Parameters

It can be tricky to put the correct cost parameters into the model, since factors like discounting and end-of-lifetime effects have to be taken into account. For that purpose, SpineOpt has incorporated some dedicated parameters for economic representation. Setting use_economic_representation to true activates these parameters.

Discounted annuities

This factor translates the overnight costs of investment into discounted (to the discount_year) annual payments, distributed over the total lifetime of the investment. Investment payments are assumed to increase linearly over the lead-time, and decrease linearly towards the end of the economic lifetime. This is also illustrated here:

(figure: payment fraction profile over the lead time and economic lifetime)

For this purpose, we first calculate the fraction of payment per year (e.g. something like 0.25, 0.5, 0.75, 1 over the lead time; 1 for the economic lifetime minus the lead time; and 0.75, 0.5, 0.25 and 0 for the remaining economic lifetime). Each payment fraction is then multiplied by the discounting factor of the payment year with respect to the discounting year (e.g. the start of the optimization).
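The fraction profile described above can be sketched as follows (a simplified illustration with whole years and a flat discount rate; not SpineOpt's actual implementation):

```python
def payment_fractions(lead_time, econ_lifetime):
    """Yearly payment fractions: linear ramp-up over the lead time,
    full payments in between, linear ramp-down over the last years
    of the economic lifetime."""
    up = [(k + 1) / lead_time for k in range(lead_time)]
    flat = [1.0] * (econ_lifetime - lead_time)
    down = [(lead_time - 1 - k) / lead_time for k in range(lead_time)]
    return up + flat + down

def discounted_annuity_factor(lead_time, econ_lifetime, rate):
    """Each fraction discounted back to the discount year (year 0 here)."""
    return sum(
        f / (1 + rate) ** y
        for y, f in enumerate(payment_fractions(lead_time, econ_lifetime))
    )

fracs = payment_fractions(4, 10)
print(fracs[:4])   # [0.25, 0.5, 0.75, 1.0] - ramp-up over the lead time
print(fracs[-4:])  # [0.75, 0.5, 0.25, 0.0] - ramp-down at the end
```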

Salvage fraction

As we consider (discounted and annuitized) overnight costs in the objective function, it can happen that the lifetime of a unit exceeds the model horizon. In such cases, the salvage fraction needs to be deducted from the objective function. In principle, this means that the annuities "already paid" which extend beyond the modelling horizon are recuperated.
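Conceptually (a simplified sketch with flat yearly annuities and a hypothetical helper function; SpineOpt's exact formula differs in detail), the salvage fraction is the discounted share of payments that falls beyond the model horizon:

```python
def salvage_fraction(invest_year, horizon_end, econ_lifetime, rate):
    """Discounted share of yearly payments due in years at or beyond
    the end of the model horizon (payments assumed flat here)."""
    years = range(invest_year, invest_year + econ_lifetime)
    disc = {t: 1 / (1 + rate) ** (t - invest_year) for t in years}
    beyond = sum(v for t, v in disc.items() if t >= horizon_end)
    return beyond / sum(disc.values())

# A unit invested in 2045 with a 20-year economic lifetime,
# in a model whose horizon ends in 2050:
print(round(salvage_fraction(2045, 2050, 20, 0.05), 3))  # 0.653
```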

Discounted durations

The discounted duration is used to discount operational costs within a certain investment period to the discount year (e.g. beginning of the optimization). If milestone years are used for investments, the discounted duration is calculated for each investment period as defined by investment temporal blocks, otherwise, it will be calculated on a yearly basis.
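A minimal sketch of the discounted duration (hypothetical helper with yearly steps; not SpineOpt's internal code):

```python
def discounted_duration(period_years, rate, discount_year=0):
    """Sum of yearly discount factors over an investment period,
    each year discounted back to the discount year."""
    return sum(1 / (1 + rate) ** (t - discount_year) for t in period_years)

# A three-year investment period starting at the discount year:
print(round(discounted_duration(range(3), 0.05), 4))  # 2.8594
# With no discounting, it is just the plain duration in years:
print(discounted_duration(range(3), 0.0))  # 3.0
```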

Technology specific discount factors

The technology specific discount factor can be used, if e.g. investments in a certain technology are particularly risky. The default value for this parameter is 1.

Adaptations to objective terms

When use_economic_representation is set to true:

  • Investment costs are multiplied by the discounted-annuities conversion factor, the technology-specific discount factor, and (1 - salvage fraction).

  • Operational cost terms are multiplied by the discounted duration factor.
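Putting the factors together, the effective objective coefficients can be sketched as follows (hypothetical names; a sketch of the multiplications listed above, not SpineOpt's internal code):

```python
def effective_investment_cost(overnight_cost, annuity_factor,
                              tech_discount_factor, salvage_fraction):
    # Annuitized and discounted overnight cost, weighted by the
    # technology-specific discount factor, with the salvage value
    # beyond the horizon deducted.
    return overnight_cost * annuity_factor * tech_discount_factor * (1 - salvage_fraction)

def effective_operational_cost(cost, discounted_duration):
    # Operational cost weighted by the discounted duration of its period.
    return cost * discounted_duration

print(effective_investment_cost(1000.0, 0.8, 1.0, 0.25))  # 600.0
```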

Additional information

More information can be found in the following files.

  • Economic representation in SpineOpt contains the details of the calculation of the economic parameters. Note that this document covers more concepts than what is currently available in SpineOpt (the available ones are the parameters listed above and the adaptations in the objective); the rest is under development.

  • Economic parameters calculation tool is an Excel tool that you can use to calculate the economic parameters on your own.

Warning

Please also note that the economic representation in SpineOpt does not currently support rolling horizon optimisation or Benders decomposition; this warrants future improvements.


Multi-stage optimisation

Note

This section describes how to run multi-stage optimisations with SpineOpt using the stage class - not to be confused with the rolling horizon optimisation technique described in Temporal Framework, nor the Benders decomposition algorithm described in Decomposition.

Warning

This feature is experimental. It may change in future versions without notice.

By default, SpineOpt is solved as a 'single-stage' optimisation problem. However you can add additional stages to the optimisation by creating stage objects in your DB.

To motivate this discussion, say you want to model a storage over a year with hourly resolution. The model is large, so you would like to solve it using a rolling horizon of, say, one day - so it solves quickly (see roll_forward and the Temporal Framework section). But this wouldn't capture the long-term value of your storage!

To remediate this, you can introduce an additional 'stage' that solves the entire year at once with a lower temporal resolution (say, one day instead of one hour), and then fixes the storage level at certain points for your higher-resolution rolling horizon model. Both models, the year-long model at daily resolution and the rolling horizon model at hourly resolution, will solve faster than the year-long model at hourly resolution - hopefully much faster - leading to a good compromise between speed and accuracy.

So how do you do that? You use a stage.

The stage class

In SpineOpt, a stage is an additional optimisation model that fixes certain outputs for another set of models declared as their children.

The children of a stage are defined via stage__child_stage relationships (with the parent stage in the first dimension). If a stage has no stage__child_stage relationships as a parent, then it is assumed to have only one child: the model itself.

The outputs that a stage fixes for its children are defined via stage__output__node, stage__output__unit and/or stage__output__connection relationships. For example, if you want to fix node_state for a node, then you would create a stage__output__node between the stage, the node_state output and the node.

By default, the output is fixed at the end of each child's rolling window. However, you can fix it at other points in time by specifying the output_resolution parameter as a duration (or array of durations) relative to the start of the child's rolling window. For example, if you specify an output_resolution of 1 day, then the output will be fixed at one day after the child's window start. If you specify something like [1 day, 2 days], then it will be fixed at one day after the window start, and then at two days after that (i.e., three days after the window start).
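The cumulative interpretation of output_resolution can be sketched as follows (plain Python with a hypothetical helper; SpineOpt handles this internally):

```python
from datetime import datetime, timedelta

def fix_points(window_start, output_resolution):
    """Fix points as cumulative offsets from the child's window start:
    [1 day, 2 days] -> one day after the start, then two days after that."""
    points, t = [], window_start
    for d in output_resolution:
        t += d
        points.append(t)
    return points

start = datetime(2030, 1, 1)
for p in fix_points(start, [timedelta(days=1), timedelta(days=2)]):
    print(p.date())  # 2030-01-02, then 2030-01-04
```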

The optimisation model that a stage solves is given by the stage_scenario parameter value, which must be a scenario in your DB.

And that's basically it!

Example

In case of the year-long storage model with hourly resolution, here is how you would do it.

First, the basic setup:

  1. Create your model.
  2. Create a temporal_block called flat.
  3. Create the rest of your model (the storage node, etc.)
  4. Create a model__default_temporal_block between your model and the flat temporal_block (to keep things simple, but of course you can use node__temporal_block, etc., as needed).
  5. Create a scenario called e.g. Base_scenario including only the Base alternative.
  6. For the Base alternative:
    1. Specify model_start and model_end for your model to cover the year of interest.
    2. Specify roll_forward for your model as 1 day.
    3. Specify resolution for your temporal_block as 1 hour.

With the above, if you run the Base_scenario, SpineOpt will run an hourly-resolution, year-long rolling horizon model solving one day at a time. It would probably finish in reasonable time but wouldn't capture the long-term value of your storage.

Next, the 'stage' stuff:

  1. Create a stage called lt_storage.
  2. (Don't create any stage__child_stage relationships - the only child is the model - plus you don't have/need other stages).
  3. Create a stage__output__node between your stage, the node_state output and your storage node.
  4. Create an alternative called lt_storage_alt.
  5. Create a scenario called lt_storage_scen with lt_storage_alt in the higher rank and the Base alternative in the lower rank.
  6. For the lt_storage_alt:
    1. Specify roll_forward for your model as nothing - so the model doesn't roll - the entire year is solved at once.
    2. Specify resolution for the flat temporal_block as 1 day.
    3. (Don't specify output_resolution so the output is fixed at the end of the model's rolling window.)
  7. For the Base alternative, specify stage_scenario for the lt_storage stage as lt_storage_scen.

Now, if you run the Base_scenario, SpineOpt will run a two-stage model:

  • First, a daily-resolution year-long model that will capture the long-term value of your storage.
  • Next, an hourly-resolution year-long rolling horizon model solving one day at a time, where the node_state of your storage node will be fixed at the end of each day to the optimal LT trajectory computed in the previous stage.
+Multi-stage optimisation · SpineOpt.jl

Multi-stage optimisation

Note

This section describes how to run multi-stage optimisations with SpineOpt using the stage class - not to be confused with the rolling horizon optimisation technique described in Temporal Framework, nor the Benders decomposition algorithm described in Decomposition.

Warning

This feature is experimental. It may change in future versions without notice.

By default, SpineOpt is solved as a 'single-stage' optimisation problem. However you can add additional stages to the optimisation by creating stage objects in your DB.

To motivate this discussion, say you want to model a storage over a year with hourly resolution. The model is large, so you would like to solve it using a rolling horizon of, say, one day - so it solves quickly (see roll_forward and the Temporal Framework section). But this wouldn't capture the long-term value of your storage!

To remediate this, you can introduce an additional 'stage' that solves the entire year at once with a lower temporal resolution (say, one day instead of one hour), and then fixes the storage level at certain points for your higher-resolution rolling horizon model. Both models, the year-long model at daily resolution and the rolling horizon model at hourly resolution, will solve faster than the year-long model at hourly resolution - hopefully much faster - leading to a good compromise between speed and accuracy.

So how do you do that? You use a stage.

The stage class

In SpineOpt, a stage is an additional optimisation model that fixes certain outputs for another set of models declared as their children.

The children of a stage are defined via stage__child_stage relationships (with the parent stage in the first dimension). If a stage has no stage__child_stage relationships as a parent, then it is assumed to have only one children: the model itself.

The outputs that a stage fixes for its children are defined via stage__output__node, stage__output__unit and/or stage__output__connection relationships. For example, if you want to fix node_state for a node, then you would create a stage__output__node between the stage, the node_state output and the node.

By default, the output is fixed at the end of each child's rolling window. However, you can fix it at other points in time by specifying the output_resolution parameter as a duration (or array of durations) relative to the start of the child's rolling window. For example, if you specify an output_resolution of 1 day, then the output will be fixed at one day after the child's window start. If you specify something like [1 day, 2 days], then it will be fixed at one day after the window start, and then at two days after that (i.e., three days after the window start).

The optimisation model that a stage solves is given by the stage_scenario parameter value, which must be a scenario in your DB.

And that's basically it!

Example

In case of the year-long storage model with hourly resolution, here is how you would do it.

First, the basic setup:

  1. Create your model.
  2. Create a temporal_block called flat.
  3. Create the rest of your model (the storage node, etc.)
  4. Create a model__default_temporal_block between your model and the flat temporal_block (to keep things simple, but of course you can use node__temporal_block, etc., as needed).
  5. Create a scenario called e.g. Base_scenario including only the Base alternative.
  6. For the Base alternative:
    1. Specify model_start and model_end for your model to cover the year of interest.
    2. Specify roll_forward for your model as 1 day.
    3. Specify resolution for your temporal_block as 1 hour.

With the above, if you run the Base_scenario SpineOpt will run an hourly-resolution year-long rolling horizon model solving one day at a time, that would probably finish in reasonable time but wouldn't capture the long-term value of your storage.

Next, the 'stage' stuff:

  1. Create a stage called lt_storage.
  2. (Don't create any stage__child_stage relationsips - the only child is the model - plus you don't have/need other stages).
  3. Create a stage__output__node between your stage, the node_state output and your storage node.
  4. Create an alternative called lt_storage_alt.
  5. Create a scenario called lt_storage_scen with lt_storage_alt in the higher rank and the Base alternative in the lower rank.
  6. For the lt_storage_alt:
    1. Specify roll_forward for your model as nothing - so the model doesn't roll - the entire year is solved at once.
    2. Specify resolution for the flat temporal_block as 1 day.
    3. (Don't specify output_resolution so the output is fixed at the end of the model's rolling window.)
  7. For the Base alternative, specify stage_scenario for the lt_storage stage as lt_storage_scen.

Now, if you run Base_scenario, SpineOpt will run a two-stage model:

  • First, a daily-resolution year-long model that will capture the long-term value of your storage.
  • Next, an hourly-resolution year-long rolling horizon model solving one day at a time, where the node_state of your storage node will be fixed at the end of each day to the optimal LT trajectory computed in the previous stage.
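The two solve patterns above can be sketched by simply counting solves (plain Python for illustration; not SpineOpt code):

```python
from datetime import datetime, timedelta

# Illustrative model horizon: one (non-leap) year.
model_start = datetime(2030, 1, 1)
model_end = datetime(2031, 1, 1)
roll_forward = timedelta(days=1)

# lt_storage stage: roll_forward is nothing, so the whole year is one solve.
lt_solves = 1

# Base stage: the window rolls forward one day at a time.
window_starts = []
t = model_start
while t < model_end:
    window_starts.append(t)
    t += roll_forward

print(lt_solves, len(window_starts))  # 1 365
```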

Power transfer distribution factors (PTDF) based DC power flow

There are two main methodologies for directly including DC powerflow in unit commitment/energy system models. One method is to directly include the bus voltage angles as variables in the model. This method is described in Nodal lossless DC Powerflow.

Here we discuss the method of using power transfer distribution factors (PTDF) for DC power flow and line outage distribution factors (lodf) for security constrained unit commitment.

Warning

The calculations for investments using the PTDF method do not consider the mutual effect of multiple simultaneous investments. In other words, the results become increasingly incorrect the more the invested lines interact with each other. Yet, this method remains useful for choosing between multiple simultaneous investments that are assumed non-interacting and/or multiple mutually exclusive investments.

On the other hand, investments using the angle-based method work for multiple lines, but this method is slower and does not take into account the N-1 rule.

Warning

Connecting AC lines through two DC lines is also not supported in our implementation of the PTDF method but it is possible to do this with our implementation of the angle based method.

Key concepts

  1. ptdf: The power transfer distribution factors are a property of the network reactances and their derivation may be found here. ptdf(n, c) represents the fraction of an injection at node n that will flow on connection c. The flow on connection c is then the sum over all nodes of ptdf(n, c)*net_injection(n). The advantage of this method is that it introduces no additional variables into the problem and instead introduces only one constraint for each connection whose flow we are interested in monitoring.
  2. lodf: Line outage distribution factors are a function of the network ptdfs and their derivation is also found here. lodf(c_contingency, c_monitored) represents the fraction of the pre-contingency flow on connection c_contingency that will flow on c_monitored if c_contingency is disconnected. Therefore, the post-contingency flow on connection c_monitored is the pre-contingency flow plus lodf(c_contingency, c_monitored)*pre_contingency_flow(c_contingency). Thus, consideration of N contingencies on M monitored lines introduces N x M constraints into the model. Usually one wishes to contain this number, and methods to achieve this are given below.
  3. Defining your network To identify the network for which ptdfs, lodfs and connection_flows will be calculated according to the ptdf method, one does the following:
    • Create node objects for each bus in the model.
    • Create connection objects representing each line of the network: For each connection specify the connection_reactance parameter and the connection_type parameter. Setting connection_type=connection_type_lossless_bidirectional reduces the amount of data that needs to be specified for an electrical network. See connection_type for more details.
    • Set the connection__to_node and connection__from_node relationships to define the topology of each connection along with the connection_capacity parameter on one or both of these relationships.
    • Set the connection_emergency_capacity parameter to define the post-contingency rating if lodf-based N-1 security constraints are to be included.
    • Create a commodity object and node__commodity relationships for all the nodes that comprise the electrical network for which PTDFs are to be calculated.
    • Specify the commodity_physics parameter for the commodity as :commodity_physics_ptdf if ptdf-based DC load flow is desired with no N-1 security constraints, or as :commodity_physics_lodf if lodf-based N-1 security constraints are to be included.
    • To identify the reference bus (node), specify the node_opf_type parameter for the appropriate node with the value node_opf_type_reference.
  4. Controlling problem size
    • The lines to be monitored are specified by setting the connection_monitored property for each connection for which a flow constraint is to be generated.
    • The contingencies to be considered are specified by setting the connection_contingency property for the appropriate connections. For N contingencies and M monitored lines, N x M constraints will be generated.
    • If lodf(c_contingency, c_monitored) is very small, the outage of c_contingency has a small impact on the flow on c_monitored, and there is little point in including this constraint in the model. Such constraints can be excluded by setting the commodity_lodf_tolerance commodity parameter: contingency / monitored line combinations with lodfs below this value will be ignored, reducing the size of the model.
    • If ptdf(n, c) is very small, an injection at n has a small impact on the flow on c, and there is little point in considering it. Such coefficients can be excluded by setting the commodity_ptdf_threshold commodity parameter: node / monitored line combinations with ptdfs below this value will be ignored, reducing the number of coefficients in the model.
    • To more easily identify which connections are worth being monitored or which contingencies are worth being considered, you can add the contingency_is_binding output to any of your reports (via a report__output relationship). This will run the model without the security constraints, and instead write a parameter called contingency_is_binding to the output database for each pair of contingency and monitored connection. The value of the parameter will be a (possibly stochastic) time-series where a value of one will indicate that the corresponding security constraint is binding, and zero otherwise.
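The ptdf and lodf arithmetic described above can be illustrated with a tiny, made-up example (the factors, injections and flows below are illustrative numbers, not derived from a real network):

```python
# Hypothetical three-node network, one monitored connection c1.
ptdf = {("n1", "c1"): 0.5, ("n2", "c1"): -0.25, ("n3", "c1"): 0.0}
net_injection = {"n1": 100.0, "n2": -40.0, "n3": -60.0}

# Flow on a monitored connection: sum over nodes of ptdf(n, c) * net_injection(n).
flow_c1 = sum(ptdf[n, "c1"] * net_injection[n] for n in net_injection)
print(flow_c1)  # 60.0

# Post-contingency flow: pre-contingency flow on the monitored line plus
# lodf(c_contingency, c_monitored) * pre-contingency flow on the outaged line.
lodf = 0.25
pre_flow_contingency = 80.0
post_flow_c1 = flow_c1 + lodf * pre_flow_contingency
print(post_flow_c1)  # 80.0
```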

Pressure driven gas transfer

The generic formulation of SpineOpt is based on a trade-based model. However, network physics can differ depending on the traded commodity. This chapter specifically addresses the use of pressure driven gas transfer models and the enabling of linepack flexibility in SpineOpt. To date, investments in pressure driven pipelines are not yet supported within SpineOpt. The use of multiple feed-in nodes, e.g. to represent multiple commodity flows through a pipeline, is also not yet supported.

For the representation of pressure driven gas transfer, we use the MILP formulation described in Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling. Here, the non-linearities associated with the Weymouth equation are convexified through an outer approximation around fixed pressure points.

Key concept

Here, we briefly describe the key objects and relationships required to model pressure driven gas transfers in SpineOpt.

  1. connection: A connection represents the gas pipeline being modelled. Usually the direction of flow is not known a priori. To ensure that the flow through the gas pipeline is unidirectional, the parameter has_binary_gas_flow needs to be set to true.
  2. node: Nodes with different characteristics are used for the representation of pressure driven gas transfer.
    • For each connection, there will be two nodes representing the start and end point of the pipeline. Associated with these nodes are the following parameters: the has_pressure parameter, which needs to be set to true, in order to create the variable node_pressure; the max_node_pressure and min_node_pressure to constrain the pressure variable.
    • To leverage linepack flexibility, a third node is introduced representing the linepack storage of the pipeline. To trigger the storage linepack and hence, node_state variables, the has_state parameter needs to be set to true.
  3. connection__to_node and connection__from_node To enable flows through the pipeline and into the linepack storage, each node must have both of these relationships with the pipeline connection. These relationships will trigger the generation of connection_flow variables in all possible directions.
  4. connection__node__node This relationship is key to the pressure driven gas transfer, holding the information about the pipeline characteristics and bringing the elements into interaction.
    • The parameter connection_linepack_constant holds the linepack constant and triggers the generation of the linepack storage constraint. Note that the first node should be the linepack storage node, while the second node should be a node_group containing both the start and the end node of the pipeline.
    • The linearization of the Weymouth equation through outer approximation relies on the use of fixed pressure points. For this purpose, the two parameters fixed_pressure_constant_1 and fixed_pressure_constant_0 hold the fixed pressure constants and trigger the generation of the constraint_fix_node_pressure_point, which introduces the relationship between pressure and gas flows. Note that the pressure constants should be entered such that the first node represents the origin node and the second node the destination node. Each connection should have a connection__node__node relationship (with associated parameters) for each combination of its start and end nodes. (See Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling.)
    • By default, pipelines are considered to be passive. However, a compression station between two pipeline pressure nodes can be represented by defining a compression_factor. The relationship should be defined such that the first node represents the sending node and the second node the receiving node, whose pressure must be smaller than or equal to the pressure at the sending node times the compression factor.
    • Lastly, to ensure the balance between incoming/outgoing flows and flows into the linepack, the ratio between the flows needs to be fixed. The average incoming flows of the node group (of the pressure start and end nodes) have to equal the flows into the linepack storage, and vice versa. Therefore, the fix_ratio_out_in_connection_flow parameter needs to be set to a value (typically 1) for the (pressure group, linepack storage) node pair, and for the (linepack storage, pressure group) node pair.
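The outer approximation can be sketched numerically. This assumes the common Weymouth form w = K * sqrt(p_m^2 - p_n^2); the helper names below are hypothetical, and the exact quantities SpineOpt stores in fixed_pressure_constant_1/fixed_pressure_constant_0 may be parameterized differently (see the Schwele reference for the precise formulation):

```python
import math

def weymouth_flow(K, p_m, p_n):
    # Nonlinear Weymouth relation: w = K * sqrt(p_m^2 - p_n^2), for p_m >= p_n >= 0.
    return K * math.sqrt(p_m**2 - p_n**2)

def tangent_flow(K, fp_m, fp_n, p_m, p_n):
    # Linearization at fixed pressure point (fp_m, fp_n):
    # w <= K * (fp_m * p_m - fp_n * p_n) / sqrt(fp_m^2 - fp_n^2)
    return K * (fp_m * p_m - fp_n * p_n) / math.sqrt(fp_m**2 - fp_n**2)

K = 2.0
fixed_points = [(60.0, 40.0), (70.0, 30.0), (80.0, 50.0)]
# The tangent planes form an outer approximation: since sqrt(p_m^2 - p_n^2) is
# concave on p_m >= p_n >= 0, each tangent over-estimates the true flow
# everywhere on the domain and is exact at its own fixed pressure point.
for p_m, p_n in [(65.0, 45.0), (75.0, 20.0), (90.0, 60.0)]:
    w_true = weymouth_flow(K, p_m, p_n)
    assert all(tangent_flow(K, a, b, p_m, p_n) >= w_true - 1e-9
               for a, b in fixed_points)
```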

A gas pipeline and its connected nodes are illustrated below. A complete mathematical formulation can be found here.

Illustration of gas pipeline


Ramping

To enable the representation of units with a high level of technical detail, the ramping capability of units can be constrained in SpineOpt. This means that the user has the freedom to impose restrictions on the change in the output (or input) of units over time, for online (spinning) units, units starting up and units shutting down. In this section, the concept of ramps in SpineOpt will be introduced.

Relevant objects, relationships and parameters

Everything that is related to ramping is defined in parameters of either the unit__to_node or unit__from_node relationship (where the node can be a group). Generally speaking, the ramping constraints will impose restrictions on the change in the unit_flow variable between two consecutive timesteps.

All parameters that limit the ramping abilities of a unit are expressed as a fraction of the unit capacity. This means that a value of 1 indicates the full capacity of a unit.

The discussion here will be conceptual. For the mathematical formulation, the reader is referred to the Ramping constraints section.

Constraining spinning up and down ramps

Constraining start up and shut down ramps

General principle and example use cases

The general principle of the Spine modelling ramping constraints is that all of these parameters can be defined separately for each unit. This allows the user to incorporate different units (which can either represent a single unit or a technology type) with different flexibility characteristics.

It should be noted that it is perfectly possible to omit all of the ramp constraining parameters mentioned above, or to specify only some of them. Anything that is omitted is interpreted as if it shouldn't be constrained. For example, if you only specify start_up_limit and ramp_down_limit, then only the flow increase during start up and the flow decrease during online operation will be constrained (but not any other flow increase or decrease).

Illustrative examples

Step 1: Simple case of unrestricted unit

When none of the ramping parameters mentioned above are specified, the unit is considered to have full ramping flexibility. This means that over any period of time, its flow can be any value between 0 and its capacity, regardless of what the flow of the unit was in previous timesteps, and regardless of the on- or offline status of the unit in previous timesteps (while still respecting, of course, the Unit commitment restrictions that are defined for this unit). This is equivalent to specifying the following:

  • shut_down_limit : 1
  • start_up_limit : 1
  • ramp_up_limit : 1
  • ramp_down_limit : 1

Step 2: Spinning ramp restriction

A unit which is only restricted in spinning ramping can be created by changing the ramp_up/down_limit parameters:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that the unit flow cannot increase more than $0.2 * 200 = 40$ and cannot decrease more than $0.4 * 200 = 80$ over a period of time equal to one duration_unit (assuming a unit capacity of $200$). For example, when the unit is running at an output of $100$ in some timestep $t$, its output for the next duration_unit must be somewhere in the interval $[20, 140]$ - unless it shuts down completely.
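The arithmetic can be sketched as follows (spinning_ramp_interval is a hypothetical helper, assuming a unit capacity of 200 as in the example):

```python
def spinning_ramp_interval(output, capacity, ramp_up_limit, ramp_down_limit):
    # Feasible output interval for the next duration_unit, ignoring
    # start-up/shut-down and unit commitment for simplicity.
    return (output - ramp_down_limit * capacity,
            output + ramp_up_limit * capacity)

print(spinning_ramp_interval(100, 200, 0.2, 0.4))  # (20.0, 140.0)
```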

Step 3: Shutdown restrictions

By specifying the parameter shut_down_limit, an additional restriction is imposed on the maximum flow of the unit at the moment it goes offline:

  • shut_down_limit : 0.5
  • minimum_operating_point : 0.3

When the unit goes offline in a given timestep $t$, the output of the unit must be below $0.5 * 200 = 100$ in the timestep right before that $t$ (and of course, above $0.3 * 200 = 60$ - the minimum operating point).

Step 4: Startup restrictions

The start up restrictions are very similar to the shut down restrictions, but of course apply to units that are starting up. They are activated by specifying start_up_limit:

  • start_up_limit : 0.4
  • minimum_operating_point : 0.2

When the unit goes online in a given timestep $t$, its output will be restricted to the interval $[40, 80]$.
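Both the shutdown and startup windows from the two steps above can be sketched the same way (hypothetical helpers, assuming a unit capacity of 200):

```python
def start_up_window(capacity, minimum_operating_point, start_up_limit):
    # Allowed output in the first timestep after the unit goes online.
    return (minimum_operating_point * capacity, start_up_limit * capacity)

def shut_down_window(capacity, minimum_operating_point, shut_down_limit):
    # Allowed output in the last online timestep before the unit goes offline.
    return (minimum_operating_point * capacity, shut_down_limit * capacity)

print(start_up_window(200, 0.2, 0.4))   # (40.0, 80.0)
print(shut_down_window(200, 0.3, 0.5))  # (60.0, 100.0)
```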

Using node groups to constrain aggregated flow ramps

SpineOpt allows the user to constrain ramping abilities of units that are linked to multiple nodes by defining node groups. When a node group is defined, ramping restrictions can be imposed both on the group level (thus for the unit as a whole) as well as for the individual nodes. For example, let's assume that we have one unit and two nodes in a model. The unit is linked via unit__to_node relationships to each node individually, and on top of that, it is linked to a node group containing both nodes.

If, for example, a ramp_up_limit is defined for the node group, the sum of upward ramping of the two nodes will be restricted by this parameter. However, it is still possible to limit the individual flows to the nodes as well. Let's say that our unit is capable of ramping up by 20% of its capacity and down by 40%. We might want to impose tighter restrictions for the flows towards one of the nodes (e.g. because the energy has to be provided in a shorter time than the duration_unit). One can then simply define an additional parameter for that unit__to_node relationship as follows:

  • ramp_up_limit : 0.15

This restricts the flow of the unit into that node to 15% of its capacity.

Please note that by default, node groups are balanced in the same way as individual nodes. So if you're using node groups for the sole purpose of constraining flow ramps, you should set the balance type of the group to balance_type_none.

Ramping with reserves

If a unit is set to provide reserves, then it should be able to provide that reserve within one duration_unit. For this reason, reserve provision must be accounted for within ramp constraints. Please see Reserves for details on how to setup a node as a reserve.

Examples

Let's assume that we have one unit and two nodes in a model, one for reserves and one for regular demand. The unit is then linked by the unit__to_node relationships to both the reserves and regular demand node.

Spinning ramp restriction

The unit can be restricted in spinning ramping by defining the ramp_up/down_limit parameters in the unit__to_node relationship for the regular demand node:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that the unit's flow to the regular demand node cannot increase more than $0.2 * 200 - upward\_reserve\_demand$ or decrease more than $0.4 * 200 - downward\_reserve\_demand$ over one duration_unit. For example, when the unit is running at an output of $100$ and there is an upward reserve demand of $10$, then its output over the next duration_unit must be somewhere in the interval $[20, 130]$.

It can be seen in this example that the demand for reserves is subtracted from the ramping capacity of the unit that is available for regular operation. This stems from the fact that in providing reserve capacity, the unit is expected to be able to provide the demanded reserve within one duration_unit as stated above.
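The reserve adjustment described above can be sketched as follows (hypothetical helper, assuming a unit capacity of 200 as in the example):

```python
def ramp_interval_with_reserves(output, capacity, ramp_up_limit, ramp_down_limit,
                                upward_reserve, downward_reserve):
    # Reserve provision eats into the ramping room left for regular operation:
    # the unit must be able to deliver the reserve within one duration_unit.
    return (output - (ramp_down_limit * capacity - downward_reserve),
            output + (ramp_up_limit * capacity - upward_reserve))

print(ramp_interval_with_reserves(100, 200, 0.2, 0.4, 10, 0))  # (20.0, 130.0)
```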

+Ramping · SpineOpt.jl

Ramping

To enable the representation of units with a high level of technical detail, the ramping capability of units can be constrained in SpineOpt. This means that the user has the freedom to impose restrictions on the change in the output (or input) of units over time, for online (spinning) units, units starting up and units shutting down. In this section, the concept of ramps in SpineOpt will be introduced.

Relevant objects, relationships and parameters

Everything that is related to ramping is defined in parameters of either the unit__to_node or unit__from_node relationship (where the node can be a group). Generally speaking, the ramping constraints will impose restrictions on the change in the unit_flow variable between two consecutive timesteps.

All parameters that limit the ramping abilities of a unit are expressed as a fraction of the unit capacity. This means that a value of 1 indicates the full capacity of a unit.

The discussion here will be conceptual. For the mathematical formulation the reader is referred to the Ramping constraints

Constraining spinning up and down ramps

Constraining start up and shut down ramps

General principle and example use cases

The general principle of the Spine modelling ramping constraints is that all of these parameters can be defined separately for each unit. This allows the user to incorporate different units (which can either represent a single unit or a technology type) with different flexibility characteristics.

It should be noted that it is perfectly possible to omit all of the ramp constraining parameters mentioned above, or to specify only some of them. Anything that is omitted is interpreted as if it shouldn't be constrained. For example, if you only specify start_up_limit and ramp_down_limit, then only the flow increase during start up and the flow decrease during online operation will be constrained (but not any other flow increase or decrease).

Illustrative examples

Step 1: Simple case of unrestricted unit

When none of the ramping parameters mentioned above are specified, the unit is considered to have full ramping flexibility. This means that over any period of time, its flow can be any value between 0 and its capacity, regardless of what the flow of the unit was in previous timesteps, and regardless of the on- or offline status of the unit in previous timesteps (while still respecting, of course, the Unit commitment restrictions that are defined for this unit). This is equivalent to specifying the following:

  • shut_down_limit : 1
  • start_up_limit : 1
  • ramp_up_limit : 1
  • ramp_down_limit : 1

Step 2: Spinning ramp restriction

A unit which is only restricted in spinning ramping can be created by changing the ramp_up/down_limit parameters:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that the unit flow cannot increase more than $0.2 * 200$ and cannot decrease more than $0.4 * 200$ over a period of time equal to 'one' duration_unit. For example, when the unit is running at an output of $100$ in some timestep $t$, its output for the next 'one' duration_unit must be somewhere in the interval $[20, 140]$, unless it shuts down completely.
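
The arithmetic above can be sketched as a small check (illustrative Python, not SpineOpt code; the capacity of 200 and the flow of 100 are the example's values):

```python
def ramp_interval(flow, capacity, ramp_up_limit, ramp_down_limit):
    """Feasible flow interval one duration_unit later, for a unit that stays online.

    Ramp limits are fractions of the unit capacity, as in SpineOpt."""
    lower = max(0.0, flow - ramp_down_limit * capacity)
    upper = min(capacity, flow + ramp_up_limit * capacity)
    return lower, upper

# Unit at output 100, capacity 200, ramp_up_limit 0.2, ramp_down_limit 0.4:
print(ramp_interval(100, 200, 0.2, 0.4))  # → (20.0, 140.0)
```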

Step 3: Shutdown restrictions

By specifying the parameter shut_down_limit, an additional restriction is imposed on the maximum flow of the unit at the moment it goes offline:

  • shut_down_limit : 0.5
  • minimum_operating_point : 0.3

When the unit goes offline in a given timestep $t$, the output of the unit in the timestep right before $t$ must be below $0.5 * 200 = 100$ (and, of course, above the minimum operating point $0.3 * 200 = 60$).

Step 4: Startup restrictions

The start up restrictions are very similar to the shut down restrictions, but of course apply to units that are starting up. They are activated by specifying start_up_limit:

  • start_up_limit : 0.4
  • minimum_operating_point : 0.2

When the unit goes online in a given timestep $t$, its output will be restricted to the interval $[40, 80]$.
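
The start-up and shut-down windows from Steps 3 and 4 can be sketched together (illustrative Python, not SpineOpt code; 200 is the example capacity):

```python
def startup_window(capacity, start_up_limit, minimum_operating_point):
    """Feasible flow interval in the first timestep after the unit goes online."""
    return (minimum_operating_point * capacity, start_up_limit * capacity)

def shutdown_window(capacity, shut_down_limit, minimum_operating_point):
    """Feasible flow interval in the last online timestep before going offline."""
    return (minimum_operating_point * capacity, shut_down_limit * capacity)

print(startup_window(200, 0.4, 0.2))   # → (40.0, 80.0)
print(shutdown_window(200, 0.5, 0.3))  # → (60.0, 100.0)
```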

Using node groups to constrain aggregated flow ramps

SpineOpt allows the user to constrain ramping abilities of units that are linked to multiple nodes by defining node groups. When a node group is defined, ramping restrictions can be imposed both on the group level (thus for the unit as a whole) as well as for the individual nodes. For example, let's assume that we have one unit and two nodes in a model. The unit is linked via unit__to_node relationships to each node individually, and on top of that, it is linked to a node group containing both nodes.

If, for example, a ramp_up_limit is defined for the node group, the sum of upward ramping of the two nodes will be restricted by this parameter. However, it is still possible to limit the individual flows to the nodes as well. Let's say that our unit is capable of ramping up by 20% of its capacity and down by 40%. We might want to impose tighter restrictions for the flows towards one of the nodes (e.g. because the energy has to be provided in a shorter time than the duration_unit). One can then simply define an additional parameter for that unit__to_node relationship as follows.

  • ramp_up_limit : 0.15

This restricts the flow of the unit into that node to 15% of the unit's capacity.
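
The interaction of the group-level and node-level limits can be sketched as follows (illustrative Python with hypothetical node names, not SpineOpt code):

```python
def ramp_up_ok(flow_increases, capacity, group_limit, node_limits):
    """Check upward ramps against a group-level limit (on the sum of flows) and
    individual node-level limits. Both dicts are keyed by node name."""
    # Group-level check: total upward ramp across all nodes in the group.
    if sum(flow_increases.values()) > group_limit * capacity:
        return False
    # Node-level checks: only nodes with an explicit limit are constrained.
    return all(
        flow_increases[node] <= limit * capacity
        for node, limit in node_limits.items()
    )

# Group may ramp up by 20% of a 200-capacity unit (40), node_a alone by 15% (30):
print(ramp_up_ok({"node_a": 25, "node_b": 10}, 200, 0.2, {"node_a": 0.15}))  # → True
print(ramp_up_ok({"node_a": 35, "node_b": 0}, 200, 0.2, {"node_a": 0.15}))   # → False
```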

Please note that by default, node groups are balanced in the same way as individual nodes. So if you're using node groups for the sole purpose of constraining flow ramps, you should set the balance_type of the group to balance_type_none.

Ramping with reserves

If a unit is set to provide reserves, then it should be able to provide that reserve within one duration_unit. For this reason, reserve provision must be accounted for within ramp constraints. Please see Reserves for details on how to set up a node as a reserve.

Examples

Let's assume that we have one unit and two nodes in a model, one for reserves and one for regular demand. The unit is then linked by the unit__to_node relationships to both the reserves and regular demand node.

Spinning ramp restriction

The unit can be restricted in spinning ramping by defining the ramp_up/down_limit parameters in the unit__to_node relationship for the regular demand node:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that the unit's flow to the regular demand node cannot increase more than $0.2 * 200 - upward\_reserve\_demand$ or decrease more than $0.4 * 200 - downward\_reserve\_demand$ over one duration_unit. For example, when the unit is running at an output of $100$ and there is an upward reserve demand of $10$, then its output over the next duration_unit must be somewhere in the interval $[20, 130]$.

It can be seen in this example that the demand for reserves is subtracted from the ramping capacity of the unit that is available for regular operation. This stems from the fact that in providing reserve capacity, the unit is expected to be able to provide the demanded reserve within one duration_unit as stated above.
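
Extending the earlier sketch, the reserve demand can be subtracted from the ramping capability (illustrative Python, not SpineOpt code; the values are those of the example above):

```python
def ramp_interval_with_reserves(flow, capacity, ramp_up_limit, ramp_down_limit,
                                upward_reserve=0.0, downward_reserve=0.0):
    """Feasible flow interval one duration_unit later when reserve demand
    eats into the unit's ramping capability."""
    lower = max(0.0, flow - (ramp_down_limit * capacity - downward_reserve))
    upper = min(capacity, flow + (ramp_up_limit * capacity - upward_reserve))
    return lower, upper

# Flow 100, capacity 200, ramp limits 0.2 / 0.4, upward reserve demand 10:
print(ramp_interval_with_reserves(100, 200, 0.2, 0.4, upward_reserve=10))  # → (20.0, 130.0)
```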

diff --git a/dev/advanced_concepts/reserves/index.html b/dev/advanced_concepts/reserves/index.html index 1a16240b32..60b4e469c0 100644 --- a/dev/advanced_concepts/reserves/index.html +++ b/dev/advanced_concepts/reserves/index.html @@ -1,2 +1,2 @@ -Reserves · SpineOpt.jl

Reserves

SpineOpt provides a way to include reserve provision in a model by creating reserve nodes. Reserve provision is different from regular operations as it involves withholding capacity, rather than producing a certain commodity (e.g., energy).

This section covers the reserve concepts, but we highly recommend checking out the reserves tutorial for a more thorough understanding of how the model is set up.

Defining a reserve node

To define a reserve node, the following parameters have to be defined for the relevant node:

  • is_reserve_node : this boolean parameter indicates that this node is a reserve node.
  • upward_reserve : this boolean parameter indicates that the demand for reserve provision of this node concerns upward reserves.
  • downward_reserve : this boolean parameter indicates that the demand for reserve provision of this node concerns downward reserves.
  • reserve_procurement_cost: (optional) this parameter indicates the procurement cost of a unit for a certain reserve product and can be defined on a unit__to_node or unit__from_node relationship.

Defining a reserve group

The reserve group definition allows the creation of a unit flow capacity constraint in which all the unit flows to different commodities, including the reserve provision, jointly limit the maximum unit capacity.

The definition of the reserve group also allows the creation of minimum operating point, ramp up, and ramp down constraints, considering flows and reserve provisions.

The relationship between the unit and the node group (i.e., unit__to_node or unit__from_node) is essential to define the parameters needed for the constraints (e.g., unit_capacity, minimum_operating_point, ramp_up_limit, or ramp_down_limit).

Illustrative examples

In this example, we will consider a unit that can provide upward and downward reserves, along with producing electricity. Therefore, the model needs to consider both characteristics of electricity production and reserve provision in the constraints.

Let's take a look at the unit flow capacity constraint and the minimum operating point. For an illustrative example of ramping constraints combined with reserves, please see the Ramping section.

Unit flow capacity constraint with reserve

Assuming the following parameters, we consider a unit that is fully flexible with respect to the unit flow capacity constraint:

  • unit_capacity : 100
  • shut_down_limit: 1
  • start_up_limit : 1

The parameters indicate that the unit capacity is 100 (e.g., 100 MW) and the shutdown and startup limits are 1 p.u. This means that the unit can start up or shut down to its maximum capacity, making it a fully flexible unit.

Taking into account the constraint and the fact that the unit can provide upward reserve and generate electricity, a simplified version of the resulting constraint is:

$unit\_flow\_to\_electricity + upwards\_reserve \leq 100 \cdot units\_on$

Here, we can see that the flow to the electricity node depends on the unit's capacity and the upward reserve provision of the unit.
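
The inequality above can be sketched as a feasibility check (illustrative Python, not SpineOpt code):

```python
def capacity_constraint_ok(flow_to_electricity, upward_reserve, unit_capacity, units_on):
    """Simplified unit flow capacity constraint with upward reserve:
    flow + upward_reserve <= unit_capacity * units_on."""
    return flow_to_electricity + upward_reserve <= unit_capacity * units_on

# With unit_capacity 100 and one unit online:
print(capacity_constraint_ok(90, 10, 100, 1))  # → True
print(capacity_constraint_ok(95, 10, 100, 1))  # → False
```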

Minimum operating point constraint with reserve

We need to consider the following parameters for the minimum operating point constraint:

  • minimum_operating_point : 0.25

This value means that the unit's minimum operating point is 25% of its capacity (i.e., 25 MW). A simplified version of the resulting constraint is therefore:

$unit\_flow\_to\_electricity - downward\_reserve \geq 25 \cdot units\_on$

Here, the downward reserve provision pushes the flow to the electricity node upwards, ensuring that the unit's minimum operating point is still respected if the reserve is activated.
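
As with the capacity constraint, this can be sketched as a feasibility check (illustrative Python, not SpineOpt code):

```python
def min_operating_point_ok(flow_to_electricity, downward_reserve,
                           minimum_operating_point, unit_capacity, units_on):
    """Simplified minimum operating point constraint with downward reserve:
    flow - downward_reserve >= minimum_operating_point * unit_capacity * units_on."""
    return (flow_to_electricity - downward_reserve
            >= minimum_operating_point * unit_capacity * units_on)

# With unit_capacity 100, minimum_operating_point 0.25 and one unit online:
print(min_operating_point_ok(40, 10, 0.25, 100, 1))  # → True
print(min_operating_point_ok(30, 10, 0.25, 100, 1))  # → False
```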

diff --git a/dev/advanced_concepts/stochastic_framework/index.html b/dev/advanced_concepts/stochastic_framework/index.html index 672b8e0730..7ddec1b853 100644 --- a/dev/advanced_concepts/stochastic_framework/index.html +++ b/dev/advanced_concepts/stochastic_framework/index.html @@ -6,4 +6,4 @@ # If not a root `stochastic_scenario` -weight(scenario) = sum([weight(parent) * weight_relative_to_parents(scenario)] for parent in parents)

Finally, with all the pieces in place, we'll need to connect the defined stochastic_structure objects to the desired objects in the Systemic object classes using the Structural relationship classes like node__stochastic_structure etc. Here, we essentially tell which parts of the modelled system use which stochastic_structure. Since creating each of these relationships individually can be a bit of a pain, there are a few Meta relationship classes like the model__default_stochastic_structure, that can be used to set model-wide defaults that are used if specific relationships are missing.
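
The weight recursion quoted above can be sketched in Python (illustrative, not SpineOpt's implementation; the DAG and names are hypothetical):

```python
def scenario_weight(scenario, parents_of, relative_weight):
    """Absolute weight of a stochastic_scenario: roots take their
    weight_relative_to_parents as-is, other scenarios sum
    weight(parent) * weight_relative_to_parents(scenario) over their parents."""
    parents = parents_of.get(scenario, [])
    if not parents:  # root scenario
        return relative_weight[scenario]
    return sum(
        scenario_weight(parent, parents_of, relative_weight) * relative_weight[scenario]
        for parent in parents
    )

# Hypothetical branching-then-converging DAG: realization → f1 / f2 → converged
parents_of = {"f1": ["realization"], "f2": ["realization"], "converged": ["f1", "f2"]}
relative_weight = {"realization": 1.0, "f1": 0.5, "f2": 0.5, "converged": 1.0}
print(scenario_weight("converged", parents_of, relative_weight))  # → 1.0
```

Note how the converging scenario recovers the full probability mass of its parents, as described in the converging example below.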

Example of deterministic stochastics

Here, we'll demonstrate step-by-step how to create the simplest possible stochastic structure: the fully deterministic one. See the Deterministic Stochastic Structure archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create a stochastic_scenario called e.g. realization and a stochastic_structure called e.g. deterministic.
  2. We can skip the parent_stochastic_scenario__child_stochastic_scenario relationship, since there isn't a stochastic DAG in this example, and the default behaviour of each stochastic_scenario being independent works for our purposes (only one stochastic_scenario anyhow).
  3. Create the stochastic_structure__stochastic_scenario relationship for (deterministic, realization), and set its weight_relative_to_parents parameter to 1. We don't need to define the stochastic_scenario_end parameter, as we want the realization to go on indefinitely.
  4. Relate the deterministic stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of branching stochastics

Here, we'll demonstrate step-by-step how to create a simple branching stochastic tree, where one scenario branches into three at a specific point in time. See the Branching Stochastic Tree archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create four stochastic_scenario objects called e.g. realization, forecast1, forecast2, and forecast3, and a stochastic_structure called e.g. branching.
  2. Define the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (realization, forecast1), (realization, forecast2), and (realization, forecast3).
  3. Create the stochastic_structure__stochastic_scenario relationship for (branching, realization), (branching, forecast1), (branching, forecast2), and (branching, forecast3).
  4. Set the weight_relative_to_parents parameter to 1 and the stochastic_scenario_end parameter e.g. to 6h for the stochastic_structure__stochastic_scenario relationship (branching, realization). Now, the realization stochastic_scenario will end after 6 hours of time steps, and its children (forecast1, forecast2, and forecast3) will become active.
  5. Set the weight_relative_to_parents Parameters for the (branching, forecast1), (branching, forecast2), and (branching, forecast3) stochastic_structure__stochastic_scenario relationships to whatever you desire, e.g. 0.33 for equal probabilities across all forecasts.
  6. Relate the branching stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of converging stochastics

Here, we'll demonstrate step-by-step how to create a simple stochastic DAG, where both branching and converging occur. This example relies on the previous Example of branching stochastics, but adds another stochastic_scenario at the end, which is a child of the forecast1, forecast2, and forecast3 scenarios. See the Converging Stochastic Tree archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Follow the steps 1-5 in the previous Example of branching stochastics, except call the stochastic_structure something different, e.g. converging.
  2. Create a new stochastic_scenario called e.g. converged_forecast.
  3. Alter the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (forecast1, converged_forecast), (forecast2, converged_forecast), and (forecast3, converged_forecast). Now all three forecasts will converge into a single converged_forecast.
  4. Add the stochastic_structure__stochastic_scenario relationship for (converging, converged_forecast), and set its weight_relative_to_parents parameter to 1. Now, all the probability mass in forecast1, forecast2, and forecast3 will be summed up back to the converged_forecast.
  5. Set the stochastic_scenario_end Parameters of the stochastic_structure__stochastic_scenario relationships (converging, forecast1), (converging, forecast2), and (converging, forecast3) to e.g. 1D, so that all three scenarios end at the same time and the converged_forecast becomes active.
  6. Relate the converging stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Working with stochastic updating data

Now that we've discussed how to set up stochastics for SpineOpt, let's focus on stochastic data. The most complex form of input data SpineOpt can currently handle is both stochastic and updating, meaning that the values the parameter takes can depend on both the stochastic_scenario, and the analysis time (first time step) of each solve. However, just stochastic or just updating cases are supported as well, using the same input data format.

In SpineOpt, stochastic data uses the Map data type from SpineInterface.jl. Essentially, Maps are general indexed data containers, which SpineOpt tries to interpret as stochastic data. Every time SpineOpt calls a parameter, it passes the stochastic_scenario and analysis time as keyword arguments to the parameter, but depending on the parameter type, it doesn't necessarily do anything with that information. For Map type parameters, those keyword arguments are used for navigating the indices of the Map to try and find the corresponding value. If the Map doesn't include the stochastic_scenario index it's looking for, it assumes there's no stochastic information in the Map and carries on to search for analysis time indices. This logic is useful for defining both stochastic and updating data, as well as either case by itself, as shown in the following examples.
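
The lookup logic described above can be sketched with a plain dict standing in for SpineInterface's Map (illustrative Python, not the actual SpineInterface implementation):

```python
def resolve(map_value, scenario, analysis_time):
    """Resolve a Map-style parameter value: first try the stochastic_scenario
    index; whatever remains is searched by analysis time, taking the latest
    index not after the given analysis time. ISO 8601 strings compare correctly
    as plain strings."""
    if isinstance(map_value, dict) and scenario in map_value:
        map_value = map_value[scenario]
    if isinstance(map_value, dict):  # remaining indices are analysis times
        stamps = sorted(t for t in map_value if t <= analysis_time)
        map_value = map_value[stamps[-1]]
    return map_value

data = {"scenario1": {"2000-01-01T00:00:00": "value1", "2000-01-01T12:00:00": "value2"}}
print(resolve(data, "scenario1", "2000-01-01T06:00:00"))  # → value1
print(resolve(data, "scenario1", "2000-01-01T13:00:00"))  # → value2
```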

Example of stochastic data

By stochastic data, we mean parameter values that depend only on the stochastic_scenario. In such a case, the input data must be formatted as a Map with the following structure

  stochastic_scenario    value
  scenario1              value1
  scenario2              value2

where stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 in scenario1, and value2 in scenario2. Note that since there's no analysis time index in this example, the values are used regardless of the analysis time.

Example of updating data

By updating data, we mean parameter values that depend only on the analysis time. In such a case, the input data must be formatted as a Map with the following structure

  analysis time          value
  2000-01-01T00:00:00    value1
  2000-01-01T12:00:00    value2

where the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00, and value2 if the first time step of the simulation is after 2000-01-01T12:00:00. Note that since there's no stochastic_scenario index in this example, the values are used regardless of the stochastic_scenario.

Example of stochastic updating data

By stochastic updating data, we mean parameter values that depend on both the stochastic_scenario and the analysis time. In such a case, the input data must be formatted as a Map with the following structure

  stochastic_scenario    analysis time          value
  scenario1              2000-01-01T00:00:00    value1
  scenario1              2000-01-01T12:00:00    value2
  scenario2              2000-01-01T00:00:00    value3
  scenario2              2000-01-01T12:00:00    value4

where the stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects, and the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00 and the parameter is called in scenario1, and value3 in scenario2. If the first time step of the current simulation is after 2000-01-01T12:00:00, the parameter will take value2 in scenario1, and value4 in scenario2.

Constraint generation with stochastic path indexing

Every time a constraint might refer to variables either on different time steps or on different stochastic scenarios (meaning different nodes or units), the constraint needs to use stochastic path indexing in order to be correctly generated for arbitrary stochastic DAGs. In practice, this means following the procedure outlined below:

  1. Identify all unique full stochastic paths, meaning all the possible ways of traversing the DAG. This is done along with generating the stochastic structure, so it has no real impact on constraint generation.
  2. Find all the stochastic scenarios that are active on all the stochastic structures and time slices included in the constraint.
  3. Find all the unique stochastic paths by intersecting the set of active scenarios with the full stochastic paths.
  4. Generate constraints over each unique stochastic path found in step 3.

Steps 2 and 3 are the crucial ones, and are currently handled by separate constraint_<constraint_name>_indices functions. Essentially, these functions go through all the variables on all the time steps included in the constraint, collect the set of active stochastic_scenarios on each time step, and then determine the unique active stochastic paths on each time step. The functions pre-form the index set over which the constraint is then generated in the add_constraint_<constraint_name> functions.
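
The path enumeration and intersection steps can be sketched as follows (illustrative Python with a hypothetical DAG, not SpineOpt's implementation):

```python
def full_paths(roots, children_of):
    """Step 1: enumerate all full stochastic paths (root-to-leaf traversals)."""
    paths = []

    def walk(node, path):
        path = path + [node]
        kids = children_of.get(node, [])
        if not kids:
            paths.append(path)
        else:
            for kid in kids:
                walk(kid, path)

    for root in roots:
        walk(root, [])
    return paths

def active_paths(paths, active_scenarios):
    """Steps 2-3: unique intersections of the full paths with the scenarios
    active on the constraint's structures and time slices."""
    result = []
    for path in paths:
        trimmed = tuple(s for s in path if s in active_scenarios)
        if trimmed and trimmed not in result:
            result.append(trimmed)
    return result

children_of = {"realization": ["f1", "f2"]}  # hypothetical branching DAG
paths = full_paths(["realization"], children_of)
print(paths)                                       # → [['realization', 'f1'], ['realization', 'f2']]
print(active_paths(paths, {"realization", "f1"}))  # → [('realization', 'f1'), ('realization',)]
```

A constraint would then be generated once per tuple returned by `active_paths` (step 4).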

+weight(scenario) = sum([weight(parent) * weight_relative_to_parents(scenario)] for parent in parents)

Finally, with all the pieces in place, we'll need to connect the defined stochastic_structure objects to the desired objects in the Systemic object classes using the Structural relationship classes like node__stochastic_structure etc. Here, we essentially tell which parts of the modelled system use which stochastic_structure. Since creating each of these relationships individually can be a bit of a pain, there are a few Meta relationship classes like the model__default_stochastic_structure, that can be used to set model-wide defaults that are used if specific relationships are missing.

Example of deterministic stochastics

Here, we'll demonstrate step-by-step how to create the simplest possible stochastic frame: the fully deterministic one. See the Deterministic Stochastic Structure archetype for how the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create a stochastic_scenario called e.g. realization and a stochastic_structure called e.g. deterministic.
  2. We can skip the parent_stochastic_scenario__child_stochastic_scenario relationship, since there isn't a stochastic DAG in this example, and the default behaviour of each stochastic_scenario being independent works for our purposes (only one stochastic_scenario anyhow).
  3. Create the stochastic_structure__stochastic_scenario relationship for (deterministic, realization), and set its weight_relative_to_parents parameter to 1. We don't need to define the stochastic_scenario_end parameter, as we want the realization to go on indefinitely.
  4. Relate the deterministic stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of branching stochastics

Here, we'll demonstrate step-by-step how to create a simple branching stochastic tree, where one scenario branches into three at a specific point in time. See the Branching Stochastic Tree archetype for how the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create four stochastic_scenario objects called e.g. realization, forecast1, forecast2, and forecast3, and a stochastic_structure called e.g. branching.
  2. Define the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (realization, forecast1), (realization, forecast2), and (realization, forecast3).
  3. Create the stochastic_structure__stochastic_scenario relationship for (branching, realization), (branching, forecast1), (branching, forecast2), and (branching, forecast3).
  4. Set the weight_relative_to_parents parameter to 1 and the stochastic_scenario_end parameter e.g. to 6h for the stochastic_structure__stochastic_scenario relationship (branching, realization). Now, the realization stochastic_scenario will end after 6 hours of time steps, and its children (forecast1, forecast2, and forecast3) will become active.
  5. Set the weight_relative_to_parents Parameters for the (branching, forecast1), (branching, forecast2), and (branching, forecast3) stochastic_structure__stochastic_scenario relationships to whatever you desire, e.g. 0.33 for equal probabilities across all forecasts.
  6. Relate the branching stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of converging stochastics

Here, we'll demonstrate step-by-step how to create a simple stochastic DAG, where both branching and converging occurs. This example relies on the previous Example of branching stochastics, but adds another stochastic_scenario at the end, which is a child of the forecast1, forecast2, and forecast3 scenarios. See the Converging Stochastic Tree archetype for how the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Follow the steps 1-5 in the previous Example of branching stochastics, except call the stochastic_structure something different, e.g. converging.
  2. Create a new stochastic_scenario called e.g. converged_forecast.
  3. Alter the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (forecast1, converged_forecast), (forecast2, converged_forecast), and (forecast3, converged_forecast). Now all three forecasts will converge into a single converged_forecast.
  4. Add the stochastic_structure__stochastic_scenario relationship for (converging, converged_forecast), and set its weight_relative_to_parents parameter to 1. Now, all the probability mass in forecast1, forecast2, and forecast3 will be summed up back to the converged_forecast.
  5. Set the stochastic_scenario_end Parameters of the stochastic_structure__stochastic_scenario relationships (converging, forecast1), (converging, forecast2), and (converging, forecast3) to e.g. 1D, so that all three scenarios end at the same time and the converged_forecast becomes active.
  6. Relate the converging stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Working with stochastic updating data

Now that we've discussed how to set up stochastics for SpineOpt, let's focus on stochastic data. The most complex form of input data SpineOpt can currently handle is both stochastic and updating, meaning that the values the parameter takes can depend on both the stochastic_scenario, and the analysis time (first time step) of each solve. However, just stochastic or just updating cases are supported as well, using the same input data format.

In SpineOpt, stochastic data uses the Map data type from SpineInterface.jl. Essentially, Maps are general indexed data containers, which SpineOpt tries to interpret as stochastic data. Every time SpineOpt calls a parameter, it passes the stochastic_scenario and analysis time as keyword arguments to the parameter, but depending on the parameter type, it doesn't necessarily do anything with that information. For Map type parameters, those keyword arguments are used for navigating the indices of the Map to try and find the corresponding value. If the Map doesn't include the stochastic_scenario index it's looking for, it assumes there's no stochastic information in the Map and carries on to search for analysis time indices. This logic is useful for defining both stochastic and updating data, as well as either case by itself, as shown in the following examples.

Example of stochastic data

By stochastic data, we mean parameter values that depend only on the stochastic_scenario. In such a case, the input data must be formatted as a Map with the following structure

stochastic_scenariovalue
scenario1value1
scenario2value2

where stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 in scenario1, and value2 in scenario2. Note that since there's no analysis time index in this example, the values are used regardless of the analysis time.

Example of updating data

By updating data, we mean parameter values that depend only on the analysis time. In such a case, the input data must be formatted as a Map with the following structure

analysis timevalue
2000-01-01T00:00:00value1
2000-01-01T12:00:00value2

where the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00, and value2 if the first time step of the simulation is after 2000-01-01T12:00:00. Note that since there's no stochastic_scenario index in this example, the values are used regardless of the stochastic_scenario.

Example of stochastic updating data

By stochastic updating data, we mean parameter values that depend on both the stochastic_scenario and the analysis time. In such a case, the input data must be formatted as a Map with the following structure

stochastic_scenarioanalysis timevalue
scenario12000-01-01T00:00:00value1
scenario12000-01-01T12:00:00value2
scenario22000-01-01T00:00:00value3
scenario22000-01-01T12:00:00value4

where the stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects, and the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00 and the parameter is called in scenario1, and value3 in scenario2. If the first time step of the current simulation is after 2000-01-01T12:00:00, the parameter will take value2 in scenario1, and value4 in scenario2.

Constraint generation with stochastic path indexing

Every time a constraint might refer to variables either on different time steps or on different stochastic structures (e.g. those of different nodes or units), the constraint needs to use stochastic path indexing in order to be correctly generated for arbitrary stochastic DAGs. In practice, this means following the procedure outlined below:

  1. Identify all unique full stochastic paths, meaning all the possible ways of traversing the DAG. This is done along with generating the stochastic structure, so it has no real impact on constraint generation.
  2. Find all the stochastic scenarios that are active on all the stochastic structures and time slices included in the constraint.
  3. Find all the unique stochastic paths by intersecting the set of active scenarios with the full stochastic paths.
  4. Generate constraints over each unique stochastic path found in step 3.

Steps 2 and 3 are the crucial ones, and are currently handled by separate constraint_<constraint_name>_indices functions. Essentially, these functions go through all the variables on all the time steps included in the constraint, collect the set of active stochastic_scenarios on each time step, and then determine the unique active stochastic paths on each time step. The functions prepare the index set over which the constraint is then generated in the add_constraint_<constraint_name> functions.
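The path-reduction logic of steps 2 and 3 can be sketched as follows (an illustrative Python mock-up, not SpineOpt's Julia implementation): the set of active scenarios is intersected with each full path, and only unique non-empty reductions are kept.

```python
# Illustrative sketch, not SpineOpt code: given the full stochastic paths of a
# DAG and the scenarios active in a constraint, keep the unique non-empty
# intersections, preserving the order of scenarios within each path.
def active_stochastic_paths(full_paths, active_scenarios):
    active = set(active_scenarios)
    paths = []
    for path in full_paths:
        reduced = tuple(s for s in path if s in active)
        if reduced and reduced not in paths:
            paths.append(reduced)
    return paths

# A hypothetical DAG with one realization branching into two forecasts.
full_paths = [("realization", "forecast_a"), ("realization", "forecast_b")]
```

For instance, if only "realization" is active on the time steps of a constraint, both full paths reduce to the single path ("realization",), and the constraint is generated once over it.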

diff --git a/dev/advanced_concepts/temporal_framework/index.html b/dev/advanced_concepts/temporal_framework/index.html index 0d3c569de7..c07d808617 100644 --- a/dev/advanced_concepts/temporal_framework/index.html +++ b/dev/advanced_concepts/temporal_framework/index.html @@ -1,2 +1,2 @@ -Temporal Framework · SpineOpt.jl

Temporal Framework

Spine Model aims to provide a high degree of flexibility in the temporal dimension across different components of the created model. This means that the user has some freedom to choose how the temporal aspects of different components of the model are defined. This freedom increases the variety of problems that can be tackled in Spine: from very coarse, long term models, to very detailed models with a more limited horizon, or a mix of both. The choice of the user on how this flexibility is used will lead to the temporal structure of the model.

The main components of flexibility consist of the following parts:

  • The horizon that is modeled: end and start time
  • Temporal resolution
  • Possibility of a rolling optimization window
  • Support for commonly used methods such as representative days

Part of the temporal flexibility in Spine is due to the fact that these options mentioned above can be implemented differently across different components of the model, which can be very useful when different markets are coupled in a single model. The resolution and horizon of the gas market can for example be taken differently than that of the electricity market. This documentation aims to give the reader insight in how these aspects are defined, and which objects are used for this.

We start by introducing the relevant objects with their parameters, and the relevant relationship classes for the temporal structure. Afterwards, we will discuss how this setting creates flexibility and will present some of the practical approaches to create a variety of temporal structures.

Objects, relationships, and their parameters

In this section, the objects and relationships will be discussed that form the temporal structure together.

Objects relevant for the temporal framework

For the objects, the relevant parameters will also be introduced, along with the type of values that are allowed, following the format below:

  • 'parameter_name' : "Allowed value type"

model object

Each model object holds general information about the model at hand. Here we only discuss the time related parameters:

These two parameters define the model horizon. Both take a DateTime value, marking the beginning and the end of the modeled time horizon, respectively.

This parameter gives the unit of duration used in the model calculations. The default value for this parameter is 'minute'. For example, if the duration_unit is set to hour, a Duration of one minute is converted into 1/60 hours for the calculations.

This parameter defines how much the optimization window rolls forward in a rolling horizon optimization and should be expressed as a duration. In the practical approaches presented below, the rolling window optimization will be explained in more detail.

temporal_block object

A temporal block defines the properties of the optimization that is to be solved in the current window. Most importantly, it holds the necessary information about the resolution and horizon of the optimization.

  • resolution (optional): "duration value" or "array of duration values"

This parameter specifies the resolution of the temporal block, or in other words: the length of the timesteps used in the optimization run.

  • block_start (optional): "duration value" or "Date time value"

Indicates the start of this temporal block.

  • block_end (optional): "duration value" or "Date time value"

Indicates the end of this temporal block.

Relationships relevant for the temporal framework

model__temporal_block relationship

In this relationship, a model instance is linked to a temporal block. If this relationship doesn't exist, the temporal block is disregarded by the optimization model.

model__default_temporal_block relationship

Defines the default temporal block used for model objects, which will be replaced when a specific relationship is defined for a model in model__temporal_block.

node__temporal_block relationship

This relationship will link a node to a temporal block.

units_on__temporal_block relationship

This relationship links the units_on variable of a unit to a temporal block and will therefore govern the time-resolution of the unit's online/offline status.

unit__investment_temporal_block relationship

This relationship sets the temporal dimensions for investment decisions of a certain unit. The separation between this relationship and units_on__temporal_block allows the user, for example, to give a much finer resolution to a unit's on- or offline status than to its investment decisions.

model__default_investment_temporal_block relationship

Defines the default temporal block used for investment decisions, which will be replaced when a specific relationship is defined for a unit in unit__investment_temporal_block.

General principle of the temporal framework

The general principle of the Spine modeling temporal structure is that different temporal blocks can be defined and linked to different objects in a model. This leads to great flexibility in the temporal structure of the model as a whole. To illustrate this, we will discuss some of the possibilities that arise in this framework.

One single temporal_block

Single solve with single block

The simplest case is a single solve of the entire time horizon (so roll_forward not defined) with a fixed resolution. In this case, only one temporal block has to be defined with a fixed resolution. Each node has to be linked to this temporal_block.

Alternatively, a variable resolution can be defined by choosing an array of durations for the resolution parameter. The sum of the durations in the array then has to match the length of the temporal block. The example below illustrates an optimization that spans one day, with an hourly resolution at the beginning that gradually coarsens to 6h towards the end.

  • temporal_block_1
    • block_start: 0h (Alternative DateTime: e.g. 2030-01-01T00:00:00)
    • block_end: 1D (Alternative DateTime: e.g. 2030-01-02T00:00:00)
    • resolution: [1h 1h 1h 1h 2h 2h 2h 4h 4h 6h]

Note that, as mentioned above, the block_start and block_end parameters can also be entered as absolute values, i.e. DateTime values.
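The relation between a resolution array and the resulting time slices can be sketched as follows (illustrative Python, not SpineOpt code; the assertion mirrors the requirement that the durations sum to the block length):

```python
from datetime import datetime, timedelta

# Illustrative sketch, not SpineOpt code: build time slices from block_start
# and a resolution array, checking that the durations cover the block exactly.
def build_time_slices(block_start, block_end, resolution):
    slices, t = [], block_start
    for step in resolution:
        slices.append((t, t + step))
        t += step
    assert t == block_end, "resolution array must sum to the block length"
    return slices

# The example above: one day, hourly at first, coarsening to 6h at the end.
res = [timedelta(hours=h) for h in (1, 1, 1, 1, 2, 2, 2, 4, 4, 6)]
slices = build_time_slices(datetime(2030, 1, 1), datetime(2030, 1, 2), res)
```

The ten durations sum to 24 hours, so the ten time slices exactly tile the one-day block.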

Rolling window optimization with single block

A model with a single temporal_block can also be optimized in a rolling horizon framework. In this case, the roll_forward parameter has to be defined in the model object. The roll_forward parameter will then determine how much the optimization moves forward with every step, while the size of the temporal block will determine how large a time frame is optimized in each step. To see this more clearly, let's take a look at an example.

Suppose we want to model a horizon of one week, with a rolling window size of one day. The roll_forward parameter will then be a duration value of 1d. If we take the temporal_block parameters block_start and block_end to be the duration values 0h and 1d respectively, the model will optimize each day of the week separately. However, we could also take the block_end parameter to be 2d. Now the model will start by optimizing day 1 and day 2 together, after which it keeps only the values obtained for the first day, and moves forward to optimize the second and third day together.

Again, a variable resolution can be implemented for the rolling window optimization. The sum of the durations must in this case match the size of the optimized window.
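The windowing logic described above can be sketched as follows (an illustrative Python mock-up; the function and argument names are assumptions, not SpineOpt API): each window spans block_end, and the window start advances by roll_forward until the model horizon is exhausted.

```python
from datetime import datetime, timedelta

# Illustrative sketch, not SpineOpt code: generate the sequence of rolling
# optimization windows, clipping the final window at the model horizon.
def rolling_windows(model_start, model_end, roll_forward, block_end):
    windows, t = [], model_start
    while t < model_end:
        windows.append((t, min(t + block_end, model_end)))
        t += roll_forward
    return windows

# One-week horizon, rolling one day at a time, optimizing two days per window.
week = rolling_windows(
    datetime(2030, 1, 1), datetime(2030, 1, 8),
    roll_forward=timedelta(days=1), block_end=timedelta(days=2),
)
```

This reproduces the example above: seven windows, each covering two days (the last one clipped to the horizon), of which only the first day's results are kept before rolling forward.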

Advanced usage: multiple temporal_block objects

Single solve with multiple blocks

Disconnected time periods

Multiple temporal blocks can be used to optimize disconnected periods. Let's take a look at an example in which two temporal blocks are defined.

  • temporal_block_1
    • block_start: 0h
    • block_end: 4h
  • temporal_block_2
    • block_start: 12h
    • block_end: 16h

This example will lead to an optimization of the first four hours of the model horizon, and also of hours 12 to 16. By defining exactly the same relationships for the two temporal blocks, an optimization of disconnected periods is achieved for exactly the same model components. This opens up the possibility of implementing the widely used representative days method. If desired, it is possible to choose a different temporal resolution for the different temporal_blocks.

It is worth noting that dynamic variables like node_state and units_on merit special attention when using disconnected time periods. By default, when variables outside the defined temporal_blocks are accessed, SpineOpt.jl assumes such variables exist but allows them to take any values within the specified bounds. If fixed initial conditions for the disconnected periods are desired, one needs to use parameters such as fix_node_state or fix_units_on.

Different regions/commodities in different resolutions

Multiple temporal blocks can also be used to model different regions or different commodities with a different resolution. This is especially useful when there is a certain region or commodity of interest, while other elements are connected to this but require less detail. For this kind of usage, the relationships that are defined for the temporal blocks will be different, as shown in the example below.

  • temporal_blocks
    • temporal_block_1
      • resolution: 1h
    • temporal_block_2
      • resolution: 2h
  • nodes
    • node_1
    • node_2
  • node__temporal_block relationships
    • node_1_temporal_block_1
    • node_2_temporal_block_2

Similarly, the on- and offline status of a unit can be modeled with a lower resolution than the actual output of that unit, by defining the units_on__temporal_block relationship with a different temporal block than the one used for the node__temporal_block relationship (of the node to which the unit is connected).

Rolling horizon with multiple blocks

Rolling horizon with different window sizes

Similar to what has been discussed above in Different regions/commodities in different resolutions, different commodities or regions can be modeled with a different resolution in the rolling horizon setting; the way to do it is completely analogous. Furthermore, when using the rolling horizon framework, a different window size can be chosen for the different modeled components, simply by using a different block_end parameter. However, using different block_end values, e.g. for interconnected regions, should be treated with care, as the variables for each region will only be generated for their respective temporal_block, which in most cases will lead to inconsistent linking constraints.

Putting it all together: rolling horizon with variable resolution that differs for different model components

Below is an example of an advanced use case in which a rolling horizon optimization is used, and different model components are optimized with a different resolution. By choosing the relevant parameters in the following way:

  • model
    • roll_forward: 4h
  • temporal_blocks
    • temporal_block_A
      • resolution: [1h 1h 2h 2h 2h 3h 3h]
      • block_end: 14h
    • temporal_block_B
      • resolution: [2h 2h 4h 6h]
      • block_end: 14h
  • nodes
    • node_1
    • node_2
  • node__temporal_block relationships
    • node_1_temporal_block_A
    • node_2_temporal_block_B

The two model components that are considered have a different resolution, and their own resolution is also varying within the optimization window. Note that in this case the two optimization windows have the same size, but this is not strictly necessary. The image below visualizes the first two window optimizations of this model.

(Image: temporal structure)


diff --git a/dev/advanced_concepts/unit_commitment/index.html b/dev/advanced_concepts/unit_commitment/index.html index d7e6bbe462..e83585bcfb 100644 --- a/dev/advanced_concepts/unit_commitment/index.html +++ b/dev/advanced_concepts/unit_commitment/index.html @@ -1,2 +1,2 @@ -Unit Commitment · SpineOpt.jl

Unit commitment

To incorporate technical detail about (clustered) unit-commitment statuses of units, the online, started and shutdown status of units can be tracked and constrained in SpineOpt. In the following, relevant relationships and parameters are introduced and the general working principle is described.

Key concepts for unit commitment

Here, we briefly describe the key concepts involved in the representation of (clustered) unit commitment models:

  • units_on is an optimization variable that holds information about the on- or offline status of a unit. Unit commitment restrictions will govern how this variable can change through time.

  • units_on__temporal_block is a relationship linking the units_on variable of this unit to a specific temporal_block object. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

  • online_variable_type is a method parameter and can take the values unit_online_variable_type_binary, unit_online_variable_type_integer, or unit_online_variable_type_linear. If the binary value is chosen, the unit's status is modelled as a binary variable (classical UC). For clustered unit commitment units, the integer type is applicable. Note that if the parameter is not defined, the default will be linear. If the exact commitment status of units is not crucial, this can reduce the computational burden.

  • number_of_units defines how many units of a certain unit type are available. Typically this parameter takes the value $1$ (UC) or a larger integer (clustered UC). To avoid confusion, the following distinction is made in this document: unit identifies a Spine unit object, which can have multiple members. Together with the unit_availability_factor, this parameter determines the maximum number of members that can be online at any given time (thus restricting the units_on variable). The default value for this parameter is $1$. It is possible to allow the model to increase the number_of_units itself, through Investment Optimization.

  • unit_availability_factor: (number value or time series). The fraction of the time that this unit is considered to be available, acting as a multiplier on the capacity. A time series can be used to reflect the intermittent character of renewable generation technologies.

  • min_up_time: (duration value). Sets the minimum time that a unit has to stay online after a startup. Inclusion of this parameter will trigger the creation of the constraint on Minimum up time (basic version)

  • min_down_time: (duration value). Sets the minimum time that a unit has to stay offline after a shutdown. Inclusion of this parameter will trigger the creation of the constraint on Minimum down time (basic version)

  • minimum_operating_point: (number value) limits the minimum value of the unit_flow variable for a unit which is currently online. Inclusion of this parameter will trigger the creation of the Constraint on minimum operating point

  • start_up_cost: "number value". Cost associated with starting up a unit.

  • shut_down_cost: "number value". Cost associated with shutting down a unit.

Illustrative unit commitment examples

Step 1: defining the number of members of a unit type

A spine unit can represent multiple members. This can be incorporated in a model by setting the number_of_units parameter to a specific value. For example, if we define a single unit in a model as follows:

  • unit_1
    • number_of_units: 2

We then link the unit to a certain node_1 with a unit__to_node relationship:

  • unit_1_to__node_1

The single Spine unit defined here now represents two members. This means that a single unit_flow variable will be created for this unit, but the restrictions imposed by the Ramping and Reserves framework will be adapted to reflect the fact that there are two members present, thus doubling the total capacity.

Step 2: choosing the online_variable_type

Next, we have to decide the online_variable_type for this unit, which will restrict the kind of values that the units_on variable can take. This basically comes down to deciding if we are working in a classical UC framework (unit_online_variable_type_binary), a clustered UC framework (unit_online_variable_type_integer), or a relaxed clustered UC framework (unit_online_variable_type_linear), in which a non-integer number of units can be online.

The classical UC framework can only be applied when the number_of_units equals 1.

Step 3: imposing a minimum operating point

The output of an online unit to a specific node can be restricted to be above a certain minimum by choosing a value for the minimum_operating_point parameter. This parameter is defined for the unit__to_node relationship, and is given as a fraction of the unit_capacity. If we continue with the example above, and define the following objects, relationships, and parameters:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

It can be seen that in this case the unit_flow from unit_1 to node_1 must, for any timestep $t$, be larger than $units\_on(t) * 0.2 * 200$
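As a quick numerical check of this bound (illustrative Python, not SpineOpt code):

```python
# Illustrative check, not SpineOpt code: the lower bound on unit_flow is
# units_on * minimum_operating_point * unit_capacity.
def min_flow_bound(units_on, minimum_operating_point, unit_capacity):
    return units_on * minimum_operating_point * unit_capacity
```

With both members online, the flow to node_1 must be at least min_flow_bound(2, 0.2, 200), i.e. 80.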

Step 4: imposing a minimum up or down time

Spine units can also be restricted in their commitment status with minimum up- or down times by choosing a value for the min_up_time or min_down_time respectively. These parameters are defined for the unit object, and should be duration values. We can continue the example and add a minimum up time for the unit:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
    • min_up_time: 2h
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

Whereas the units_on variable was only restricted (before inclusion of the min_up_time parameter) to be smaller than or equal to the number_of_units for any timestep $t$, it now additionally has to be larger than or equal to the units_started_up summed over the timesteps since $t - min\_up\_time$. This implies that a unit which has started up has to stay online for at least the min_up_time

To consider a simple example, let's assume a model with a resolution of 1h. Suppose that before t no member of the unit is online, one member starts up in timestep t -> t + 1h, and another member starts up in timestep t + 1h -> t + 2h. The first startup, combined with the minimum up time of 2 hours, implies that the units_on variable of this unit has changed to $1$ in timestep t -> t + 1h and cannot go back to $0$ in timestep t + 1h -> t + 2h. The second startup further constrains the units_on variable from below; combining both startups with the minimum up time of 2h yields the following restrictions:

  • t-> t + 1h : $units\_on = 1$
  • t + 1h -> t + 2h: $units\_on = 2$
  • t + 2h-> t + 3h: $units\_on \in {1,2}$
  • t + 3h-> t + 4h: $units\_on \in {0,1,2}$

The minimum down time restrictions operate in very much the same way, they simply impose that units that have been shut down, have to stay offline for the chosen period of time.

Step 5: allocationg a cost to startups or shutdowns

Costs can be allocated to startups or shutdowns by choosing a value for the start_up_cost or shut_down_cost respectively.

Step 6: defining unit availabilities

By defining a unit_availability_factor, the fact that typical members are not available all the time can be reflected in the model.

Typically, units are not available $100$% of the time, due to scheduled maintenance, unforeseen outages, or other things. This can be incorporated in the model by setting the unit_availability_factor to a fractional value. For each timestep in the model, an upper bound is then imposed on the units_on variable, equal to number_of_units $*$ unit_availability_factor. This parameter can not be used when the online_variable_type is binary. It should also be noted that when the online_variable_type is of integer type, the aforementioned product must be integer as well, since it will determine the value of the units_available parameter which is restricted to integer values. The default value for this parameter is $1$.

The unit_availability_factor can also be taken as a timeseries. By allowing a different availability factor for each timestep in the model, it can perfectly be used to represent intermittent technologies of which the output cannot be fully controlled.

+Unit Commitment · SpineOpt.jl

Unit commitment

To incorporate technical detail about the (clustered) unit-commitment statuses of units, the online, start-up, and shut-down status of units can be tracked and constrained in SpineOpt. In the following, the relevant relationships and parameters are introduced, and the general working principle is described.

Key concepts for unit commitment

Here, we briefly describe the key concepts involved in the representation of (clustered) unit commitment models:

  • units_on is an optimization variable that holds information about the on- or offline status of a unit. Unit commitment restrictions will govern how this variable can change through time.

  • units_on__temporal_block is a relationship linking the units_on variable of this unit to a specific temporal_block object. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

  • online_variable_type is a method parameter and can take the values unit_online_variable_type_binary, unit_online_variable_type_integer and unit_online_variable_type_linear. If the binary value is chosen, the unit's status is modelled as a binary variable (classic UC). For clustered unit commitment units, the integer type is applicable. Note that if the parameter is not defined, the default will be linear. If the unit's exact status is not crucial, this can reduce the computational burden.

  • number_of_units defines how many members of a certain unit type are available. Typically this parameter takes a binary (UC) or integer (clustered UC) value. To avoid confusion, the following distinction is made in this document: unit identifies a Spine unit object, which can have multiple members. Together with the unit_availability_factor, this parameter determines the maximum number of members that can be online at any given time (thus restricting the units_on variable). The default value for this parameter is $1$. It is possible to allow the model to increase the number_of_units itself, through Investment Optimization.

  • unit_availability_factor: (number value or time series). The fraction of the time that this unit is considered to be available, acting as a multiplier on the capacity. A time series can be used to represent the intermittent character of renewable generation technologies.

  • min_up_time: (duration value). Sets the minimum time that a unit has to stay online after a startup. Inclusion of this parameter will trigger the creation of the constraint on Minimum up time (basic version)

  • min_down_time: (duration value). Sets the minimum time that a unit has to stay offline after a shutdown. Inclusion of this parameter will trigger the creation of the constraint on Minimum down time (basic version)

  • minimum_operating_point: (number value) limits the minimum value of the unit_flow variable for a unit which is currently online. Inclusion of this parameter will trigger the creation of the Constraint on minimum operating point

  • start_up_cost: (number value). Cost associated with starting up a unit.

  • shut_down_cost: (number value). Cost associated with shutting down a unit.

Illustrative unit commitment examples

Step 1: defining the number of members of a unit type

A Spine unit can represent multiple members. This can be incorporated in a model by setting the number_of_units parameter to a specific value. For example, if we define a single unit in a model as follows:

  • unit_1
    • number_of_units: 2

We then link the unit to a certain node_1 with a unit__to_node relationship:

  • unit_1_to__node_1

The single Spine unit defined here now represents two members. This means that a single unit_flow variable will be created for this unit, but the restrictions imposed by the Ramping and Reserves framework will be adapted to reflect the fact that there are two members present, thus doubling the total capacity.

Step 2: choosing the online_variable_type

Next, we have to decide the online_variable_type for this unit, which restricts the kind of values that the units_on variable can take. This basically comes down to deciding whether we are working in a classical UC framework (unit_online_variable_type_binary), a clustered UC framework (unit_online_variable_type_integer), or a relaxed clustered UC framework (unit_online_variable_type_linear), in which a non-integer number of units can be online.

The classical UC framework can only be applied when the number_of_units equals 1.

Step 3: imposing a minimum operating point

The output of an online unit to a specific node can be restricted to be above a certain minimum by choosing a value for the minimum_operating_point parameter. This parameter is defined for the unit__to_node relationship, and is given as a fraction of the unit_capacity. If we continue with the example above, and define the following objects, relationships, and parameters:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

It can be seen that in this case the unit_flow from unit_1 to node_1 must, for any timestep $t$, be greater than or equal to $units\_on(t) \cdot 0.2 \cdot 200$.
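The bound above can be sketched with a small helper (illustrative code only, not part of the SpineOpt API; parameter names follow the example):

```python
def min_flow_bound(units_on, minimum_operating_point=0.2, unit_capacity=200):
    """Lower bound on unit_flow implied by the minimum operating point:
    units_on(t) * minimum_operating_point * unit_capacity."""
    return units_on * minimum_operating_point * unit_capacity

# With both members online the flow must be at least 80 units;
# with everything offline the bound vanishes.
print(min_flow_bound(2))  # 80.0
print(min_flow_bound(0))  # 0.0
```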

Step 4: imposing a minimum up or down time

Spine units can also be restricted in their commitment status with minimum up or down times by choosing a value for the min_up_time or min_down_time parameter, respectively. These parameters are defined on the unit object and should be duration values. We can continue the example and add a minimum up time for the unit:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
    • min_up_time: 2h
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

Whereas the units_on variable was restricted (before inclusion of the min_up_time parameter) only to be smaller than or equal to the number_of_units for any timestep $t$, it now also has to be larger than or equal to the sum of the units_started_up variables over the timesteps in the window $(t - min\_up\_time, t]$. This implies that a unit which has started up has to stay online for at least the min_up_time.
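In equation form, the basic minimum up time restriction can be sketched as follows (simplified notation; the exact formulation in the constraint reference also handles the temporal indexing in detail):

```latex
units\_on(t) \;\geq\; \sum_{t' \,:\, t - min\_up\_time \,<\, t' \,\leq\, t} units\_started\_up(t')
```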

To consider a simple example, let's assume that we have a model with a resolution of 1h. Suppose that before timestep t no member of the unit is online, and that in timestep t -> t + 1h one member starts up. Another member starts up in timestep t + 1h -> t + 2h. The first startup, along with the minimum up time of 2 hours, implies that the units_on variable of this unit changes to $1$ in timestep t -> t + 1h and cannot go back to $0$ in timestep t + 1h -> t + 2h. The second startup further restricts the units_on variable from below; it can be seen that the following restrictions apply when both startups are combined with the minimum up time of 2h:

  • t -> t + 1h: $units\_on = 1$
  • t + 1h -> t + 2h: $units\_on = 2$
  • t + 2h -> t + 3h: $units\_on \in \{1,2\}$
  • t + 3h -> t + 4h: $units\_on \in \{0,1,2\}$
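The lower bounds in the list above can be reproduced with a short sketch (illustrative code, assuming an hourly resolution and counting startups within the minimum up time window):

```python
def min_up_lower_bounds(startups, min_up_time):
    """Lower bound on units_on(t): the number of startups in the
    window (t - min_up_time, t], with all durations in hours."""
    return [
        sum(startups[max(0, t - min_up_time + 1) : t + 1])
        for t in range(len(startups))
    ]

# One member starts in t -> t+1h, another in t+1h -> t+2h, min_up_time = 2h:
print(min_up_lower_bounds([1, 1, 0, 0], min_up_time=2))  # [1, 2, 1, 0]
```

Combined with the upper bound of number_of_units = 2, these lower bounds yield exactly the feasible sets listed above.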

The minimum down time restrictions operate in very much the same way; they simply impose that units that have been shut down have to stay offline for the chosen period of time.

Step 5: allocating a cost to startups or shutdowns

Costs can be allocated to startups or shutdowns by choosing a value for the start_up_cost or shut_down_cost parameter, respectively.

Step 6: defining unit availabilities

By defining a unit_availability_factor, the fact that unit members are typically not available all of the time can be reflected in the model.

Typically, units are not available $100$% of the time, due to scheduled maintenance, unforeseen outages, or other causes. This can be incorporated in the model by setting the unit_availability_factor to a fractional value. For each timestep in the model, an upper bound is then imposed on the units_on variable, equal to number_of_units $*$ unit_availability_factor. This parameter cannot be used when the online_variable_type is binary. It should also be noted that when the online_variable_type is of integer type, the aforementioned product must be an integer as well, since it determines the value of the units_available parameter, which is restricted to integer values. The default value for this parameter is $1$.
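The resulting upper bound, including the integrality requirement noted above, can be sketched as (illustrative helper, not SpineOpt code):

```python
def units_on_upper_bound(number_of_units, unit_availability_factor, integer_type=True):
    """Upper bound on units_on: number_of_units * unit_availability_factor.
    For an integer online_variable_type the product must itself be integer."""
    bound = number_of_units * unit_availability_factor
    if integer_type and bound != int(bound):
        raise ValueError(
            "number_of_units * unit_availability_factor must be an integer "
            "when online_variable_type is of integer type"
        )
    return bound

print(units_on_upper_bound(2, 0.5))  # 1.0
```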

The unit_availability_factor can also be given as a time series. By allowing a different availability factor for each timestep in the model, it is well suited to represent intermittent technologies whose output cannot be fully controlled.


User Constraints

User constraints allow the user to define arbitrary linear constraints involving most of the problem variables. This section describes this function and how to use it.

Key User Constraint Concepts

  1. The basic principle: The basic steps involved in forming a user constraint are:
  • Creating a user constraint object: One creates a new user_constraint object which will be used as a unique handle for the specific constraint and on which constraint-level parameters will be defined.
  • Specify which variables are involved in the constraint: this generally involves creating a relationship involving the user_constraint object. For example, specifying the relationship unit__from_node__user_constraint specifies that the corresponding unit_flow variable is involved in the constraint. The table below contains a complete list of variables and the corresponding relationships to set.
  • Specify the variable coefficients: this will generally involve specifying a parameter named *_coefficient on the relationship defined above to specify the coefficient on that particular variable in the constraint. For example, to define the coefficient on the unit_flow variable, one specifies the unit_flow_coefficient parameter on the appropriate unit__from_node__user_constraint relationship. The table below contains a complete list of variables and the corresponding coefficient parameters to set.
  • Specify the right-hand-side constant term: The constraint should be formed in conventional form with all constant terms moved to the right-hand side. The right-hand-side constant term is specified by setting the right_hand_side user_constraint parameter.
  • Specify the constraint sense: this is done by setting the constraint_sense user_constraint parameter. The allowed values are ==, >= and <=.
  • Coefficients can be defined on some parameters themselves. For example, one may specify a coefficient on a node's demand parameter. This is done by specifying the relationship node__user_constraint and setting the demand_coefficient parameter on that relationship.
  2. Piecewise unit_flow coefficients: As described in operating_points, specifying this parameter decomposes the unit_flow variable into a number of sub operating segment variables named unit_flow_op in the model, with an additional index i for the operating segment. The intention of this functionality is to allow unit_flow coefficients to be defined individually per segment, so as to define a piecewise linear function. To accomplish this, the steps are as described above, with the exception that one must define operating_points on the appropriate unit__from_node or unit__to_node relationship as an array type whose dimension corresponds to the number of operating points, and then set the unit_flow_coefficient for the appropriate unit__from_node__user_constraint relationship, also as an array type with the same number of elements. Note that if operating_points is defined as an array type with more than one element, unit_flow_coefficient may be defined as either an array or non-array type. However, if operating_points is of non-array type, the corresponding unit_flow_coefficients must also be of non-array types.
  3. Variables, relationships and coefficient guide for user constraints: The table below provides guidance regarding which relationships and coefficients to set for the various problem variables and parameters.
Problem variable / Parameter | Relationship | Coefficient parameter
unit_flow (direction=from_node) | unit__from_node__user_constraint | unit_flow_coefficient (non-array type)
unit_flow (direction=to_node) | unit__to_node__user_constraint | unit_flow_coefficient (non-array type)
unit_flow_op (direction=from_node) | unit__from_node__user_constraint | unit_flow_coefficient (array type)
unit_flow_op (direction=to_node) | unit__to_node__user_constraint | unit_flow_coefficient (array type)
connection_flow (direction=from_node) | connection__from_node__user_constraint | connection_flow_coefficient
connection_flow (direction=to_node) | connection__to_node__user_constraint | connection_flow_coefficient
node_state | node__user_constraint | node_state_coefficient
storages_invested | node__user_constraint | storages_invested_coefficient
storages_invested_available | node__user_constraint | storages_invested_available_coefficient
demand | node__user_constraint | demand_coefficient
units_on | unit__user_constraint | units_on_coefficient
units_started_up | unit__user_constraint | units_started_up_coefficient
units_invested | unit__user_constraint | units_invested_coefficient
units_invested_available | unit__user_constraint | units_invested_available_coefficient
connections_invested | connection__user_constraint | connections_invested_coefficient
connections_invested_available | connection__user_constraint | connections_invested_available_coefficient
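To make the structure concrete, the following sketch (illustrative code only, not SpineOpt's implementation) shows how such a constraint is assembled: a list of (coefficient, variable value) terms forming the left-hand side, a sense, and a right_hand_side:

```python
def user_constraint_holds(terms, constraint_sense, right_hand_side):
    """terms: iterable of (coefficient, variable_value) pairs summed into the
    left-hand side; constraint_sense is one of '==', '>=', '<='."""
    lhs = sum(coefficient * value for coefficient, value in terms)
    if constraint_sense == "==":
        return lhs == right_hand_side
    if constraint_sense == ">=":
        return lhs >= right_hand_side
    if constraint_sense == "<=":
        return lhs <= right_hand_side
    raise ValueError(f"unknown sense: {constraint_sense}")

# e.g. 1.0 * unit_flow + 0.5 * units_on <= 150
print(user_constraint_holds([(1.0, 120.0), (0.5, 2)], "<=", 150))  # True
```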

Object Classes

commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

Related Parameters: commodity_lodf_tolerance, commodity_physics_duration, commodity_physics, commodity_ptdf_threshold, is_active, mp_min_res_gen_to_demand_ratio_slack_penalty and mp_min_res_gen_to_demand_ratio

Related Relationship Classes: node__commodity and unit__commodity

connection

A transfer of commodities between nodes. E.g. electricity line, gas pipeline...

Related Parameters: benders_starting_connections_invested, candidate_connections, connection_availability_factor, connection_contingency, connection_decommissioning_cost, connection_decommissioning_time, connection_discount_rate_technology_specific, connection_investment_cost, connection_investment_econ_lifetime, connection_investment_lifetime_sense, connection_investment_tech_lifetime, connection_investment_variable_type, connection_lead_time, connection_monitored, connection_reactance_base, connection_reactance, connection_resistance, connection_type, connections_invested_big_m_mga, connections_invested_mga_weight, connections_invested_mga, fix_connections_invested_available, fix_connections_invested, graph_view_position, has_binary_gas_flow, initial_connections_invested_available, initial_connections_invested, is_active and number_of_connections

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__investment_group, connection__investment_stochastic_structure, connection__investment_temporal_block, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, connection__user_constraint and stage__output__connection

investment_group

A group of investments that need to be done together.

Related Parameters: equal_investments, maximum_capacity_invested_available, maximum_entities_invested_available, minimum_capacity_invested_available and minimum_entities_invested_available

Related Relationship Classes: connection__from_node__investment_group, connection__investment_group, connection__to_node__investment_group, node__investment_group, unit__from_node__investment_group, unit__investment_group and unit__to_node__investment_group

model

An instance of SpineOpt that specifies general parameters such as the temporal horizon.

Related Parameters: big_m, db_lp_solver_options, db_lp_solver, db_mip_solver_options, db_mip_solver, discount_rate, discount_year, duration_unit, is_active, max_gap, max_iterations, max_mga_iterations, max_mga_slack, min_iterations, model_algorithm, model_end, model_start, model_type, roll_forward, use_connection_intact_flow, use_economic_representation, use_milestone_years, use_tight_compact_formulations, window_duration, window_weight, write_lodf_file, write_mps_file and write_ptdf_file

Related Relationship Classes: model__default_investment_stochastic_structure, model__default_investment_temporal_block, model__default_stochastic_structure, model__default_temporal_block and model__report

node

A universal aggregator of commodity flows over units and connections, with storage capabilities.

Related Parameters: balance_type, benders_starting_storages_invested, candidate_storages, demand, downward_reserve, fix_node_pressure, fix_node_state, fix_node_voltage_angle, fix_storages_invested_available, fix_storages_invested, frac_state_loss, fractional_demand, graph_view_position, has_pressure, has_state, has_voltage_angle, initial_node_pressure, initial_node_state, initial_node_voltage_angle, initial_storages_invested_available, initial_storages_invested, is_active, is_non_spinning, is_reserve_node, max_node_pressure, max_voltage_angle, min_capacity_margin_penalty, min_capacity_margin, min_node_pressure, min_voltage_angle, minimum_reserve_activation_time, nodal_balance_sense, node_opf_type, node_slack_penalty, node_state_cap, node_state_min, number_of_storages, state_coeff, storage_decommissioning_cost, storage_decommissioning_time, storage_discount_rate_technology_specific, storage_fom_cost, storage_investment_cost, storage_investment_econ_lifetime, storage_investment_lifetime_sense, storage_investment_tech_lifetime, storage_investment_variable_type, storage_lead_time, storages_invested_big_m_mga, storages_invested_mga_weight, storages_invested_mga, tax_in_unit_flow, tax_net_unit_flow, tax_out_unit_flow and upward_reserve

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, node__commodity, node__investment_group, node__investment_stochastic_structure, node__investment_temporal_block, node__node, node__stochastic_structure, node__temporal_block, node__user_constraint, stage__output__node, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint and unit__to_node

output

A variable name from SpineOpt whose value can be included in a report.

Related Parameters: is_active and output_resolution

Related Relationship Classes: report__output, stage__output__connection, stage__output__node and stage__output__unit

report

A results report from a particular SpineOpt run, including the value of specific variables.

Related Parameters: is_active and output_db_url

Related Relationship Classes: model__report and report__output

settings

Internal SpineOpt settings. We kindly advise not to mess with this one.

Related Parameters: version

stage

An additional stage in the optimisation problem (EXPERIMENTAL)

Related Parameters: is_active and stage_scenario

Related Relationship Classes: stage__child_stage, stage__output__connection, stage__output__node and stage__output__unit

stochastic_scenario

A scenario for stochastic optimisation in SpineOpt.

Related Parameters: is_active

Related Relationship Classes: parent_stochastic_scenario__child_stochastic_scenario and stochastic_structure__stochastic_scenario

stochastic_structure

A group of stochastic scenarios that represent a structure.

Related Parameters: is_active

Related Relationship Classes: connection__investment_stochastic_structure, model__default_investment_stochastic_structure, model__default_stochastic_structure, node__investment_stochastic_structure, node__stochastic_structure, stochastic_structure__stochastic_scenario, unit__investment_stochastic_structure and units_on__stochastic_structure

temporal_block

A length of time with a particular resolution.

Related Parameters: block_end, block_start, is_active, representative_periods_mapping, resolution and weight

Related Relationship Classes: connection__investment_temporal_block, model__default_investment_temporal_block, model__default_temporal_block, node__investment_temporal_block, node__temporal_block, unit__investment_temporal_block and units_on__temporal_block

unit

A conversion of one or many commodities between nodes.

Related Parameters: benders_starting_units_invested, candidate_units, curtailment_cost, fix_units_invested_available, fix_units_invested, fix_units_on, fix_units_out_of_service, fom_cost, graph_view_position, initial_units_invested_available, initial_units_invested, initial_units_on, initial_units_out_of_service, is_active, is_renewable, min_down_time, min_up_time, number_of_units, online_variable_type, outage_variable_type, scheduled_outage_duration, shut_down_cost, start_up_cost, unit_availability_factor, unit_decommissioning_cost, unit_decommissioning_time, unit_discount_rate_technology_specific, unit_investment_cost, unit_investment_econ_lifetime, unit_investment_lifetime_sense, unit_investment_tech_lifetime, unit_investment_variable_type, unit_lead_time, units_invested_big_m_mga, units_invested_mga_weight, units_invested_mga, units_on_cost, units_on_non_anticipativity_margin, units_on_non_anticipativity_time and units_unavailable

Related Relationship Classes: stage__output__unit, unit__commodity, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__investment_group, unit__investment_stochastic_structure, unit__investment_temporal_block, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint, unit__to_node, unit__user_constraint, units_on__stochastic_structure and units_on__temporal_block

user_constraint

A generic data-driven custom constraint.

Related Parameters: constraint_sense, is_active, right_hand_side and user_constraint_slack_penalty

Related Relationship Classes: connection__from_node__user_constraint, connection__to_node__user_constraint, connection__user_constraint, node__user_constraint, unit__from_node__user_constraint, unit__to_node__user_constraint and unit__user_constraint

+Object Classes · SpineOpt.jl

Object Classes

commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

Related Parameters: commodity_lodf_tolerance, commodity_physics_duration, commodity_physics, commodity_ptdf_threshold, is_active, mp_min_res_gen_to_demand_ratio_slack_penalty and mp_min_res_gen_to_demand_ratio

Related Relationship Classes: node__commodity and unit__commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

connection

A transfer of commodities between nodes. E.g. electricity line, gas pipeline...

Related Parameters: benders_starting_connections_invested, candidate_connections, connection_availability_factor, connection_contingency, connection_decommissioning_cost, connection_decommissioning_time, connection_discount_rate_technology_specific, connection_investment_cost, connection_investment_econ_lifetime, connection_investment_lifetime_sense, connection_investment_tech_lifetime, connection_investment_variable_type, connection_lead_time, connection_monitored, connection_reactance_base, connection_reactance, connection_resistance, connection_type, connections_invested_big_m_mga, connections_invested_mga_weight, connections_invested_mga, fix_connections_invested_available, fix_connections_invested, graph_view_position, has_binary_gas_flow, initial_connections_invested_available, initial_connections_invested, is_active and number_of_connections

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__investment_group, connection__investment_stochastic_structure, connection__investment_temporal_block, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, connection__user_constraint and stage__output__connection

investment_group

A group of investments that need to be done together.

Related Parameters: equal_investments, maximum_capacity_invested_available, maximum_entities_invested_available, minimum_capacity_invested_available and minimum_entities_invested_available

Related Relationship Classes: connection__from_node__investment_group, connection__investment_group, connection__to_node__investment_group, node__investment_group, unit__from_node__investment_group, unit__investment_group and unit__to_node__investment_group

model

An instance of SpineOpt that specifies general parameters such as the temporal horizon.

Related Parameters: big_m, db_lp_solver_options, db_lp_solver, db_mip_solver_options, db_mip_solver, discount_rate, discount_year, duration_unit, is_active, max_gap, max_iterations, max_mga_iterations, max_mga_slack, min_iterations, model_algorithm, model_end, model_start, model_type, roll_forward, use_connection_intact_flow, use_economic_representation, use_milestone_years, use_tight_compact_formulations, window_duration, window_weight, write_lodf_file, write_mps_file and write_ptdf_file

Related Relationship Classes: model__default_investment_stochastic_structure, model__default_investment_temporal_block, model__default_stochastic_structure, model__default_temporal_block and model__report

node

A universal aggregator of commodity flows over units and connections, with storage capabilities.

Related Parameters: balance_type, benders_starting_storages_invested, candidate_storages, demand, downward_reserve, fix_node_pressure, fix_node_state, fix_node_voltage_angle, fix_storages_invested_available, fix_storages_invested, frac_state_loss, fractional_demand, graph_view_position, has_pressure, has_state, has_voltage_angle, initial_node_pressure, initial_node_state, initial_node_voltage_angle, initial_storages_invested_available, initial_storages_invested, is_active, is_non_spinning, is_reserve_node, max_node_pressure, max_voltage_angle, min_capacity_margin_penalty, min_capacity_margin, min_node_pressure, min_voltage_angle, minimum_reserve_activation_time, nodal_balance_sense, node_opf_type, node_slack_penalty, node_state_cap, node_state_min, number_of_storages, state_coeff, storage_decommissioning_cost, storage_decommissioning_time, storage_discount_rate_technology_specific, storage_fom_cost, storage_investment_cost, storage_investment_econ_lifetime, storage_investment_lifetime_sense, storage_investment_tech_lifetime, storage_investment_variable_type, storage_lead_time, storages_invested_big_m_mga, storages_invested_mga_weight, storages_invested_mga, tax_in_unit_flow, tax_net_unit_flow, tax_out_unit_flow and upward_reserve

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, node__commodity, node__investment_group, node__investment_stochastic_structure, node__investment_temporal_block, node__node, node__stochastic_structure, node__temporal_block, node__user_constraint, stage__output__node, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint and unit__to_node

output

A variable name from SpineOpt whose value can be included in a report.

Related Parameters: is_active and output_resolution

Related Relationship Classes: report__output, stage__output__connection, stage__output__node and stage__output__unit

report

A results report from a particular SpineOpt run, including the value of specific variables.

Related Parameters: is_active and output_db_url

Related Relationship Classes: model__report and report__output

settings

Internal SpineOpt settings. We kindly advise not to mess with this one.

Related Parameters: version

stage

An additional stage in the optimisation problem (EXPERIMENTAL)

Related Parameters: is_active and stage_scenario

Related Relationship Classes: stage__child_stage, stage__output__connection, stage__output__node and stage__output__unit

stochastic_scenario

A scenario for stochastic optimisation in SpineOpt.

Related Parameters: is_active

Related Relationship Classes: parent_stochastic_scenario__child_stochastic_scenario and stochastic_structure__stochastic_scenario

stochastic_structure

A group of stochastic scenarios that represent a structure.

Related Parameters: is_active

Related Relationship Classes: connection__investment_stochastic_structure, model__default_investment_stochastic_structure, model__default_stochastic_structure, node__investment_stochastic_structure, node__stochastic_structure, stochastic_structure__stochastic_scenario, unit__investment_stochastic_structure and units_on__stochastic_structure

temporal_block

A length of time with a particular resolution.

Related Parameters: block_end, block_start, is_active, representative_periods_mapping, resolution and weight

Related Relationship Classes: connection__investment_temporal_block, model__default_investment_temporal_block, model__default_temporal_block, node__investment_temporal_block, node__temporal_block, unit__investment_temporal_block and units_on__temporal_block

unit

A conversion of one or many commodities between nodes.

Related Parameters: benders_starting_units_invested, candidate_units, curtailment_cost, fix_units_invested_available, fix_units_invested, fix_units_on, fix_units_out_of_service, fom_cost, graph_view_position, initial_units_invested_available, initial_units_invested, initial_units_on, initial_units_out_of_service, is_active, is_renewable, min_down_time, min_up_time, number_of_units, online_variable_type, outage_variable_type, scheduled_outage_duration, shut_down_cost, start_up_cost, unit_availability_factor, unit_decommissioning_cost, unit_decommissioning_time, unit_discount_rate_technology_specific, unit_investment_cost, unit_investment_econ_lifetime, unit_investment_lifetime_sense, unit_investment_tech_lifetime, unit_investment_variable_type, unit_lead_time, units_invested_big_m_mga, units_invested_mga_weight, units_invested_mga, units_on_cost, units_on_non_anticipativity_margin, units_on_non_anticipativity_time and units_unavailable

Related Relationship Classes: stage__output__unit, unit__commodity, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__investment_group, unit__investment_stochastic_structure, unit__investment_temporal_block, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint, unit__to_node, unit__user_constraint, units_on__stochastic_structure and units_on__temporal_block

user_constraint

A generic data-driven custom constraint.

Related Parameters: constraint_sense, is_active, right_hand_side and user_constraint_slack_penalty

Related Relationship Classes: connection__from_node__user_constraint, connection__to_node__user_constraint, connection__user_constraint, node__user_constraint, unit__from_node__user_constraint, unit__to_node__user_constraint and unit__user_constraint

Parameter Value Lists

balance_type_list

Possible values: balance_type_group, balance_type_node and balance_type_none

boolean_value_list

Possible values: false and true

commodity_physics_list

Possible values: commodity_physics_lodf, commodity_physics_none and commodity_physics_ptdf

connection_investment_variable_type_list

Possible values: connection_investment_variable_type_continuous and connection_investment_variable_type_integer

connection_type_list

Possible values: connection_type_lossless_bidirectional and connection_type_normal

constraint_sense_list

Possible values: <=, == and >=

db_lp_solver_list

Possible values: CDCS.jl, CDDLib.jl, COSMO.jl, CPLEX.jl, CSDP.jl, Clp.jl, ECOS.jl, GLPK.jl, Gurobi.jl, HiGHS.jl, Hypatia.jl, Ipopt.jl, KNITRO.jl, MadNLP.jl, MosekTools.jl, NLopt.jl, OSQP.jl, ProxSDP.jl, SCIP.jl, SCS.jl, SDPA.jl, SDPNAL.jl, SDPT3.jl, SeDuMi.jl and Xpress.jl

db_mip_solver_list

Possible values: CPLEX.jl, Cbc.jl, GLPK.jl, Gurobi.jl, HiGHS.jl, Juniper.jl, KNITRO.jl, MosekTools.jl, SCIP.jl and Xpress.jl

duration_unit_list

Possible values: hour and minute

model_algorithm_list

Possible values: basic_algorithm and mga_algorithm

model_type_list

Possible values: spineopt_benders, spineopt_other and spineopt_standard

node_opf_type_list

Possible values: node_opf_type_normal and node_opf_type_reference

storage_investment_variable_type_list

Possible values: storage_investment_variable_type_continuous and storage_investment_variable_type_integer

unit_investment_variable_type_list

Possible values: unit_investment_variable_type_continuous and unit_investment_variable_type_integer

unit_online_variable_type_list

Possible values: unit_online_variable_type_binary, unit_online_variable_type_integer, unit_online_variable_type_linear and unit_online_variable_type_none

write_mps_file_list

Possible values: write_mps_always, write_mps_never and write_mps_on_no_solve

Parameters

balance_type

A selector for how the nodal_balance constraint should be handled.

Default value: balance_type_node

Uses Parameter Value Lists: balance_type_list

Related Object Classes: node

benders_starting_connections_invested

Fixes the number of connections invested during the first Benders iteration

Default value: nothing

Related Object Classes: connection

benders_starting_storages_invested

Fixes the number of storages invested during the first Benders iteration

Default value: nothing

Related Object Classes: node

benders_starting_units_invested

Fixes the number of units invested during the first Benders iteration

Default value: nothing

Related Object Classes: unit

big_m

Sufficiently large number used for the linearization of bilinear terms, e.g. to enforce bidirectional flow for gas pipelines.

Default value: 1000000

Related Object Classes: model
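
The big-M linearization described above can be sketched as follows. This is an illustrative check of the constraint logic, not SpineOpt code; the names `feasible`, `flow_forward` and `flow_backward` are hypothetical.

```python
M = 1_000_000  # the big_m default shown above

def feasible(flow_forward, flow_backward, d):
    """With binary d (1 = forward, 0 = backward), the linearized constraints
    flow_forward <= M*d and flow_backward <= M*(1-d) allow flow in at most
    one direction at a time."""
    return flow_forward <= M * d and flow_backward <= M * (1 - d)

# Forward-only flow is feasible with d = 1; simultaneous flow in both
# directions violates one of the two constraints for any choice of d.
```

M must be larger than any flow that can actually occur, but choosing it needlessly large tends to weaken the LP relaxation.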

block_end

The end time for the temporal_block. Can be given either as a DateTime for a static end point, or as a Duration for an end point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block
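
As a sketch, the two alternative forms might be serialized like this. The Duration form matches the `{"type": "duration", "data": ...}` defaults shown elsewhere on this page; the `date_time` type tag for the static form is an assumption following the same Spine parameter-value convention, so verify it against your database.

```python
import json

# block_end relative to the start of the current optimization (Duration).
block_end_relative = {"type": "duration", "data": "1D"}

# block_end as a static end point (DateTime; assumed "date_time" type tag).
block_end_static = {"type": "date_time", "data": "2030-01-01T00:00:00"}

print(json.dumps(block_end_relative))
```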

block_start

The start time for the temporal_block. Can be given either as a DateTime for a static start point, or as a Duration for a start point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

candidate_connections

The number of connections that may be invested in

Default value: nothing

Related Object Classes: connection

candidate_storages

Determines the maximum number of new storages which may be invested in

Default value: nothing

Related Object Classes: node

candidate_units

Number of units which may be additionally constructed

Default value: nothing

Related Object Classes: unit

commodity_lodf_tolerance

The minimum absolute value of the line outage distribution factor (LODF) that is considered meaningful.

Default value: 0.1

Related Object Classes: commodity

commodity_physics

Defines if the commodity follows lodf or ptdf physics.

Default value: commodity_physics_none

Uses Parameter Value Lists: commodity_physics_list

Related Object Classes: commodity

commodity_physics_duration

For how long the commodity_physics should apply relative to the start of the window.

Default value: nothing

Related Object Classes: commodity

commodity_ptdf_threshold

The minimum absolute value of the power transfer distribution factor (PTDF) that is considered meaningful.

Default value: 0.001

Related Object Classes: commodity

compression_factor

The compression factor establishes a compression from an origin node to a receiving node, which are connected through a connection. The first node corresponds to the origin node, the second to the (compressed) destination node. Typically the value is >=1.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_availability_factor

Availability of the connection, acting as a multiplier on its connection_capacity. Typically between 0-1.

Default value: 1.0

Related Object Classes: connection

connection_capacity

  • For connection__from_node: Limits the connection_flow variable from the from_node. from_node can be a group of nodes, in which case the sum of the connection_flow is constrained.
  • For connection__to_node: Limits the connection_flow variable to the to_node. to_node can be a group of nodes, in which case the sum of the connection_flow is constrained.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_contingency

A boolean flag for defining a contingency connection.

Default value: nothing

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_conv_cap_to_flow

  • For connection__from_node: Optional coefficient for connection_capacity unit conversions in the case that the connection_capacity value is incompatible with the desired connection_flow units.
  • For connection__to_node: Optional coefficient for connection_capacity unit conversions in the case the connection_capacity value is incompatible with the desired connection_flow units.

Default value: 1.0

Related Relationship Classes: connection__from_node and connection__to_node

connection_decommissioning_cost

Costs associated with decommissioning a connection. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: connection

connection_decommissioning_time

A connection's decommissioning time, i.e., the time between the moment at which a connection decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: connection

connection_discount_rate_technology_specific

Connection-specific discount rate used to calculate the connection's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: connection

connection_emergency_capacity

  • For connection__from_node: Post contingency flow capacity of a connection. Sometimes referred to as emergency rating
  • For connection__to_node: The maximum post-contingency flow on a monitored connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_coefficient

  • For connection__from_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the from direction
  • For connection__to_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the to direction

Default value: 0.0

Related Relationship Classes: connection__from_node__user_constraint and connection__to_node__user_constraint

connection_flow_cost

Variable costs of a flow through a connection. E.g. EUR/MWh of energy throughput.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_delay

Delays the connection_flows associated with the latter node with respect to the connection_flows associated with the first node.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Relationship Classes: connection__node__node

connection_flow_non_anticipativity_margin

Margin by which connection_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_non_anticipativity_time

Period of time where the value of the connection_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_margin

Margin by which connection_intact_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_time

Period of time where the value of the connection_intact_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_investment_cost

The per unit investment cost for the connection over the connection_investment_tech_lifetime

Default value: nothing

Related Object Classes: connection

connection_investment_econ_lifetime

Determines the minimum economical investment lifetime of a connection.

Default value: nothing

Related Object Classes: connection

connection_investment_lifetime_sense

A selector for connection_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: connection

connection_investment_tech_lifetime

Determines the maximum technical lifetime of a connection. Once invested, it remains in service for this long

Default value: nothing

Related Object Classes: connection

connection_investment_variable_type

Determines whether the investment variable is integer (connection_investment_variable_type_integer) or continuous (connection_investment_variable_type_continuous).

Default value: connection_investment_variable_type_integer

Uses Parameter Value Lists: connection_investment_variable_type_list

Related Object Classes: connection

connection_lead_time

A connection's lead time, i.e., the time between the moment at which a connection investment decision is taken, and the moment at which the connection investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: connection

connection_linepack_constant

The linepack constant is a property of gas pipelines and relates the linepack to the pressure of the adjacent nodes.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_monitored

A boolean flag for defining a monitored connection.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_reactance

The per unit reactance of a connection.

Default value: nothing

Related Object Classes: connection

connection_reactance_base

If the reactance is given in a p.u. different from the standard unit used (e.g. p.u. = 100 MVA), connection_reactance_base can be set to perform this conversion (e.g. *100).

Default value: 1

Related Object Classes: connection
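
A minimal sketch of the conversion, assuming connection_reactance_base acts as a multiplicative factor as the "(e.g. *100)" hint suggests; the variable names are illustrative, not SpineOpt identifiers.

```python
connection_reactance = 0.0015    # reactance in the data's own p.u. system
connection_reactance_base = 100  # conversion factor into the model's base

# Reactance rescaled into the base the model expects.
effective_reactance = connection_reactance * connection_reactance_base
```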

connection_resistance

The per unit resistance of a connection.

Default value: nothing

Related Object Classes: connection

connection_type

A selector between a normal and a lossless bidirectional connection.

Default value: connection_type_normal

Uses Parameter Value Lists: connection_type_list

Related Object Classes: connection

connections_invested_available_coefficient

Coefficient of connections_invested_available in the specific user_constraint.

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For connections_invested_mga, an appropriate big_m_mga would be twice the candidate connections.

Default value: nothing

Related Object Classes: connection

connections_invested_coefficient

Coefficient of connections_invested in the specific user_constraint.

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_mga

Defines whether a certain variable (here: connections_invested) is considered in the maximal-differences part of the MGA objective.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connections_invested_mga_weight

Used to scale MGA variables. For the weighted-sum MGA method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: connection

constraint_sense

A selector for the sense of the user_constraint.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: user_constraint

curtailment_cost

Costs for curtailing generation. Essentially, accrues costs whenever unit_flow is not operating at its maximum available capacity. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: unit

cyclic_condition

If the cyclic condition is set to true for a storage node, the node_state at the end of the optimization window has to be larger than or equal to the initial storage state.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: node__temporal_block

db_lp_solver

Solver for LP problems. Solver package must be added and pre-configured in Julia. Overrides the lp_solver RunSpineOpt kwarg.

Default value: HiGHS.jl

Uses Parameter Value Lists: db_lp_solver_list

Related Object Classes: model

db_lp_solver_options

Map parameter containing LP solver option name-value pairs. See the solver documentation for supported solver options.

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Clp.jl", Dict{String, Any}("data" => Any[Any["LogLevel", 0.0]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model
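
The nested Map default above is easier to read as plain JSON. The sketch below mirrors its structure (solver name mapping to option name-value pairs); the option names themselves come from each solver's own documentation, not from SpineOpt.

```python
import json

# Solver name -> {option name: option value}, mirroring the default shown.
db_lp_solver_options = {
    "HiGHS.jl": {"presolve": "on", "time_limit": 300.01},
    "Clp.jl": {"LogLevel": 0.0},
}

print(json.dumps(db_lp_solver_options, indent=2))
```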

db_mip_solver

Solver for MIP problems. Solver package must be added and pre-configured in Julia. Overrides mip_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_mip_solver_list

Related Object Classes: model

db_mip_solver_options

Map parameter containing MIP solver option name-value pairs. See the solver documentation for supported solver options.

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["mip_rel_gap", 0.01], Any["threads", 0.0], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Cbc.jl", Dict{String, Any}("data" => Any[Any["ratioGap", 0.01], Any["logLevel", 0.0]], "type" => "map", "index_type" => "str")], Any["CPLEX.jl", Dict{String, Any}("data" => Any[Any["CPX_PARAM_EPGAP", 0.01]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model

demand

Demand for the commodity of a node. Energy gains can be represented using negative demand.

Default value: 0.0

Related Object Classes: node

demand_coefficient

Coefficient of the specified node's demand in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

diff_coeff

Commodity diffusion coefficient between two nodes. Effectively, denotes the diffusion power per unit of state from the first node to the second.

Default value: 0.0

Related Relationship Classes: node__node
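
A hedged reading of the definition above: the diffusion power from the first node to the second is proportional to the first node's state. Variable names are illustrative only.

```python
diff_coeff = 0.05     # diffusion power per unit of state (node 1 -> node 2)
node1_state = 200.0   # state of the first node

# Diffusion power flowing from node 1 towards node 2.
diffusion_power = diff_coeff * node1_state
```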

discount_rate

The discount rate used for the discounting of future cashflows

Default value: 0

Related Object Classes: model

discount_year

The year to which all cashflows are discounted.

Default value: nothing

Related Object Classes: model

downward_reserve

Identifier for nodes providing downward reserves

Default value: false

Related Object Classes: node

duration_unit

Defines the base temporal unit of the model. Currently supported values are either an hour or a minute.

Default value: hour

Uses Parameter Value Lists: duration_unit_list

Related Object Classes: model

equal_investments

Whether all entities in the group must have the same investment decision.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: investment_group

fix_binary_gas_connection_flow

Fix the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_flow

Fix the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_intact_flow

Fix the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connections_invested

Setting a value fixes the connections_invested variable accordingly

Default value: nothing

Related Object Classes: connection

fix_connections_invested_available

Setting a value fixes the connections_invested_available variable accordingly

Default value: nothing

Related Object Classes: connection

fix_node_pressure

Fixes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

fix_node_state

Fixes the corresponding node_state variable to the provided value. Can be used for e.g. fixing boundary conditions.

Default value: nothing

Related Object Classes: node

fix_node_voltage_angle

Fixes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

fix_nonspin_units_shut_down

Fix the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

fix_nonspin_units_started_up

Fix the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_ratio_in_in_unit_flow

Fix the ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_in_out_unit_flow

Fix the ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

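
A typical use is encoding a fixed conversion efficiency. A worked sketch with invented numbers, assuming the ratio is applied as input = ratio * output and ignoring the optional fix_units_on_coefficient_in_out term:

```python
# A unit converting fuel (first node) into electricity (second node).
efficiency = 0.5                              # invented unit efficiency
fix_ratio_in_out_unit_flow = 1 / efficiency   # input per unit of output

output_flow = 100.0                                    # MW to the second node
input_flow = fix_ratio_in_out_unit_flow * output_flow  # MW from the first node
```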

fix_ratio_out_in_connection_flow

Fix the ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

fix_ratio_out_in_unit_flow

Fix the ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_out_unit_flow

Fix the ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_storages_invested

Used to fix the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

fix_storages_invested_available

Used to fix the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

fix_unit_flow

Fix the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_unit_flow_op

Fix the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_units_invested

Fix the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

fix_units_invested_available

Fix the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

fix_units_on

Fix the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

fix_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the fix_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the fix_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the fix_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the fix_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_out_of_service

Fix the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

fixed_pressure_constant_0

Fixed pressure points for pipelines for the outer approximation of the Weymouth equation. The direction of flow is from the first node in the relationship to the second.

Default value: nothing

Related Relationship Classes: connection__node__node

fixed_pressure_constant_1

Fixed pressure points for pipelines for the outer approximation of the Weymouth equation. The direction of flow is from the first node in the relationship to the second.

Default value: nothing

Related Relationship Classes: connection__node__node

fom_cost

Fixed operation and maintenance costs of a unit. Essentially, a cost coefficient on the existing units (incl. number_of_units and units_invested_available) and unit_capacity parameters. Currently, the value needs to be defined per duration unit (i.e. per hour), e.g. EUR/MW/h

Default value: nothing

Related Object Classes: unit

frac_state_loss

Self-discharge coefficient for node_state variables. Effectively, represents the loss power per unit of state.

Default value: 0.0

Related Object Classes: node

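
As loss power per unit of state, the implied self-discharge is again a product. A worked sketch with invented numbers:

```python
# Self-discharge of a storage node (numbers invented for illustration).
frac_state_loss = 0.01   # loss per hour, per unit of state
node_state = 500.0       # MWh currently stored

loss_power = frac_state_loss * node_state  # MW lost to self-discharge
```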

fractional_demand

The fraction of a node group's demand applied for the node in question.

Default value: 0.0

Related Object Classes: node

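
A worked sketch with invented numbers, assuming the node's demand is its fraction of the group's demand:

```python
group_demand = 120.0      # MW, demand defined on the node group (invented)
fractional_demand = 0.25  # this node's share of the group demand

node_demand = fractional_demand * group_demand  # MW applied to this node
```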

fuel_cost

Variable fuel costs that can be attributed to a unit_flow, e.g. EUR/MWh

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

graph_view_position

An optional setting for tweaking the position of the different elements when drawing them via Spine Toolbox Graph View.

Default value: nothing

Related Object Classes: connection, node and unit

Related Relationship Classes: connection__from_node, connection__to_node, unit__from_node__user_constraint, unit__from_node, unit__to_node__user_constraint and unit__to_node

has_binary_gas_flow

This parameter needs to be set to true in order to represent bidirectional pressure-driven gas transfer.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

has_pressure

A boolean flag for whether a node has a node_pressure variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_state

A boolean flag for whether a node has a node_state variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_voltage_angle

A boolean flag for whether a node has a node_voltage_angle variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

initial_binary_gas_connection_flow

Initialize the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_flow

Initialize the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_intact_flow

Initialize the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connections_invested

Setting a value fixes the connections_invested variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_connections_invested_available

Setting a value fixes the connections_invested_available variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_node_pressure

Initializes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

initial_node_state

Initializes the corresponding node_state variable to the provided value.

Default value: nothing

Related Object Classes: node

initial_node_voltage_angle

Initializes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

initial_nonspin_units_shut_down

Initialize the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

initial_nonspin_units_started_up

Initialize the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_storages_invested

Used to initialize the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

initial_storages_invested_available

Used to initialize the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

initial_unit_flow

Initialize the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_unit_flow_op

Initialize the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_units_invested

Initialize the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

initial_units_invested_available

Initialize the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

initial_units_on

Initialize the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

initial_units_out_of_service

Initialize the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

is_active

If false, the object is excluded from the model if the tool filter object activity control is specified

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: commodity, connection, model, node, output, report, stage, stochastic_scenario, stochastic_structure, temporal_block, unit and user_constraint

Related Relationship Classes: node__stochastic_structure, node__temporal_block, unit__from_node, unit__to_node, units_on__stochastic_structure and units_on__temporal_block

is_non_spinning

A boolean flag for whether a node is acting as a non-spinning reserve

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

is_renewable

Whether the unit is renewable - used in the minimum renewable generation constraint within the Benders master problem

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

is_reserve_node

A boolean flag for whether a node is acting as a reserve_node

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

max_cum_in_unit_flow_bound

Set a maximum cumulative upper bound for a unit_flow

Default value: nothing

Related Relationship Classes: unit__commodity

max_gap

Specifies the maximum optimality gap for the model. Currently only used for the master problem within a decomposed structure

Default value: 0.05

Related Object Classes: model

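
Within a decomposed (Benders) run, the master problem can stop iterating once the relative gap between its bounds is within max_gap. A sketch assuming the conventional relative-gap definition (bound values are invented):

```python
max_gap = 0.05         # the parameter's meaning, per the description above
upper_bound = 1020.0   # invented objective bounds from a master iteration
lower_bound = 1000.0

rel_gap = (upper_bound - lower_bound) / upper_bound
converged = rel_gap <= max_gap  # stop iterating once this holds
```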

max_iterations

Specifies the maximum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 10.0

Related Object Classes: model

max_mga_iterations

Define the number of mga iterations, i.e. how many alternative solutions will be generated.

Default value: nothing

Related Object Classes: model

max_mga_slack

Defines the maximum slack by which the alternative solution may differ from the original solution (e.g. 5% more than initial objective function value)

Default value: 0.05

Related Object Classes: model

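
A worked sketch with invented numbers, assuming the slack is applied as a relative tolerance on the original objective value:

```python
max_mga_slack = 0.05         # e.g. 5% worse than the initial objective
original_objective = 1000.0  # invented

# Alternative MGA solutions must stay below this objective bound.
mga_objective_bound = (1 + max_mga_slack) * original_objective
```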

max_node_pressure

Maximum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

max_ratio_in_in_unit_flow

Maximum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_in_out_unit_flow

Maximum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_in_connection_flow

Maximum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

max_ratio_out_in_unit_flow

Maximum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_out_unit_flow

Maximum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_total_cumulated_unit_flow_from_node

Bound on the maximum cumulated flows of a unit group from a node group, e.g. maximum consumption of a certain commodity.

Default value: nothing

Related Relationship Classes: unit__from_node

max_total_cumulated_unit_flow_to_node

Bound on the maximum cumulated flows of a unit group to a node group, e.g. total GHG emissions.

Default value: nothing

Related Relationship Classes: unit__to_node

max_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the max_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the max_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the max_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the max_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_voltage_angle

Maximum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

maximum_capacity_invested_available

Upper bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

maximum_entities_invested_available

Upper bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

min_capacity_margin

Minimum capacity margin applying to the node or node_group

Default value: nothing

Related Object Classes: node

min_capacity_margin_penalty

Penalty applied to violations of the min_capacity_margin constraint of the node or node_group

Default value: nothing

Related Object Classes: node

min_down_time

Minimum downtime of a unit after it shuts down.

Default value: nothing

Related Object Classes: unit

min_iterations

Specifies the minimum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 1.0

Related Object Classes: model

min_node_pressure

Minimum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

min_ratio_in_in_unit_flow

Minimum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_in_out_unit_flow

Minimum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_in_connection_flow

Minimum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

min_ratio_out_in_unit_flow

Minimum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_out_unit_flow

Minimum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_total_cumulated_unit_flow_from_node

Bound on the minimum cumulated flows of a unit group from a node group.

Default value: nothing

Related Relationship Classes: unit__from_node

min_total_cumulated_unit_flow_to_node

Bound on the minimum cumulated flows of a unit group to a node group, e.g. total renewable production.

Default value: nothing

Related Relationship Classes: unit__to_node

min_unit_flow

Set lower bound of the unit_flow variable.

Default value: 0.0

Related Relationship Classes: unit__from_node and unit__to_node

min_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the min_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the min_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the min_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the min_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_up_time

Minimum uptime of a unit after it starts up.

Default value: nothing

Related Object Classes: unit

min_voltage_angle

Minimum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

minimum_capacity_invested_available

Lower bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_entities_invested_available

Lower bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_operating_point

Minimum level for the unit_flow relative to the units_on online capacity.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

minimum_reserve_activation_time

Duration a certain reserve product needs to be online/available

Default value: nothing

Related Object Classes: node

model_algorithm

The algorithm to run (e.g., basic, MGA)

Default value: basic_algorithm

Uses Parameter Value Lists: model_algorithm_list

Related Object Classes: model

model_end

Defines the last timestamp to be modelled. Rolling optimization terminates after passing this point.

Default value: Dict{String, Any}("data" => "2000-01-02T00:00:00", "type" => "date_time")

Related Object Classes: model

model_start

Defines the first timestamp to be modelled. Relative temporal_blocks refer to this value for their start and end.

Default value: Dict{String, Any}("data" => "2000-01-01T00:00:00", "type" => "date_time")

Related Object Classes: model

model_type

The model type, which determines the solution method (e.g. standard, Benders)

Default value: spineopt_standard

Uses Parameter Value Lists: model_type_list

Related Object Classes: model

mp_min_res_gen_to_demand_ratio

Minimum ratio of renewable generation to demand for this commodity - used in the minimum renewable generation constraint within the Benders master problem

Default value: nothing

Related Object Classes: commodity

mp_min_res_gen_to_demand_ratio_slack_penalty

Penalty for violating the minimum renewable generation to demand ratio.

Default value: nothing

Related Object Classes: commodity

nodal_balance_sense

A selector for nodal_balance constraint sense.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

node_opf_type

A selector for the reference node (slack bus) when PTDF-based DC load-flow is enabled.

Default value: node_opf_type_normal

Uses Parameter Value Lists: node_opf_type_list

Related Object Classes: node

node_slack_penalty

A penalty cost for node_slack_pos and node_slack_neg variables. The slack variables won't be included in the model unless there's a cost defined for them.

Default value: nothing

Related Object Classes: node

node_state_cap

The maximum permitted value for a node_state variable.

Default value: nothing

Related Object Classes: node

node_state_coefficient

Coefficient of the specified node's state variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

node_state_min

The minimum permitted value for a node_state variable.

Default value: 0.0

Related Object Classes: node

number_of_connections

Denotes the number of 'sub connections' aggregated to form the modelled connection.

Default value: 1.0

Related Object Classes: connection

number_of_storages

Denotes the number of 'sub storages' aggregated to form the modelled node.

Default value: 1.0

Related Object Classes: node

number_of_units

Denotes the number of 'sub units' aggregated to form the modelled unit.

Default value: 1.0

Related Object Classes: unit

online_variable_type

A selector for how the units_on variable is represented within the model.

Default value: unit_online_variable_type_linear

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

operating_points

  • For unit__from_node: Operating points for piecewise-linear unit efficiency approximations.
  • For unit__to_node: Decomposes the flow variable into a number of separate operating segment variables. Used in conjunction with unit_incremental_heat_rate and/or user_constraints

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

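
A sketch of the segment decomposition, assuming operating_points is an increasing array of fractions of the unit's capacity (names and numbers are invented):

```python
unit_capacity = 200.0          # MW, invented
operating_points = [0.5, 1.0]  # fractions of capacity, invented

# Each unit_flow_op segment spans from the previous point to this one.
segment_caps = []
prev = 0.0
for p in operating_points:
    segment_caps.append((p - prev) * unit_capacity)
    prev = p
```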

ordered_unit_flow_op

Defines whether the segments of this unit flow are ordered as per the rank of their operating points.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: unit__from_node and unit__to_node

outage_variable_type

Determines whether the outage variable is integer, continuous, or none (no optimisation of maintenance outages).

Default value: unit_online_variable_type_none

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

output_db_url

Database url for SpineOpt output.

Default value: nothing

Related Object Classes: report

output_resolution

  • For output: Temporal resolution of the output variables associated with this output.
  • For stage__output__connection, stage__output__node, stage__output__unit: A duration or array of durations indicating the points in time where the output of this stage should be fixed in the children. If not specified, then the output is fixed at the end of each child's rolling window (EXPERIMENTAL).

Default value: nothing

Related Object Classes: output

Related Relationship Classes: stage__output__connection, stage__output__node and stage__output__unit

overwrite_results_on_rolling

Whether or not results from further windows should overwrite results from previous ones.

Default value: true

Related Relationship Classes: report__output

ramp_down_limit

Limit the maximum ramp-down rate of an online unit, given as a fraction of the unit_capacity. [ramp_down_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ramp_up_limit

Limit the maximum ramp-up rate of an online unit, given as a fraction of the unit_capacity. [ramp_up_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

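As a numeric illustration of this convention (plain arithmetic, not SpineOpt code; all values below are hypothetical):

```python
# Hypothetical check of the ramp-limit convention: a ramp limit is a
# fraction of unit_capacity per unit of time.
unit_capacity = 400.0   # MW, capacity of a single sub-unit
units_on = 2            # number of online sub-units
ramp_up_limit = 0.2     # fraction of capacity per hour, i.e. 0.2/h
resolution_h = 1.0      # time step length in hours

# Maximum allowed increase of unit_flow between consecutive time steps.
max_ramp_up = ramp_up_limit * unit_capacity * units_on * resolution_h
print(max_ramp_up)  # 160.0 MW per step
```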
representative_periods_mapping

Map from date time to representative temporal block name

Default value: nothing

Related Object Classes: temporal_block

reserve_procurement_cost

Procurement cost for reserves

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

resolution

Temporal resolution of the temporal_block. Essentially, divides the period between block_start and block_end into TimeSlices with the input resolution.

Default value: Dict{String, Any}("data" => "1h", "type" => "duration")

Related Object Classes: temporal_block

right_hand_side

The right-hand side, constant term in a user_constraint. Can be time-dependent and used e.g. for complicated efficiency approximations.

Default value: 0.0

Related Object Classes: user_constraint

roll_forward

Defines how much the model moves ahead in time between solves in a rolling optimization. If null, everything is solved as a single optimization.

Default value: nothing

Related Object Classes: model

scheduled_outage_duration

Specifies the amount of time a unit must be out of service for maintenance as a single block over the course of the optimisation window

Default value: nothing

Related Object Classes: unit

shut_down_cost

Costs of shutting down a 'sub unit', e.g. EUR/shutdown.

Default value: nothing

Related Object Classes: unit

shut_down_limit

Maximum ramp-down during shutdowns

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

stage_scenario

The scenario that this stage should run (EXPERIMENTAL).

Default value: nothing

Related Object Classes: stage

start_up_cost

Costs of starting up a 'sub unit', e.g. EUR/startup.

Default value: nothing

Related Object Classes: unit

start_up_limit

Maximum ramp-up during startups

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

state_coeff

Represents the commodity content of a node_state variable with respect to the unit_flow and connection_flow variables. Essentially, acts as a coefficient on the node_state variable in the node_injection constraint.

Default value: 1.0

Related Object Classes: node

stochastic_scenario_end

A Duration for when a stochastic_scenario ends and its child_stochastic_scenarios start. Values are interpreted relative to the start of the current solve, and if no value is given, the stochastic_scenario is assumed to continue indefinitely.

Default value: nothing

Related Relationship Classes: stochastic_structure__stochastic_scenario

storage_decommissioning_cost

Costs associated with decommissioning a storage. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: node

storage_decommissioning_time

A storage's decommissioning time, i.e., the time between the moment at which a storage decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: node
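Duration-typed defaults such as the one above are stored as a dictionary with a "duration" type tag. A minimal sketch of how such a value could be interpreted (the parser below is illustrative, not the actual Spine implementation, and only handles a few suffixes):

```python
from datetime import timedelta

def parse_duration(value):
    """Interpret a Spine-style duration dict such as
    {"data": "0h", "type": "duration"} as a timedelta (sketch only)."""
    assert value["type"] == "duration"
    data = value["data"]
    number, suffix = int(data[:-1]), data[-1]
    units = {"m": "minutes", "h": "hours", "D": "days"}
    return timedelta(**{units[suffix]: number})

default = {"data": "0h", "type": "duration"}
print(parse_duration(default))  # 0:00:00
```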

storage_discount_rate_technology_specific

Storage-specific discount rate used to calculate the storage's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: node

storage_fom_cost

Fixed operation and maintenance costs of a node. Essentially, a cost coefficient on the number of installed units and node_state_cap parameters. E.g. EUR/MWh

Default value: nothing

Related Object Classes: node

storage_investment_cost

Determines the investment cost per unit state_cap over the investment life of a storage

Default value: nothing

Related Object Classes: node

storage_investment_econ_lifetime

Economic lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_lifetime_sense

A selector for storage_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

storage_investment_tech_lifetime

Maximum technical lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_variable_type

Determines whether the storage investment variable is continuous (usually representing capacity) or integer (representing discrete units invested)

Default value: storage_investment_variable_type_integer

Uses Parameter Value Lists: storage_investment_variable_type_list

Related Object Classes: node

storage_lead_time

A storage's lead time, i.e., the time between the moment at which a storage investment decision is taken, and the moment at which the storage investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: node

storages_invested_available_coefficient

Coefficient of the specified node's storages invested available variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For storages_invested_mga, an appropriate big_m_mga would be twice the candidate storages.

Default value: nothing

Related Object Classes: node

storages_invested_coefficient

Coefficient of the specified node's storage investment variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_mga

Defines whether a certain variable (here: storages_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

storages_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: node

tax_in_unit_flow

Tax costs for incoming unit_flows on this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

tax_net_unit_flow

Tax costs for net incoming and outgoing unit_flows on this node. Incoming flows accrue positive net taxes, and outgoing flows accrue negative net taxes.

Default value: nothing

Related Object Classes: node

tax_out_unit_flow

Tax costs for outgoing unit_flows from this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

unit_availability_factor

Availability of the unit, acting as a multiplier on its unit_capacity. Typically between 0-1.

Default value: 1.0

Related Object Classes: unit

unit_capacity

Maximum unit_flow capacity of a single 'sub_unit' of the unit.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_conv_cap_to_flow

Optional coefficient for unit_capacity unit conversions in the case the unit_capacity value is incompatible with the desired unit_flow units.

Default value: 1.0

Related Relationship Classes: unit__from_node and unit__to_node

unit_decommissioning_cost

Costs associated with decommissioning a unit. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: unit

unit_decommissioning_time

A unit's decommissioning time, i.e., the time between the moment at which a unit decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: unit

unit_discount_rate_technology_specific

Unit-specific discount rate used to calculate the unit's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: unit

unit_flow_coefficient

Coefficient of a unit_flow variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__from_node__user_constraint and unit__to_node__user_constraint

unit_flow_non_anticipativity_margin

Margin by which unit_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_non_anticipativity_time

Period of time where the value of the unit_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node
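The margin and time parameters above work together in a rolling optimization: within the non-anticipativity time, the variable may deviate from the previous window's value by at most the margin. A hypothetical check of that rule (illustrative only, not SpineOpt code):

```python
def within_non_anticipativity_bounds(new_value, previous_value, margin):
    """Illustrative check: during the non-anticipativity time, a variable
    may deviate from the previous window's value by at most the margin."""
    return abs(new_value - previous_value) <= margin

# Previous window fixed unit_flow at 100.0 MW; a 5.0 MW margin applies.
print(within_non_anticipativity_bounds(103.0, 100.0, 5.0))  # True
print(within_non_anticipativity_bounds(110.0, 100.0, 5.0))  # False
```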

unit_investment_cost

Investment cost per 'sub unit' built.

Default value: nothing

Related Object Classes: unit

unit_investment_econ_lifetime

Economic lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_lifetime_sense

A selector for unit_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: unit

unit_investment_tech_lifetime

Maximum technical lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_variable_type

Determines whether investment variable is integer or continuous.

Default value: unit_investment_variable_type_continuous

Uses Parameter Value Lists: unit_investment_variable_type_list

Related Object Classes: unit

unit_lead_time

A unit's lead time, i.e., the time between the moment at which a unit investment decision is taken, and the moment at which the unit investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: unit
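The effect of a lead time can be sketched with simple date arithmetic (hypothetical dates and duration, not SpineOpt code):

```python
from datetime import datetime, timedelta

# Hypothetical investment decision and a one-year lead time.
decision_time = datetime(2030, 1, 1)
unit_lead_time = timedelta(days=365)

# The invested unit becomes operational only after the lead time.
operational_from = decision_time + unit_lead_time
print(operational_from)  # 2031-01-01 00:00:00
```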

unit_start_flow

Flow from node1 that is incurred when a unit is started up.

Default value: 0.0

Related Relationship Classes: unit__node__node

units_invested_available_coefficient

Coefficient of the units_invested_available variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For units_invested_mga, an appropriate big_m_mga would be twice the candidate units.

Default value: nothing

Related Object Classes: unit

units_invested_coefficient

Coefficient of the units_invested variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_mga

Defines whether a certain variable (here: units_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

units_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: unit
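Taken together with units_invested_big_m_mga above, these settings can be sanity-checked with simple arithmetic (hypothetical values):

```python
# One weighted-sum MGA iteration per weight entry.
units_invested_mga_weight = [1.0, 0.5, 0.25]
n_mga_iterations = len(units_invested_mga_weight)

# Rule of thumb from units_invested_big_m_mga: twice the candidate units.
candidate_units = 5
units_invested_big_m_mga = 2 * candidate_units

print(n_mga_iterations)          # 3
print(units_invested_big_m_mga)  # 10
```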

units_on_coefficient

Coefficient of a units_on variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_on_cost

Objective function coefficient on units_on. An idling cost, for example.

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_margin

Margin by which units_on variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_time

Period of time where the value of the units_on variable has to be fixed to the result from the previous window.

Default value: nothing

Related Object Classes: unit

units_started_up_coefficient

Coefficient of a units_started_up variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_unavailable

Represents the number of units out of service

Default value: 0

Related Object Classes: unit

upward_reserve

Identifier for nodes providing upward reserves

Default value: false

Related Object Classes: node

use_connection_intact_flow

Whether to use connection_intact_flow variables, to capture the impact of connection investments on network characteristics via line outage distribution factors (LODF).

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_economic_representation

If set to true, the investment model uses economic representation, i.e., multi-year investments will be modeled considering discounting, etc.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_milestone_years

If set to true, the investment model uses milestone years. In other words, operational temporal blocks for one (milestone) year will be scaled up by the discounted duration to represent the entire investment period.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_tight_compact_formulations

Whether to use tight and compact constraint formulations.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

user_constraint_slack_penalty

A penalty for violating a user constraint.

Default value: nothing

Related Object Classes: user_constraint

version

Current version of the SpineOpt data structure. Modify it at your own risk (but please don't).

Default value: 15

Related Object Classes: settings

vom_cost

Variable operating costs of a unit_flow variable. E.g. EUR/MWh.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

weight

Weighting factor of the temporal block associated with the objective function

Default value: 1.0

Related Object Classes: temporal_block

weight_relative_to_parents

The weight of the stochastic_scenario in the objective function relative to its parents.

Default value: 1.0

Related Relationship Classes: stochastic_structure__stochastic_scenario

window_duration

The duration of the window in case it differs from roll_forward

Default value: nothing

Related Object Classes: model

window_weight

The weight of the window in the rolling subproblem

Default value: 1

Related Object Classes: model

write_lodf_file

A boolean flag for whether the LODF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

write_mps_file

A selector for writing an .mps file of the model.

Default value: nothing

Uses Parameter Value Lists: write_mps_file_list

Related Object Classes: model

write_ptdf_file

A boolean flag for whether the PTDF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

Parameters

balance_type

A selector for how the nodal_balance constraint should be handled.

Default value: balance_type_node

Uses Parameter Value Lists: balance_type_list

Related Object Classes: node

benders_starting_connections_invested

Fixes the number of connections invested during the first Benders iteration

Default value: nothing

Related Object Classes: connection

benders_starting_storages_invested

Fixes the number of storages invested during the first Benders iteration

Default value: nothing

Related Object Classes: node

benders_starting_units_invested

Fixes the number of units invested during the first Benders iteration

Default value: nothing

Related Object Classes: unit

big_m

Sufficiently large number used for linearization of bilinear terms, e.g. to enforce bidirectional flow for gas pipelines

Default value: 1000000

Related Object Classes: model

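A generic sketch of how such a big-M constant linearises a bilinear term, here a flow gated by a binary direction variable (illustrative only; not the actual SpineOpt constraint, and all names are hypothetical):

```python
big_m = 1_000_000  # model-level default shown above

def flow_direction_ok(flow_forward, flow_backward, direction_binary):
    """Only one flow direction may be active at a time: forward flow is
    allowed when direction_binary == 1, backward flow when it is 0."""
    return (flow_forward <= big_m * direction_binary
            and flow_backward <= big_m * (1 - direction_binary))

print(flow_direction_ok(500.0, 0.0, 1))    # True
print(flow_direction_ok(500.0, 200.0, 1))  # False: both directions active
```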
block_end

The end time for the temporal_block. Can be given either as a DateTime for a static end point, or as a Duration for an end point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

block_start

The start time for the temporal_block. Can be given either as a DateTime for a static start point, or as a Duration for a start point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

candidate_connections

The number of connections that may be invested in

Default value: nothing

Related Object Classes: connection

candidate_storages

Determines the maximum number of new storages which may be invested in

Default value: nothing

Related Object Classes: node

candidate_units

Number of units which may be additionally constructed

Default value: nothing

Related Object Classes: unit

commodity_lodf_tolerance

The minimum absolute value of the line outage distribution factor (LODF) that is considered meaningful.

Default value: 0.1

Related Object Classes: commodity

commodity_physics

Defines if the commodity follows lodf or ptdf physics.

Default value: commodity_physics_none

Uses Parameter Value Lists: commodity_physics_list

Related Object Classes: commodity

commodity_physics_duration

For how long the commodity_physics should apply relative to the start of the window.

Default value: nothing

Related Object Classes: commodity

commodity_ptdf_threshold

The minimum absolute value of the power transfer distribution factor (PTDF) that is considered meaningful.

Default value: 0.001

Related Object Classes: commodity

compression_factor

The compression factor establishes a compression from an origin node to a receiving node, which are connected through a connection. The first node corresponds to the origin node, the second to the (compressed) destination node. Typically the value is >=1.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_availability_factor

Availability of the connection, acting as a multiplier on its connection_capacity. Typically between 0-1.

Default value: 1.0

Related Object Classes: connection

connection_capacity

  • For connection__from_node: Limits the connection_flow variable from the from_node. from_node can be a group of nodes, in which case the sum of the connection_flow is constrained.
  • For connection__to_node: Limits the connection_flow variable to the to_node. to_node can be a group of nodes, in which case the sum of the connection_flow is constrained.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_contingency

A boolean flag for defining a contingency connection.

Default value: nothing

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_conv_cap_to_flow

  • For connection__from_node: Optional coefficient for connection_capacity unit conversions in the case that the connection_capacity value is incompatible with the desired connection_flow units.
  • For connection__to_node: Optional coefficient for connection_capacity unit conversions in the case the connection_capacity value is incompatible with the desired connection_flow units.

Default value: 1.0

Related Relationship Classes: connection__from_node and connection__to_node

connection_decommissioning_cost

Costs associated with decommissioning a connection. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: connection

connection_decommissioning_time

A connection's decommissioning time, i.e., the time between the moment at which a connection decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: connection

connection_discount_rate_technology_specific

Connection-specific discount rate used to calculate the connection's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: connection

connection_emergency_capacity

  • For connection__from_node: Post-contingency flow capacity of a connection. Sometimes referred to as the emergency rating.
  • For connection__to_node: The maximum post-contingency flow on a monitored connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_coefficient

  • For connection__from_node__user_constraint: Defines the user constraint coefficient on the connection flow variable in the from direction
  • For connection__to_node__user_constraint: Defines the user constraint coefficient on the connection flow variable in the to direction

Default value: 0.0

Related Relationship Classes: connection__from_node__user_constraint and connection__to_node__user_constraint

connection_flow_cost

Variable costs of a flow through a connection. E.g. EUR/MWh of energy throughput.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_delay

Delays the connection_flows associated with the latter node with respect to the connection_flows associated with the first node.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Relationship Classes: connection__node__node

connection_flow_non_anticipativity_margin

Margin by which connection_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_non_anticipativity_time

Period of time where the value of the connection_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_margin

Margin by which connection_intact_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_time

Period of time where the value of the connection_intact_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_investment_cost

The per unit investment cost for the connection over the connection_investment_tech_lifetime

Default value: nothing

Related Object Classes: connection

connection_investment_econ_lifetime

Determines the minimum economical investment lifetime of a connection.

Default value: nothing

Related Object Classes: connection

connection_investment_lifetime_sense

A selector for connection_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: connection

connection_investment_tech_lifetime

Determines the maximum technical lifetime of a connection. Once invested, it remains in service for this long.

Default value: nothing

Related Object Classes: connection

connection_investment_variable_type

Determines whether the investment variable is integer (variable_type_integer) or continuous (variable_type_continuous).

Default value: connection_investment_variable_type_integer

Uses Parameter Value Lists: connection_investment_variable_type_list

Related Object Classes: connection

connection_lead_time

A connection's lead time, i.e., the time between the moment at which a connection investment decision is taken, and the moment at which the connection investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: connection

connection_linepack_constant

The linepack constant is a property of gas pipelines and relates the linepack to the pressure of the adjacent nodes.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_monitored

A boolean flag for defining a contingency connection.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_reactance

The per unit reactance of a connection.

Default value: nothing

Related Object Classes: connection

connection_reactance_base

If the reactance is given in a p.u. different from the standard unit used (e.g. p.u. = 100 MVA), connection_reactance_base can be set to perform this conversion (e.g. *100).

Default value: 1

Related Object Classes: connection
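
As a quick illustration of the conversion, with purely hypothetical numbers (this is not SpineOpt internals, just the arithmetic the parameter implies):

```julia
# Hypothetical example of the connection_reactance_base conversion.
connection_reactance = 0.0004       # reactance as entered, in a non-standard p.u.
connection_reactance_base = 100     # conversion factor (e.g. for a 100 MVA base)
effective_reactance = connection_reactance * connection_reactance_base
# effective_reactance ≈ 0.04, the value used in the power flow equations
```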

connection_resistance

The per unit resistance of a connection.

Default value: nothing

Related Object Classes: connection

connection_type

A selector between a normal and a lossless bidirectional connection.

Default value: connection_type_normal

Uses Parameter Value Lists: connection_type_list

Related Object Classes: connection

connections_invested_available_coefficient

coefficient of connections_invested_available in the specific user_constraint

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For connections_invested_mga, an appropriate big_m_mga would be twice the number of candidate connections.

Default value: nothing

Related Object Classes: connection

connections_invested_coefficient

coefficient of connections_invested in the specific user_constraint

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_mga

Defines whether a certain variable (here: connections_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connections_invested_mga_weight

Used to scale mga variables. For the weighted sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: connection

constraint_sense

A selector for the sense of the user_constraint.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: user_constraint

curtailment_cost

Costs for curtailing generation. Essentially, accrues costs whenever unit_flow is not operating at its maximum available capacity. E.g. EUR/MWh

Default value: nothing

Related Object Classes: unit

cyclic_condition

If the cyclic condition is set to true for a storage node, the node_state at the end of the optimization window has to be larger than or equal to the initial storage state.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: node__temporal_block

db_lp_solver

Solver for LP problems. Solver package must be added and pre-configured in Julia. Overrides lp_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_lp_solver_list

Related Object Classes: model

db_lp_solver_options

Map parameter containing LP solver option name option value pairs. See solver documentation for supported solver options

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Clp.jl", Dict{String, Any}("data" => Any[Any["LogLevel", 0.0]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model

db_mip_solver

Solver for MIP problems. Solver package must be added and pre-configured in Julia. Overrides mip_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_mip_solver_list

Related Object Classes: model

db_mip_solver_options

Map parameter containing MIP solver option name option value pairs for MIP. See solver documentation for supported solver options

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["mip_rel_gap", 0.01], Any["threads", 0.0], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Cbc.jl", Dict{String, Any}("data" => Any[Any["ratioGap", 0.01], Any["logLevel", 0.0]], "type" => "map", "index_type" => "str")], Any["CPLEX.jl", Dict{String, Any}("data" => Any[Any["CPX_PARAM_EPGAP", 0.01]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model
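
The default value above is easier to read when laid out as nested pairs (solver name → option name → option value). The following Julia sketch restates the same defaults for readability only; in the database the value is stored as the Map shown above:

```julia
# The db_mip_solver_options default Map, restated as nested Dicts.
default_mip_solver_options = Dict(
    "HiGHS.jl" => Dict("presolve" => "on", "mip_rel_gap" => 0.01,
                       "threads" => 0.0, "time_limit" => 300.01),
    "Cbc.jl"   => Dict("ratioGap" => 0.01, "logLevel" => 0.0),
    "CPLEX.jl" => Dict("CPX_PARAM_EPGAP" => 0.01),
)
```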

demand

Demand for the commodity of a node. Energy gains can be represented using negative demand.

Default value: 0.0

Related Object Classes: node

demand_coefficient

coefficient of the specified node's demand in the specified user constraint

Default value: 0.0

Related Relationship Classes: node__user_constraint

diff_coeff

Commodity diffusion coefficient between two nodes. Effectively, denotes the diffusion power per unit of state from the first node to the second.

Default value: 0.0

Related Relationship Classes: node__node
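
A minimal sketch of what this means, with made-up numbers: the diffusion power from the first node to the second is the coefficient times the first node's state.

```julia
# Hypothetical illustration of diff_coeff: diffusion power is proportional
# to the state of the first node in the node__node relationship.
diff_coeff = 0.1        # diffusion power per unit of state
node_state = 50.0       # state of the first node
diffusion_power = diff_coeff * node_state   # ≈ 5.0
```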

discount_rate

The discount rate used for the discounting of future cashflows

Default value: 0

Related Object Classes: model

discount_year

The year to which all cashflows are discounted.

Default value: nothing

Related Object Classes: model

downward_reserve

Identifier for nodes providing downward reserves

Default value: false

Related Object Classes: node

duration_unit

Defines the base temporal unit of the model. Currently supported values are either an hour or a minute.

Default value: hour

Uses Parameter Value Lists: duration_unit_list

Related Object Classes: model

equal_investments

Whether all entities in the group must have the same investment decision.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: investment_group

fix_binary_gas_connection_flow

Fix the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_flow

Fix the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_intact_flow

Fix the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connections_invested

Setting a value fixes the connections_invested variable accordingly

Default value: nothing

Related Object Classes: connection

fix_connections_invested_available

Setting a value fixes the connections_invested_available variable accordingly

Default value: nothing

Related Object Classes: connection

fix_node_pressure

Fixes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

fix_node_state

Fixes the corresponding node_state variable to the provided value. Can be used for e.g. fixing boundary conditions.

Default value: nothing

Related Object Classes: node

fix_node_voltage_angle

Fixes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

fix_nonspin_units_shut_down

Fix the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

fix_nonspin_units_started_up

Fix the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_ratio_in_in_unit_flow

Fix the ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_in_out_unit_flow

Fix the ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_in_connection_flow

Fix the ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

fix_ratio_out_in_unit_flow

Fix the ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_out_unit_flow

Fix the ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_storages_invested

Used to fix the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

fix_storages_invested_available

Used to fix the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

fix_unit_flow

Fix the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_unit_flow_op

Fix the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_units_invested

Fix the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

fix_units_invested_available

Fix the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

fix_units_on

Fix the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

fix_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the fix_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the fix_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the fix_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the fix_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_out_of_service

Fix the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

fixed_pressure_constant_0

Fixed pressure points for pipelines for the outer approximation of the Weymouth approximation. The direction of flow is the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fixed_pressure_constant_1

Fixed pressure points for pipelines for the outer approximation of the Weymouth approximation. The direction of flow is the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fom_cost

Fixed operation and maintenance costs of a unit. Essentially, a cost coefficient on the existing units (incl. number_of_units and units_invested_available) and unit_capacity parameters. Currently, the value needs to be defined per duration unit (i.e. 1 hour). E.g. EUR/MW/h

Default value: nothing

Related Object Classes: unit
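
As a rough, hypothetical illustration of how this parameter accrues costs (the actual SpineOpt bookkeeping also involves availability and investment variables):

```julia
# Hypothetical fom_cost accrual over one day for a fleet of identical units.
fom_cost = 2.0            # EUR/MW/h
unit_capacity = 100.0     # MW
number_of_units = 3       # existing units
hours = 24
fixed_om_cost = fom_cost * unit_capacity * number_of_units * hours
# fixed_om_cost is 14400.0 EUR for the day
```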

frac_state_loss

Self-discharge coefficient for node_state variables. Effectively, represents the loss power per unit of state.

Default value: 0.0

Related Object Classes: node

fractional_demand

The fraction of a node group's demand applied for the node in question.

Default value: 0.0

Related Object Classes: node

fuel_cost

Variable fuel costs that can be attributed to a unit_flow. E.g. EUR/MWh

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

graph_view_position

An optional setting for tweaking the position of the different elements when drawing them via Spine Toolbox Graph View.

Default value: nothing

Related Object Classes: connection, node and unit

Related Relationship Classes: connection__from_node, connection__to_node, unit__from_node__user_constraint, unit__from_node, unit__to_node__user_constraint and unit__to_node

has_binary_gas_flow

This parameter needs to be set to true in order to represent bidirectional pressure-driven gas transfer.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

has_pressure

A boolean flag for whether a node has a node_pressure variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_state

A boolean flag for whether a node has a node_state variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_voltage_angle

A boolean flag for whether a node has a node_voltage_angle variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

initial_binary_gas_connection_flow

Initialize the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_flow

Initialize the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_intact_flow

Initialize the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connections_invested

Setting a value fixes the connections_invested variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_connections_invested_available

Setting a value fixes the connections_invested_available variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_node_pressure

Initializes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

initial_node_state

Initializes the corresponding node_state variable to the provided value.

Default value: nothing

Related Object Classes: node

initial_node_voltage_angle

Initializes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

initial_nonspin_units_shut_down

Initialize the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

initial_nonspin_units_started_up

Initialize the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_storages_invested

Used to initialize the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

initial_storages_invested_available

Used to initialize the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

initial_unit_flow

Initialize the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_unit_flow_op

Initialize the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_units_invested

Initialize the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

initial_units_invested_available

Initialize the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

initial_units_on

Initialize the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

initial_units_out_of_service

Initialize the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

is_active

If false, the object is excluded from the model when the tool filter object activity control is specified

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: commodity, connection, model, node, output, report, stage, stochastic_scenario, stochastic_structure, temporal_block, unit and user_constraint

Related Relationship Classes: node__stochastic_structure, node__temporal_block, unit__from_node, unit__to_node, units_on__stochastic_structure and units_on__temporal_block

is_non_spinning

A boolean flag for whether a node is acting as a non-spinning reserve

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

is_renewable

Whether the unit is renewable - used in the minimum renewable generation constraint within the Benders master problem

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

is_reserve_node

A boolean flag for whether a node is acting as a reserve_node

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

max_cum_in_unit_flow_bound

Set a maximum cumulative upper bound for a unit_flow

Default value: nothing

Related Relationship Classes: unit__commodity

max_gap

Specifies the maximum optimality gap for the model. Currently only used for the master problem within a decomposed structure

Default value: 0.05

Related Object Classes: model

max_iterations

Specifies the maximum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 10.0

Related Object Classes: model

max_mga_iterations

Define the number of mga iterations, i.e. how many alternative solutions will be generated.

Default value: nothing

Related Object Classes: model

max_mga_slack

Defines the maximum slack by which the alternative solution may differ from the original solution (e.g. 5% more than initial objective function value)

Default value: 0.05

Related Object Classes: model
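
For a minimization problem, the 5% default translates to the bound sketched below (the objective value is hypothetical):

```julia
# Hypothetical MGA slack bound: alternative solutions may exceed the original
# objective by at most max_mga_slack (relative).
max_mga_slack = 0.05
original_objective = 1000.0
mga_objective_bound = (1 + max_mga_slack) * original_objective
# mga_objective_bound ≈ 1050.0
```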

max_node_pressure

Maximum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

max_ratio_in_in_unit_flow

Maximum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_in_out_unit_flow

Maximum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_in_connection_flow

Maximum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

max_ratio_out_in_unit_flow

Maximum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_out_unit_flow

Maximum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_total_cumulated_unit_flow_from_node

Bound on the maximum cumulated flows of a unit group from a node group, e.g. max consumption of a certain commodity.

Default value: nothing

Related Relationship Classes: unit__from_node

max_total_cumulated_unit_flow_to_node

Bound on the maximum cumulated flows of a unit group to a node group, e.g. total GHG emissions.

Default value: nothing

Related Relationship Classes: unit__to_node

max_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the max_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the max_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the max_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the max_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_voltage_angle

Maximum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

maximum_capacity_invested_available

Upper bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

maximum_entities_invested_available

Upper bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

min_capacity_margin

Minimum capacity margin applying to the node or node_group

Default value: nothing

Related Object Classes: node

min_capacity_margin_penalty

Penalty to apply to violations of the min_capacity_margin constraint of the node or node_group

Default value: nothing

Related Object Classes: node

min_down_time

Minimum downtime of a unit after it shuts down.

Default value: nothing

Related Object Classes: unit

min_iterations

Specifies the minimum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 1.0

Related Object Classes: model

min_node_pressure

Minimum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

min_ratio_in_in_unit_flow

Minimum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_in_out_unit_flow

Minimum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_in_connection_flow

Minimum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

min_ratio_out_in_unit_flow

Minimum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_out_unit_flow

Minimum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_total_cumulated_unit_flow_from_node

Bound on the minimum cumulated flows of a unit group from a node group.

Default value: nothing

Related Relationship Classes: unit__from_node

min_total_cumulated_unit_flow_to_node

Bound on the minimum cumulated flows of a unit group to a node group, e.g. total renewable production.

Default value: nothing

Related Relationship Classes: unit__to_node

min_unit_flow

Set lower bound of the unit_flow variable.

Default value: 0.0

Related Relationship Classes: unit__from_node and unit__to_node

min_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the min_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the min_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the min_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the min_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_up_time

Minimum uptime of a unit after it starts up.

Default value: nothing

Related Object Classes: unit

min_voltage_angle

Minimum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

minimum_capacity_invested_available

Lower bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_entities_invested_available

Lower bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_operating_point

Minimum level for the unit_flow relative to the units_on online capacity.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node
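
To illustrate how minimum_operating_point acts, the implied lower bound on unit_flow is roughly the fraction times the online capacity. A minimal sketch, assuming the bound scales with unit_capacity and units_on (not SpineOpt source code):

```python
def unit_flow_lower_bound(minimum_operating_point, unit_capacity, units_on):
    # Illustrative: each online 'sub unit' must run at least at this
    # fraction of its capacity (assumed form for demonstration).
    return minimum_operating_point * unit_capacity * units_on

# Two online 100 MW sub units at a 40% minimum operating point.
print(unit_flow_lower_bound(0.4, 100.0, 2))
```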

minimum_reserve_activation_time

Duration a certain reserve product needs to be online/available

Default value: nothing

Related Object Classes: node

model_algorithm

The algorithm to run (e.g., basic, MGA)

Default value: basic_algorithm

Uses Parameter Value Lists: model_algorithm_list

Related Object Classes: model

model_end

Defines the last timestamp to be modelled. Rolling optimization terminates after passing this point.

Default value: Dict{String, Any}("data" => "2000-01-02T00:00:00", "type" => "date_time")

Related Object Classes: model

model_start

Defines the first timestamp to be modelled. Relative temporal_blocks refer to this value for their start and end.

Default value: Dict{String, Any}("data" => "2000-01-01T00:00:00", "type" => "date_time")

Related Object Classes: model

model_type

The model type, which determines the solution method (e.g. standard, Benders).

Default value: spineopt_standard

Uses Parameter Value Lists: model_type_list

Related Object Classes: model

mp_min_res_gen_to_demand_ratio

Minimum ratio of renewable generation to demand for this commodity - used in the minimum renewable generation constraint within the Benders master problem

Default value: nothing

Related Object Classes: commodity

mp_min_res_gen_to_demand_ratio_slack_penalty

Penalty for violating the minimum renewable generation to demand ratio.

Default value: nothing

Related Object Classes: commodity

nodal_balance_sense

A selector for nodal_balance constraint sense.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

node_opf_type

A selector for the reference node (slack bus) when PTDF-based DC load-flow is enabled.

Default value: node_opf_type_normal

Uses Parameter Value Lists: node_opf_type_list

Related Object Classes: node

node_slack_penalty

A penalty cost for node_slack_pos and node_slack_neg variables. The slack variables won't be included in the model unless there's a cost defined for them.

Default value: nothing

Related Object Classes: node

node_state_cap

The maximum permitted value for a node_state variable.

Default value: nothing

Related Object Classes: node

node_state_coefficient

Coefficient of the specified node's state variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

node_state_min

The minimum permitted value for a node_state variable.

Default value: 0.0

Related Object Classes: node

number_of_connections

Denotes the number of 'sub connections' aggregated to form the modelled connection.

Default value: 1.0

Related Object Classes: connection

number_of_storages

Denotes the number of 'sub storages' aggregated to form the modelled node.

Default value: 1.0

Related Object Classes: node

number_of_units

Denotes the number of 'sub units' aggregated to form the modelled unit.

Default value: 1.0

Related Object Classes: unit

online_variable_type

A selector for how the units_on variable is represented within the model.

Default value: unit_online_variable_type_linear

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

operating_points

  • For unit__from_node: Operating points for piecewise-linear unit efficiency approximations.
  • For unit__to_node: Decomposes the flow variable into a number of separate operating segment variables. Used in conjunction with unit_incremental_heat_rate and/or user_constraints.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ordered_unit_flow_op

Defines whether the segments of this unit flow are ordered as per the rank of their operating points.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: unit__from_node and unit__to_node

outage_variable_type

Determines whether the outage variable is integer, continuous, or none (no optimisation of maintenance outages).

Default value: unit_online_variable_type_none

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

output_db_url

Database url for SpineOpt output.

Default value: nothing

Related Object Classes: report

output_resolution

  • For output: Temporal resolution of the output variables associated with this output.
  • For stage__output__connection, stage__output__node, stage__output__unit: A duration or array of durations indicating the points in time where the output of this stage should be fixed in the children. If not specified, then the output is fixed at the end of each child's rolling window (EXPERIMENTAL).

Default value: nothing

Related Object Classes: output

Related Relationship Classes: stage__output__connection, stage__output__node and stage__output__unit

overwrite_results_on_rolling

Whether or not results from further windows should overwrite results from previous ones.

Default value: true

Related Relationship Classes: report__output

ramp_down_limit

Limit the maximum ramp-down rate of an online unit, given as a fraction of the unit_capacity. [ramp_down_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ramp_up_limit

Limit the maximum ramp-up rate of an online unit, given as a fraction of the unit_capacity. [ramp_up_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node
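
The two ramp parameters bound the change in unit_flow between consecutive time steps, relative to the online capacity. A minimal illustrative check (plain Python, assumed form, not SpineOpt code):

```python
def ramp_ok(flow_prev, flow_now, ramp_up_limit, ramp_down_limit, online_capacity):
    """Check the flow change between two consecutive steps against ramp limits
    given as fractions of the online capacity per step (illustrative)."""
    delta = flow_now - flow_prev
    return (delta <= ramp_up_limit * online_capacity
            and -delta <= ramp_down_limit * online_capacity)

# With 100 MW online and a 0.2/h limit, at most 20 MW of change per hour.
print(ramp_ok(50.0, 65.0, 0.2, 0.2, 100.0))
```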

representative_periods_mapping

Map from date time to representative temporal block name

Default value: nothing

Related Object Classes: temporal_block

reserve_procurement_cost

Procurement cost for reserves

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

resolution

Temporal resolution of the temporal_block. Essentially, divides the period between block_start and block_end into TimeSlices with the input resolution.

Default value: Dict{String, Any}("data" => "1h", "type" => "duration")

Related Object Classes: temporal_block

right_hand_side

The right-hand side, constant term in a user_constraint. Can be time-dependent and used e.g. for complicated efficiency approximations.

Default value: 0.0

Related Object Classes: user_constraint
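
A user_constraint combines coefficient-weighted variable terms (e.g. unit_flow_coefficient times unit_flow) on the left-hand side and compares the sum against right_hand_side with a chosen sense. A hedged sketch of that evaluation (plain Python; the senses mirror constraint_sense_list):

```python
import operator

def user_constraint_holds(terms, sense, right_hand_side):
    """terms: list of (coefficient, variable_value) pairs. The weighted sum
    is compared to right_hand_side with the given sense (illustrative)."""
    lhs = sum(c * v for c, v in terms)
    ops = {"==": operator.eq, "<=": operator.le, ">=": operator.ge}
    return ops[sense](lhs, right_hand_side)

# Two flows with coefficients 2.0 and 1.0, capped at a right-hand side of 10.
print(user_constraint_holds([(2.0, 3.0), (1.0, 4.0)], "<=", 10.0))
```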

roll_forward

Defines how much the model moves ahead in time between solves in a rolling optimization. If null, everything is solved as a single optimization.

Default value: nothing

Related Object Classes: model
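
Together with model_start and model_end, roll_forward determines how many rolling windows are solved. A minimal sketch of that window enumeration (plain Python, an assumption about the scheme rather than SpineOpt's implementation):

```python
from datetime import datetime, timedelta

def rolling_window_starts(model_start, model_end, roll_forward):
    """Enumerate window start times: the window advances by roll_forward
    until model_end is reached. roll_forward=None means a single solve."""
    if roll_forward is None:
        return [model_start]
    starts, t = [], model_start
    while t < model_end:
        starts.append(t)
        t += roll_forward
    return starts

# A one-day horizon rolled forward in 6-hour steps gives four windows.
print(rolling_window_starts(datetime(2000, 1, 1), datetime(2000, 1, 2),
                            timedelta(hours=6)))
```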

scheduled_outage_duration

Specifies the amount of time a unit must be out of service for maintenance as a single block over the course of the optimisation window

Default value: nothing

Related Object Classes: unit

shut_down_cost

Costs of shutting down a 'sub unit', e.g. EUR/shutdown.

Default value: nothing

Related Object Classes: unit

shut_down_limit

Maximum ramp-down during shutdowns

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

stage_scenario

The scenario that this stage should run (EXPERIMENTAL).

Default value: nothing

Related Object Classes: stage

start_up_cost

Costs of starting up a 'sub unit', e.g. EUR/startup.

Default value: nothing

Related Object Classes: unit

start_up_limit

Maximum ramp-up during startups

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

state_coeff

Represents the commodity content of a node_state variable in respect to the unit_flow and connection_flow variables. Essentially, acts as a coefficient on the node_state variable in the node_injection constraint.

Default value: 1.0

Related Object Classes: node

stochastic_scenario_end

A Duration for when a stochastic_scenario ends and its child_stochastic_scenarios start. Values are interpreted relative to the start of the current solve, and if no value is given, the stochastic_scenario is assumed to continue indefinitely.

Default value: nothing

Related Relationship Classes: stochastic_structure__stochastic_scenario

storage_decommissioning_cost

Costs associated with decommissioning a storage. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: node

storage_decommissioning_time

A storage's decommissioning time, i.e., the time between the moment at which a storage decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: node

storage_discount_rate_technology_specific

Storage-specific discount rate used to calculate the storage's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: node

storage_fom_cost

Fixed operation and maintenance costs of a node. Essentially, a cost coefficient on the number of installed units and node_state_cap parameters. E.g. EUR/MWh

Default value: nothing

Related Object Classes: node

storage_investment_cost

Determines the investment cost per unit state_cap over the investment life of a storage

Default value: nothing

Related Object Classes: node

storage_investment_econ_lifetime

Economic lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_lifetime_sense

A selector for storage_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

storage_investment_tech_lifetime

Maximum technical lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_variable_type

Determines whether the storage investment variable is continuous (usually representing capacity) or integer (representing discrete units invested)

Default value: storage_investment_variable_type_integer

Uses Parameter Value Lists: storage_investment_variable_type_list

Related Object Classes: node

storage_lead_time

A storage's lead time, i.e., the time between the moment at which a storage investment decision is taken, and the moment at which the storage investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: node

storages_invested_available_coefficient

Coefficient of the specified node's storages invested available variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For storages_invested_mga an appropriate big_m_mga would be twice the candidate storages.

Default value: nothing

Related Object Classes: node

storages_invested_coefficient

Coefficient of the specified node's storage investment variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_mga

Defines whether a certain variable (here: storages_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

storages_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, will determine the number of iterations.

Default value: 1

Related Object Classes: node

tax_in_unit_flow

Tax costs for incoming unit_flows on this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

tax_net_unit_flow

Tax costs for net incoming and outgoing unit_flows on this node. Incoming flows accrue positive net taxes, and outgoing flows accrue negative net taxes.

Default value: nothing

Related Object Classes: node

tax_out_unit_flow

Tax costs for outgoing unit_flows from this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

unit_availability_factor

Availability of the unit, acting as a multiplier on its unit_capacity. Typically between 0-1.

Default value: 1.0

Related Object Classes: unit

unit_capacity

Maximum unit_flow capacity of a single 'sub_unit' of the unit.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_conv_cap_to_flow

Optional coefficient for unit_capacity unit conversions in the case the unit_capacity value is incompatible with the desired unit_flow units.

Default value: 1.0

Related Relationship Classes: unit__from_node and unit__to_node
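
unit_capacity, unit_availability_factor and unit_conv_cap_to_flow combine into the effective upper bound on unit_flow. An illustrative sketch of that product (plain Python; the exact bound in SpineOpt also involves the online units, which is assumed here):

```python
def unit_flow_upper_bound(unit_capacity, unit_availability_factor=1.0,
                          unit_conv_cap_to_flow=1.0, units_on=1):
    # Illustrative: capacity per 'sub unit', derated by availability,
    # converted to flow units, times the number of units online.
    return (unit_capacity * unit_availability_factor
            * unit_conv_cap_to_flow * units_on)

# Two online 200 MW sub units at 90% availability.
print(unit_flow_upper_bound(200.0, 0.9, 1.0, 2))
```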

unit_decommissioning_cost

Costs associated with decommissioning a power plant. The costs will be discounted to the discount_year and distributed equally over the decommissioning time.

Default value: nothing

Related Object Classes: unit

unit_decommissioning_time

A unit's decommissioning time, i.e., the time between the moment at which a unit decommissioning decision is taken, and the moment at which decommissioning is complete.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: unit

unit_discount_rate_technology_specific

Unit-specific discount rate used to calculate the unit's investment costs. If not specified, the model discount rate is used.

Default value: 0.0

Related Object Classes: unit

unit_flow_coefficient

Coefficient of a unit_flow variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__from_node__user_constraint and unit__to_node__user_constraint

unit_flow_non_anticipativity_margin

Margin by which unit_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_non_anticipativity_time

Period of time where the value of the unit_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_investment_cost

Investment cost per 'sub unit' built.

Default value: nothing

Related Object Classes: unit

unit_investment_econ_lifetime

Economic lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_lifetime_sense

A selector for unit_lifetime constraint sense.

Default value: >=

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: unit

unit_investment_tech_lifetime

Maximum technical lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_variable_type

Determines whether investment variable is integer or continuous.

Default value: unit_investment_variable_type_continuous

Uses Parameter Value Lists: unit_investment_variable_type_list

Related Object Classes: unit

unit_lead_time

A unit's lead time, i.e., the time between the moment at which a unit investment decision is taken, and the moment at which the unit investment becomes operational.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Object Classes: unit

unit_start_flow

Flow from node1 that is incurred when a unit is started up.

Default value: 0.0

Related Relationship Classes: unit__node__node

units_invested_available_coefficient

Coefficient of the units_invested_available variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For units_invested_mga an appropriate big_m_mga would be twice the candidate units.

Default value: nothing

Related Object Classes: unit

units_invested_coefficient

Coefficient of the units_invested variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_mga

Defines whether a certain variable (here: units_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

units_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, will determine the number of iterations.

Default value: 1

Related Object Classes: unit

units_on_coefficient

Coefficient of a units_on variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_on_cost

Objective function coefficient on units_on. An idling cost, for example

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_margin

Margin by which units_on variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_time

Period of time where the value of the units_on variable has to be fixed to the result from the previous window.

Default value: nothing

Related Object Classes: unit

units_started_up_coefficient

Coefficient of a units_started_up variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_unavailable

Represents the number of units out of service

Default value: 0

Related Object Classes: unit

upward_reserve

Identifier for nodes providing upward reserves

Default value: false

Related Object Classes: node

use_connection_intact_flow

Whether to use connection_intact_flow variables, to capture the impact of connection investments on network characteristics via line outage distribution factors (LODF).

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_economic_representation

If set to true, the investment model uses economic representation, i.e., multi-year investments will be modelled considering discounting etc.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_milestone_years

If set to true, the investment model uses milestone years. In other words, operational temporal blocks for one (milestone) year will be scaled up by the discounted duration to represent the entire investment period.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

use_tight_compact_formulations

Whether to use tight and compact constraint formulations.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

user_constraint_slack_penalty

A penalty for violating a user constraint.

Default value: nothing

Related Object Classes: user_constraint

version

Current version of the SpineOpt data structure. Modify it at your own risk (but please don't).

Default value: 15

Related Object Classes: settings

vom_cost

Variable operating costs of a unit_flow variable. E.g. EUR/MWh.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

weight

Weighting factor of the temporal block associated with the objective function

Default value: 1.0

Related Object Classes: temporal_block

weight_relative_to_parents

The weight of the stochastic_scenario in the objective function relative to its parents.

Default value: 1.0

Related Relationship Classes: stochastic_structure__stochastic_scenario
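
A hedged sketch of how a relative weight could translate into an absolute scenario weight in a stochastic tree: the relative weight scaled by the combined weight of the parents, with a root scenario keeping its relative weight. This is an illustrative assumption in plain Python, not SpineOpt's implementation:

```python
def scenario_weight(weight_relative_to_parents, parent_weights):
    """Absolute weight of a stochastic_scenario, assuming it equals the
    relative weight times the summed absolute weights of its parents."""
    if not parent_weights:
        return weight_relative_to_parents  # root scenario
    return weight_relative_to_parents * sum(parent_weights)

# A root scenario with weight 1.0 branching into two children weighted 50/50.
print(scenario_weight(0.5, [scenario_weight(1.0, [])]))
```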

window_duration

The duration of the window in case it differs from roll_forward

Default value: nothing

Related Object Classes: model

window_weight

The weight of the window in the rolling subproblem

Default value: 1

Related Object Classes: model

write_lodf_file

A boolean flag for whether the LODF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

write_mps_file

A selector for writing an .mps file of the model.

Default value: nothing

Uses Parameter Value Lists: write_mps_file_list

Related Object Classes: model

write_ptdf_file

A boolean flag for whether the PTDF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

diff --git a/dev/concept_reference/Relationship Classes/index.html b/dev/concept_reference/Relationship Classes/index.html index cf322c8ea0..c101e532be 100644 --- a/dev/concept_reference/Relationship Classes/index.html +++ b/dev/concept_reference/Relationship Classes/index.html @@ -1,2 +1,2 @@ -Relationship Classes · SpineOpt.jl

Relationship Classes

connection__from_node

A flow on a connection from a node.

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

connection__from_node__investment_group

A flow on a connection from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__from_node__user_constraint

A flow on a connection from a node constrained by a user_constraint.

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__investment_group

A connection that belongs in an investment_group.

Related Object Classes: connection and investment_group

connection__investment_stochastic_structure

The stochastic_structure of a connection investment.

Related Object Classes: connection and stochastic_structure

connection__investment_temporal_block

The temporal_block of a connection investment.

Related Object Classes: connection and temporal_block

connection__node__node

A connection acting over two nodes.

Related Object Classes: connection and node

Related Parameters: compression_factor, connection_flow_delay, connection_linepack_constant, fix_ratio_out_in_connection_flow, fixed_pressure_constant_0, fixed_pressure_constant_1, max_ratio_out_in_connection_flow and min_ratio_out_in_connection_flow

connection__to_node

A flow on a connection to a node.

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

connection__to_node__investment_group

A flow on a connection to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__to_node__user_constraint

A flow on a connection to a node constrained by a user_constraint.

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__user_constraint

A connection investment constrained by a user_constraint.

Related Object Classes: connection and user_constraint

Related Parameters: connections_invested_available_coefficient and connections_invested_coefficient

model__default_investment_stochastic_structure

The default stochastic_structure of all investments in the model.

Related Object Classes: model and stochastic_structure

model__default_investment_temporal_block

The default temporal_block of all investments in the model.

Related Object Classes: model and temporal_block

model__default_stochastic_structure

The default stochastic_structure of the model.

Related Object Classes: model and stochastic_structure

model__default_temporal_block

The default temporal_block of the model.

Related Object Classes: model and temporal_block

model__report

A report that should be written for the model.

Related Object Classes: model and report

node__commodity

A commodity for a node. Only a single commodity is permitted per node.

Related Object Classes: commodity and node

node__investment_group

A node that belongs in an investment_group.

Related Object Classes: investment_group and node

node__investment_stochastic_structure

The stochastic_structure of a node storage investment.

Related Object Classes: node and stochastic_structure

node__investment_temporal_block

The temporal_block of a node storage investment.

Related Object Classes: node and temporal_block

node__node

An interaction between two nodes.

Related Object Classes: node

Related Parameters: diff_coeff

node__stochastic_structure

The stochastic_structure of a node. Only one stochastic_structure is permitted per node.

Related Object Classes: node and stochastic_structure

Related Parameters: is_active

node__temporal_block

The temporal_block of a node and the corresponding flow variables.

Related Object Classes: node and temporal_block

Related Parameters: cyclic_condition and is_active

node__user_constraint

A node state constrained by a user_constraint, or a node demand included in a user_constraint.

Related Object Classes: node and user_constraint

Related Parameters: demand_coefficient, node_state_coefficient, storages_invested_available_coefficient and storages_invested_coefficient

parent_stochastic_scenario__child_stochastic_scenario

A parent-child relationship between two stochastic_scenarios defining the master stochastic directed acyclic graph.

Related Object Classes: stochastic_scenario

report__output

An output that should be included in a report.

Related Object Classes: output and report

Related Parameters: overwrite_results_on_rolling

stage__child_stage

A parent-child relationship between two stages (EXPERIMENTAL).

Related Object Classes: stage

stage__output__connection

An output that should be fixed by a stage for a connection in all its children (EXPERIMENTAL).

Related Object Classes: connection, output and stage

Related Parameters: output_resolution

stage__output__node

An output that should be fixed by a stage for a node in all its children (EXPERIMENTAL).

Related Object Classes: node, output and stage

Related Parameters: output_resolution

stage__output__unit

An output that should be fixed by a stage for a unit in all its children (EXPERIMENTAL).

Related Object Classes: output, stage and unit

Related Parameters: output_resolution

stochastic_structure__stochastic_scenario

A stochastic_scenario that belongs in a stochastic_structure.

Related Object Classes: stochastic_scenario and stochastic_structure

Related Parameters: stochastic_scenario_end and weight_relative_to_parents

unit__commodity

Holds parameters for commodities used by the unit.

Related Object Classes: commodity and unit

Related Parameters: max_cum_in_unit_flow_bound

unit__from_node

A flow on a unit from a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_from_node, min_total_cumulated_unit_flow_from_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

unit__from_node__investment_group

A flow on a unit from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__from_node__user_constraint

A flow on a unit from a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__investment_group

A unit that belongs in an investment_group.

Related Object Classes: investment_group and unit

unit__investment_stochastic_structure

The stochastic_structure of a unit investment.

Related Object Classes: stochastic_structure and unit

unit__investment_temporal_block

The temporal_block of a unit investment.

Related Object Classes: temporal_block and unit

unit__node__node

A unit acting over two nodes.

Related Object Classes: node and unit

Related Parameters: fix_ratio_in_in_unit_flow, fix_ratio_in_out_unit_flow, fix_ratio_out_in_unit_flow, fix_ratio_out_out_unit_flow, fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, fix_units_on_coefficient_out_out, max_ratio_in_in_unit_flow, max_ratio_in_out_unit_flow, max_ratio_out_in_unit_flow, max_ratio_out_out_unit_flow, max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, max_units_on_coefficient_out_out, min_ratio_in_in_unit_flow, min_ratio_in_out_unit_flow, min_ratio_out_in_unit_flow, min_ratio_out_out_unit_flow, min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, min_units_on_coefficient_out_out and unit_start_flow

unit__to_node

A flow on a unit to a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_shut_down, fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_shut_down, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_to_node, min_total_cumulated_unit_flow_to_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

unit__to_node__investment_group

A flow on a unit to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__to_node__user_constraint

A flow on a unit to a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__user_constraint

A unit commitment constrained by a user_constraint.

Related Object Classes: unit and user_constraint

Related Parameters: units_invested_available_coefficient, units_invested_coefficient, units_on_coefficient and units_started_up_coefficient

units_on__stochastic_structure

The stochastic_structure of a unit commitment. Only one stochastic_structure is permitted per unit.

Related Object Classes: stochastic_structure and unit

Related Parameters: is_active

units_on__temporal_block

The temporal_block of a unit commitment.

Related Object Classes: temporal_block and unit

Related Parameters: is_active

diff --git a/dev/concept_reference/_example/index.html b/dev/concept_reference/_example/index.html
index 3b8518eff4..e46763283d 100644
--- a/dev/concept_reference/_example/index.html
+++ b/dev/concept_reference/_example/index.html
@@ -1,2 +1,2 @@

AN EXAMPLE DESCRIPTION FOR HOW THE AUTOGENERATION OF CONCEPT REFERENCE BASED ON SPINEOPT TEMPLATE WORKS

References to other sections, e.g. node are handled like this. Don't use the grave accents around the reference name, as it breaks the reference! Grave accents in Documenter.jl refer to docstrings in the code instead of sections in the documentation.

diff --git a/dev/concept_reference/archetypes/index.html b/dev/concept_reference/archetypes/index.html
index 40ef8afc6b..16e33a63c6 100644
--- a/dev/concept_reference/archetypes/index.html
+++ b/dev/concept_reference/archetypes/index.html
@@ -1,2 +1,2 @@

Archetypes

Archetypes are essentially ready-made templates for different aspects of SpineOpt.jl. They are intended to serve both as examples for how the data structure in SpineOpt.jl works, as well as pre-made modular parts that can be imported on top of existing model input data.

The templates/models/basic_model_template.json file contains a ready-made template for simple energy system models, with a uniform time resolution and a deterministic stochastic structure. Essentially, it serves as a basis for testing how the modelled system is set up, without having to worry about setting up the temporal and stochastic structures.

The rest of the different archetypes are included under templates/archetypes in the SpineOpt.jl repository. Each archetype is stored as a .json file containing the necessary objects, relationships, and parameters to form a functioning pre-made part for a SpineOpt.jl model. The archetypes aren't completely plug-and-play, as there are always some relationships required to connect the archetype to the other input data correctly. Regardless, the following sections explain the different archetypes included in the SpineOpt.jl repository, as well as what steps the user needs to take to connect said archetype to their input data correctly.
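As a rough sketch of what such a file contains, the snippet below builds a minimal archetype-like structure and serializes it to JSON. This is illustrative only: the key names and object names here are assumptions for the example, not copied from the actual archetype files.

```python
import json

# Hypothetical sketch of an archetype file's shape; the real files under
# templates/archetypes/ may use different keys and richer structures.
archetype = {
    "objects": [
        ["stochastic_structure", "deterministic"],
        ["stochastic_scenario", "realization"],
    ],
    "relationships": [
        ["stochastic_structure__stochastic_scenario",
         ["deterministic", "realization"]],
    ],
}

# Serialize the way an archetype file would be stored on disk.
text = json.dumps(archetype, indent=2)
print(text)
```

The point is simply that an archetype bundles the objects together with the relationships that tie them to each other; connecting them to the rest of your model still requires additional relationships, as described below.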

Loading the SpineOpt Template and Archetypes into Your Model

To load the latest version of the SpineOpt template, in the Spine DB Editor, from the menu (three horizontal bars in the top right), click on import as follows:

importing the SpineOpt Template

Change the file type to JSON and click on spineopt_template.json as follows:

importing the SpineOpt Template

Select spineopt_template.json and press Open. If you don't see spineopt_template.json, make sure you have navigated to Spine\SpineOpt.jl\templates.

Loading the latest version of the SpineOpt template in this way will update your datastore with the latest version of the data structure.

Branching Stochastic Tree

templates/archetypes/branching_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called branching, representing a branching scenario tree. The stochastic_structure starts out as a single stochastic_scenario called realistic, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. This archetype is the final product of following the steps in the Example of branching stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

Converging Stochastic Tree

templates/archetypes/converging_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called converging, representing a converging scenario tree (technically a directed acyclic graph, DAG). The stochastic_structure starts out as a single stochastic_scenario called realization, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. Then, after 24 hours (1 day), these three forecasts converge into a single stochastic_scenario called converged_forecast. This archetype is the final product of following the steps in the Example of converging stochastics part of the Stochastic Framework section.
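The parent-child scenario edges described above form a small graph, and the converging shape is valid precisely because that graph has no directed cycle. A minimal standalone check of that property (plain Python, not SpineOpt code; the scenario names mirror the description above):

```python
# Parent-child edges mirroring the converging archetype described above.
edges = [
    ("realization", "forecast1"),
    ("realization", "forecast2"),
    ("realization", "forecast3"),
    ("forecast1", "converged_forecast"),
    ("forecast2", "converged_forecast"),
    ("forecast3", "converged_forecast"),
]

def is_dag(edges):
    """Kahn's algorithm: True if the edge list contains no directed cycle."""
    nodes = {n for edge in edges for n in edge}
    indegree = {n: 0 for n in nodes}
    for _, child in edges:
        indegree[child] += 1
    ready = [n for n, d in indegree.items() if d == 0]
    visited = 0
    while ready:
        n = ready.pop()
        visited += 1
        for parent, child in edges:
            if parent == n:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    # Every node was ordered iff there was no cycle.
    return visited == len(nodes)

print(is_dag(edges))  # → True: converging forecasts are a valid DAG
```

A cyclic edge list such as `[("a", "b"), ("b", "a")]` would fail this check, which is why parent_stochastic_scenario__child_stochastic_scenario definitions must never loop back on themselves.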

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

Deterministic Stochastic Structure

templates/archetypes/deterministic_stochastic_structure.json

This archetype contains the definitions required for an example stochastic_structure called deterministic, representing a simple deterministic modelling case. The stochastic_structure contains only a single stochastic_scenario called realization, which continues indefinitely. This archetype is the final product of following the steps in the Example of deterministic stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

+Archetypes · SpineOpt.jl

Archetypes

Archetypes are essentially ready-made templates for different aspects of SpineOpt.jl. They are intended to serve both as examples for how the data structure in SpineOpt.jl works, as well as pre-made modular parts that can be imported on top of existing model input data.

The templates/models/basic_model_template.json contains a ready-made template for simple energy system models, with uniform time resolution and deterministic stochastic structure. Essentially, it serves as a basis for testing how the modelled system is set up, without having to worry about setting up the temporal and stochastic structures.

The rest of the different archetypes are included under templates/archetypes in the SpineOpt.jl repository. Each archetype is stored as a .json file containing the necessary objects, relationships, and parameters to form a functioning pre-made part for a SpineOpt.jl model. The archetypes aren't completely plug-and-play, as there are always some relationships required to connect the archetype to the other input data correctly. Regardless, the following sections explain the different archetypes included in the SpineOpt.jl repository, as well as what steps the user needs to take to connect said archetype to their input data correctly.

Loading the SpineOpt Template and Archetypes into Your Model

To load the latest version of the SpineOpt template, in the Spine DB Editor, from the menu (three horizontal bars in the top right), click on import as follows:

importing the SpineOpt Template

Change the file type to JSON and click on spineopt_template.json as follows:

importing the SpineOpt Template

Click on spineopttemplate.json and press Open. If you don't see spineopttemplate.json make sure you have navigated to Spine\SpineOpt.jl\templates.

Loading the latest version of the SpineOpt template in this way will update your datastore with the latest version of the data structure.

Branching Stochastic Tree

templates/archetypes/branching_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called branching, representing a branching scenario tree. The stochastic_structure starts out as a single stochastic_scenario called realistic, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. This archetype is the final product of following the steps in the Example of branching stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superceded by any conflicting definitions via e.g. the node__stochastic_structure.

Converging Stochastic Tree

templates/archetypes/converging_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called converging, representing a converging scenario tree (technically a directed acyclic graph DAG). The stochastic_structure starts out as a single stochastic_scenario called realization, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. Then, after 24 hours (1 day), these three forecasts converge into a single stochastic_scenario called converged_forecast. This archetype is the final product of following the steps in the Example of converging stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superceded by any conflicting definitions via e.g. the node__stochastic_structure.

Deterministic Stochastic Structure

templates/archetypes/deterministic_stochastic_structure.json

This archetype contains the definitions required for an example stochastic_structure called deterministic, representing a simple deterministic modelling case. The stochastic_structure contains only a single stochastic_scenario called realization, which continues indefinitely. This archetype is the final product of following the steps in the Example of deterministic stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default is superseded by any conflicting definitions via e.g. the node__stochastic_structure relationship.
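For reference, the core of such an archetype is compact. A minimal sketch of what the deterministic template defines, assuming the Spine importer JSON layout (the actual file may carry additional parameter values):

```json
{
    "objects": [
        ["stochastic_structure", "deterministic"],
        ["stochastic_scenario", "realization"]
    ],
    "relationships": [
        ["stochastic_structure__stochastic_scenario", ["deterministic", "realization"]]
    ]
}
```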

diff --git a/dev/concept_reference/balance_type/index.html b/dev/concept_reference/balance_type/index.html index 96d3afb708..b5aef27e8a 100644 --- a/dev/concept_reference/balance_type/index.html +++ b/dev/concept_reference/balance_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The balance_type parameter determines whether or not a node needs to be balanced, in the classical sense that the sum of flows entering the node is equal to the sum of flows leaving it.

The values balance_type_node (the default) and balance_type_group mean that the node is always balanced. The only exception is if the node belongs to a group that itself has balance_type equal to balance_type_group. The value balance_type_none means that the node doesn't need to be balanced.

diff --git a/dev/concept_reference/balance_type_list/index.html b/dev/concept_reference/balance_type_list/index.html index dd14d7c1b1..d282b63609 100644 --- a/dev/concept_reference/balance_type_list/index.html +++ b/dev/concept_reference/balance_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/big_m/index.html b/dev/concept_reference/big_m/index.html index 9d97958cc9..4c51f5919d 100644 --- a/dev/concept_reference/big_m/index.html +++ b/dev/concept_reference/big_m/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The big_m parameter is a property of the model object. The big-M method is commonly used to recast non-linear constraints into a mixed-integer reformulation. In SpineOpt, the big-M formulation is used to describe the sign of gas flow through a connection (if a pressure driven gas transfer model is used). The big_m parameter, in combination with the binary variable binary_gas_connection_flow, is used in the constraints on the gas flow capacity and the fixed node pressure points, and ensures that the average flow through a pipeline is only in one direction and is constrained by the fixed pressure points from the outer approximation of the Weymouth equation. See Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling for reference.

diff --git a/dev/concept_reference/block_end/index.html b/dev/concept_reference/block_end/index.html index be371be7fa..0e94e54301 100644 --- a/dev/concept_reference/block_end/index.html +++ b/dev/concept_reference/block_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Indicates the end of this temporal block. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a Date time value is chosen, this is directly the end of the optimization for this temporal block. In a single solve optimization, a combination of block_start and block_end can easily be used to run optimizations that cover only part of the model horizon. Multiple temporal_block objects can then be used to create optimizations for disconnected time periods, which is commonly used in the method of representative days. The default value coincides with the model_end.

rolling window optimization To create a temporal block that rolls along with the optimization window, a rolling temporal block, a duration value should be chosen. The block_end parameter will in this case determine the size of the optimization window, with respect to the start of each optimization window. If multiple temporal blocks with different block_end parameters exist, the maximum value will determine the size of the optimization window. Note that this is different from the roll_forward parameter, which determines how far the window rolls forward after each optimization. For more info, see One single temporal_block. The default value is equal to the roll_forward parameter.
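As a sketch, a 24-hour optimization window that rolls forward 6 hours at a time could be entered as follows. The object names and the importer JSON layout are illustrative assumptions, not part of the reference above:

```json
{
    "object_parameter_values": [
        ["model", "my_model", "roll_forward", {"type": "duration", "data": "6h"}],
        ["temporal_block", "operations", "block_end", {"type": "duration", "data": "24h"}]
    ]
}
```

Here each solve covers 24 hours from the current window start, and the window then moves 6 hours ahead for the next solve.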

diff --git a/dev/concept_reference/block_start/index.html b/dev/concept_reference/block_start/index.html index 52c8d6bf47..56a559f919 100644 --- a/dev/concept_reference/block_start/index.html +++ b/dev/concept_reference/block_start/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Indicates the start of this temporal block. The main use of this parameter is to create an offset from the model start. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a Date time value is chosen, this is directly the start of the optimization for this temporal block. When a duration is chosen, it is added to the model_start to obtain the start of this temporal_block. In the case of a duration, the chosen value directly marks the offset of the optimization with respect to the model_start. The default value for this parameter is the model_start.

rolling window optimization To create a temporal block that is rolling along with the optimization window, a rolling temporal block, a duration value should be chosen. The temporal block_start will again mark the offset of the optimization start but now with respect to the start of each optimization window.

diff --git a/dev/concept_reference/boolean_value_list/index.html b/dev/concept_reference/boolean_value_list/index.html index 66e7beb32d..74f8ac171d 100644 --- a/dev/concept_reference/boolean_value_list/index.html +++ b/dev/concept_reference/boolean_value_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A list of boolean values (True or False).

diff --git a/dev/concept_reference/candidate_connections/index.html b/dev/concept_reference/candidate_connections/index.html index b83c0a1622..0ec25e474a 100644 --- a/dev/concept_reference/candidate_connections/index.html +++ b/dev/concept_reference/candidate_connections/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/candidate_storages/index.html b/dev/concept_reference/candidate_storages/index.html index 6ae5e3c521..28f4b67213 100644 --- a/dev/concept_reference/candidate_storages/index.html +++ b/dev/concept_reference/candidate_storages/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investments problem candidate_storages determines the upper bound on the storages investment decision variable in constraint storages_invested_available. In constraint node_state_cap the maximum node state will be the product of the storages investment variable and node_state_cap. Thus, the interpretation of candidate_storages depends on storage_investment_variable_type which determines the investment decision variable type. If storage_investment_variable_type is integer or binary, then candidate_storages represents the maximum number of discrete storages of size node_state_cap that may be invested in at the corresponding node. If storage_investment_variable_type is continuous, candidate_storages is more analogous to a maximum storage capacity with node_state_cap being analogous to a scaling parameter.

Note that candidate_storages is the main investment switch: setting a value other than none/nothing triggers the creation of the investment variable for storages at the corresponding node. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful, since an inspection of the related dual variables will then yield the value of this resource.

See also Investment Optimization and storage_investment_variable_type
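A hedged sketch of enabling storage investments at a node, following the description above. The node name battery is illustrative, the importer JSON layout is assumed, and the exact spelling of the storage_investment_variable_type value should be checked against the SpineOpt value lists:

```json
{
    "object_parameter_values": [
        ["node", "battery", "candidate_storages", 10],
        ["node", "battery", "node_state_cap", 50.0],
        ["node", "battery", "storage_investment_variable_type", "storage_investment_variable_type_integer"]
    ]
}
```

With an integer variable type, this would allow investing in up to 10 discrete storages of 50 units of capacity each.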

diff --git a/dev/concept_reference/candidate_units/index.html b/dev/concept_reference/candidate_units/index.html index f71ecfd6ae..891ed3fa5b 100644 --- a/dev/concept_reference/candidate_units/index.html +++ b/dev/concept_reference/candidate_units/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investments problem candidate_units determines the upper bound on the unit investment decision variable in constraint units_invested_available. In constraint unit_flow_capacity the maximum unit_flow will be the product of the units_invested_available and the corresponding unit_capacity. Thus, the interpretation of candidate_units depends on unit_investment_variable_type which determines the unit investment decision variable type. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a maximum capacity.

Note that candidate_units is the main investment switch: setting a value other than none/nothing triggers the creation of the investment variable for the unit. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful, since an inspection of the related dual variables will then yield the value of this resource.

See also Investment Optimization and unit_investment_variable_type

diff --git a/dev/concept_reference/commodity/index.html b/dev/concept_reference/commodity/index.html index 4ec12550cb..5706182f0b 100644 --- a/dev/concept_reference/commodity/index.html +++ b/dev/concept_reference/commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Commodities correspond to the type of energy traded. When associated with a node through the node__commodity relationship, a specific form of energy, i.e. commodity, can be associated with a specific location. Furthermore, by linking commodities with units, it is possible to track the flows of a certain commodity and impose limitations on the use of a certain commodity (See also max_cum_in_unit_flow_bound). For the representation of specific commodity physics, related to e.g. the representation of the electric network, designated parameters can be defined to enforce commodity specific behaviour. (See also commodity_physics)

diff --git a/dev/concept_reference/commodity_lodf_tolerance/index.html b/dev/concept_reference/commodity_lodf_tolerance/index.html index eaecf381e0..804135b74c 100644 --- a/dev/concept_reference/commodity_lodf_tolerance/index.html +++ b/dev/concept_reference/commodity_lodf_tolerance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Given two connections, the line outage distribution factor (LODF) is the fraction of the pre-contingency flow on the first one, that will flow on the second after the contingency. commodity_lodf_tolerance is the minimum absolute value of the LODF that is considered meaningful. Any value below this tolerance (in absolute value) will be treated as zero.

The LODFs are used to model contingencies on some connections and their impact on some other connections. To model contingencies on a connection, set connection_contingency to true; to study the impact of such contingencies on another connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).

diff --git a/dev/concept_reference/commodity_physics/index.html b/dev/concept_reference/commodity_physics/index.html index fdb1f62503..baa22281b7 100644 --- a/dev/concept_reference/commodity_physics/index.html +++ b/dev/concept_reference/commodity_physics/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter determines the specific formulation used to carry out DC load flow within a model. To enable power transfer distribution factor (PTDF) based load flow for a network of nodes and connections, all nodes must be related to a commodity with commodity_physics set to commodity_physics_ptdf. To enable security constrained unit commitment based on PTDFs and line outage distribution factors (LODFs), all nodes must be related to a commodity with commodity_physics set to commodity_physics_lodf.

See also powerflow

diff --git a/dev/concept_reference/commodity_physics_duration/index.html b/dev/concept_reference/commodity_physics_duration/index.html index 0cfce5df27..d9b6901a22 100644 --- a/dev/concept_reference/commodity_physics_duration/index.html +++ b/dev/concept_reference/commodity_physics_duration/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter determines the duration, relative to the start of the optimisation window, over which the physics determined by commodity_physics should be applied. This is useful when the optimisation window includes a long look-ahead where the detailed physics are not necessary. In this case one can set commodity_physics_duration to a shorter value to reduce problem size and improve performance.

See also powerflow

diff --git a/dev/concept_reference/commodity_physics_list/index.html b/dev/concept_reference/commodity_physics_list/index.html index b5e4a73116..18510fc641 100644 --- a/dev/concept_reference/commodity_physics_list/index.html +++ b/dev/concept_reference/commodity_physics_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/commodity_ptdf_threshold/index.html b/dev/concept_reference/commodity_ptdf_threshold/index.html index 1da04bed17..735dc75d59 100644 --- a/dev/concept_reference/commodity_ptdf_threshold/index.html +++ b/dev/concept_reference/commodity_ptdf_threshold/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Given a connection and a node, the power transfer distribution factor (PTDF) is the fraction of the flow injected into the node that will flow on the connection. commodity_ptdf_threshold is the minimum absolute value of the PTDF that is considered meaningful. Any value below this threshold (in absolute value) will be treated as zero.

The PTDFs are used to model DC power flow on certain connections. To model DC power flow on a connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to either commodity_physics_ptdf or commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).

diff --git a/dev/concept_reference/compression_factor/index.html b/dev/concept_reference/compression_factor/index.html index e8594c5db8..d23415b7e6 100644 --- a/dev/concept_reference/compression_factor/index.html +++ b/dev/concept_reference/compression_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is specific to the use of pressure driven gas transfer. To represent a compression between two nodes in the gas network, the compression_factor can be defined. This factor ensures that the pressure of a node is equal to (or lower than) the pressure at the sending node times the compression_factor. The relationship connection__node__node that hosts this parameter should be defined in a way that the first node represents the origin node and the second node represents the compressed node.

diff --git a/dev/concept_reference/connection/index.html b/dev/concept_reference/connection/index.html index caa2006866..0a8eae7afd 100644 --- a/dev/concept_reference/connection/index.html +++ b/dev/concept_reference/connection/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A connection represents a transfer of one commodity over space. For example, an electricity transmission line, a gas pipe, a river branch, can be modelled using a connection.

A connection always takes commodities from one or more nodes, and releases them to one or more (possibly the same) nodes. The former are specified through the connection__from_node relationship, and the latter through connection__to_node. Every connection inherits the temporal and stochastic structures from the associated nodes. The model will generate connection_flow variables for every combination of connection, node, direction (from node or to node), time slice, and stochastic scenario, according to the above relationships.

The operation of the connection is specified through a number of parameter values. For example, the capacity of the connection, as the maximum amount of energy that can enter or leave it, is given by connection_capacity. The conversion ratio of input to output can be specified using any of fix_ratio_out_in_connection_flow, max_ratio_out_in_connection_flow, and min_ratio_out_in_connection_flow parameters in the connection__node__node relationship. The delay on a connection, as the time it takes for the energy to go from one end to the other, is given by connection_flow_delay.
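Putting these pieces together, a simple lossless line between two nodes might be entered as follows. The names line_ab, node_a, and node_b as well as the importer JSON layout are illustrative assumptions:

```json
{
    "objects": [["connection", "line_ab"]],
    "relationships": [
        ["connection__from_node", ["line_ab", "node_a"]],
        ["connection__to_node", ["line_ab", "node_b"]],
        ["connection__node__node", ["line_ab", "node_b", "node_a"]]
    ],
    "relationship_parameter_values": [
        ["connection__to_node", ["line_ab", "node_b"], "connection_capacity", 200.0],
        ["connection__node__node", ["line_ab", "node_b", "node_a"], "fix_ratio_out_in_connection_flow", 1.0]
    ]
}
```

The fix_ratio_out_in_connection_flow of 1.0 makes the transfer lossless, and connection_capacity caps the flow delivered to node_b at 200 units.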

diff --git a/dev/concept_reference/connection__from_node/index.html b/dev/concept_reference/connection__from_node/index.html index d909bdefdc..002371312a 100644 --- a/dev/concept_reference/connection__from_node/index.html +++ b/dev/concept_reference/connection__from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__from_node is a two-dimensional relationship between a connection and a node, and implies a connection_flow to the connection from the node. Specifying such a relationship will give rise to a connection_flow variable with indices connection=connection, node=node, direction=:from_node. Parameters defined on this relationship will generally apply to this specific flow variable. For example, connection_capacity will apply only to this specific flow variable, unless the connection parameter connection_type is specified.

diff --git a/dev/concept_reference/connection__from_node__unit_constraint/index.html b/dev/concept_reference/connection__from_node__unit_constraint/index.html index 516093465b..798de78649 100644 --- a/dev/concept_reference/connection__from_node__unit_constraint/index.html +++ b/dev/concept_reference/connection__from_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__from_node__user_constraint is a three-dimensional relationship between a connection, a node and a user_constraint. The relationship specifies that the connection_flow variable to the specified connection from the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific connection_flow variable. For example, the parameter connection_flow_coefficient defined on connection__from_node__user_constraint represents the coefficient on the specific connection_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/connection__investment_stochastic_structure/index.html b/dev/concept_reference/connection__investment_stochastic_structure/index.html index ab743cdc52..ea3310740f 100644 --- a/dev/concept_reference/connection__investment_stochastic_structure/index.html +++ b/dev/concept_reference/connection__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection__investment_temporal_block/index.html b/dev/concept_reference/connection__investment_temporal_block/index.html index ad4af8afe3..065db65a80 100644 --- a/dev/concept_reference/connection__investment_temporal_block/index.html +++ b/dev/concept_reference/connection__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__investment_temporal_block is a two-dimensional relationship between a connection and a temporal_block. This relationship defines the temporal resolution and scope of a connection's investment decision. Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no connection__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if connection__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified connection.

See also Investment Optimization

+- · SpineOpt.jl

connection__investment_temporal_block is a two-dimensional relationship between a connection and a temporal_block. This relationship defines the temporal resolution and scope of a connection's investment decision. Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no connection__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if connection__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified connection.

See also Investment Optimization

diff --git a/dev/concept_reference/connection__node__node/index.html b/dev/concept_reference/connection__node__node/index.html index 97a63e5131..761e3a1805 100644 --- a/dev/concept_reference/connection__node__node/index.html +++ b/dev/concept_reference/connection__node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__node__node is a three-dimensional relationship between a connection, a node (node 1) and another node (node 2). connection__node__node implies a conversion and a direction with respect to that conversion. Node 1 is assumed to be the input node and node 2 is assumed to be the output node. For example, the fix_ratio_out_in_connection_flow parameter defined on connection__node__node relates the output connection_flow to node 2 to the input connection_flow from node 1.

+- · SpineOpt.jl

connection__node__node is a three-dimensional relationship between a connection, a node (node 1) and another node (node 2). connection__node__node implies a conversion and a direction with respect to that conversion. Node 1 is assumed to be the input node and node 2 is assumed to be the output node. For example, the fix_ratio_out_in_connection_flow parameter defined on connection__node__node relates the output connection_flow to node 2 to the input connection_flow from node 1.

diff --git a/dev/concept_reference/connection__to_node/index.html b/dev/concept_reference/connection__to_node/index.html index 2b0222020d..c19bde00e9 100644 --- a/dev/concept_reference/connection__to_node/index.html +++ b/dev/concept_reference/connection__to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__to_node is a two-dimensional relationship between a connection and a node and implies a connection_flow from the connection to the node. Specifying such a relationship will give rise to a connection_flow variable with indices connection=connection, node=node, direction=:to_node. Parameters defined on this relationship will generally apply to this specific flow variable. For example, connection_capacity will apply only to this specific flow variable, unless the connection parameter connection_type is specified.

+- · SpineOpt.jl

connection__to_node is a two-dimensional relationship between a connection and a node and implies a connection_flow from the connection to the node. Specifying such a relationship will give rise to a connection_flow variable with indices connection=connection, node=node, direction=:to_node. Parameters defined on this relationship will generally apply to this specific flow variable. For example, connection_capacity will apply only to this specific flow variable, unless the connection parameter connection_type is specified.

diff --git a/dev/concept_reference/connection__to_node__unit_constraint/index.html b/dev/concept_reference/connection__to_node__unit_constraint/index.html index 386c7b3240..4deefd2f48 100644 --- a/dev/concept_reference/connection__to_node__unit_constraint/index.html +++ b/dev/concept_reference/connection__to_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__to_node__user_constraint is a three-dimensional relationship between a connection, a node and a user_constraint. The relationship specifies that the connection_flow variable from the specified connection to the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific connection_flow variable. For example, the parameter connection_flow_coefficient defined on connection__to_node__user_constraint represents the coefficient on the specific connection_flow variable in the specified user_constraint.

+- · SpineOpt.jl

connection__to_node__user_constraint is a three-dimensional relationship between a connection, a node and a user_constraint. The relationship specifies that the connection_flow variable from the specified connection to the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific connection_flow variable. For example, the parameter connection_flow_coefficient defined on connection__to_node__user_constraint represents the coefficient on the specific connection_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/connection_availability_factor/index.html b/dev/concept_reference/connection_availability_factor/index.html index 3d2e007a18..2ade59685b 100644 --- a/dev/concept_reference/connection_availability_factor/index.html +++ b/dev/concept_reference/connection_availability_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To indicate that a connection is only available to a certain extent or at certain times of the optimization, the connection_availability_factor can be used. A typical use case could be an availability time series for a connection with expected outage times. By default, the availability factor is set to 1. The availability factor is used, among others, in the constraint_connection_flow_capacity.

+- · SpineOpt.jl

To indicate that a connection is only available to a certain extent or at certain times of the optimization, the connection_availability_factor can be used. A typical use case could be an availability time series for a connection with expected outage times. By default, the availability factor is set to 1. The availability factor is used, among others, in the constraint_connection_flow_capacity.

diff --git a/dev/concept_reference/connection_capacity/index.html b/dev/concept_reference/connection_capacity/index.html index af1cdafba9..238d6f21a5 100644 --- a/dev/concept_reference/connection_capacity/index.html +++ b/dev/concept_reference/connection_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Defines the upper bound on the corresponding connection_flow variable. If the connection is a candidate connection, the effective connection_flow upper bound is the product of the investment variable connections_invested_available and connection_capacity. If PTDF-based DC load flow is enabled, connection_capacity represents the normal rating of a connection (line), while connection_emergency_capacity represents the maximum post-contingency flow.

+- · SpineOpt.jl

Defines the upper bound on the corresponding connection_flow variable. If the connection is a candidate connection, the effective connection_flow upper bound is the product of the investment variable connections_invested_available and connection_capacity. If PTDF-based DC load flow is enabled, connection_capacity represents the normal rating of a connection (line), while connection_emergency_capacity represents the maximum post-contingency flow.
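As an illustration of the product described above, here is a minimal Python sketch (illustrative only, not SpineOpt code; the function name is hypothetical):

```python
# Effective connection_flow upper bound, as described above (illustrative only).
# For a candidate connection, the bound is the product of the investment
# variable connections_invested_available and connection_capacity.
def effective_flow_upper_bound(connection_capacity, connections_invested_available=None):
    if connections_invested_available is None:
        # Existing (non-candidate) connection: the capacity applies directly.
        return connection_capacity
    return connections_invested_available * connection_capacity
```

So an uninvested candidate line (zero units available) admits no flow, while two invested units double the bound.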

diff --git a/dev/concept_reference/connection_contingency/index.html b/dev/concept_reference/connection_contingency/index.html index cad935be3e..615d312b7d 100644 --- a/dev/concept_reference/connection_contingency/index.html +++ b/dev/concept_reference/connection_contingency/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies that the connection in question is to be included as a contingency when security constrained unit commitment is enabled. When using security constrained unit commitment by setting commodity_physics to commodity_physics_lodf, an N-1 security constraint is created for each monitored line (connection_monitored = true) for each specified contingency (connection_contingency = true).

See also powerflow

+- · SpineOpt.jl

Specifies that the connection in question is to be included as a contingency when security constrained unit commitment is enabled. When using security constrained unit commitment by setting commodity_physics to commodity_physics_lodf, an N-1 security constraint is created for each monitored line (connection_monitored = true) for each specified contingency (connection_contingency = true).

See also powerflow

diff --git a/dev/concept_reference/connection_conv_cap_to_flow/index.html b/dev/concept_reference/connection_conv_cap_to_flow/index.html index b29eafe3d2..9fcc0afa86 100644 --- a/dev/concept_reference/connection_conv_cap_to_flow/index.html +++ b/dev/concept_reference/connection_conv_cap_to_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_emergency_capacity/index.html b/dev/concept_reference/connection_emergency_capacity/index.html index 669dc6945a..a094a2828c 100644 --- a/dev/concept_reference/connection_emergency_capacity/index.html +++ b/dev/concept_reference/connection_emergency_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_flow_coefficient/index.html b/dev/concept_reference/connection_flow_coefficient/index.html index 703b0dd644..7f88269eba 100644 --- a/dev/concept_reference/connection_flow_coefficient/index.html +++ b/dev/concept_reference/connection_flow_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_flow_cost/index.html b/dev/concept_reference/connection_flow_cost/index.html index f39cadda31..a0895b113c 100644 --- a/dev/concept_reference/connection_flow_cost/index.html +++ b/dev/concept_reference/connection_flow_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the connection_flow_cost parameter for a specific connection, a cost term will be added to the objective function that values all connection_flow variables associated with that connection during the current optimization window.

+- · SpineOpt.jl

By defining the connection_flow_cost parameter for a specific connection, a cost term will be added to the objective function that values all connection_flow variables associated with that connection during the current optimization window.

diff --git a/dev/concept_reference/connection_flow_delay/index.html b/dev/concept_reference/connection_flow_delay/index.html index d8db8929f6..dc76f62742 100644 --- a/dev/concept_reference/connection_flow_delay/index.html +++ b/dev/concept_reference/connection_flow_delay/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_investment_cost/index.html b/dev/concept_reference/connection_investment_cost/index.html index 69ab923ade..42803accd0 100644 --- a/dev/concept_reference/connection_investment_cost/index.html +++ b/dev/concept_reference/connection_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the connection_investment_cost parameter for a specific connection, a cost term will be added to the objective function whenever a connection investment is made during the current optimization window.

+- · SpineOpt.jl

By defining the connection_investment_cost parameter for a specific connection, a cost term will be added to the objective function whenever a connection investment is made during the current optimization window.

diff --git a/dev/concept_reference/connection_investment_lifetime/index.html b/dev/concept_reference/connection_investment_lifetime/index.html index 4b480d0e5b..2ae531676b 100644 --- a/dev/concept_reference/connection_investment_lifetime/index.html +++ b/dev/concept_reference/connection_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection_investment_lifetime is the minimum amount of time that a connection has to stay in operation once it is invested in. Only after that time can the connection be decommissioned. Note that connection_investment_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimization in each step; this may impact performance.

+- · SpineOpt.jl

connection_investment_lifetime is the minimum amount of time that a connection has to stay in operation once it is invested in. Only after that time can the connection be decommissioned. Note that connection_investment_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimization in each step; this may impact performance.

diff --git a/dev/concept_reference/connection_investment_variable_type/index.html b/dev/concept_reference/connection_investment_variable_type/index.html index 49e556b8bf..4611c502c4 100644 --- a/dev/concept_reference/connection_investment_variable_type/index.html +++ b/dev/concept_reference/connection_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The connection_investment_variable_type parameter represents the type of the connections_invested_available decision variable.

The default value, variable_type_integer, means that only integer factors of the connection_capacity can be invested in. The value variable_type_continuous means that any fractional factor can also be invested in. The value variable_type_binary means that only a factor of 1 or 0 is possible.

+- · SpineOpt.jl

The connection_investment_variable_type parameter represents the type of the connections_invested_available decision variable.

The default value, variable_type_integer, means that only integer factors of the connection_capacity can be invested in. The value variable_type_continuous means that any fractional factor can also be invested in. The value variable_type_binary means that only a factor of 1 or 0 is possible.

diff --git a/dev/concept_reference/connection_investment_variable_type_list/index.html b/dev/concept_reference/connection_investment_variable_type_list/index.html index f36cf4d85b..14269c8616 100644 --- a/dev/concept_reference/connection_investment_variable_type_list/index.html +++ b/dev/concept_reference/connection_investment_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_linepack_constant/index.html b/dev/concept_reference/connection_linepack_constant/index.html index 53486bf555..172b8e7d5a 100644 --- a/dev/concept_reference/connection_linepack_constant/index.html +++ b/dev/concept_reference/connection_linepack_constant/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The linepack constant is a physical property of a connection representing a pipeline and holds information on how the linepack flexibility relates to pressures of the adjacent nodes. If, and only if, this parameter is defined, the linepack flexibility of a pipeline can be modelled. The existence of the parameter triggers the generation of the constraint on line pack storage. The connection_linepack_constant should always be defined on the tuple (connection pipeline, linepack storage node, node group (containing both pressure nodes, i.e. start and end of the pipeline)). See also.

+- · SpineOpt.jl

The linepack constant is a physical property of a connection representing a pipeline and holds information on how the linepack flexibility relates to pressures of the adjacent nodes. If, and only if, this parameter is defined, the linepack flexibility of a pipeline can be modelled. The existence of the parameter triggers the generation of the constraint on line pack storage. The connection_linepack_constant should always be defined on the tuple (connection pipeline, linepack storage node, node group (containing both pressure nodes, i.e. start and end of the pipeline)). See also.

diff --git a/dev/concept_reference/connection_monitored/index.html b/dev/concept_reference/connection_monitored/index.html index 754adf11b0..ad212f9b57 100644 --- a/dev/concept_reference/connection_monitored/index.html +++ b/dev/concept_reference/connection_monitored/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_reactance/index.html b/dev/concept_reference/connection_reactance/index.html index 6a766dff0f..a909993bb3 100644 --- a/dev/concept_reference/connection_reactance/index.html +++ b/dev/concept_reference/connection_reactance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The per-unit reactance of a transmission line. Used in PTDF-based DC load flow, where the relative reactances of lines determine the PTDFs of the network, and in lossless DC power flow, where the flow on a line is given by flow = (theta_to - theta_from) / x, where x is the reactance of the line, theta_to is the voltage angle of the remote node and theta_from is the voltage angle of the sending node.

+- · SpineOpt.jl

The per-unit reactance of a transmission line. Used in PTDF-based DC load flow, where the relative reactances of lines determine the PTDFs of the network, and in lossless DC power flow, where the flow on a line is given by flow = (theta_to - theta_from) / x, where x is the reactance of the line, theta_to is the voltage angle of the remote node and theta_from is the voltage angle of the sending node.
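The formula above can be written as a small Python sketch (illustrative only; SpineOpt itself is Julia, and the function name is hypothetical):

```python
# Lossless DC power flow on a line: flow = (theta_to - theta_from) / x,
# where x is the per-unit reactance and the thetas are node voltage angles.
def dc_line_flow(theta_to, theta_from, x):
    return (theta_to - theta_from) / x
```

For example, a line with reactance 0.1 p.u. and an angle difference of 0.05 rad carries 0.5 p.u. of flow.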

diff --git a/dev/concept_reference/connection_reactance_base/index.html b/dev/concept_reference/connection_reactance_base/index.html index 1bc1cff195..7699f2d0c2 100644 --- a/dev/concept_reference/connection_reactance_base/index.html +++ b/dev/concept_reference/connection_reactance_base/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_resistance/index.html b/dev/concept_reference/connection_resistance/index.html index 793f2f8c73..d8027614e7 100644 --- a/dev/concept_reference/connection_resistance/index.html +++ b/dev/concept_reference/connection_resistance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The per unit resistance of a transmission line. Currently unimplemented!

+- · SpineOpt.jl

The per unit resistance of a transmission line. Currently unimplemented!

diff --git a/dev/concept_reference/connection_type/index.html b/dev/concept_reference/connection_type/index.html index 061f69c5cf..5d7fc68cac 100644 --- a/dev/concept_reference/connection_type/index.html +++ b/dev/concept_reference/connection_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to control specific pre-processing actions on connections. Currently, the primary purpose of connection_type is to simplify the data that is required to define a simple bi-directional, lossless line. If connection_type=:connection_type_lossless_bidirectional, it is only necessary to specify the following minimum data:

If connection_type=:connection_type_lossless_bidirectional, the following pre-processing actions are taken:

+- · SpineOpt.jl

Used to control specific pre-processing actions on connections. Currently, the primary purpose of connection_type is to simplify the data that is required to define a simple bi-directional, lossless line. If connection_type=:connection_type_lossless_bidirectional, it is only necessary to specify the following minimum data:

If connection_type=:connection_type_lossless_bidirectional, the following pre-processing actions are taken:

diff --git a/dev/concept_reference/connection_type_list/index.html b/dev/concept_reference/connection_type_list/index.html index 50a6ad7f07..f0af7aa54b 100644 --- a/dev/concept_reference/connection_type_list/index.html +++ b/dev/concept_reference/connection_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_avaiable_coefficient/index.html b/dev/concept_reference/connections_invested_avaiable_coefficient/index.html index 470a2f01e9..8f722c8a90 100644 --- a/dev/concept_reference/connections_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/connections_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_big_m_mga/index.html b/dev/concept_reference/connections_invested_big_m_mga/index.html index 194d000b86..2374cc8177 100644 --- a/dev/concept_reference/connections_invested_big_m_mga/index.html +++ b/dev/concept_reference/connections_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The connections_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_connections could suffice.)

+- · SpineOpt.jl

The connections_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_connections could suffice.)

diff --git a/dev/concept_reference/connections_invested_coefficient/index.html b/dev/concept_reference/connections_invested_coefficient/index.html index 61715fe6ed..0541c5d662 100644 --- a/dev/concept_reference/connections_invested_coefficient/index.html +++ b/dev/concept_reference/connections_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_mga/index.html b/dev/concept_reference/connections_invested_mga/index.html index 2e4df03ef3..3d5dcb2ba7 100644 --- a/dev/concept_reference/connections_invested_mga/index.html +++ b/dev/concept_reference/connections_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/constraint_sense/index.html b/dev/concept_reference/constraint_sense/index.html index a306bd2764..3f9c2bb8a2 100644 --- a/dev/concept_reference/constraint_sense/index.html +++ b/dev/concept_reference/constraint_sense/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/constraint_sense_list/index.html b/dev/concept_reference/constraint_sense_list/index.html index 7eff8f5877..b5aa9db4ea 100644 --- a/dev/concept_reference/constraint_sense_list/index.html +++ b/dev/concept_reference/constraint_sense_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/curtailment_cost/index.html b/dev/concept_reference/curtailment_cost/index.html index ddf33f7d61..58c2ecd476 100644 --- a/dev/concept_reference/curtailment_cost/index.html +++ b/dev/concept_reference/curtailment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the curtailment_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit's available capacity exceeds its activity (i.e., the unit_flow variable) over the course of the operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the curtailment_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit's available capacity exceeds its activity (i.e., the unit_flow variable) over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/cyclic_condition/index.html b/dev/concept_reference/cyclic_condition/index.html index e3865147af..25123fcb04 100644 --- a/dev/concept_reference/cyclic_condition/index.html +++ b/dev/concept_reference/cyclic_condition/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/db_lp_solver/index.html b/dev/concept_reference/db_lp_solver/index.html index a153f2f630..fa989b69f6 100644 --- a/dev/concept_reference/db_lp_solver/index.html +++ b/dev/concept_reference/db_lp_solver/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the Julia solver package to be used to solve linear programming problems (LPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Clp.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support LP problems. Solver options are specified using the db_lp_solver_options parameter for the model. Note also that if run_spineopt() is called with the lp_solver keyword argument specified, this will override this parameter.

+- · SpineOpt.jl

Specifies the Julia solver package to be used to solve linear programming problems (LPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Clp.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support LP problems. Solver options are specified using the db_lp_solver_options parameter for the model. Note also that if run_spineopt() is called with the lp_solver keyword argument specified, this will override this parameter.

diff --git a/dev/concept_reference/db_lp_solver_list/index.html b/dev/concept_reference/db_lp_solver_list/index.html index c64cb77db2..e9c253a30e 100644 --- a/dev/concept_reference/db_lp_solver_list/index.html +++ b/dev/concept_reference/db_lp_solver_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

List of supported LP solvers which may be specified for the db_lp_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Clp.jl) and is case sensitive.

+- · SpineOpt.jl

List of supported LP solvers which may be specified for the db_lp_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Clp.jl) and is case sensitive.

diff --git a/dev/concept_reference/db_lp_solver_options/index.html b/dev/concept_reference/db_lp_solver_options/index.html index eeb7e9664d..4ede04f60c 100644 --- a/dev/concept_reference/db_lp_solver_options/index.html +++ b/dev/concept_reference/db_lp_solver_options/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

LP solver options are specified for a model using the db_lp_solver_options parameter. This parameter value must take the form of a nested map where the outer key corresponds to the solver package name (case sensitive), e.g. Clp.jl. The inner map consists of option name and value pairs; see the example below. By default, the SpineOpt template contains some common parameters for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Figure: example db_lp_solver_options map parameter]

+- · SpineOpt.jl

LP solver options are specified for a model using the db_lp_solver_options parameter. This parameter value must take the form of a nested map where the outer key corresponds to the solver package name (case sensitive), e.g. Clp.jl. The inner map consists of option name and value pairs; see the example below. By default, the SpineOpt template contains some common parameters for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Figure: example db_lp_solver_options map parameter]
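The nested-map shape described above can be sketched as a plain Python dict (the option names here are assumptions for illustration, not a definitive list):

```python
# Outer key: solver package name (case sensitive); inner map: option/value pairs.
db_lp_solver_options = {
    "Clp.jl": {
        "LogLevel": 0,            # assumed option name, for illustration
        "PrimalTolerance": 1e-7,  # assumed option name, for illustration
    },
}
```

Consult the solver's own documentation for the actual option names it accepts.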

diff --git a/dev/concept_reference/db_mip_solver/index.html b/dev/concept_reference/db_mip_solver/index.html index 185e74afb8..35c047192a 100644 --- a/dev/concept_reference/db_mip_solver/index.html +++ b/dev/concept_reference/db_mip_solver/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the Julia solver package to be used to solve mixed integer programming problems (MIPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Cbc.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support MIP problems. Solver options are specified using the db_mip_solver_options parameter for the model. Note also that if run_spineopt() is called with the mip_solver keyword argument specified, this will override this parameter.

+- · SpineOpt.jl

Specifies the Julia solver package to be used to solve mixed integer programming problems (MIPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Cbc.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support MIP problems. Solver options are specified using the db_mip_solver_options parameter for the model. Note also that if run_spineopt() is called with the mip_solver keyword argument specified, this will override this parameter.

diff --git a/dev/concept_reference/db_mip_solver_list/index.html b/dev/concept_reference/db_mip_solver_list/index.html index a82115711f..d79458b76b 100644 --- a/dev/concept_reference/db_mip_solver_list/index.html +++ b/dev/concept_reference/db_mip_solver_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

List of supported MIP solvers which may be specified for the db_mip_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Cbc.jl) and is case sensitive.

+- · SpineOpt.jl

List of supported MIP solvers which may be specified for the db_mip_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Cbc.jl) and is case sensitive.

diff --git a/dev/concept_reference/db_mip_solver_options/index.html b/dev/concept_reference/db_mip_solver_options/index.html index 082135111a..9680340433 100644 --- a/dev/concept_reference/db_mip_solver_options/index.html +++ b/dev/concept_reference/db_mip_solver_options/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

MIP solver options are specified for a model using the db_mip_solver_options parameter. This parameter value must take the form of a nested map where the outer key corresponds to the solver package name (case sensitive), e.g. Cbc.jl. The inner map consists of option name and value pairs; see the example below. By default, the SpineOpt template contains some common parameters for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Figure: example db_mip_solver_options map parameter]

+- · SpineOpt.jl

MIP solver options are specified for a model using the db_mip_solver_options parameter. This parameter value must take the form of a nested map where the outer key corresponds to the solver package name (case sensitive), e.g. Cbc.jl. The inner map consists of option name and value pairs; see the example below. By default, the SpineOpt template contains some common parameters for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Figure: example db_mip_solver_options map parameter]

diff --git a/dev/concept_reference/demand/index.html b/dev/concept_reference/demand/index.html index 1206212bb5..de38f5c840 100644 --- a/dev/concept_reference/demand/index.html +++ b/dev/concept_reference/demand/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The demand parameter represents a "demand" or a "load" of a commodity on a node. It appears in the node injection constraint, with positive values interpreted as "demand" or "load" for the modelled system, while negative values provide the system with "influx" or "gain". When the node is part of a group, the fractional_demand parameter can be used to split demand into fractions, when desired. See also: Introduction to groups of objects
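A sketch of the group splitting (illustrative node names and numbers, not SpineOpt code): a group-level demand is distributed over the member nodes according to their fractional_demand values, which should sum to one:

```python
# Demand defined on a node group, split across member nodes by fraction.
group_demand = 100.0
fractional_demand = {"node_a": 0.25, "node_b": 0.75}  # fractions sum to 1

node_demand = {n: f * group_demand for n, f in fractional_demand.items()}
print(node_demand)  # {'node_a': 25.0, 'node_b': 75.0}
```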

The demand parameter can also be included in custom user_constraints using the demand_coefficient parameter for the node__user_constraint relationship.

diff --git a/dev/concept_reference/demand_coefficient/index.html b/dev/concept_reference/demand_coefficient/index.html index 20022222ee..8de7c9e67c 100644 --- a/dev/concept_reference/demand_coefficient/index.html +++ b/dev/concept_reference/demand_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/diff_coeff/index.html b/dev/concept_reference/diff_coeff/index.html index ddeb1d88b7..4efca7f6a5 100644 --- a/dev/concept_reference/diff_coeff/index.html +++ b/dev/concept_reference/diff_coeff/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The diff_coeff parameter represents diffusion of a commodity between the two nodes in the node__node relationship. It appears as a coefficient on the node_state variable in the node injection constraint, essentially representing diffusion power per unit of state. Note that the diff_coeff is interpreted as one-directional, meaning that if one defines

diff_coeff(node1=n1, node2=n2),

there will only be diffusion from n1 to n2, but not vice versa. Symmetric diffusion is likely desired in most cases, which requires defining the diff_coeff both ways:

diff_coeff(node1=n1, node2=n2) == diff_coeff(node1=n2, node2=n1).
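The one-directional interpretation can be sketched numerically (illustrative coefficient and states, not SpineOpt code): a node only receives diffusion along directions for which a diff_coeff is defined, with the received power equal to the coefficient times the state of the source node:

```python
# Only the direction n1 -> n2 is defined here.
diff_coeff = {("n1", "n2"): 0.5}  # diffusion power per unit of state
states = {"n1": 100.0, "n2": 40.0}

def diffusion_into(node, states, coeff):
    """Diffusion power received by `node` over all defined directions."""
    total = 0.0
    for (src, dst), c in coeff.items():
        if dst == node:
            total += c * states[src]
    return total

print(diffusion_into("n2", states, diff_coeff))  # 0.5 * 100.0 = 50.0
print(diffusion_into("n1", states, diff_coeff))  # no n2 -> n1 entry: 0.0
```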
diff --git a/dev/concept_reference/downward_reserve/index.html b/dev/concept_reference/downward_reserve/index.html index e6a6e23373..37b3cab9d7 100644 --- a/dev/concept_reference/downward_reserve/index.html +++ b/dev/concept_reference/downward_reserve/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

diff --git a/dev/concept_reference/duration_unit/index.html b/dev/concept_reference/duration_unit/index.html index 7ad38ea01b..43b4fa6829 100644 --- a/dev/concept_reference/duration_unit/index.html +++ b/dev/concept_reference/duration_unit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The duration_unit parameter specifies the base unit of time in a model. Two values are currently supported, hour and the default minute. E.g. if the duration_unit is set to hour, a Duration of one minute gets converted into 1/60 hours for the calculations.
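The conversion amounts to dividing a duration in minutes by the number of minutes in the chosen base unit (a sketch of the rule, not SpineOpt's internal implementation):

```python
MINUTES_PER_UNIT = {"minute": 1, "hour": 60}

def to_duration_unit(minutes, duration_unit="minute"):
    """Express a duration given in minutes in the model's duration_unit."""
    return minutes / MINUTES_PER_UNIT[duration_unit]

print(to_duration_unit(1, "hour"))   # 1 minute = 1/60 hours
print(to_duration_unit(90, "hour"))  # 1.5
print(to_duration_unit(90))          # default base unit minute: 90.0
```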

diff --git a/dev/concept_reference/duration_unit_list/index.html b/dev/concept_reference/duration_unit_list/index.html index e9749d6ace..180115bdd3 100644 --- a/dev/concept_reference/duration_unit_list/index.html +++ b/dev/concept_reference/duration_unit_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_binary_gas_connection_flow/index.html b/dev/concept_reference/fix_binary_gas_connection_flow/index.html index b1aa0c835b..94eb429224 100644 --- a/dev/concept_reference/fix_binary_gas_connection_flow/index.html +++ b/dev/concept_reference/fix_binary_gas_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connection_flow/index.html b/dev/concept_reference/fix_connection_flow/index.html index 846a5e3ab2..553bfa52d1 100644 --- a/dev/concept_reference/fix_connection_flow/index.html +++ b/dev/concept_reference/fix_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connection_intact_flow/index.html b/dev/concept_reference/fix_connection_intact_flow/index.html index d372ea0a8b..1cdf269360 100644 --- a/dev/concept_reference/fix_connection_intact_flow/index.html +++ b/dev/concept_reference/fix_connection_intact_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connections_invested/index.html b/dev/concept_reference/fix_connections_invested/index.html index 09356919a0..a2fefa77b1 100644 --- a/dev/concept_reference/fix_connections_invested/index.html +++ b/dev/concept_reference/fix_connections_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connections_invested_available/index.html b/dev/concept_reference/fix_connections_invested_available/index.html index a2b6c01707..bbe8dc73fd 100644 --- a/dev/concept_reference/fix_connections_invested_available/index.html +++ b/dev/concept_reference/fix_connections_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_node_pressure/index.html b/dev/concept_reference/fix_node_pressure/index.html index 95b0ed2e58..f2f21282ab 100644 --- a/dev/concept_reference/fix_node_pressure/index.html +++ b/dev/concept_reference/fix_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In a pressure-driven gas model, gas network nodes are associated with the node_pressure variable. To fix the pressure at a certain node or to provide initial conditions, the fix_node_pressure parameter can be used.

diff --git a/dev/concept_reference/fix_node_state/index.html b/dev/concept_reference/fix_node_state/index.html index 25a7557db2..4c3e255ba7 100644 --- a/dev/concept_reference/fix_node_state/index.html +++ b/dev/concept_reference/fix_node_state/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_node_state parameter simply fixes the value of the node_state variable to the provided value, if one is found. Common uses for the parameter include e.g. providing initial values for node_state variables, by fixing the value on the first modelled time step (or the value before the first modelled time step) using a TimeSeries type parameter value with an appropriate timestamp. Due to the way SpineOpt handles TimeSeries data, the node_state variables are only fixed for time steps with defined fix_node_state parameter values.
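The selective fixing can be sketched as follows (hypothetical timestamps and values): only time steps for which the TimeSeries defines a value have their node_state variable fixed, while all other time steps remain free:

```python
# fix_node_state as a TimeSeries: here only the first time step has a value,
# e.g. to set an initial storage state.
fix_node_state = {"2030-01-01T00:00": 50.0}

timesteps = ["2030-01-01T00:00", "2030-01-01T01:00", "2030-01-01T02:00"]
fixed = {t: fix_node_state[t] for t in timesteps if t in fix_node_state}
free = [t for t in timesteps if t not in fix_node_state]

print(fixed)  # only the first time step is fixed to 50.0
print(free)   # the remaining time steps stay free variables
```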

diff --git a/dev/concept_reference/fix_node_voltage_angle/index.html b/dev/concept_reference/fix_node_voltage_angle/index.html index 37cd3a213b..ee0a84a370 100644 --- a/dev/concept_reference/fix_node_voltage_angle/index.html +++ b/dev/concept_reference/fix_node_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For a lossless nodal DC power flow network, each node is associated with a node_voltage_angle variable. To fix the voltage angle at a certain node or to provide initial conditions, the fix_node_voltage_angle parameter can be used.

diff --git a/dev/concept_reference/fix_nonspin_units_shut_down/index.html b/dev/concept_reference/fix_nonspin_units_shut_down/index.html index c0b903dc00..67220e8731 100644 --- a/dev/concept_reference/fix_nonspin_units_shut_down/index.html +++ b/dev/concept_reference/fix_nonspin_units_shut_down/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_nonspin_units_shut_down parameter simply fixes the value of the nonspin_units_shut_down variable to the provided value. As such, it determines directly how many member units are involved in providing downward reserve commodity flows to the node to which it is linked by the unit__to_node relationship.

When a single value is selected, this value is kept constant throughout the model. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_nonspin_units_started_up/index.html b/dev/concept_reference/fix_nonspin_units_started_up/index.html index 79f586c874..065faa8e52 100644 --- a/dev/concept_reference/fix_nonspin_units_started_up/index.html +++ b/dev/concept_reference/fix_nonspin_units_started_up/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_nonspin_units_started_up parameter simply fixes the value of the nonspin_units_started_up variable to the provided value. As such, it determines directly how many member units are involved in providing upward reserve commodity flows to the node to which it is linked by the unit__to_node relationship.

When a single value is selected, this value is kept constant throughout the model. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html b/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html index c0185f2a6a..29ced3f77c 100644 --- a/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_in_in_unit_flow parameter triggers the generation of the constraint_fix_ratio_in_in_unit_flow and fixes the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a fixed share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the fix_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.
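Numerically, the example above fixes in1 = 0.8 · in2, where in1 is the incoming flow from supply_fuel_1 (the first node in the relationship) and in2 the incoming flow from the supply_fuel_2 group (illustrative numbers):

```python
fix_ratio_in_in = 0.8  # ratio of in1 over in2

in2 = 10.0                   # incoming flow from the supply_fuel_2 group
in1 = fix_ratio_in_in * in2  # incoming flow from supply_fuel_1 forced by the constraint
print(in1)  # in1 is forced to 80% of in2
```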

diff --git a/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html b/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html index f1b8295b0d..c6b8eed612 100644 --- a/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_in_out_unit_flow parameter triggers the generation of the constraint_fix_ratio_in_out_unit_flow and fixes the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flow to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order.

To enforce e.g. a fixed ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flows to the node group el_heat (consisting of the two nodes el and heat), the fix_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

To implement a piecewise linear ratio, the parameter should be specified as an array type. It is then used in conjunction with the unit parameter operating_points, which should also be defined as an array type of equal dimension. When defined as an array type, fix_ratio_in_out_unit_flow[i] is the effective incremental ratio between operating_points[i-1] (or zero if i=1) and operating_points[i]. Note that operating_points is defined on a capacity-normalized basis, so if operating_points is specified as [0.5, 1], this creates two operating segments, one from zero to 50% of the corresponding unit_capacity and a second from 50% to 100% of the corresponding unit_capacity. Note also that the formulation assumes a convex, monotonically increasing function. The formulation relies on optimality to load the segments in the correct order, and no additional integer variables are created to enforce the correct loading order.
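Under this interpretation, the input flow implied by a given output flow is obtained by filling the segments in order and applying each segment's incremental ratio. A sketch with illustrative numbers (convex, increasing ratios), not SpineOpt code:

```python
unit_capacity = 100.0
operating_points = [0.5, 1.0]  # capacity-normalized breakpoints: two segments
fix_ratio_in_out = [2.0, 2.5]  # incremental in/out ratio per segment

def input_flow(out):
    """Input flow implied by output flow `out`, loading segments in order."""
    total, prev = 0.0, 0.0
    for op, ratio in zip(operating_points, fix_ratio_in_out):
        seg_cap = (op - prev) * unit_capacity             # segment width
        seg_out = min(max(out - prev * unit_capacity, 0.0), seg_cap)
        total += ratio * seg_out
        prev = op
    return total

print(input_flow(50.0))  # first segment only: 2.0 * 50 = 100.0
print(input_flow(75.0))  # 2.0 * 50 + 2.5 * 25 = 162.5
```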

diff --git a/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html b/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html index c027726c3b..df36c43edb 100644 --- a/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_in_connection_flow parameter triggers the generation of the constraint_fix_ratio_out_in_connection_flow and fixes the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. In most cases the fix_ratio_out_in_connection_flow parameter is set to a value equal to or lower than 1, linking the flows entering the connection to the flows leaving it. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right order. The parameter can be used to e.g. account for losses over a connection in a certain direction.

To enforce e.g. a fixed ratio of 0.8 for a connection conn between its outgoing electricity flow to the node el1 and its incoming flow from the node el2, the fix_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__el1__el2.
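In that example the ratio acts as a transfer efficiency: the flow delivered to el1 is 0.8 times the flow drawn from el2, i.e. 20% of the flow is lost in that direction (illustrative numbers, not SpineOpt code):

```python
fix_ratio_out_in = 0.8  # out over in: 20% losses across the connection

flow_in = 100.0                        # flow entering the connection from el2
flow_out = fix_ratio_out_in * flow_in  # flow leaving the connection to el1
print(flow_out)            # 80.0
print(flow_in - flow_out)  # losses: 20.0
```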

diff --git a/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html b/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html index 1167414bf6..e6965186ef 100644 --- a/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_in_unit_flow parameter triggers the generation of the constraint_fix_ratio_out_in_unit_flow and fixes the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order.

To enforce e.g. a fixed ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the fix_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

diff --git a/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html b/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html index 1f60a22d76..d5d5a3303c 100644 --- a/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_out_unit_flow parameter triggers the generation of the constraint_fix_ratio_out_out_unit_flow and fixes the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent the to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a fixed ratio between two products of a unit u, e.g. fixing the share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the fix_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

diff --git a/dev/concept_reference/fix_storages_invested/index.html b/dev/concept_reference/fix_storages_invested/index.html index f99af73a9b..c15d94ef83 100644 --- a/dev/concept_reference/fix_storages_invested/index.html +++ b/dev/concept_reference/fix_storages_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_storages_invested_available/index.html b/dev/concept_reference/fix_storages_invested_available/index.html index 33743264d0..cdc14a97db 100644 --- a/dev/concept_reference/fix_storages_invested_available/index.html +++ b/dev/concept_reference/fix_storages_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used primarily to fix the value of the storages_invested_available variable which represents the storages investment decision variable and how many candidate storages are available at the corresponding node, time step and stochastic scenario. Used also in the decomposition framework to communicate the value of the master problem solution variables to the operational sub-problem.

See also candidate_storages and Investment Optimization

diff --git a/dev/concept_reference/fix_unit_flow/index.html b/dev/concept_reference/fix_unit_flow/index.html index 153753ef57..c7e6dc3993 100644 --- a/dev/concept_reference/fix_unit_flow/index.html +++ b/dev/concept_reference/fix_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_unit_flow parameter fixes the value of the unit_flow variable to the provided value, if the parameter is defined.

Common uses for the parameter include e.g. providing initial values for the unit_flow variable, by fixing the value on the first modelled time step (or the value before the first modelled time step) using a TimeSeries type parameter value with an appropriate timestamp. Due to the way SpineOpt handles TimeSeries data, the unit_flow variable is only fixed for time steps with defined fix_unit_flow parameter values.

Other uses can include e.g. a constant or time-varying exogenous commodity flow from or to a unit.

diff --git a/dev/concept_reference/fix_unit_flow_op/index.html b/dev/concept_reference/fix_unit_flow_op/index.html index 7a9faa83e8..c8e47bdc33 100644 --- a/dev/concept_reference/fix_unit_flow_op/index.html +++ b/dev/concept_reference/fix_unit_flow_op/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If operating_points is defined on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of sub-variables unit_flow_op, one for each operating point, with an additional index i to reference the specific operating point. fix_unit_flow_op can thus be used to fix the value of one or more of these variables as desired.
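The decomposition means that the aggregate unit_flow equals the sum of its unit_flow_op sub-variables; fixing one sub-variable via fix_unit_flow_op leaves the others free. A schematic sketch with illustrative values, not SpineOpt code:

```python
# unit_flow decomposed over operating points i = 1..3.
unit_flow_op = {1: 20.0, 2: 15.0, 3: 0.0}

# fix_unit_flow_op pins selected sub-variables, here operating point 2:
fix_unit_flow_op = {2: 15.0}
for i, value in fix_unit_flow_op.items():
    unit_flow_op[i] = value  # fixed sub-variables take the provided value

unit_flow = sum(unit_flow_op.values())  # the aggregate flow variable
print(unit_flow)  # 35.0
```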

diff --git a/dev/concept_reference/fix_units_invested/index.html b/dev/concept_reference/fix_units_invested/index.html index e834b9887a..a55290b882 100644 --- a/dev/concept_reference/fix_units_invested/index.html +++ b/dev/concept_reference/fix_units_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_units_invested_available/index.html b/dev/concept_reference/fix_units_invested_available/index.html index 9fb4c1728f..4ae4203ffb 100644 --- a/dev/concept_reference/fix_units_invested_available/index.html +++ b/dev/concept_reference/fix_units_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used primarily to fix the value of the units_invested_available variable which represents the unit investment decision variable and how many candidate units are invested-in and available at the corresponding node, time step and stochastic scenario. Used also in the decomposition framework to communicate the value of the master problem solution variables to the operational sub-problem.

See also Investment Optimization, candidate_units and unit_investment_variable_type

+- · SpineOpt.jl

Used primarily to fix the value of the units_invested_available variable, which represents the unit investment decision, i.e. how many candidate units are invested in and available at the corresponding node, time step and stochastic scenario. Also used in the decomposition framework to communicate the value of the master problem solution variables to the operational sub-problem.

See also Investment Optimization, candidate_units and unit_investment_variable_type

diff --git a/dev/concept_reference/fix_units_on/index.html b/dev/concept_reference/fix_units_on/index.html index 7343c5b119..18050d758a 100644 --- a/dev/concept_reference/fix_units_on/index.html +++ b/dev/concept_reference/fix_units_on/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on parameter simply fixes the value of the units_on variable to the provided value. As such, it determines directly how many members of the specific unit will be online throughout the model when a single value is selected. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

+- · SpineOpt.jl

The fix_units_on parameter simply fixes the value of the units_on variable to the provided value. As such, it determines directly how many members of the specific unit will be online throughout the model when a single value is selected. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_units_on_coefficient_in_in/index.html b/dev/concept_reference/fix_units_on_coefficient_in_in/index.html index f9d67df983..19eea786ed 100644 --- a/dev/concept_reference/fix_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the fix_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_in and max_units_on_coefficient_in_in.

+- · SpineOpt.jl

The fix_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the fix_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_in and max_units_on_coefficient_in_in.

diff --git a/dev/concept_reference/fix_units_on_coefficient_in_out/index.html b/dev/concept_reference/fix_units_on_coefficient_in_out/index.html index a46484d0f0..64cc83ad7a 100644 --- a/dev/concept_reference/fix_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the fix_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_out and max_units_on_coefficient_in_out.

+- · SpineOpt.jl

The fix_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the fix_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_out and max_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/fix_units_on_coefficient_out_in/index.html b/dev/concept_reference/fix_units_on_coefficient_out_in/index.html index 0336a3e20f..9fa30ed84a 100644 --- a/dev/concept_reference/fix_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the fix_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_in and max_units_on_coefficient_out_in.

+- · SpineOpt.jl

The fix_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the fix_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_in and max_units_on_coefficient_out_in.

diff --git a/dev/concept_reference/fix_units_on_coefficient_out_out/index.html b/dev/concept_reference/fix_units_on_coefficient_out_out/index.html index f950e00c5b..f216faa4eb 100644 --- a/dev/concept_reference/fix_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the fix_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_out and max_units_on_coefficient_out_out.

+- · SpineOpt.jl

The fix_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the fix_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_out and max_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/fixed_pressure_constant_0/index.html b/dev/concept_reference/fixed_pressure_constant_0/index.html index f5588985a0..a33668e38c 100644 --- a/dev/concept_reference/fixed_pressure_constant_0/index.html +++ b/dev/concept_reference/fixed_pressure_constant_0/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction dependent on the adjacent node pressures. The second fixed pressure constant, which will be multiplied with the pressure of the destination node, is represented by an Array value of the fixed_pressure_constant_0. The first pressure constant corresponds to the related parameter fixed_pressure_constant_1. Note that the fixed_pressure_constant_0 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_1 for both directions of flow.

+- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction dependent on the adjacent node pressures. The second fixed pressure constant, which will be multiplied with the pressure of the destination node, is represented by an Array value of the fixed_pressure_constant_0. The first pressure constant corresponds to the related parameter fixed_pressure_constant_1. Note that the fixed_pressure_constant_0 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_1 for both directions of flow.

diff --git a/dev/concept_reference/fixed_pressure_constant_1/index.html b/dev/concept_reference/fixed_pressure_constant_1/index.html index d1391a8613..286e2c5a23 100644 --- a/dev/concept_reference/fixed_pressure_constant_1/index.html +++ b/dev/concept_reference/fixed_pressure_constant_1/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction dependent on the adjacent node pressures. The first fixed pressure constant, which will be multiplied with the pressure of the origin node, is represented by an Array value of the fixed_pressure_constant_1. The second pressure constant corresponds to the related parameter fixed_pressure_constant_0. Note that the fixed_pressure_constant_1 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_1 for both directions of flow.

+- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction dependent on the adjacent node pressures. The first fixed pressure constant, which will be multiplied with the pressure of the origin node, is represented by an Array value of the fixed_pressure_constant_1. The second pressure constant corresponds to the related parameter fixed_pressure_constant_0. Note that the fixed_pressure_constant_1 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_1 for both directions of flow.

diff --git a/dev/concept_reference/fom_cost/index.html b/dev/concept_reference/fom_cost/index.html index 9d06f8e60c..c5195656ab 100644 --- a/dev/concept_reference/fom_cost/index.html +++ b/dev/concept_reference/fom_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the fom_cost parameter for a specific unit, a cost term will be added to the objective function to account for the fixed operation and maintenance costs associated with that unit during the current optimization window. fom_cost differs from units_on_cost in that the fixed operation and maintenance costs apply to the unit whether it is online or offline.

+- · SpineOpt.jl

By defining the fom_cost parameter for a specific unit, a cost term will be added to the objective function to account for the fixed operation and maintenance costs associated with that unit during the current optimization window. fom_cost differs from units_on_cost in that the fixed operation and maintenance costs apply to the unit whether it is online or offline.

diff --git a/dev/concept_reference/frac_state_loss/index.html b/dev/concept_reference/frac_state_loss/index.html index 5fc1d58dc5..7fa8655cc8 100644 --- a/dev/concept_reference/frac_state_loss/index.html +++ b/dev/concept_reference/frac_state_loss/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The frac_state_loss parameter allows setting self-discharge losses for nodes that have the node_state variable enabled via the has_state parameter. Effectively, the frac_state_loss parameter acts as a coefficient on the node_state variable in the node injection constraint, imposing losses for the node. In simple cases, storage losses are typically fractional, e.g. a frac_state_loss parameter value of 0.01 would represent 1% of node_state lost per unit of time. However, a more general definition of what the frac_state_loss parameter represents in SpineOpt would be loss power per unit of node_state.

+- · SpineOpt.jl

The frac_state_loss parameter allows setting self-discharge losses for nodes that have the node_state variable enabled via the has_state parameter. Effectively, the frac_state_loss parameter acts as a coefficient on the node_state variable in the node injection constraint, imposing losses for the node. In simple cases, storage losses are typically fractional, e.g. a frac_state_loss parameter value of 0.01 would represent 1% of node_state lost per unit of time. However, a more general definition of what the frac_state_loss parameter represents in SpineOpt would be loss power per unit of node_state.

diff --git a/dev/concept_reference/fractional_demand/index.html b/dev/concept_reference/fractional_demand/index.html index 78fe51675b..1527dbfafd 100644 --- a/dev/concept_reference/fractional_demand/index.html +++ b/dev/concept_reference/fractional_demand/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/fuel_cost/index.html b/dev/concept_reference/fuel_cost/index.html index 7c7dc0c870..8952e4a225 100644 --- a/dev/concept_reference/fuel_cost/index.html +++ b/dev/concept_reference/fuel_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the fuel_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for costs associated with the unit's fuel usage over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the fuel_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for costs associated with the unit's fuel usage over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/graph_view_position/index.html b/dev/concept_reference/graph_view_position/index.html index 968e22aa10..ac02afc5d4 100644 --- a/dev/concept_reference/graph_view_position/index.html +++ b/dev/concept_reference/graph_view_position/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The graph_view_position parameter can be used to fix the positions of various objects and relationships when plotted using the Spine Toolbox Graph View. If not defined, Spine Toolbox simply plots the element in question wherever it sees fit in the graph.

+- · SpineOpt.jl

The graph_view_position parameter can be used to fix the positions of various objects and relationships when plotted using the Spine Toolbox Graph View. If not defined, Spine Toolbox simply plots the element in question wherever it sees fit in the graph.

diff --git a/dev/concept_reference/has_binary_gas_flow/index.html b/dev/concept_reference/has_binary_gas_flow/index.html index a930a6d762..462ce636be 100644 --- a/dev/concept_reference/has_binary_gas_flow/index.html +++ b/dev/concept_reference/has_binary_gas_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is necessary for the use of pressure driven gas transfer, for which the direction of flow is not known a priori. The parameter has_binary_gas_flow is a boolean method parameter which, when set to true, triggers the generation of the binary variable binary_gas_connection_flow, which (together with the big_m parameter) forces the average flow through a pipeline to be unidirectional.

+- · SpineOpt.jl

This parameter is necessary for the use of pressure driven gas transfer, for which the direction of flow is not known a priori. The parameter has_binary_gas_flow is a boolean method parameter which, when set to true, triggers the generation of the binary variable binary_gas_connection_flow, which (together with the big_m parameter) forces the average flow through a pipeline to be unidirectional.

diff --git a/dev/concept_reference/has_pressure/index.html b/dev/concept_reference/has_pressure/index.html index c180d05493..ed4fd617f8 100644 --- a/dev/concept_reference/has_pressure/index.html +++ b/dev/concept_reference/has_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node is to represent a node in a pressure driven gas network, the boolean parameter has_pressure should be set to true in order to trigger the generation of the node_pressure variable. The pressure at a certain node can also be constrained through the parameters max_node_pressure and min_node_pressure. More details on the use of pressure driven gas transfer are described here.

+- · SpineOpt.jl

If a node is to represent a node in a pressure driven gas network, the boolean parameter has_pressure should be set to true in order to trigger the generation of the node_pressure variable. The pressure at a certain node can also be constrained through the parameters max_node_pressure and min_node_pressure. More details on the use of pressure driven gas transfer are described here.

diff --git a/dev/concept_reference/has_state/index.html b/dev/concept_reference/has_state/index.html index 06419046c5..9219b0491a 100644 --- a/dev/concept_reference/has_state/index.html +++ b/dev/concept_reference/has_state/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The has_state parameter is simply a Bool flag for whether a node has a node_state variable. By default, it is set to false, so the node enforces instantaneous commodity balance according to the nodal balance and node injection constraints. If set to true, the node will have a node_state variable generated for it, allowing for commodity storage at the node. Note that you'll also have to specify a value for the state_coeff parameter, as otherwise the node_state variable has zero commodity capacity.

+- · SpineOpt.jl

The has_state parameter is simply a Bool flag for whether a node has a node_state variable. By default, it is set to false, so the node enforces instantaneous commodity balance according to the nodal balance and node injection constraints. If set to true, the node will have a node_state variable generated for it, allowing for commodity storage at the node. Note that you'll also have to specify a value for the state_coeff parameter, as otherwise the node_state variable has zero commodity capacity.

diff --git a/dev/concept_reference/has_voltage_angle/index.html b/dev/concept_reference/has_voltage_angle/index.html index e2dc71374b..bf0febc9f9 100644 --- a/dev/concept_reference/has_voltage_angle/index.html +++ b/dev/concept_reference/has_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the use of node-based lossless DC power flow, each node will be associated with a node_voltage_angle variable. To enable the generation of the variable in the optimization model, the boolean parameter has_voltage_angle should be set to true. The voltage angle at a certain node can also be constrained through the parameters max_voltage_angle and min_voltage_angle. More details on the use of lossless nodal DC power flows are described here.

+- · SpineOpt.jl

For the use of node-based lossless DC power flow, each node will be associated with a node_voltage_angle variable. To enable the generation of the variable in the optimization model, the boolean parameter has_voltage_angle should be set to true. The voltage angle at a certain node can also be constrained through the parameters max_voltage_angle and min_voltage_angle. More details on the use of lossless nodal DC power flows are described here.

diff --git a/dev/concept_reference/investment_group/index.html b/dev/concept_reference/investment_group/index.html index 6e1063c6d1..583b91b3ee 100644 --- a/dev/concept_reference/investment_group/index.html +++ b/dev/concept_reference/investment_group/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The investment_group class represents a group of investments that need to be done together. For example, a storage investment on a node might only make sense if done together with a unit or a connection investment.

To use this functionality, you must first create an investment_group and then specify any number of unit__investment_group, node__investment_group, and/or connection__investment_group relationships between your investment_group and the unit, node, and/or connection investments that you want to be done together. This will ensure that the investment variables of all the entities in the investment_group have the same value.

+- · SpineOpt.jl

The investment_group class represents a group of investments that need to be done together. For example, a storage investment on a node might only make sense if done together with a unit or a connection investment.

To use this functionality, you must first create an investment_group and then specify any number of unit__investment_group, node__investment_group, and/or connection__investment_group relationships between your investment_group and the unit, node, and/or connection investments that you want to be done together. This will ensure that the investment variables of all the entities in the investment_group have the same value.

diff --git a/dev/concept_reference/is_active/index.html b/dev/concept_reference/is_active/index.html index 23c80bf1c3..115b2f0edc 100644 --- a/dev/concept_reference/is_active/index.html +++ b/dev/concept_reference/is_active/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

is_active is a universal utility parameter that is defined for every object class. When used in conjunction with the activity_control feature, the is_active parameter allows one to control whether or not a specific object is active within a model.

+- · SpineOpt.jl

is_active is a universal utility parameter that is defined for every object class. When used in conjunction with the activity_control feature, the is_active parameter allows one to control whether or not a specific object is active within a model.

diff --git a/dev/concept_reference/is_non_spinning/index.html b/dev/concept_reference/is_non_spinning/index.html index f29a3ba832..ccf86dde00 100644 --- a/dev/concept_reference/is_non_spinning/index.html +++ b/dev/concept_reference/is_non_spinning/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By setting the parameter is_non_spinning to true, a node is treated as a non-spinning reserve node. Note that this is only to differentiate spinning from non-spinning reserves. It is still necessary to set is_reserve_node to true. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

+- · SpineOpt.jl

By setting the parameter is_non_spinning to true, a node is treated as a non-spinning reserve node. Note that this is only to differentiate spinning from non-spinning reserves. It is still necessary to set is_reserve_node to true. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

diff --git a/dev/concept_reference/is_renewable/index.html b/dev/concept_reference/is_renewable/index.html index 0c560f8c42..16c2f913eb 100644 --- a/dev/concept_reference/is_renewable/index.html +++ b/dev/concept_reference/is_renewable/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A boolean value indicating whether a unit is a renewable energy source (RES). If true, then the unit contributes to the share of the demand that is supplied by RES in the context of mp_min_res_gen_to_demand_ratio.

+- · SpineOpt.jl

A boolean value indicating whether a unit is a renewable energy source (RES). If true, then the unit contributes to the share of the demand that is supplied by RES in the context of mp_min_res_gen_to_demand_ratio.

diff --git a/dev/concept_reference/is_reserve_node/index.html b/dev/concept_reference/is_reserve_node/index.html index 4eb92eda98..1ad557021e 100644 --- a/dev/concept_reference/is_reserve_node/index.html +++ b/dev/concept_reference/is_reserve_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By setting the parameter is_reserve_node to true, a node is treated as a reserve node in the model. Units that are linked through a unit__to_node relationship will be able to provide balancing services to the reserve node, but within their technical feasibility. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

+- · SpineOpt.jl

By setting the parameter is_reserve_node to true, a node is treated as a reserve node in the model. Units that are linked through a unit__to_node relationship will be able to provide balancing services to the reserve node, but within their technical feasibility. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

diff --git a/dev/concept_reference/max_cum_in_unit_flow_bound/index.html b/dev/concept_reference/max_cum_in_unit_flow_bound/index.html index e14fca5fb1..e60b2fc4ba 100644 --- a/dev/concept_reference/max_cum_in_unit_flow_bound/index.html +++ b/dev/concept_reference/max_cum_in_unit_flow_bound/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To impose a limit on the cumulative inflows to a unit for the entire modelling horizon, e.g. to enforce limits on emissions, the max_cum_in_unit_flow_bound parameter can be used. Defining this parameter triggers the generation of the constraint_max_cum_in_unit_flow_bound.

Assuming for instance that the total intake of a unit u_A should not exceed 10 MWh for the entire modelling horizon, the max_cum_in_unit_flow_bound would need to take the value 10 (assuming here that the unit_flow variable is in MW and the model duration_unit is hours).

+- · SpineOpt.jl

To impose a limit on the cumulative inflows to a unit for the entire modelling horizon, e.g. to enforce limits on emissions, the max_cum_in_unit_flow_bound parameter can be used. Defining this parameter triggers the generation of the constraint_max_cum_in_unit_flow_bound.

Assuming for instance that the total intake of a unit u_A should not exceed 10 MWh for the entire modelling horizon, the max_cum_in_unit_flow_bound would need to take the value 10 (assuming here that the unit_flow variable is in MW and the model duration_unit is hours).

diff --git a/dev/concept_reference/max_gap/index.html b/dev/concept_reference/max_gap/index.html index b591522624..9dc4c8532c 100644 --- a/dev/concept_reference/max_gap/index.html +++ b/dev/concept_reference/max_gap/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This determines the optimality convergence criterion and is the Benders gap tolerance for the master problem in a decomposed investments model. The Benders gap is the relative difference between the current objective function upper bound (zupper) and lower bound (zlower), defined as 2*(zupper - zlower)/(zupper + zlower). When this value is lower than max_gap, the Benders algorithm will terminate, having achieved satisfactory optimality.

+- · SpineOpt.jl

This determines the optimality convergence criterion and is the Benders gap tolerance for the master problem in a decomposed investments model. The Benders gap is the relative difference between the current objective function upper bound (zupper) and lower bound (zlower), defined as 2*(zupper - zlower)/(zupper + zlower). When this value is lower than max_gap, the Benders algorithm will terminate, having achieved satisfactory optimality.

diff --git a/dev/concept_reference/max_iterations/index.html b/dev/concept_reference/max_iterations/index.html index 8ba82e36ea..12bf896e9e 100644 --- a/dev/concept_reference/max_iterations/index.html +++ b/dev/concept_reference/max_iterations/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

When the model in question is of type :spineopt_benders_master, this determines the maximum number of Benders iterations.

+- · SpineOpt.jl

When the model in question is of type :spineopt_benders_master, this determines the maximum number of Benders iterations.

diff --git a/dev/concept_reference/max_mga_iterations/index.html b/dev/concept_reference/max_mga_iterations/index.html index 7395e5d2ee..7743d67072 100644 --- a/dev/concept_reference/max_mga_iterations/index.html +++ b/dev/concept_reference/max_mga_iterations/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In the MGA algorithm, the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_iterations defines how many MGA iterations will be performed, i.e. how many near-optimal solutions will be generated.

+- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_iterations defines how many MGA iterations will be performed, i.e. how many near-optimal solutions will be generated.

diff --git a/dev/concept_reference/max_mga_slack/index.html b/dev/concept_reference/max_mga_slack/index.html index c9deb93ba2..71b979b8ce 100644 --- a/dev/concept_reference/max_mga_slack/index.html +++ b/dev/concept_reference/max_mga_slack/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_slack defines how far from the optimum the new solutions can be at most (e.g. a value of 0.05 would allow for a 5% increase of the original objective value).

+- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_slack defines how far from the optimum the new solutions can be at most (e.g. a value of 0.05 would allow for a 5% increase of the original objective value).

diff --git a/dev/concept_reference/max_node_pressure/index.html b/dev/concept_reference/max_node_pressure/index.html index b3ded166a3..6d69692b8e 100644 --- a/dev/concept_reference/max_node_pressure/index.html +++ b/dev/concept_reference/max_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/max_ratio_in_in_unit_flow/index.html b/dev/concept_reference/max_ratio_in_in_unit_flow/index.html index e58559bab9..79b7e04b03 100644 --- a/dev/concept_reference/max_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_in_in_unit_flow parameter triggers the generation of the constraint_max_ratio_in_in_unit_flow and enforces an upper bound on the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance, if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a maximum share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the max_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.

+- · SpineOpt.jl

The definition of the max_ratio_in_in_unit_flow parameter triggers the generation of the constraint_max_ratio_in_in_unit_flow and enforces an upper bound on the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance, if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a maximum share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the max_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.
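As a sketch of the bound this parameter imposes: the constraint requires the first incoming flow to be at most max_ratio times the second at every timestep. The snippet below is illustrative only (not SpineOpt code); the helper `satisfies_max_ratio_in_in` and the flow numbers are hypothetical.

```python
# Illustrative check (not SpineOpt source): max_ratio_in_in requires
# flow_in1[t] <= max_ratio * flow_in2[t] for every timestep t.
def satisfies_max_ratio_in_in(flow_in1, flow_in2, max_ratio):
    return all(f1 <= max_ratio * f2 for f1, f2 in zip(flow_in1, flow_in2))

# Hypothetical flows for the 0.8 example above: supply_fuel_1 vs supply_fuel_2.
print(satisfies_max_ratio_in_in([4.0, 7.9], [5.0, 10.0], 0.8))   # True
print(satisfies_max_ratio_in_in([9.0, 2.0], [10.0, 10.0], 0.8))  # False
```

In the first case both timesteps respect the 0.8 share (4 ≤ 0.8·5 and 7.9 ≤ 0.8·10); in the second, the first timestep violates it (9 > 0.8·10).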

diff --git a/dev/concept_reference/max_ratio_in_out_unit_flow/index.html b/dev/concept_reference/max_ratio_in_out_unit_flow/index.html index b4e4554039..864f43f544 100644 --- a/dev/concept_reference/max_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_in_out_unit_flow parameter triggers the generation of the constraint_max_ratio_in_out_unit_flow and sets an upper bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flows to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the max_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

+- · SpineOpt.jl

The definition of the max_ratio_in_out_unit_flow parameter triggers the generation of the constraint_max_ratio_in_out_unit_flow and sets an upper bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flows to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the max_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

diff --git a/dev/concept_reference/max_ratio_out_in_connection_flow/index.html b/dev/concept_reference/max_ratio_out_in_connection_flow/index.html index 2ac2229c79..211d490f56 100644 --- a/dev/concept_reference/max_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/max_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_in_connection_flow parameter triggers the generation of the constraint_max_ratio_out_in_connection_flow and sets an upper bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flows from the node commodity2, the max_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

Note that the ratio can also be defined for connection__node__node relationships where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

+- · SpineOpt.jl

The definition of the max_ratio_out_in_connection_flow parameter triggers the generation of the constraint_max_ratio_out_in_connection_flow and sets an upper bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flows from the node commodity2, the max_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

Note that the ratio can also be defined for connection__node__node relationships where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

diff --git a/dev/concept_reference/max_ratio_out_in_unit_flow/index.html b/dev/concept_reference/max_ratio_out_in_unit_flow/index.html index c03f477585..73387129eb 100644 --- a/dev/concept_reference/max_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_in_unit_flow parameter triggers the generation of the constraint_max_ratio_out_in_unit_flow and enforces an upper bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the max_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

+- · SpineOpt.jl

The definition of the max_ratio_out_in_unit_flow parameter triggers the generation of the constraint_max_ratio_out_in_unit_flow and enforces an upper bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a maximum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the max_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

diff --git a/dev/concept_reference/max_ratio_out_out_unit_flow/index.html b/dev/concept_reference/max_ratio_out_out_unit_flow/index.html index cdc16ceac0..bd65520d0c 100644 --- a/dev/concept_reference/max_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_out_unit_flow parameter triggers the generation of the constraint_max_ratio_out_out_unit_flow and sets an upper bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a maximum ratio between two products of a unit u, e.g. setting the maximum share of produced electricity flowing to node el to 0.4 of the produced heat flowing to node heat, the max_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

+- · SpineOpt.jl

The definition of the max_ratio_out_out_unit_flow parameter triggers the generation of the constraint_max_ratio_out_out_unit_flow and sets an upper bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a maximum ratio between two products of a unit u, e.g. setting the maximum share of produced electricity flowing to node el to 0.4 of the produced heat flowing to node heat, the max_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

diff --git a/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html b/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html index 2e3a8153bd..94f688f3cf 100644 --- a/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html +++ b/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is limiting the consumption of commodities such as oil or gas. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is limiting the consumption of commodities such as oil or gas. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html b/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html index 2177118d7e..dd83867222 100644 --- a/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html +++ b/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is the capping of CO2 emissions. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is the capping of CO2 emissions. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/max_units_on_coefficient_in_in/index.html b/dev/concept_reference/max_units_on_coefficient_in_in/index.html index a5425b3dde..526cf7ab1b 100644 --- a/dev/concept_reference/max_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/max_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the max_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

+- · SpineOpt.jl

The max_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the max_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

diff --git a/dev/concept_reference/max_units_on_coefficient_in_out/index.html b/dev/concept_reference/max_units_on_coefficient_in_out/index.html index d2c4a09b86..4f5f837ddb 100644 --- a/dev/concept_reference/max_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/max_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the max_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

+- · SpineOpt.jl

The max_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the max_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/max_units_on_coefficient_out_in/index.html b/dev/concept_reference/max_units_on_coefficient_out_in/index.html index d67e08453d..9fae00902d 100644 --- a/dev/concept_reference/max_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/max_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the max_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

+- · SpineOpt.jl

The max_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the max_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

diff --git a/dev/concept_reference/max_units_on_coefficient_out_out/index.html b/dev/concept_reference/max_units_on_coefficient_out_out/index.html index dd4f4281e2..ce3c17304f 100644 --- a/dev/concept_reference/max_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/max_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the max_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_in_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

+- · SpineOpt.jl

The max_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the max_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_in_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/max_voltage_angle/index.html b/dev/concept_reference/max_voltage_angle/index.html index ffae4d5dc2..ec4968aa41 100644 --- a/dev/concept_reference/max_voltage_angle/index.html +++ b/dev/concept_reference/max_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/mga_diff_relative/index.html b/dev/concept_reference/mga_diff_relative/index.html index 0c4b970a1c..2293628890 100644 --- a/dev/concept_reference/mga_diff_relative/index.html +++ b/dev/concept_reference/mga_diff_relative/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Currently, the MGA algorithm (see mga-advanced) only supports absolute differences between MGA variables (e.g. absolute differences between units_invested_available etc.). Hence, the default for this parameter is false and should not be changed for now.

+- · SpineOpt.jl

Currently, the MGA algorithm (see mga-advanced) only supports absolute differences between MGA variables (e.g. absolute differences between units_invested_available etc.). Hence, the default for this parameter is false and should not be changed for now.

diff --git a/dev/concept_reference/min_capacity_margin/index.html b/dev/concept_reference/min_capacity_margin/index.html index 032490c352..8f6cd2086f 100644 --- a/dev/concept_reference/min_capacity_margin/index.html +++ b/dev/concept_reference/min_capacity_margin/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parameter min_capacity_margin triggers the creation of a constraint of the same name, which ensures that the difference between available unit capacity and demand at the corresponding node is at least min_capacity_margin. In the calculation of capacity_margin, storage units' actual flows are used in place of the capacity. Defining a min_capacity_margin can be useful for scheduling unit maintenance outages (see scheduled_outage_duration for how to define a unit outage requirement) and for triggering unit investments due to capacity shortage. The min_capacity_margin constraint can be softened by defining min_capacity_margin_penalty, which allows violations of the constraint that are penalised in the objective function.

+- · SpineOpt.jl

The parameter min_capacity_margin triggers the creation of a constraint of the same name, which ensures that the difference between available unit capacity and demand at the corresponding node is at least min_capacity_margin. In the calculation of capacity_margin, storage units' actual flows are used in place of the capacity. Defining a min_capacity_margin can be useful for scheduling unit maintenance outages (see scheduled_outage_duration for how to define a unit outage requirement) and for triggering unit investments due to capacity shortage. The min_capacity_margin constraint can be softened by defining min_capacity_margin_penalty, which allows violations of the constraint that are penalised in the objective function.

diff --git a/dev/concept_reference/min_capacity_margin_penalty/index.html b/dev/concept_reference/min_capacity_margin_penalty/index.html index 16dc25166c..c056329ffc 100644 --- a/dev/concept_reference/min_capacity_margin_penalty/index.html +++ b/dev/concept_reference/min_capacity_margin_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_capacity_margin_penalty parameter triggers the addition of the min_capacity_margin_slack slack variable in the min_capacity_margin constraint. This allows violations of the constraint, which are penalised in the objective function. This can be used to capture the capacity_value of investments. It can also be used to disincentivise scheduling of maintenance outages during times of low capacity. See scheduled_outage_duration for how to define a unit scheduled outage requirement.

+- · SpineOpt.jl

The min_capacity_margin_penalty parameter triggers the addition of the min_capacity_margin_slack slack variable in the min_capacity_margin constraint. This allows violations of the constraint, which are penalised in the objective function. This can be used to capture the capacity_value of investments. It can also be used to disincentivise scheduling of maintenance outages during times of low capacity. See scheduled_outage_duration for how to define a unit scheduled outage requirement.

diff --git a/dev/concept_reference/min_down_time/index.html b/dev/concept_reference/min_down_time/index.html index 3240849533..07a1a907cc 100644 --- a/dev/concept_reference/min_down_time/index.html +++ b/dev/concept_reference/min_down_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_down_time parameter will trigger the creation of the Constraint on minimum down time. It sets a lower bound on the period that a unit has to stay offline after a shutdown.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commitment restrictions, see Unit commitment.

+- · SpineOpt.jl

The definition of the min_down_time parameter will trigger the creation of the Constraint on minimum down time. It sets a lower bound on the period that a unit has to stay offline after a shutdown.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commitment restrictions, see Unit commitment.

diff --git a/dev/concept_reference/min_node_pressure/index.html b/dev/concept_reference/min_node_pressure/index.html index 7b1e15c539..e2c873301e 100644 --- a/dev/concept_reference/min_node_pressure/index.html +++ b/dev/concept_reference/min_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/min_ratio_in_in_unit_flow/index.html b/dev/concept_reference/min_ratio_in_in_unit_flow/index.html index de6039015a..8157b18e71 100644 --- a/dev/concept_reference/min_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_in_in_unit_flow parameter triggers the generation of the constraint_min_ratio_in_in_unit_flow and sets a lower bound for the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance, if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a minimum share of 0.2 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the min_ratio_in_in_unit_flow parameter would be set to 0.2 for the relationship u__supply_fuel_1__supply_fuel_2.

+- · SpineOpt.jl

The definition of the min_ratio_in_in_unit_flow parameter triggers the generation of the constraint_min_ratio_in_in_unit_flow and sets a lower bound for the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or groups of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance, if a unit requires a specific commodity mix as a fuel supply.

For example, to enforce for a unit u a minimum share of 0.2 of its incoming flow from the node supply_fuel_1 relative to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b), the min_ratio_in_in_unit_flow parameter would be set to 0.2 for the relationship u__supply_fuel_1__supply_fuel_2.
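The relation described above can be sketched mathematically as follows (a simplified sketch based on the description; scenario and time-slice handling is omitted, and the v/p notation for variables and parameters is assumed here for illustration):

```latex
% Simplified sketch of constraint_min_ratio_in_in_unit_flow for a
% relationship (u, n_1, n_2), where both n_1 and n_2 are from_nodes of u:
\sum_{n \in n_1} v^{unit\_flow}_{u,n,t}
  \;\geq\;
  p^{min\_ratio\_in\_in}_{u,n_1,n_2} \cdot \sum_{n \in n_2} v^{unit\_flow}_{u,n,t}
  \qquad \forall t
```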

diff --git a/dev/concept_reference/min_ratio_in_out_unit_flow/index.html b/dev/concept_reference/min_ratio_in_out_unit_flow/index.html index f1ab653616..2a58e83462 100644 --- a/dev/concept_reference/min_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_in_out_unit_flow parameter triggers the generation of the constraint_min_ratio_in_out_unit_flow and enforces a lower bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flow to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the min_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

+- · SpineOpt.jl

The definition of the min_ratio_in_out_unit_flow parameter triggers the generation of the constraint_min_ratio_in_out_unit_flow and enforces a lower bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flow to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the min_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.
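The constraint described above can be sketched as follows (a simplified sketch based on the description; scenario and time-slice handling is omitted, and the v/p notation for variables and parameters is assumed for illustration):

```latex
% Simplified sketch of constraint_min_ratio_in_out_unit_flow:
% incoming flow from n_1 bounded below by the ratio times outgoing flow to n_2.
\sum_{n \in n_1} v^{unit\_flow}_{u,n,t}
  \;\geq\;
  p^{min\_ratio\_in\_out}_{u,n_1,n_2} \cdot \sum_{n \in n_2} v^{unit\_flow}_{u,n,t}
  \qquad \forall t
```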

diff --git a/dev/concept_reference/min_ratio_out_in_connection_flow/index.html b/dev/concept_reference/min_ratio_out_in_connection_flow/index.html index 46bef469d5..e7cad2c5de 100644 --- a/dev/concept_reference/min_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/min_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_in_connection_flow parameter triggers the generation of the constraint_min_ratio_out_in_connection_flow and sets a lower bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

Note that the ratio can also be defined for connection__node__node relationships, where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

To enforce e.g. a minimum ratio of 0.2 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flow from the node commodity2, the min_ratio_out_in_connection_flow parameter would be set to 0.2 for the relationship conn__commodity1__commodity2.

+- · SpineOpt.jl

The definition of the min_ratio_out_in_connection_flow parameter triggers the generation of the constraint_min_ratio_out_in_connection_flow and sets a lower bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

Note that the ratio can also be defined for connection__node__node relationships, where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

To enforce e.g. a minimum ratio of 0.2 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flow from the node commodity2, the min_ratio_out_in_connection_flow parameter would be set to 0.2 for the relationship conn__commodity1__commodity2.
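In constraint form, the relation described above can be sketched as follows (a simplified sketch based on the description; scenario and time-slice handling is omitted, and the v/p notation is assumed for illustration):

```latex
% Simplified sketch of constraint_min_ratio_out_in_connection_flow:
% outgoing flow to n_1 bounded below by the ratio times incoming flow from n_2.
\sum_{n \in n_1} v^{connection\_flow}_{conn,n,t}
  \;\geq\;
  p^{min\_ratio\_out\_in}_{conn,n_1,n_2} \cdot \sum_{n \in n_2} v^{connection\_flow}_{conn,n,t}
  \qquad \forall t
```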

diff --git a/dev/concept_reference/min_ratio_out_in_unit_flow/index.html b/dev/concept_reference/min_ratio_out_in_unit_flow/index.html index 328dd8008f..c10dbd63c4 100644 --- a/dev/concept_reference/min_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_in_unit_flow parameter triggers the generation of the constraint_min_ratio_out_in_unit_flow and enforces a lower bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from the node ng, the min_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

+- · SpineOpt.jl

The definition of the min_ratio_out_in_unit_flow parameter triggers the generation of the constraint_min_ratio_out_in_unit_flow and enforces a lower bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from the node ng, the min_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

diff --git a/dev/concept_reference/min_ratio_out_out_unit_flow/index.html b/dev/concept_reference/min_ratio_out_out_unit_flow/index.html index 0f580512a5..9379beb250 100644 --- a/dev/concept_reference/min_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_out_unit_flow parameter triggers the generation of the constraint_min_ratio_out_out_unit_flow and enforces a lower bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a minimum ratio between two products of a unit u, e.g. setting the minimum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the min_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

+- · SpineOpt.jl

The definition of the min_ratio_out_out_unit_flow parameter triggers the generation of the constraint_min_ratio_out_out_unit_flow and enforces a lower bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a minimum ratio between two products of a unit u, e.g. setting the minimum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the min_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.
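For the example above, the resulting constraint can be sketched as follows (a simplified sketch based on the description; scenario and time-slice handling is omitted, and the v notation for flow variables is assumed for illustration):

```latex
% Simplified sketch of constraint_min_ratio_out_out_unit_flow for the
% example above: electricity to el must be at least 0.4 times heat to heat.
v^{unit\_flow}_{u,el,t} \;\geq\; 0.4 \cdot v^{unit\_flow}_{u,heat,t}
  \qquad \forall t
```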

diff --git a/dev/concept_reference/min_scheduled_outage_duration/index.html b/dev/concept_reference/min_scheduled_outage_duration/index.html index 29bc2373d9..5f9742e4fc 100644 --- a/dev/concept_reference/min_scheduled_outage_duration/index.html +++ b/dev/concept_reference/min_scheduled_outage_duration/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_scheduled_outage_duration duration parameter will trigger the creation of the constraint on the minimum scheduled outage duration. It sets a lower bound on the sum of the units_out_of_service variable over the optimisation window. The primary function of this parameter is thus to schedule maintenance outages for units. This parameter enforces that the unit must be taken out of service for at least an amount of time equal to min_scheduled_outage_duration.

It can be defined for a unit and will then impose restrictions on the units_out_of_service variables that represent whether a unit is on a maintenance outage at that particular time. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

To schedule maintenance outages using this functionality, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units) or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none will deactivate the units_out_of_service variable; this is the default value.

+- · SpineOpt.jl

The definition of the min_scheduled_outage_duration duration parameter will trigger the creation of the constraint on the minimum scheduled outage duration. It sets a lower bound on the sum of the units_out_of_service variable over the optimisation window. The primary function of this parameter is thus to schedule maintenance outages for units. This parameter enforces that the unit must be taken out of service for at least an amount of time equal to min_scheduled_outage_duration.

It can be defined for a unit and will then impose restrictions on the units_out_of_service variables that represent whether a unit is on a maintenance outage at that particular time. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

To schedule maintenance outages using this functionality, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units) or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none will deactivate the units_out_of_service variable; this is the default value.
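The lower bound described above can be sketched as follows (a simplified sketch based on the description; the window notation W and the duration weight Δt are assumed here for illustration):

```latex
% Simplified sketch: units_out_of_service, summed over the optimisation
% window W and weighted by the duration \Delta t of each time slice,
% must cover the required outage duration.
\sum_{t \in W} v^{units\_out\_of\_service}_{u,t} \cdot \Delta t_{t}
  \;\geq\;
  p^{min\_scheduled\_outage\_duration}_{u}
```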

diff --git a/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html b/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html index e977d9fe06..e4bacd2123 100644 --- a/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html +++ b/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.
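The cumulated lower bound can be sketched as follows (a simplified sketch based on the description; the duration weight Δt is an assumption here, since the parameter is given as an absolute quantity that must match the flow units):

```latex
% Simplified sketch of constraint_total_cumulated_unit_flow (lower-bound
% form): total flow drawn from node n over all timesteps t.
\sum_{t} v^{unit\_flow}_{u,n,t} \cdot \Delta t_{t}
  \;\geq\;
  p^{min\_total\_cumulated\_unit\_flow\_from\_node}_{u,n}
```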

diff --git a/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html b/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html index 72448a7ff4..2f7e78b0d0 100644 --- a/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html +++ b/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. A possible use case is a minimum value for electricity generated from renewable sources. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. A possible use case is a minimum value for electricity generated from renewable sources. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/min_units_on_coefficient_in_in/index.html b/dev/concept_reference/min_units_on_coefficient_in_in/index.html index ffb4aacd19..7ccb751ce9 100644 --- a/dev/concept_reference/min_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/min_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the min_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

+- · SpineOpt.jl

The min_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the min_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.
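Based on the description above, the coefficient enters the input-input ratio constraint roughly as follows (a simplified sketch; scenario and time-slice handling is omitted, and the v/p notation is assumed for illustration):

```latex
% Simplified sketch: the units_on term shifts the lower bound on the
% in_1/in_2 flow ratio depending on online capacity.
v^{unit\_flow}_{u,n_1,t}
  \;\geq\;
  p^{min\_ratio\_in\_in}_{u,n_1,n_2} \cdot v^{unit\_flow}_{u,n_2,t}
  \;+\;
  p^{min\_units\_on\_coefficient\_in\_in}_{u,n_1,n_2} \cdot v^{units\_on}_{u,t}
  \qquad \forall t
```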

diff --git a/dev/concept_reference/min_units_on_coefficient_in_out/index.html b/dev/concept_reference/min_units_on_coefficient_in_out/index.html index e520e9525f..61816fbeb4 100644 --- a/dev/concept_reference/min_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/min_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the min_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

+- · SpineOpt.jl

The min_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the min_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/min_units_on_coefficient_out_in/index.html b/dev/concept_reference/min_units_on_coefficient_out_in/index.html index b1123c927b..897deba04f 100644 --- a/dev/concept_reference/min_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/min_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the min_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

+- · SpineOpt.jl

The min_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the min_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

diff --git a/dev/concept_reference/min_units_on_coefficient_out_out/index.html b/dev/concept_reference/min_units_on_coefficient_out_out/index.html index 54a53bf763..fe68401103 100644 --- a/dev/concept_reference/min_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/min_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the min_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

+- · SpineOpt.jl

The min_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the min_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the minimum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/min_up_time/index.html b/dev/concept_reference/min_up_time/index.html index 7d70f3ec8a..8027a08ced 100644 --- a/dev/concept_reference/min_up_time/index.html +++ b/dev/concept_reference/min_up_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_up_time parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the period that a unit has to stay online after a startup.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commitment restrictions, see Unit commitment.

+- · SpineOpt.jl

The definition of the min_up_time parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the period that a unit has to stay online after a startup.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.
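The constraint can be sketched as follows (a simplified sketch based on the description; the units_started_up start-up variable and the v/p notation are assumptions here, matching common unit-commitment formulations):

```latex
% Simplified sketch of the minimum up time constraint: any unit started
% up within the last min_up_time periods must still be online at t.
v^{units\_on}_{u,t}
  \;\geq\;
  \sum_{t - p^{min\_up\_time}_{u} < t' \leq t} v^{units\_started\_up}_{u,t'}
```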

For a more complete description of unit commitment restrictions, see Unit commitment.

diff --git a/dev/concept_reference/min_voltage_angle/index.html b/dev/concept_reference/min_voltage_angle/index.html index 3c76d4a9ce..680d1aea09 100644 --- a/dev/concept_reference/min_voltage_angle/index.html +++ b/dev/concept_reference/min_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/minimum_operating_point/index.html b/dev/concept_reference/minimum_operating_point/index.html index f48afdd6b2..65bc317b5a 100644 --- a/dev/concept_reference/minimum_operating_point/index.html +++ b/dev/concept_reference/minimum_operating_point/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the minimum_operating_point parameter will trigger the creation of the Constraint on minimum operating point. It sets a lower bound on the value of the unit_flow variable for a unit that is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

+- · SpineOpt.jl

The definition of the minimum_operating_point parameter will trigger the creation of the Constraint on minimum operating point. It sets a lower bound on the value of the unit_flow variable for a unit that is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.
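The lower bound described above can be sketched as follows (a simplified sketch based on the description; scenario and time-slice handling is omitted, and the v/p notation is assumed for illustration):

```latex
% Simplified sketch of the minimum operating point constraint: an online
% unit must run at least at the given fraction of its unit_capacity.
v^{unit\_flow}_{u,n,t}
  \;\geq\;
  p^{minimum\_operating\_point}_{u,n} \cdot p^{unit\_capacity}_{u,n}
  \cdot v^{units\_on}_{u,t}
  \qquad \forall t
```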

diff --git a/dev/concept_reference/minimum_reserve_activation_time/index.html b/dev/concept_reference/minimum_reserve_activation_time/index.html index bfa319ebab..c6ed4041e4 100644 --- a/dev/concept_reference/minimum_reserve_activation_time/index.html +++ b/dev/concept_reference/minimum_reserve_activation_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parameter minimum_reserve_activation_time is the duration a reserve product needs to be online before it can be replaced by another (slower) reserve product.

In SpineOpt, the parameter is used to model reserve provision through storages. If a storage provides reserves to a reserve node (see also is_reserve_node), one needs to ensure that the node state is sufficiently high to provide these scheduled reserves at least for the duration of the minimum_reserve_activation_time. The constraint on the minimum node state with reserve provision is triggered by the existence of the minimum_reserve_activation_time. See also Reserves.

+- · SpineOpt.jl

The parameter minimum_reserve_activation_time is the duration a reserve product needs to be online before it can be replaced by another (slower) reserve product.

In SpineOpt, the parameter is used to model reserve provision through storages. If a storage provides reserves to a reserve node (see also is_reserve_node), one needs to ensure that the node state is sufficiently high to provide these scheduled reserves at least for the duration of the minimum_reserve_activation_time. The constraint on the minimum node state with reserve provision is triggered by the existence of the minimum_reserve_activation_time. See also Reserves.

diff --git a/dev/concept_reference/model/index.html b/dev/concept_reference/model/index.html index 47c0210740..59cb3caf4a 100644 --- a/dev/concept_reference/model/index.html +++ b/dev/concept_reference/model/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model object holds general information about the optimization problem at hand. Firstly, the modelling horizon is specified on the model object, i.e. the scope of the optimization model, and, if applicable, the duration of the rolling window (see also model_start, model_end and roll_forward). Secondly, the model works as an overarching assembler - only by linking temporal_blocks and stochastic_structures to a model object via relationships do they, together with the correspondingly linked nodes, connections and units, become part of the optimization problem. If desired, the user can also specify defaults for temporal blocks and stochastic structures via the designated default relationships (see e.g., model__default_temporal_block). In this case, the default temporal block is used for missing node__temporal_block relationships. Lastly, the model object contains information about the algorithm used for solving the problem (see model_type).

+- · SpineOpt.jl

The model object holds general information about the optimization problem at hand. Firstly, the modelling horizon is specified on the model object, i.e. the scope of the optimization model, and, if applicable, the duration of the rolling window (see also model_start, model_end and roll_forward). Secondly, the model works as an overarching assembler - only by linking temporal_blocks and stochastic_structures to a model object via relationships do they, together with the correspondingly linked nodes, connections and units, become part of the optimization problem. If desired, the user can also specify defaults for temporal blocks and stochastic structures via the designated default relationships (see e.g., model__default_temporal_block). In this case, the default temporal block is used for missing node__temporal_block relationships. Lastly, the model object contains information about the algorithm used for solving the problem (see model_type).

diff --git a/dev/concept_reference/model__default_investment_stochastic_structure/index.html b/dev/concept_reference/model__default_investment_stochastic_structure/index.html index d81cfb5dc1..e2cc7f1d81 100644 --- a/dev/concept_reference/model__default_investment_stochastic_structure/index.html +++ b/dev/concept_reference/model__default_investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__default_investment_stochastic_structure relationship can be used to set model-wide default unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships. Its main purpose is to allow users to avoid defining each relationship individually, and instead allow them to focus on defining only the exceptions. As such, any specific unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships take priority over the model__default_investment_stochastic_structure relationship.

+- · SpineOpt.jl

The model__default_investment_stochastic_structure relationship can be used to set model-wide default unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships. Its main purpose is to allow users to avoid defining each relationship individually, and instead allow them to focus on defining only the exceptions. As such, any specific unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships take priority over the model__default_investment_stochastic_structure relationship.

diff --git a/dev/concept_reference/model__default_investment_temporal_block/index.html b/dev/concept_reference/model__default_investment_temporal_block/index.html index 281eed544d..81030f41d0 100644 --- a/dev/concept_reference/model__default_investment_temporal_block/index.html +++ b/dev/concept_reference/model__default_investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

model__default_investment_temporal_block is a two-dimensional relationship between a model and a temporal_block. This relationship defines the default temporal resolution and scope for all investment decisions in the model (units, connections and storages). Specifying model__default_investment_temporal_block for a model avoids the need to specify individual node__investment_temporal_block, unit__investment_temporal_block and connection__investment_temporal_block relationships. Conversely, if any of these individual relationships are defined (e.g. connection__investment_temporal_block) along with model__temporal_block, these will override model__default_investment_temporal_block.

See also Investment Optimization

+- · SpineOpt.jl

model__default_investment_temporal_block is a two-dimensional relationship between a model and a temporal_block. This relationship defines the default temporal resolution and scope for all investment decisions in the model (units, connections and storages). Specifying model__default_investment_temporal_block for a model avoids the need to specify individual node__investment_temporal_block, unit__investment_temporal_block and connection__investment_temporal_block relationships. Conversely, if any of these individual relationships are defined (e.g. connection__investment_temporal_block) along with model__temporal_block, these will override model__default_investment_temporal_block.

See also Investment Optimization

diff --git a/dev/concept_reference/model__default_stochastic_structure/index.html b/dev/concept_reference/model__default_stochastic_structure/index.html index 3e223af013..aebc7814b9 100644 --- a/dev/concept_reference/model__default_stochastic_structure/index.html +++ b/dev/concept_reference/model__default_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__default_temporal_block/index.html b/dev/concept_reference/model__default_temporal_block/index.html index 8b3576086f..d597f73290 100644 --- a/dev/concept_reference/model__default_temporal_block/index.html +++ b/dev/concept_reference/model__default_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__report/index.html b/dev/concept_reference/model__report/index.html index 787ac93b6b..e74cd6ea0f 100644 --- a/dev/concept_reference/model__report/index.html +++ b/dev/concept_reference/model__report/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__stochastic_structure/index.html b/dev/concept_reference/model__stochastic_structure/index.html index 7257b5aea5..af82f046ce 100644 --- a/dev/concept_reference/model__stochastic_structure/index.html +++ b/dev/concept_reference/model__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__stochastic_structure relationship defines which stochastic_structures are active in which models. Essentially, this relationship allows for e.g. attributing multiple node__stochastic_structure relationships for a single node, and switching between them in different models. Any stochastic_structure in the model__default_stochastic_structure relationship is automatically assumed to be active in the connected model, so there's no need to include it in model__stochastic_structure separately.

+- · SpineOpt.jl

The model__stochastic_structure relationship defines which stochastic_structures are active in which models. Essentially, this relationship allows for e.g. attributing multiple node__stochastic_structure relationships for a single node, and switching between them in different models. Any stochastic_structure in the model__default_stochastic_structure relationship is automatically assumed to be active in the connected model, so there's no need to include it in model__stochastic_structure separately.

diff --git a/dev/concept_reference/model__temporal_block/index.html b/dev/concept_reference/model__temporal_block/index.html index 77599e52e8..d52a1fbf7b 100644 --- a/dev/concept_reference/model__temporal_block/index.html +++ b/dev/concept_reference/model__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__temporal_block relationship is used to determine which temporal_blocks are included in a specific model. Note that defining this relationship does not yet imply that any element of the model will be governed by the specified temporal_block; for this to happen, additional relationships have to be defined, such as the model__default_temporal_block relationship.

+- · SpineOpt.jl

The model__temporal_block relationship is used to determine which temporal_blocks are included in a specific model. Note that defining this relationship does not yet imply that any element of the model will be governed by the specified temporal_block; for this to happen, additional relationships have to be defined, such as the model__default_temporal_block relationship.

diff --git a/dev/concept_reference/model_end/index.html b/dev/concept_reference/model_end/index.html index 0fcb3e2e77..116d4b0a30 100644 --- a/dev/concept_reference/model_end/index.html +++ b/dev/concept_reference/model_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Together with the model_start parameter, it is used to define the temporal horizon of the model. In case of a single solve optimization, the parameter marks the end of the last timestep that is possibly part of the optimization. Note that it poses an upper bound, and that the optimization does not necessarily include this timestamp when the block_end parameters are more stringent.

In case of a rolling horizon optimization, it tells the model to stop rolling forward once an optimization has been performed for which the result of the indicated timestamp has been kept in the final results. For example, assume that a model_end value of 2030-01-01T05:00:00 has been chosen, a block_end of 3h, and a roll_forward of 2h. The roll_forward parameter indicates here that the results of the first two hours of each optimization window are kept as final, therefore the last optimization window will span the timeframe [2030-01-01T04:00:00 - 2030-01-01T06:00:00].

A DateTime value should be chosen for this parameter.

+- · SpineOpt.jl

Together with the model_start parameter, it is used to define the temporal horizon of the model. In case of a single solve optimization, the parameter marks the end of the last timestep that is possibly part of the optimization. Note that it poses an upper bound, and that the optimization does not necessarily include this timestamp when the block_end parameters are more stringent.

In case of a rolling horizon optimization, it tells the model to stop rolling forward once an optimization has been performed for which the result of the indicated timestamp has been kept in the final results. For example, assume that a model_end value of 2030-01-01T05:00:00 has been chosen, a block_end of 3h, and a roll_forward of 2h. The roll_forward parameter indicates here that the results of the first two hours of each optimization window are kept as final, therefore the last optimization window will span the timeframe [2030-01-01T04:00:00 - 2030-01-01T06:00:00].

A DateTime value should be chosen for this parameter.
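The rolling arithmetic described above can be sketched as plain date arithmetic. The following is an illustrative sketch, not SpineOpt's implementation: the function name is hypothetical, model_start is assumed to be 2030-01-01T00:00:00 (it is not given in the example), and the stopping rule simply advances the window start by roll_forward until it reaches model_end:

```python
from datetime import datetime, timedelta

def window_starts(model_start, model_end, roll_forward):
    """List the start times of the rolling optimization windows: the window
    start advances by roll_forward until it reaches model_end."""
    starts = []
    t = model_start
    while t < model_end:
        starts.append(t)
        t += roll_forward
    return starts

# With the parameters from the example above (model_start assumed 00:00):
starts = window_starts(datetime(2030, 1, 1, 0), datetime(2030, 1, 1, 5), timedelta(hours=2))
# → windows start at 00:00, 02:00 and 04:00; the 04:00 window is the last one solved
```

This matches the example: the last window begins at 2030-01-01T04:00:00, after which rolling stops because the next start would reach model_end.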

diff --git a/dev/concept_reference/model_start/index.html b/dev/concept_reference/model_start/index.html index 6c796d1b0b..9e295586a4 100644 --- a/dev/concept_reference/model_start/index.html +++ b/dev/concept_reference/model_start/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Together with the model_end parameter, it is used to define the temporal horizon of the model. For a single solve optimization, it marks the timestamp from which the relative offset in a temporal_block is defined by the block_start parameter. In the rolling optimization framework, it does this for the first optimization window.

A DateTime value should be chosen for this parameter.

+- · SpineOpt.jl

Together with the model_end parameter, it is used to define the temporal horizon of the model. For a single solve optimization, it marks the timestamp from which the relative offset in a temporal_block is defined by the block_start parameter. In the rolling optimization framework, it does this for the first optimization window.

A DateTime value should be chosen for this parameter.

diff --git a/dev/concept_reference/model_type/index.html b/dev/concept_reference/model_type/index.html index dc1048c5b5..4ac0263d9b 100644 --- a/dev/concept_reference/model_type/index.html +++ b/dev/concept_reference/model_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter controls the low-level algorithm that SpineOpt uses to solve the underlying optimization problem. Currently three values are possible:

spineopt_standard uses the standard algorithm.

spineopt_benders uses the Benders decomposition algorithm (see Decomposition).

spineopt_mga uses the Model to Generate Alternatives algorithm.

+- · SpineOpt.jl

This parameter controls the low-level algorithm that SpineOpt uses to solve the underlying optimization problem. Currently three values are possible:

spineopt_standard uses the standard algorithm.

spineopt_benders uses the Benders decomposition algorithm (see Decomposition).

spineopt_mga uses the Model to Generate Alternatives algorithm.

diff --git a/dev/concept_reference/model_type_list/index.html b/dev/concept_reference/model_type_list/index.html index c34f8ec498..b790f04ffb 100644 --- a/dev/concept_reference/model_type_list/index.html +++ b/dev/concept_reference/model_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

model_type_list holds the possible values for the model_type parameter of the model object. See model_type for more details.

+- · SpineOpt.jl

model_type_list holds the possible values for the model_type parameter of the model object. See model_type for more details.

diff --git a/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html b/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html index a3b8bbd9eb..5100160b46 100644 --- a/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html +++ b/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For investment models that are solved using the Benders algorithm (i.e., with model_type set to spineopt_benders), mp_min_res_gen_to_demand_ratio represents a lower bound on the fraction of the total system demand that must be supplied by renewable generation sources (RES).

A unit can be marked as a renewable generation source by setting is_renewable to true.

+- · SpineOpt.jl

For investment models that are solved using the Benders algorithm (i.e., with model_type set to spineopt_benders), mp_min_res_gen_to_demand_ratio represents a lower bound on the fraction of the total system demand that must be supplied by renewable generation sources (RES).

A unit can be marked as a renewable generation source by setting is_renewable to true.

diff --git a/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html b/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html index 237b946f0e..423549bbf4 100644 --- a/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html +++ b/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A penalty for violating the mp_min_res_gen_to_demand_ratio. If set, then the lower bound on the fraction of the total system demand that must be supplied by RES becomes a 'soft' constraint. A new cost term is added to the objective, multiplying the penalty by the slack.

+- · SpineOpt.jl

A penalty for violating the mp_min_res_gen_to_demand_ratio. If set, then the lower bound on the fraction of the total system demand that must be supplied by RES becomes a 'soft' constraint. A new cost term is added to the objective, multiplying the penalty by the slack.

diff --git a/dev/concept_reference/nodal_balance_sense/index.html b/dev/concept_reference/nodal_balance_sense/index.html index 051a9e040d..845bd45f9a 100644 --- a/dev/concept_reference/nodal_balance_sense/index.html +++ b/dev/concept_reference/nodal_balance_sense/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

nodal_balance_sense determines whether or not a node is able to naturally consume or produce energy. The default value, ==, means that the node can do neither, and thus needs to be perfectly balanced. The value >= means that the node is a sink, that is, it can consume any amount of energy. The value <= means that the node is a source, that is, it can produce any amount of energy.

+- · SpineOpt.jl

nodal_balance_sense determines whether or not a node is able to naturally consume or produce energy. The default value, ==, means that the node can do neither, and thus needs to be perfectly balanced. The value >= means that the node is a sink, that is, it can consume any amount of energy. The value <= means that the node is a source, that is, it can produce any amount of energy.

diff --git a/dev/concept_reference/node/index.html b/dev/concept_reference/node/index.html index f61d79420f..d8b855fa28 100644 --- a/dev/concept_reference/node/index.html +++ b/dev/concept_reference/node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node is perhaps the most important object class out of the Systemic object classes, as it is what connects the rest together via the Systemic relationship classes. Essentially, nodes act as points in the modelled commodity network where commodity balance is enforced via the node balance and node injection constraints, tying together the inputs and outputs from units and connections, as well as any external demand. Furthermore, nodes play a crucial role for defining the temporal and stochastic structures of the model via the node__temporal_block and node__stochastic_structure relationships. For more details about the Temporal Framework and the Stochastic Framework, please refer to the dedicated sections.

Since nodes act as the points where commodity balance is enforced, this also makes them a natural fit for implementing storage. The has_state parameter controls whether a node has a node_state variable, which essentially represents the commodity content of the node. The state_coeff parameter tells how the node_state variable relates to all the commodity flows. Storage losses are handled via the frac_state_loss parameter, and potential diffusion of commodity content to other nodes via the diff_coeff parameter for the node__node relationship.

+- · SpineOpt.jl

The node is perhaps the most important object class out of the Systemic object classes, as it is what connects the rest together via the Systemic relationship classes. Essentially, nodes act as points in the modelled commodity network where commodity balance is enforced via the node balance and node injection constraints, tying together the inputs and outputs from units and connections, as well as any external demand. Furthermore, nodes play a crucial role for defining the temporal and stochastic structures of the model via the node__temporal_block and node__stochastic_structure relationships. For more details about the Temporal Framework and the Stochastic Framework, please refer to the dedicated sections.

Since nodes act as the points where commodity balance is enforced, this also makes them a natural fit for implementing storage. The has_state parameter controls whether a node has a node_state variable, which essentially represents the commodity content of the node. The state_coeff parameter tells how the node_state variable relates to all the commodity flows. Storage losses are handled via the frac_state_loss parameter, and potential diffusion of commodity content to other nodes via the diff_coeff parameter for the node__node relationship.

diff --git a/dev/concept_reference/node__commodity/index.html b/dev/concept_reference/node__commodity/index.html index 5af0c6d75d..d5e9487f7e 100644 --- a/dev/concept_reference/node__commodity/index.html +++ b/dev/concept_reference/node__commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__commodity is a two-dimensional relationship between a node and a commodity and specifies the commodity that flows to or from the node. Generally, since flows are not dimensioned by commodity, this has no meaning in terms of the variables and constraint equations. However, there are two specific uses for this relationship:

  1. To specify that specific network physics should apply to the network formed by the member nodes for that commodity. See powerflow
  2. Only connection flows that are between nodes of the same or no commodity are included in the node_balance constraint.
+- · SpineOpt.jl

node__commodity is a two-dimensional relationship between a node and a commodity and specifies the commodity that flows to or from the node. Generally, since flows are not dimensioned by commodity, this has no meaning in terms of the variables and constraint equations. However, there are two specific uses for this relationship:

  1. To specify that specific network physics should apply to the network formed by the member nodes for that commodity. See powerflow
  2. Only connection flows that are between nodes of the same or no commodity are included in the node_balance constraint.
diff --git a/dev/concept_reference/node__investment_stochastic_structure/index.html b/dev/concept_reference/node__investment_stochastic_structure/index.html index e5daff1085..78fe4cf6ac 100644 --- a/dev/concept_reference/node__investment_stochastic_structure/index.html +++ b/dev/concept_reference/node__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node__investment_temporal_block/index.html b/dev/concept_reference/node__investment_temporal_block/index.html index 364667ffef..34b7199c6e 100644 --- a/dev/concept_reference/node__investment_temporal_block/index.html +++ b/dev/concept_reference/node__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__investment_temporal_block is a two-dimensional relationship between a node and a temporal_block. This relationship defines the temporal resolution and scope of a node's investment decisions (currently only storage investments). Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no node__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if node__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified node.

See also Investment Optimization

+- · SpineOpt.jl

node__investment_temporal_block is a two-dimensional relationship between a node and a temporal_block. This relationship defines the temporal resolution and scope of a node's investment decisions (currently only storage investments). Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no node__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if node__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified node.

See also Investment Optimization

diff --git a/dev/concept_reference/node__node/index.html b/dev/concept_reference/node__node/index.html index e683b301cc..fb0a0cb900 100644 --- a/dev/concept_reference/node__node/index.html +++ b/dev/concept_reference/node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node__node relationship is used for defining direct interactions between two nodes, like diffusion of commodity content. Note that the node__node relationship is assumed to be one-directional, meaning that

node__node(node1=n1, node2=n2) != node__node(node1=n2, node2=n1).

Thus, when one wants to define symmetric relationships between two nodes, one needs to define both directions as separate relationships.

+- · SpineOpt.jl

The node__node relationship is used for defining direct interactions between two nodes, like diffusion of commodity content. Note that the node__node relationship is assumed to be one-directional, meaning that

node__node(node1=n1, node2=n2) != node__node(node1=n2, node2=n1).

Thus, when one wants to define symmetric relationships between two nodes, one needs to define both directions as separate relationships.

diff --git a/dev/concept_reference/node__stochastic_structure/index.html b/dev/concept_reference/node__stochastic_structure/index.html index 142f91a5e9..9387e533f4 100644 --- a/dev/concept_reference/node__stochastic_structure/index.html +++ b/dev/concept_reference/node__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node__stochastic_structure relationship defines which stochastic_structure the node uses. Essentially, it sets the stochastic_structure of all the flow variables connected to the node, as well as the potential node_state variable. Note that only one stochastic_structure can be defined per node per model, as interpreted based on the node__stochastic_structure and model__stochastic_structure relationships. Investment variables use dedicated relationships, as detailed in the Investment Optimization section.

If no node__stochastic_structure relationship is specified, the model__default_stochastic_structure relationship is used instead.

+- · SpineOpt.jl

The node__stochastic_structure relationship defines which stochastic_structure the node uses. Essentially, it sets the stochastic_structure of all the flow variables connected to the node, as well as the potential node_state variable. Note that only one stochastic_structure can be defined per node per model, as interpreted based on the node__stochastic_structure and model__stochastic_structure relationships. Investment variables use dedicated relationships, as detailed in the Investment Optimization section.

If no node__stochastic_structure relationship is specified, the model__default_stochastic_structure relationship is used instead.

diff --git a/dev/concept_reference/node__temporal_block/index.html b/dev/concept_reference/node__temporal_block/index.html index 4b3c604254..3ff0fa206e 100644 --- a/dev/concept_reference/node__temporal_block/index.html +++ b/dev/concept_reference/node__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This relationship links a node to a temporal_block and as such it will determine which temporal block governs the temporal horizon and resolution of the variables associated with this node. Specifically, the resolution of the temporal block will directly imply the duration of the time slices for which both the flow variables and their associated constraints are created.

For a more detailed description of how the temporal structure in SpineOpt can be created, see Temporal Framework.

+- · SpineOpt.jl

This relationship links a node to a temporal_block and as such it will determine which temporal block governs the temporal horizon and resolution of the variables associated with this node. Specifically, the resolution of the temporal block will directly imply the duration of the time slices for which both the flow variables and their associated constraints are created.

For a more detailed description of how the temporal structure in SpineOpt can be created, see Temporal Framework.

diff --git a/dev/concept_reference/node__unit_constraint/index.html b/dev/concept_reference/node__unit_constraint/index.html index a441a67805..4752dfdf10 100644 --- a/dev/concept_reference/node__unit_constraint/index.html +++ b/dev/concept_reference/node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__user_constraint is a two-dimensional relationship between a node and a user_constraint. The relationship specifies that a variable associated only with the node (currently only the node_state) is involved in the constraint. For example, the node_state_coefficient defined on node__user_constraint specifies the coefficient of the node's node_state variable in the specified user_constraint.

See also user_constraint

+- · SpineOpt.jl

node__user_constraint is a two-dimensional relationship between a node and a user_constraint. The relationship specifies that a variable associated only with the node (currently only the node_state) is involved in the constraint. For example, the node_state_coefficient defined on node__user_constraint specifies the coefficient of the node's node_state variable in the specified user_constraint.

See also user_constraint

diff --git a/dev/concept_reference/node_opf_type/index.html b/dev/concept_reference/node_opf_type/index.html index 1e2166c048..4811914bd9 100644 --- a/dev/concept_reference/node_opf_type/index.html +++ b/dev/concept_reference/node_opf_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node_opf_type_list/index.html b/dev/concept_reference/node_opf_type_list/index.html index 8b79349910..406e02971f 100644 --- a/dev/concept_reference/node_opf_type_list/index.html +++ b/dev/concept_reference/node_opf_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Houses the different possible values for the node_opf_type parameter. To identify the reference node, set node_opf_type = node_opf_type_reference; node_opf_type = node_opf_type_normal is the default value for non-reference nodes.

See also powerflow.

+- · SpineOpt.jl

Houses the different possible values for the node_opf_type parameter. To identify the reference node, set node_opf_type = node_opf_type_reference; node_opf_type = node_opf_type_normal is the default value for non-reference nodes.

See also powerflow.

diff --git a/dev/concept_reference/node_slack_penalty/index.html b/dev/concept_reference/node_slack_penalty/index.html index d33c9ee2b7..c0423e0b5f 100644 --- a/dev/concept_reference/node_slack_penalty/index.html +++ b/dev/concept_reference/node_slack_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node_slack_penalty triggers the creation of the node slack variables node_slack_pos and node_slack_neg. This allows the model to violate the node_balance constraint, with these violations penalised in the objective function with a coefficient equal to node_slack_penalty. If node_slack_penalty = 0, the slack variables are created and violations are unpenalised. If set to none or left undefined, the variables are not created and violation of the node_balance constraint is not possible.

+- · SpineOpt.jl

node_slack_penalty triggers the creation of the node slack variables node_slack_pos and node_slack_neg. This allows the model to violate the node_balance constraint, with these violations penalised in the objective function with a coefficient equal to node_slack_penalty. If node_slack_penalty = 0, the slack variables are created and violations are unpenalised. If set to none or left undefined, the variables are not created and violation of the node_balance constraint is not possible.
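As an illustrative sketch (not SpineOpt's code; the function name is hypothetical), the cost term added to the objective is simply the penalty times the total slack in both directions:

```python
def slack_cost(node_slack_penalty, node_slack_pos, node_slack_neg):
    # Violations in either direction are penalised with the same coefficient;
    # a penalty of 0 keeps the slack variables but makes violations free.
    return node_slack_penalty * (node_slack_pos + node_slack_neg)

slack_cost(1000.0, 2.0, 0.0)  # → 2000.0
slack_cost(0.0, 5.0, 3.0)     # → 0.0 (slack exists but is unpenalised)
```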

diff --git a/dev/concept_reference/node_state_cap/index.html b/dev/concept_reference/node_state_cap/index.html index 05b70e1849..ddcc12030d 100644 --- a/dev/concept_reference/node_state_cap/index.html +++ b/dev/concept_reference/node_state_cap/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node_state_cap parameter represents the maximum allowed value for the node_state variable. Note that in order for a node to have a node_state variable in the first place, the has_state parameter must be set to true. However, if the node has storage investments enabled using the candidate_storages parameter, the node_state_cap parameter acts as a coefficient for the storages_invested_available variable. Essentially, with investments, the node_state_cap parameter represents storage capacity per storage investment.

+- · SpineOpt.jl

The node_state_cap parameter represents the maximum allowed value for the node_state variable. Note that in order for a node to have a node_state variable in the first place, the has_state parameter must be set to true. However, if the node has storage investments enabled using the candidate_storages parameter, the node_state_cap parameter acts as a coefficient for the storages_invested_available variable. Essentially, with investments, the node_state_cap parameter represents storage capacity per storage investment.
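The per-investment interpretation described above can be sketched as follows; this is an illustrative sketch (hypothetical function name, not SpineOpt's implementation), where the effective upper bound on node_state scales with the number of available storage investments:

```python
def node_state_upper_bound(node_state_cap, storages_invested_available=1):
    # Without investments the cap applies directly (one storage available);
    # with investments it acts as storage capacity per invested storage unit.
    return node_state_cap * storages_invested_available

node_state_upper_bound(100.0)     # → 100.0 (no investments: cap applies directly)
node_state_upper_bound(100.0, 3)  # → 300.0 (three invested storages)
```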

diff --git a/dev/concept_reference/node_state_coefficient/index.html b/dev/concept_reference/node_state_coefficient/index.html index 334f29f9e7..e1a861999d 100644 --- a/dev/concept_reference/node_state_coefficient/index.html +++ b/dev/concept_reference/node_state_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node_state_min/index.html b/dev/concept_reference/node_state_min/index.html index 852e22c92b..0f3755b621 100644 --- a/dev/concept_reference/node_state_min/index.html +++ b/dev/concept_reference/node_state_min/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/number_of_units/index.html b/dev/concept_reference/number_of_units/index.html index 3ea5b7ae86..d0a3e1c61b 100644 --- a/dev/concept_reference/number_of_units/index.html +++ b/dev/concept_reference/number_of_units/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Defines how many members a certain unit object represents. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and units_unavailable, it determines the maximum number of members that can be online at any given time (thus restricting the units_on variable). It is possible to allow the model to increase the number_of_units itself, through Investment Optimization. It is also possible to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 1.

+- · SpineOpt.jl

Defines how many members a certain unit object represents. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and units_unavailable, it determines the maximum number of members that can be online at any given time (thus restricting the units_on variable). It is possible to allow the model to increase the number_of_units itself, through Investment Optimization. It is also possible to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 1.
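A rough sketch of how these parameters interact (plain Python, not SpineOpt code; the exact constraint form in SpineOpt may differ, this only illustrates the bound described above):

```python
# Illustrative only: the maximum number of online members is the member count
# scaled by the availability factor, minus members currently unavailable.
def max_units_on(number_of_units, unit_availability_factor=1.0, units_unavailable=0):
    return number_of_units * unit_availability_factor - units_unavailable

print(max_units_on(4, 0.75, 1))  # 2.0
print(max_units_on(1))           # 1.0 (the defaults: always one unit available)
```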

diff --git a/dev/concept_reference/online_variable_type/index.html b/dev/concept_reference/online_variable_type/index.html index 05173a9e93..5d111aa080 100644 --- a/dev/concept_reference/online_variable_type/index.html +++ b/dev/concept_reference/online_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

online_variable_type is a method parameter to model the 'commitment' or 'activation' of a unit, that is the situation where the unit becomes online and active in the system. It can take the values "unit_online_variable_type_binary", "unit_online_variable_type_integer", "unit_online_variable_type_linear" and "unit_online_variable_type_none".

If unit\_online\_variable\_type\_binary, then the commitment is modelled as an online/offline decision (classic unit commitment).

If unit\_online\_variable\_type\_integer, then the commitment is modelled as the number of units that are online (clustered unit commitment).

If unit\_online\_variable\_type\_linear, then the commitment is modelled as the number of units that are online, but here it is also possible to activate 'fractions' of a unit. This should reduce computational burden compared to unit\_online\_variable\_type\_integer.

If unit\_online\_variable\_type\_none, then the commitment is not modelled at all and the unit is assumed to be always online. This reduces the computational burden the most.

+- · SpineOpt.jl

online_variable_type is a method parameter to model the 'commitment' or 'activation' of a unit, that is the situation where the unit becomes online and active in the system. It can take the values "unit_online_variable_type_binary", "unit_online_variable_type_integer", "unit_online_variable_type_linear" and "unit_online_variable_type_none".

If unit\_online\_variable\_type\_binary, then the commitment is modelled as an online/offline decision (classic unit commitment).

If unit\_online\_variable\_type\_integer, then the commitment is modelled as the number of units that are online (clustered unit commitment).

If unit\_online\_variable\_type\_linear, then the commitment is modelled as the number of units that are online, but here it is also possible to activate 'fractions' of a unit. This should reduce computational burden compared to unit\_online\_variable\_type\_integer.

If unit\_online\_variable\_type\_none, then the commitment is not modelled at all and the unit is assumed to be always online. This reduces the computational burden the most.

diff --git a/dev/concept_reference/operating_cost/index.html b/dev/concept_reference/operating_cost/index.html index 8cda22da77..6150a636a3 100644 --- a/dev/concept_reference/operating_cost/index.html +++ b/dev/concept_reference/operating_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the operating_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for operating costs associated with that unit over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the operating_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for operating costs associated with that unit over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/operating_points/index.html b/dev/concept_reference/operating_points/index.html index 80baae4156..08dd4644d2 100644 --- a/dev/concept_reference/operating_points/index.html +++ b/dev/concept_reference/operating_points/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If operating_points is defined as an array type on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of operating segment variables, unit_flow_op, one for each operating segment, with an additional index, i, to reference the specific operating segment. Each value in the array represents the upper bound of the operating segment, normalized on unit_capacity for the corresponding unit__to_node or unit__from_node flow. operating_points is used in conjunction with fix_ratio_in_out_unit_flow, where the array dimensions must match, to define the normalized operating point bounds for the corresponding incremental ratio. operating_points is also used in conjunction with user_constraint, where the array dimension must match any corresponding piecewise linear unit_flow_coefficient; here operating_points likewise defines the normalized operating point bounds for the corresponding unit_flow_coefficients.

Note that operating_points is defined on a capacity-normalized basis and the values represent the upper bound of the corresponding operating segment variable. So if operating_points is specified as [0.5, 1], this creates two operating segments, one from zero to 50% of the corresponding unit_capacity and a second from 50% to 100% of the corresponding unit_capacity.

+- · SpineOpt.jl

If operating_points is defined as an array type on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of operating segment variables, unit_flow_op, one for each operating segment, with an additional index, i, to reference the specific operating segment. Each value in the array represents the upper bound of the operating segment, normalized on unit_capacity for the corresponding unit__to_node or unit__from_node flow. operating_points is used in conjunction with fix_ratio_in_out_unit_flow, where the array dimensions must match, to define the normalized operating point bounds for the corresponding incremental ratio. operating_points is also used in conjunction with user_constraint, where the array dimension must match any corresponding piecewise linear unit_flow_coefficient; here operating_points likewise defines the normalized operating point bounds for the corresponding unit_flow_coefficients.

Note that operating_points is defined on a capacity-normalized basis and the values represent the upper bound of the corresponding operating segment variable. So if operating_points is specified as [0.5, 1], this creates two operating segments, one from zero to 50% of the corresponding unit_capacity and a second from 50% to 100% of the corresponding unit_capacity.
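The segment arithmetic described above can be sketched in a few lines of plain Python (illustrative only, not SpineOpt code; function and variable names are made up for the example):

```python
# Illustrative sketch: convert normalized operating_points into absolute
# (lower, upper) bounds for each operating segment of a unit's flow.
def segment_bounds(operating_points, unit_capacity):
    """Return absolute (lower, upper) bounds for each operating segment."""
    bounds = []
    lower = 0.0
    for point in operating_points:
        upper = point * unit_capacity
        bounds.append((lower, upper))
        lower = upper
    return bounds

# operating_points = [0.5, 1] with a capacity of 200 gives two segments:
print(segment_bounds([0.5, 1.0], 200))  # [(0.0, 100.0), (100.0, 200.0)]
```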

diff --git a/dev/concept_reference/ordered_unit_flow_op/index.html b/dev/concept_reference/ordered_unit_flow_op/index.html index 5ccc0b3d50..f3ebcb3dbb 100644 --- a/dev/concept_reference/ordered_unit_flow_op/index.html +++ b/dev/concept_reference/ordered_unit_flow_op/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If one defines the parameter ordered_unit_flow_op in a unit__from_node or unit__to_node relationship, SpineOpt will create the variable unit_flow_op_active to order each unit_flow_op of the unit_flow according to the rank of the defined operating_points. This setting is only necessary when the operating segments of unit_flow_op have increasing conversion efficiency. The numerical type of unit_flow_op_active (float, binary, or integer) follows that of the variable units_on, which can be set via the parameter online_variable_type.

Note that this functionality is based on SOS2 constraints, so only a MILP configuration, i.e. making unit_flow_op_active a binary or integer variable, guarantees correct performance.

+- · SpineOpt.jl

If one defines the parameter ordered_unit_flow_op in a unit__from_node or unit__to_node relationship, SpineOpt will create the variable unit_flow_op_active to order each unit_flow_op of the unit_flow according to the rank of the defined operating_points. This setting is only necessary when the operating segments of unit_flow_op have increasing conversion efficiency. The numerical type of unit_flow_op_active (float, binary, or integer) follows that of the variable units_on, which can be set via the parameter online_variable_type.

Note that this functionality is based on SOS2 constraints, so only a MILP configuration, i.e. making unit_flow_op_active a binary or integer variable, guarantees correct performance.

diff --git a/dev/concept_reference/outage_variable_type/index.html b/dev/concept_reference/outage_variable_type/index.html index e20864255a..0ff100bc67 100644 --- a/dev/concept_reference/outage_variable_type/index.html +++ b/dev/concept_reference/outage_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

outage_variable_type is a method parameter to model the 'commitment' or 'activation' of unit maintenance outages.

To schedule maintenance outages, one must activate the units_out_of_service variable. This is done by setting outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none (the default) deactivates the units_out_of_service variable.

+- · SpineOpt.jl

outage_variable_type is a method parameter to model the 'commitment' or 'activation' of unit maintenance outages.

To schedule maintenance outages, one must activate the units_out_of_service variable. This is done by setting outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none (the default) deactivates the units_out_of_service variable.

diff --git a/dev/concept_reference/output/index.html b/dev/concept_reference/output/index.html index 96d0083c64..a56b4130f9 100644 --- a/dev/concept_reference/output/index.html +++ b/dev/concept_reference/output/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

An output is essentially a handle for a SpineOpt variable or the objective function, to be included in a report and written into an output database. Typically, e.g., the unit_flow variables are desired as output from most models, so creating an output object called unit_flow allows one to designate it as something to be written in the desired report. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

+- · SpineOpt.jl

An output is essentially a handle for a SpineOpt variable or the objective function, to be included in a report and written into an output database. Typically, e.g., the unit_flow variables are desired as output from most models, so creating an output object called unit_flow allows one to designate it as something to be written in the desired report. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

diff --git a/dev/concept_reference/output_db_url/index.html b/dev/concept_reference/output_db_url/index.html index e1c19a5651..e31ae8a0ba 100644 --- a/dev/concept_reference/output_db_url/index.html +++ b/dev/concept_reference/output_db_url/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The output_db_url parameter is the URL of the database where the results of the model run are written. It overrides the value of the second argument passed to run_spineopt.

+- · SpineOpt.jl

The output_db_url parameter is the URL of the database where the results of the model run are written. It overrides the value of the second argument passed to run_spineopt.

diff --git a/dev/concept_reference/output_resolution/index.html b/dev/concept_reference/output_resolution/index.html index d0751d49af..6802cfe0ea 100644 --- a/dev/concept_reference/output_resolution/index.html +++ b/dev/concept_reference/output_resolution/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The output_resolution parameter indicates the resolution at which output values should be reported.

If null (the default), then results are reported at the highest available resolution from the model. If output_resolution is a duration value, then results are aggregated at that resolution before being reported. At the moment, the aggregation is simply performed by taking the average value.

+- · SpineOpt.jl

The output_resolution parameter indicates the resolution at which output values should be reported.

If null (the default), then results are reported at the highest available resolution from the model. If output_resolution is a duration value, then results are aggregated at that resolution before being reported. At the moment, the aggregation is simply performed by taking the average value.
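The averaging aggregation can be sketched in plain Python (illustrative only, not SpineOpt internals; the function name is made up for the example):

```python
# Illustrative sketch: group a fine-resolution series into blocks of
# `group_size` consecutive values and report each block's mean.
def aggregate_by_average(values, group_size):
    groups = [values[i:i + group_size] for i in range(0, len(values), group_size)]
    return [sum(g) / len(g) for g in groups]

# Quarter-hourly values reported at an hourly output_resolution:
print(aggregate_by_average([1, 2, 3, 4, 5, 6, 7, 8], 4))  # [2.5, 6.5]
```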

diff --git a/dev/concept_reference/overwrite_results_on_rolling/index.html b/dev/concept_reference/overwrite_results_on_rolling/index.html index 1c3b76b611..28f0be5e66 100644 --- a/dev/concept_reference/overwrite_results_on_rolling/index.html +++ b/dev/concept_reference/overwrite_results_on_rolling/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The overwrite_results_on_rolling parameter allows one to define whether or not results from further optimisation windows should overwrite those from previous ones. This, of course, is relevant only if optimisation windows overlap, which in turn happens whenever a temporal_block goes beyond the end of the window.

If true (the default) then results are written as a time-series. If false, then results are written as a map from analysis time (i.e., the window start) to time-series.

+- · SpineOpt.jl

The overwrite_results_on_rolling parameter allows one to define whether or not results from further optimisation windows should overwrite those from previous ones. This, of course, is relevant only if optimisation windows overlap, which in turn happens whenever a temporal_block goes beyond the end of the window.

If true (the default) then results are written as a time-series. If false, then results are written as a map from analysis time (i.e., the window start) to time-series.

diff --git a/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html b/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html index 03817107ed..c98a80161b 100644 --- a/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html +++ b/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parent_stochastic_scenario__child_stochastic_scenario relationship defines how the individual stochastic_scenarios are related to each other, forming what is referred to as the stochastic directed acyclic graph (DAG) in the Stochastic Framework section. It acts as a sort of basis for the stochastic_structures, but doesn't contain any Parameters necessary for describing how it relates to the Temporal Framework or the Objective function.

The parent_stochastic_scenario__child_stochastic_scenario relationship and the stochastic DAG it forms are crucial for Constraint generation with stochastic path indexing. Every finite stochastic DAG has a limited number of unique ways of traversing it, called full stochastic paths, which are used when determining how many different constraints need to be generated over time periods where stochastic_structures branch or converge, or when generating constraints involving different stochastic_structures. See the Stochastic Framework section for more information.

+- · SpineOpt.jl

The parent_stochastic_scenario__child_stochastic_scenario relationship defines how the individual stochastic_scenarios are related to each other, forming what is referred to as the stochastic directed acyclic graph (DAG) in the Stochastic Framework section. It acts as a sort of basis for the stochastic_structures, but doesn't contain any Parameters necessary for describing how it relates to the Temporal Framework or the Objective function.

The parent_stochastic_scenario__child_stochastic_scenario relationship and the stochastic DAG it forms are crucial for Constraint generation with stochastic path indexing. Every finite stochastic DAG has a limited number of unique ways of traversing it, called full stochastic paths, which are used when determining how many different constraints need to be generated over time periods where stochastic_structures branch or converge, or when generating constraints involving different stochastic_structures. See the Stochastic Framework section for more information.

diff --git a/dev/concept_reference/ramp_down_limit/index.html b/dev/concept_reference/ramp_down_limit/index.html index b8835d4c61..0dfdfe7abd 100644 --- a/dev/concept_reference/ramp_down_limit/index.html +++ b/dev/concept_reference/ramp_down_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the ramp_down_limit parameter limits the maximum decrease in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

+- · SpineOpt.jl

The definition of the ramp_down_limit parameter limits the maximum decrease in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

diff --git a/dev/concept_reference/ramp_up_limit/index.html b/dev/concept_reference/ramp_up_limit/index.html index 8ea2675661..d0d779da73 100644 --- a/dev/concept_reference/ramp_up_limit/index.html +++ b/dev/concept_reference/ramp_up_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the ramp_up_limit parameter limits the maximum increase in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

+- · SpineOpt.jl

The definition of the ramp_up_limit parameter limits the maximum increase in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.
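The restriction described above can be sketched as a simple feasibility check in plain Python (illustrative only, not the actual SpineOpt constraint; names are made up for the example):

```python
# Illustrative sketch: the flow increase over one duration_unit may not
# exceed ramp_up_limit * unit_capacity while the unit is online.
def ramp_up_ok(flow_prev, flow_now, unit_capacity, ramp_up_limit=1.0):
    return flow_now - flow_prev <= ramp_up_limit * unit_capacity

# With a 100 MW unit and ramp_up_limit = 0.2, at most +20 MW per step:
print(ramp_up_ok(40.0, 70.0, 100.0, 0.2))  # False: +30 exceeds 20
print(ramp_up_ok(40.0, 55.0, 100.0, 0.2))  # True: +15 is within 20
```

The default ramp_up_limit of 1.0 makes the check trivially satisfied for any within-capacity flow change, matching the "equivalent to choosing a value of 1" remark above.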

diff --git a/dev/concept_reference/report/index.html b/dev/concept_reference/report/index.html index eb717a8a89..cf9d767307 100644 --- a/dev/concept_reference/report/index.html +++ b/dev/concept_reference/report/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A report is essentially a group of outputs from a model, that gets written into the output database as a result of running SpineOpt. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

+- · SpineOpt.jl

A report is essentially a group of outputs from a model, that gets written into the output database as a result of running SpineOpt. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

diff --git a/dev/concept_reference/report__output/index.html b/dev/concept_reference/report__output/index.html index 6070f1f7c5..ba0b84ff9e 100644 --- a/dev/concept_reference/report__output/index.html +++ b/dev/concept_reference/report__output/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/representative_periods_mapping/index.html b/dev/concept_reference/representative_periods_mapping/index.html index dd585733c9..ca0debc401 100644 --- a/dev/concept_reference/representative_periods_mapping/index.html +++ b/dev/concept_reference/representative_periods_mapping/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the names of temporal_block objects to use as representative periods for certain time ranges. This instructs the model to define operational variables only for those representative periods, and to map variables from normal periods to representative ones. The idea is to reduce the size of the problem by using a reduced set of variables, when one knows that a reduced set of time periods can be representative of a larger one.

Note that only operational variables other than node_state are sensitive to this parameter. In other words, the model always creates node_state variables and investment variables for all time periods, regardless of whether representative_periods_mapping is specified for any temporal_block.

To use representative periods in your model, do the following:

  1. Define one temporal_block for the 'normal' periods as you would do if you weren't using representative periods.
  2. Define a set of temporal_block objects, each corresponding to one representative period.
  3. Specify representative_periods_mapping for the 'normal' temporal_block as a map, from consecutive date-time values to the name of a representative temporal_block.
  4. Associate all the above temporal_block objects to elements in your model (e.g., via node__temporal_block and/or units_on__temporal_block relationships), to map their operational variables from normal periods, to the variable from the representative period.

See also Representative days with seasonal storages.

+- · SpineOpt.jl

Specifies the names of temporal_block objects to use as representative periods for certain time ranges. This instructs the model to define operational variables only for those representative periods, and to map variables from normal periods to representative ones. The idea is to reduce the size of the problem by using a reduced set of variables, when one knows that a reduced set of time periods can be representative of a larger one.

Note that only operational variables other than node_state are sensitive to this parameter. In other words, the model always creates node_state variables and investment variables for all time periods, regardless of whether representative_periods_mapping is specified for any temporal_block.

To use representative periods in your model, do the following:

  1. Define one temporal_block for the 'normal' periods as you would do if you weren't using representative periods.
  2. Define a set of temporal_block objects, each corresponding to one representative period.
  3. Specify representative_periods_mapping for the 'normal' temporal_block as a map, from consecutive date-time values to the name of a representative temporal_block.
  4. Associate all the above temporal_block objects to elements in your model (e.g., via node__temporal_block and/or units_on__temporal_block relationships), to map their operational variables from normal periods, to the variable from the representative period.

See also Representative days with seasonal storages.
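The map lookup in step 3 can be sketched in plain Python (illustrative only; the block names and dates are invented for the example, and SpineOpt's actual resolution logic may differ):

```python
# Illustrative sketch: each 'normal' timestamp resolves to the representative
# temporal_block whose map key is the latest date-time not after it.
from datetime import datetime

mapping = {  # consecutive date-time values -> representative block name
    datetime(2030, 1, 1): "winter_week",
    datetime(2030, 4, 1): "spring_week",
}

def representative_block(t, mapping):
    """Return the representative block covering time t (latest key <= t)."""
    keys = sorted(k for k in mapping if k <= t)
    return mapping[keys[-1]] if keys else None

print(representative_block(datetime(2030, 2, 15), mapping))  # winter_week
print(representative_block(datetime(2030, 5, 1), mapping))   # spring_week
```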

diff --git a/dev/concept_reference/reserve_procurement_cost/index.html b/dev/concept_reference/reserve_procurement_cost/index.html index d0bd9312fc..7c15de9ef6 100644 --- a/dev/concept_reference/reserve_procurement_cost/index.html +++ b/dev/concept_reference/reserve_procurement_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the reserve_procurement_cost parameter for a specific unit__to_node or unit__from_node relationship, a cost term will be added to the objective function whenever that unit is used over the course of the operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the reserve_procurement_cost parameter for a specific unit__to_node or unit__from_node relationship, a cost term will be added to the objective function whenever that unit is used over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/resolution/index.html b/dev/concept_reference/resolution/index.html index 3cce503b10..eda51bebdf 100644 --- a/dev/concept_reference/resolution/index.html +++ b/dev/concept_reference/resolution/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter specifies the resolution of the temporal block, or in other words: the length of the timesteps used in the optimization run. Generally speaking, variables and constraints are generated for each timestep of an optimization. For example, the nodal balance constraint must hold for each timestep.

An array of duration values can be used to have a resolution that varies with time. This can be useful, for example, when uncertainty in one of the inputs rises as the optimization moves away from the model start. Think of, for instance, a forecast of wind power generation, which might be available in quarter-hourly detail for one day into the future, and in hourly detail for the next two days. It is possible to use a quarter-hourly resolution for the full three-day horizon; however, by lowering the temporal resolution after the first day, the computational burden is reduced substantially.

+- · SpineOpt.jl

This parameter specifies the resolution of the temporal block, or in other words: the length of the timesteps used in the optimization run. Generally speaking, variables and constraints are generated for each timestep of an optimization. For example, the nodal balance constraint must hold for each timestep.

An array of duration values can be used to have a resolution that varies with time. This can be useful, for example, when uncertainty in one of the inputs rises as the optimization moves away from the model start. Think of, for instance, a forecast of wind power generation, which might be available in quarter-hourly detail for one day into the future, and in hourly detail for the next two days. It is possible to use a quarter-hourly resolution for the full three-day horizon; however, by lowering the temporal resolution after the first day, the computational burden is reduced substantially.
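The size reduction from the wind-forecast example is easy to quantify (plain Python arithmetic, illustrative only):

```python
# Timestep count for a 3-day horizon at a uniform 15-minute resolution versus
# a mixed resolution: 15 minutes for day 1, then hourly for days 2-3.
uniform = 3 * 24 * 4          # 288 timesteps
mixed = 1 * 24 * 4 + 2 * 24   # 96 + 48 = 144 timesteps

print(uniform, mixed)  # 288 144 -- the mixed resolution halves the count
```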

diff --git a/dev/concept_reference/right_hand_side/index.html b/dev/concept_reference/right_hand_side/index.html index f6b88d7016..1ef8df469f 100644 --- a/dev/concept_reference/right_hand_side/index.html +++ b/dev/concept_reference/right_hand_side/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/roll_forward/index.html b/dev/concept_reference/roll_forward/index.html index 12dacc7e69..9305d8375c 100644 --- a/dev/concept_reference/roll_forward/index.html +++ b/dev/concept_reference/roll_forward/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter defines how much the optimization window rolls forward in a rolling horizon optimization and should be expressed as a duration. In a rolling horizon optimization, the model is split in windows that are optimized iteratively; roll_forward indicates how much the window should roll forward after each iteration. Overlap between consecutive optimization windows is possible. In the practical approaches presented in Temporal Framework, the rolling window optimization will be explained in more detail. The default value of this parameter is the entire model time horizon, which leads to a single optimization for the entire time horizon.

In case you want your model to roll a different amount of time after each iteration, you can specify an array of durations for roll_forward. The ith position in this array indicates how much the model should roll after iteration i. This allows you to perform a rolling horizon optimization over a selection of disjoint representative periods as if they were contiguous.

+- · SpineOpt.jl

This parameter defines how much the optimization window rolls forward in a rolling horizon optimization and should be expressed as a duration. In a rolling horizon optimization, the model is split in windows that are optimized iteratively; roll_forward indicates how much the window should roll forward after each iteration. Overlap between consecutive optimization windows is possible. In the practical approaches presented in Temporal Framework, the rolling window optimization will be explained in more detail. The default value of this parameter is the entire model time horizon, which leads to a single optimization for the entire time horizon.

In case you want your model to roll a different amount of time after each iteration, you can specify an array of durations for roll_forward. The ith position in this array indicates how much the model should roll after iteration i. This allows you to perform a rolling horizon optimization over a selection of disjoint representative periods as if they were contiguous.
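The per-iteration rolling can be sketched in plain Python (illustrative only, not SpineOpt code; the dates and the helper name are invented for the example):

```python
# Illustrative sketch: given a model start and an array of roll_forward
# durations, compute the start of each optimization window.
from datetime import datetime, timedelta

def window_starts(model_start, rolls):
    """Return the start of each window: the model start plus cumulative rolls."""
    starts = [model_start]
    for roll in rolls:
        starts.append(starts[-1] + roll)
    return starts

# Roll 1 day after the first window, then jump 6 days to a disjoint period:
for start in window_starts(datetime(2030, 1, 1),
                           [timedelta(days=1), timedelta(days=6)]):
    print(start)
```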

diff --git a/dev/concept_reference/shut_down_cost/index.html b/dev/concept_reference/shut_down_cost/index.html index a80f6bb73e..eb93eee615 100644 --- a/dev/concept_reference/shut_down_cost/index.html +++ b/dev/concept_reference/shut_down_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the shut_down_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit shuts down over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the shut_down_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit shuts down over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/shut_down_limit/index.html b/dev/concept_reference/shut_down_limit/index.html index e6a529f94b..1162970e52 100644 --- a/dev/concept_reference/shut_down_limit/index.html +++ b/dev/concept_reference/shut_down_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the shut_down_limit parameter sets an upper bound on the unit_flow variable for the timestep right before a shutdown.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit is not imposed, which is equivalent to choosing a value of 1.
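
The fraction-of-capacity rule can be sketched numerically. This is an illustrative Python snippet, not SpineOpt's constraint code; the function name is hypothetical.

```python
def max_flow_before_shutdown(unit_capacity, shut_down_limit=None):
    """Upper bound on unit_flow in the timestep right before a shutdown.

    Mirrors the documented behaviour: the bound is a fraction of
    unit_capacity, and leaving shut_down_limit unspecified is
    equivalent to a fraction of 1. (Sketch only.)
    """
    fraction = 1.0 if shut_down_limit is None else shut_down_limit
    return fraction * unit_capacity

# A 400 MW unit with shut_down_limit = 0.5 may run at no more than
# 200 MW in the timestep preceding a shutdown.
```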

diff --git a/dev/concept_reference/start_up_cost/index.html b/dev/concept_reference/start_up_cost/index.html index 08435915fb..1309fe7210 100644 --- a/dev/concept_reference/start_up_cost/index.html +++ b/dev/concept_reference/start_up_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the start_up_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit starts up over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/start_up_limit/index.html b/dev/concept_reference/start_up_limit/index.html index 0ba8400a2f..65f13eae27 100644 --- a/dev/concept_reference/start_up_limit/index.html +++ b/dev/concept_reference/start_up_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the start_up_limit parameter sets an upper bound on the unit_flow variable for the timestep right after a startup.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit is not imposed, which is equivalent to choosing a value of 1.
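
As for the shutdown case, the rule is a simple fraction of capacity; here is an illustrative Python sketch with a hypothetical function name, not SpineOpt's constraint code.

```python
def max_flow_after_startup(unit_capacity, start_up_limit=None):
    """Upper bound on unit_flow in the timestep right after a startup.

    An unspecified start_up_limit is equivalent to a fraction of 1,
    i.e. the full unit_capacity. (Sketch only.)
    """
    return (1.0 if start_up_limit is None else start_up_limit) * unit_capacity

# A 400 MW unit with start_up_limit = 0.25 may ramp to at most
# 100 MW in the timestep right after starting up.
```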

diff --git a/dev/concept_reference/state_coeff/index.html b/dev/concept_reference/state_coeff/index.html index eb8f18b650..e6b3b7668a 100644 --- a/dev/concept_reference/state_coeff/index.html +++ b/dev/concept_reference/state_coeff/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The state_coeff parameter acts as a coefficient for the node_state variable in the node injection constraint. Essentially, it tells how the node_state variable should be treated in relation to the commodity flows and demand, and can be used e.g. for scaling or unit conversions. For most use cases a state_coeff parameter value of 1.0 should suffice, e.g. a MWh storage connected to MW flows in a model with the hour as the basic unit of time.
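
The scaling role of state_coeff can be illustrated with a minimal Python sketch. The exact node injection constraint has more terms; the function name and simplification here are assumptions for illustration only.

```python
def storage_contribution(state_prev_mwh, state_now_mwh, state_coeff=1.0, dt_h=1.0):
    """Power (in MW) a storage contributes to the node balance over one
    timestep of length dt_h hours, as the scaled change in node_state.

    With state_coeff = 1.0, a MWh state lines up directly with MW flows
    in an hourly model. (Illustrative sketch, not the full constraint.)
    """
    return state_coeff * (state_prev_mwh - state_now_mwh) / dt_h

# Discharging the storage from 200 MWh to 150 MWh over one hour
# supplies 50 MW to the node when state_coeff = 1.0; with the
# default state_coeff of 0, the state change has no effect.
```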

Note that in order for the state_coeff parameter to have an impact, the node must first have a node_state variable to begin with, defined using the has_state parameter. By default, the state_coeff is set to zero as a precaution, so that the user always has to set its value explicitly for it to have an impact on the model.

diff --git a/dev/concept_reference/stochastic_scenario/index.html b/dev/concept_reference/stochastic_scenario/index.html index 51002ff76a..d1a4c8fd29 100644 --- a/dev/concept_reference/stochastic_scenario/index.html +++ b/dev/concept_reference/stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Essentially, a stochastic_scenario is a label for an alternative period of time, describing one possibility of what might come to pass. They are the basic building blocks of the scenario-based Stochastic Framework in SpineOpt.jl, but aren't really meaningful on their own. They only become meaningful when combined into a stochastic_structure using the stochastic_structure__stochastic_scenario and parent_stochastic_scenario__child_stochastic_scenario relationships, along with Parameters like weight_relative_to_parents and stochastic_scenario_end.

diff --git a/dev/concept_reference/stochastic_scenario_end/index.html b/dev/concept_reference/stochastic_scenario_end/index.html index 30a0849b72..24fe59c8ca 100644 --- a/dev/concept_reference/stochastic_scenario_end/index.html +++ b/dev/concept_reference/stochastic_scenario_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_scenario_end is a Duration-type parameter, defining when a stochastic_scenario ends relative to the start of the current optimization. As it is a parameter for the stochastic_structure__stochastic_scenario relationship, different stochastic_structures can have different values for the same stochastic_scenario, making it possible to define slightly different stochastic_structures using the same stochastic_scenarios. See the Stochastic Framework section for more information about how different stochastic_structures interact in SpineOpt.jl.

When a stochastic_scenario ends at the point in time defined by the stochastic_scenario_end parameter, it spawns its children according to the parent_stochastic_scenario__child_stochastic_scenario relationship. Note that the children will be inherently assumed to belong to the same stochastic_structure their parent belonged to, even without explicit stochastic_structure__stochastic_scenario relationships! Thus, you might need to define the weight_relative_to_parents parameter for the children.

If no stochastic_scenario_end is defined, the stochastic_scenario is assumed to go on indefinitely.
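
The branching rule described above can be sketched in a few lines of Python. This is an illustrative walk of the scenario tree, not SpineOpt's implementation; the root scenario name "realization" and the hour-offset representation are assumptions for the sketch.

```python
def active_scenarios(offset, scenario_end, children, root="realization"):
    """Return the stochastic_scenarios active at `offset` (hours from the
    start of the current optimization), given each scenario's
    stochastic_scenario_end and the parent -> children map.
    A scenario without an end goes on indefinitely. (Sketch only.)
    """
    active, frontier = [], [root]
    while frontier:
        scen = frontier.pop()
        end = scenario_end.get(scen)
        if end is None or offset < end:
            active.append(scen)  # scenario has not ended yet
        else:
            # the scenario has ended: it spawns its children instead
            frontier.extend(children.get(scen, []))
    return sorted(active)

# The root scenario ends after 24 h and branches into two children,
# which have no end of their own and so continue indefinitely.
ends = {"realization": 24}
kids = {"realization": ["low_demand", "high_demand"]}
```

At hour 6 only the root is active; at hour 36 the two children have replaced it.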

diff --git a/dev/concept_reference/stochastic_structure/index.html b/dev/concept_reference/stochastic_structure/index.html index 921f289683..d0b3701e6a 100644 --- a/dev/concept_reference/stochastic_structure/index.html +++ b/dev/concept_reference/stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_structure is the key component of the scenario-based Stochastic Framework in SpineOpt.jl, and essentially represents a group of stochastic_scenarios with set Parameters. The stochastic_structure__stochastic_scenario relationship defines which stochastic_scenarios are included in which stochastic_structures, and the weight_relative_to_parents and stochastic_scenario_end Parameters define the exact shape and impact of the stochastic_structure, along with the parent_stochastic_scenario__child_stochastic_scenario relationship.

The main reason stochastic_structures are so important is that they act as handles connecting the Stochastic Framework to the modelled system. This is handled using the Structural relationship classes, e.g. node__stochastic_structure, which define the stochastic_structure applied to each object describing the modelled system. Connecting each system object to the appropriate stochastic_structure individually can be a bit bothersome at times, so there are also a number of convenience Meta relationship classes, like model__default_stochastic_structure, which allow setting model-wide defaults to be used whenever specific definitions are missing.

diff --git a/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html b/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html index 934406afb5..667c2b629d 100644 --- a/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html +++ b/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_structure__stochastic_scenario relationship defines which stochastic_scenarios are included in which stochastic_structure, as well as holds the stochastic_scenario_end and weight_relative_to_parents Parameters defining how the stochastic_structure interacts with the Temporal Framework and the Objective function. Along with parent_stochastic_scenario__child_stochastic_scenario, this relationship is used to define the exact properties of each stochastic_structure, which are then applied to the objects describing the modelled system according to the Structural relationship classes, like the node__stochastic_structure relationship.

diff --git a/dev/concept_reference/storage_investment_cost/index.html b/dev/concept_reference/storage_investment_cost/index.html index 58a804163e..43fe52800e 100644 --- a/dev/concept_reference/storage_investment_cost/index.html +++ b/dev/concept_reference/storage_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the storage_investment_cost parameter for a specific node, a cost term will be added to the objective function whenever a storage investment is made during the current optimization window.

diff --git a/dev/concept_reference/storage_investment_lifetime/index.html b/dev/concept_reference/storage_investment_lifetime/index.html index f8c1ae1406..71021edae7 100644 --- a/dev/concept_reference/storage_investment_lifetime/index.html +++ b/dev/concept_reference/storage_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Duration parameter that determines the minimum duration of storage investment decisions. Once a storage has been invested-in, it must remain invested-in for storage_investment_tech_lifetime. Note that storage_investment_tech_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimisation in each step; this may impact performance.

See also Investment Optimization and candidate_storages

diff --git a/dev/concept_reference/storage_investment_variable_type/index.html b/dev/concept_reference/storage_investment_variable_type/index.html index 7026614890..4dba1bacd7 100644 --- a/dev/concept_reference/storage_investment_variable_type/index.html +++ b/dev/concept_reference/storage_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investment problem storage_investment_variable_type determines the storage investment decision variable type. Since a node's node_state will be limited to the product of the investment variable and the corresponding node_state_cap, and since candidate_storages represents the upper bound of the storage investment decision variable, storage_investment_variable_type thus determines what the investment decision represents. If storage_investment_variable_type is integer or binary, then candidate_storages represents the maximum number of discrete storages that may be invested-in. If storage_investment_variable_type is continuous, candidate_storages is more analogous to a capacity, with node_state_cap being analogous to a scaling parameter. For example, if storage_investment_variable_type = integer, candidate_storages = 4 and node_state_cap = 1000 MWh, then the investment decision is how many 1000 MWh storages to build. If storage_investment_variable_type = continuous, candidate_storages = 1000 and node_state_cap = 1 MWh, then the investment decision is how much storage capacity to build. Finally, if storage_investment_variable_type = integer, candidate_storages = 10 and node_state_cap = 100 MWh, then the investment decision is how many 100 MWh storage blocks to build.
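
The arithmetic behind these examples is simply the investment variable times node_state_cap; a minimal Python sketch (hypothetical function name, not SpineOpt code):

```python
def invested_storage_capacity(storages_invested, node_state_cap):
    """Total storage capacity (MWh) resulting from an investment decision:
    the invested variable, bounded above by candidate_storages, times
    node_state_cap. (Sketch of the documented arithmetic.)
    """
    return storages_invested * node_state_cap

# Integer variable, candidate_storages = 4, node_state_cap = 1000 MWh:
# building 3 of the 4 candidate storages yields 3000 MWh of capacity.
# Continuous variable, node_state_cap = 1 MWh: investing 750.0 units
# yields 750 MWh, so the variable itself reads as a capacity.
```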

See also Investment Optimization and candidate_storages.

diff --git a/dev/concept_reference/storages_invested_avaiable_coefficient/index.html b/dev/concept_reference/storages_invested_avaiable_coefficient/index.html index 74fb044e91..ab6b68d8ee 100644 --- a/dev/concept_reference/storages_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/storages_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/storages_invested_big_m_mga/index.html b/dev/concept_reference/storages_invested_big_m_mga/index.html index 4d792d9968..f3ef76488d 100644 --- a/dev/concept_reference/storages_invested_big_m_mga/index.html +++ b/dev/concept_reference/storages_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The storages_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_storages could suffice.)

diff --git a/dev/concept_reference/storages_invested_coefficient/index.html b/dev/concept_reference/storages_invested_coefficient/index.html index a3093878d8..f18c43f7f6 100644 --- a/dev/concept_reference/storages_invested_coefficient/index.html +++ b/dev/concept_reference/storages_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/storages_invested_mga/index.html b/dev/concept_reference/storages_invested_mga/index.html index 102644cab0..77f17e503c 100644 --- a/dev/concept_reference/storages_invested_mga/index.html +++ b/dev/concept_reference/storages_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The storages_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). When the value of storages_invested_mga is set to true, investment decisions in this storage, or group of storages, will be included in the MGA algorithm.

diff --git a/dev/concept_reference/tax_in_unit_flow/index.html b/dev/concept_reference/tax_in_unit_flow/index.html index 39adb0f558..80c5909323 100644 --- a/dev/concept_reference/tax_in_unit_flow/index.html +++ b/dev/concept_reference/tax_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_in_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with all unit_flow variables with direction to_node over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/tax_net_unit_flow/index.html b/dev/concept_reference/tax_net_unit_flow/index.html index fee5f3a555..0d6dd23cb1 100644 --- a/dev/concept_reference/tax_net_unit_flow/index.html +++ b/dev/concept_reference/tax_net_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_net_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with the net total of all unit_flow variables with direction to_node for this specific node, minus all unit_flow variables with direction from_node.

diff --git a/dev/concept_reference/tax_out_unit_flow/index.html b/dev/concept_reference/tax_out_unit_flow/index.html index 52319ecb49..14932aaeca 100644 --- a/dev/concept_reference/tax_out_unit_flow/index.html +++ b/dev/concept_reference/tax_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_out_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with all unit_flow variables with direction from_node over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/temporal_block/index.html b/dev/concept_reference/temporal_block/index.html index f817af1e56..9107415cf4 100644 --- a/dev/concept_reference/temporal_block/index.html +++ b/dev/concept_reference/temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A temporal block defines the temporal properties of the optimization that is to be solved in the current window. It is the key building block of the Temporal Framework. Most importantly, it holds the necessary information about the resolution and horizon of the optimization. A single model can have multiple temporal blocks, which is one of the main sources of temporal flexibility in Spine: by linking different parts of the model to different temporal blocks, a single model can contain aspects that are solved with different temporal resolutions or time horizons.

diff --git a/dev/concept_reference/the_basics/index.html b/dev/concept_reference/the_basics/index.html index de21f716a8..abfa7eee78 100644 --- a/dev/concept_reference/the_basics/index.html +++ b/dev/concept_reference/the_basics/index.html @@ -1,2 +1,2 @@ -Basics of the data structure · SpineOpt.jl

Basics of the model structure

In SpineOpt.jl, the model structure is generated based on the input data, allowing it to be used for a multitude of different problems. Here, we aim to provide you with a basic understanding of the SpineOpt.jl model and data structure, while the Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections provide more in-depth explanations of each concept.

Introduction to object classes

Essentially, Object Classes represent different types of objects or entities that make up the model. For example, every power plant in the model is represented as an object of the object class unit, every power line as an object of the object class connection, and so forth. In order to add any new entity to a model, a new object has to be added to the desired object class in the input data.

Each object class has a very specific purpose in SpineOpt.jl, so understanding their differences is key. The Object Classes can be roughly divided into three distinctive groups, namely Systemic object classes, Structural object classes, and Meta object classes.

Systemic object classes

As the name implies, system Object Classes are used to describe the system to be modelled. Essentially, they define what you want to model. These include:

  • commodity represents different goods to be generated, consumed, transported, etc.
  • connection handles the transfer of commodities between nodes.
  • node ensures the balance of the commodity flows, and can be used to store commodities as well.
  • unit handles the generation and consumption of commodities.

Structural object classes

Structural Object Classes are used to define the temporal and stochastic structure of the modelled problem, as well as custom User Constraints. Unlike the above system Object Classes, the structural Object Classes are more about how you want to model, instead of strictly what you want to model. These include:

Meta object classes

Meta Object Classes are used for defining things on the level of models or above, like model output and even multiple models for problem decompositions. These include:

  • model represents an individual model, grouping together all the things relevant for itself.
  • output defines which Variables are output from the model.
  • report groups together multiple output objects.

Introduction to relationship classes

While Object Classes define all the objects or entities that make up a model, Relationship Classes define how those entities are related to each other. Thus, Relationship Classes hold no meaning on their own, and always include at least one object class.

Similar to Object Classes, each relationship class has a very specific purpose in SpineOpt.jl, and understanding the purpose of each relationship class is paramount. The Relationship Classes can be roughly divided into Systemic relationship classes, Structural relationship classes, and Meta relationship classes, again similar to Object Classes.

Systemic relationship classes

Systemic Relationship Classes define how Systemic object classes are related to each other, thus helping define the system to be modelled. Most of these relationships deal with which units and connections interact with which nodes, and how those interactions work. This essentially defines the possible commodity flows to be modelled. Systemic Relationship Classes include:

Structural relationship classes

Structural Relationship Classes primarily relate Structural object classes to Systemic object classes, defining what structures the individual parts of the system use. These are mostly used to determine the temporal and stochastic structures to be used in different parts of the modelled system, or custom User Constraints.

SpineOpt.jl has a very flexible temporal and stochastic structure, explained in detail in the Temporal Framework and Stochastic Framework sections of the documentation. Unfortunately, this flexibility requires quite a few different structural Relationship Classes, the most important of which are the following basic structural Relationship Classes:

Furthermore, there are also a number of advanced structural Relationship Classes, which are only necessary when using some of the optional features of SpineOpt.jl. For Investment Optimization, the following relationships control the stochastic and temporal structures of the investment variables:

For User Constraints, which are essentially generic data-driven custom constraints, the following relationships are used to control which variables are included and with what coefficients:

Meta relationship classes

Meta Relationship Classes are used for defining model-level settings, like which temporal blocks or stochastic structures are active, and what the model output is. These include:

Introduction to parameters

While the primary function of Object Classes and Relationship Classes is to define the system to be modelled and its structure, Parameters exist to constrain them. Every parameter is attributed to at least one object class or relationship class, but some appear in many classes whenever they serve a similar purpose.

Parameters accept different types of values depending on their purpose, e.g. whether they act as a flag for some specific functionality or appear as a coefficient in Constraints, so understanding each parameter is key. Most coefficient-type Parameters accept input in constant, time series, and even stochastic time series form, but there are some exceptions. Most flag-type Parameters, on the other hand, have a restricted list of acceptable values defined by their Parameter Value Lists.

Whether some Constraints are generated is controlled by whether the relevant Parameters are defined. As a rule of thumb, a constraint only gets generated if at least one of the Parameters appearing in it is defined, but one should refer to the appropriate Constraints and Parameters sections when in doubt.

Introduction to groups of objects

Groups of objects are used within SpineOpt for different purposes. To create a group of objects, simply right-click the corresponding Object Class in the Spine Toolbox database editor and select Add object group. Groups are essentially special objects, that act as a single handle for all of its members.

On the one hand, groups can be used in order to impose constraints on the aggregation of a variable, e.g. on the sum of multiple unit_flow variables. Constraints based on parameters associated with the unit__node__node, unit__to_node, unit__from_node, connection__node__node, connection__to_node, connection__from_node can generally be used for this kind of flow aggregation by defining the parameters on groups of objects, typically node groups. (with the exception of variable fixing parameters, e.g. fix_unit_flow, fix_connection_flow etc.). See for instance constraint_unit_flow_capacity.

On the other hand, a node group can be used to for PTDF based powerflows. Here a node group is used to enforce a nodal balance on system level, while suppressing the node balances at individual nodes. See also balance_type and the node balance constraint.

Basics of the data structure · SpineOpt.jl

Basics of the model structure

In SpineOpt.jl, the model structure is generated based on the input data, allowing it to be used for a multitude of different problems. Here, we aim to provide you with a basic understanding of the SpineOpt.jl model and data structure, while the Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections provide more in-depth explanations of each concept.

Introduction to object classes

Essentially, Object Classes represent the different types of objects or entities that make up the model. For example, every power plant in the model is represented as an object of the object class unit, every power line as an object of the object class connection, and so forth. In order to add any new entity to a model, a new object has to be added to the desired object class in the input data.

Each object class has a very specific purpose in SpineOpt.jl, so understanding their differences is key. The Object Classes can be roughly divided into three distinct groups, namely Systemic object classes, Structural object classes, and Meta object classes.

Systemic object classes

As the name implies, systemic Object Classes are used to describe the system to be modelled. Essentially, they define what you want to model. These include:

  • commodity represents different goods to be generated, consumed, transported, etc.
  • connection handles the transfer of commodities between nodes.
  • node ensures the balance of the commodity flows, and can be used to store commodities as well.
  • unit handles the generation and consumption of commodities.

Structural object classes

Structural Object Classes are used to define the temporal and stochastic structure of the modelled problem, as well as custom User Constraints. Unlike the systemic Object Classes above, the structural Object Classes are more about how you want to model, rather than strictly what you want to model. These include:

Meta object classes

Meta Object Classes are used for defining things on the level of models or above, like model output and even multiple models for problem decompositions. These include:

  • model represents an individual model, grouping together all the things relevant for itself.
  • output defines which Variables are output from the model.
  • report groups together multiple output objects.

Introduction to relationship classes

While Object Classes define all the objects or entities that make up a model, Relationship Classes define how those entities are related to each other. Thus, Relationship Classes hold no meaning on their own, and always include at least one object class.

Similar to Object Classes, each relationship class has a very specific purpose in SpineOpt.jl, and understanding the purpose of each relationship class is paramount. The Relationship Classes can be roughly divided into Systemic relationship classes, Structural relationship classes, and Meta relationship classes, again similar to Object Classes.

Systemic relationship classes

Systemic Relationship Classes define how Systemic object classes are related to each other, thus helping define the system to be modelled. Most of these relationships deal with which units and connections interact with which nodes, and how those interactions work. This essentially defines the possible commodity flows to be modelled. Systemic Relationship Classes include:

Structural relationship classes

Structural Relationship Classes primarily relate Structural object classes to Systemic object classes, defining what structures the individual parts of the system use. These are mostly used to determine the temporal and stochastic structures to be used in different parts of the modelled system, or custom User Constraints.

SpineOpt.jl has a very flexible temporal and stochastic structure, explained in detail in the Temporal Framework and Stochastic Framework sections of the documentation. Unfortunately, this flexibility requires quite a few different structural Relationship Classes, the most important of which are the following basic structural Relationship Classes:

Furthermore, there are also a number of advanced structural Relationship Classes, which are only necessary when using some of the optional features of SpineOpt.jl. For Investment Optimization, the following relationships control the stochastic and temporal structures of the investment variables:

For User Constraints, which are essentially generic data-driven custom constraints, the following relationships are used to control which variables are included and with what coefficients:

Meta relationship classes

Meta Relationship Classes are used for defining model-level settings, like which temporal blocks or stochastic structures are active, and what the model output is. These include:

Introduction to parameters

While the primary function of Object Classes and Relationship Classes is to define the system to be modelled and its structure, Parameters exist to constrain them. Every parameter is attributed to at least one object class or relationship class, but some appear in many classes whenever they serve a similar purpose.

Parameters accept different types of values depending on their purpose, e.g. whether they act as a flag for some specific functionality or appear as a coefficient in Constraints, so understanding each parameter is key. Most coefficient-type Parameters accept constant, time series, and even stochastic time series input, but there are some exceptions. Most flag-type Parameters, on the other hand, have a restricted list of acceptable values defined by their Parameter Value Lists.
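The difference between a constant and a time-varying value can be sketched as follows. This is an illustrative toy resolver, not SpineOpt's internal value handling; the parameter names and timestamps are made up:

```python
# Hypothetical sketch: a coefficient-type parameter may be a constant
# or a time series (here modelled as a timestamp -> value mapping).
def parameter_value(value, timestamp):
    if isinstance(value, dict):   # time series: look up the timestamp
        return value[timestamp]
    return value                  # constant: same value at every timestamp

vom_cost = 12.5                                                 # constant value
demand = {"2030-01-01T00:00": 90.0, "2030-01-01T01:00": 85.0}   # time series

print(parameter_value(vom_cost, "2030-01-01T00:00"))  # 12.5
print(parameter_value(demand, "2030-01-01T01:00"))    # 85.0
```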

The existence of some Constraints is controlled by whether the relevant Parameters are defined. As a rule of thumb, a constraint only gets generated if at least one of the Parameters appearing in it is defined, but one should refer to the appropriate Constraints and Parameters sections when in doubt.

Introduction to groups of objects

Groups of objects are used within SpineOpt for different purposes. To create a group of objects, simply right-click the corresponding Object Class in the Spine Toolbox database editor and select Add object group. Groups are essentially special objects that act as a single handle for all of their members.

On the one hand, groups can be used to impose constraints on the aggregation of a variable, e.g. on the sum of multiple unit_flow variables. Parameters associated with the unit__node__node, unit__to_node, unit__from_node, connection__node__node, connection__to_node, and connection__from_node relationship classes can generally be used for this kind of flow aggregation by defining them on groups of objects, typically node groups (with the exception of variable-fixing parameters, e.g. fix_unit_flow, fix_connection_flow, etc.). See for instance constraint_unit_flow_capacity.
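The aggregation effect can be sketched in a few lines. This is a hypothetical check, not SpineOpt's constraint generation; the node names and numbers are invented. The point is that a capacity defined on a node group bounds the sum of the member flows, not each flow individually:

```python
# Hypothetical sketch: a unit_capacity defined on a node group bounds the
# SUM of the member nodes' unit_flow values.
def group_capacity_ok(unit_flows, members, group_capacity):
    return sum(unit_flows[n] for n in members) <= group_capacity

unit_flows = {"node_a": 60.0, "node_b": 50.0}  # each flow alone is below 100

print(group_capacity_ok(unit_flows, ["node_a"], 100.0))            # True
# ...but the grouped constraint binds the aggregate (60 + 50 = 110 > 100):
print(group_capacity_ok(unit_flows, ["node_a", "node_b"], 100.0))  # False
```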

On the other hand, a node group can be used for PTDF-based power flows. Here, a node group is used to enforce a nodal balance at the system level, while suppressing the node balances at individual nodes. See also balance_type and the node balance constraint.

diff --git a/dev/concept_reference/unit/index.html b/dev/concept_reference/unit/index.html index ac6321bbe9..484d4f0c36 100644 --- a/dev/concept_reference/unit/index.html +++ b/dev/concept_reference/unit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A unit represents an energy conversion process, where energy of one commodity can be converted into energy of another commodity. For example, a gas turbine, a power plant, or even a load, can be modelled using a unit.

A unit always takes energy from one or more nodes, and releases energy to one or more (possibly the same) nodes. The former are specified through the unit__from_node relationship, and the latter through unit__to_node. Every unit has a temporal and stochastic structure given by the units_on__temporal_block and units_on__stochastic_structure relationships. The model will generate unit_flow variables for every combination of unit, node, direction (from node or to node), time slice, and stochastic scenario, according to the above relationships.

The operation of the unit is specified through a number of parameter values. For example, the capacity of the unit, as the maximum amount of energy that can enter or leave it, is given by unit_capacity. The conversion ratio of input to output can be specified using any of fix_ratio_out_in_unit_flow, max_ratio_out_in_unit_flow, and min_ratio_out_in_unit_flow. The variable operating cost is given by vom_cost.
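The role of a fixed conversion ratio can be sketched as follows; this is an illustrative toy calculation, not SpineOpt's constraint, and the efficiency value is invented:

```python
# Hypothetical sketch of fix_ratio_out_in_unit_flow: the output flow is
# fixed to a constant ratio (e.g. an efficiency of 0.4) times the input flow.
def output_flow(input_flow, fix_ratio_out_in=0.4):
    return fix_ratio_out_in * input_flow

# 100 units of fuel in -> 40 units of electricity out:
print(output_flow(100.0))  # 40.0
```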

diff --git a/dev/concept_reference/unit__commodity/index.html b/dev/concept_reference/unit__commodity/index.html index e27cdb1476..fe75f5e27b 100644 --- a/dev/concept_reference/unit__commodity/index.html +++ b/dev/concept_reference/unit__commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To impose a limit on the cumulative amount of commodity flows, the max_cum_in_unit_flow_bound can be imposed on a unit__commodity relationship. This can be very helpful, e.g. if a certain amount of emissions should not be surpassed throughout the optimization.

Note that, in addition to the unit__commodity relationship, the nodes connected to the units also need to be associated with their corresponding commodities; see node__commodity.

diff --git a/dev/concept_reference/unit__from_node/index.html b/dev/concept_reference/unit__from_node/index.html index 0bf2cf1b2b..b452833762 100644 --- a/dev/concept_reference/unit__from_node/index.html +++ b/dev/concept_reference/unit__from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit__to_node and unit__from_node unit relationships are core elements of SpineOpt. For each unit__to_node or unit__from_node, a unit_flow variable is automatically added to the model, i.e. a commodity flow of a unit to or from a specific node, respectively.

Various parameters can be defined on the unit__from_node relationship, in order to constrain the associated unit flows. In most cases a unit_capacity will be defined for an upper bound on the commodity flows. Apart from that, ramping abilities of a unit can be defined. For further details on ramps see Ramping.

To associate costs with certain commodity flows, cost terms, such as fuel_costs and vom_costs, can be included for the unit__from_node relationship.

It is important to note that the parameters associated with the unit__from_node relationship can be defined either for a specific node or for a group of nodes. Grouping nodes for the described parameters will result in an aggregation of the unit flows in the triggered constraint, e.g. defining the unit_capacity on a group of nodes will result in an upper bound on the sum of all individual unit_flows.

diff --git a/dev/concept_reference/unit__from_node__unit_constraint/index.html b/dev/concept_reference/unit__from_node__unit_constraint/index.html index e50b769989..1dbf5663ea 100644 --- a/dev/concept_reference/unit__from_node__unit_constraint/index.html +++ b/dev/concept_reference/unit__from_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__from_node__user_constraint is a three-dimensional relationship between a unit, a node, and a user_constraint. The relationship specifies that the unit_flow variable to the specified unit from the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific unit_flow variable. For example, the parameter unit_flow_coefficient defined on unit__from_node__user_constraint represents the coefficient on the specific unit_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/unit__investment_stochastic_structure/index.html b/dev/concept_reference/unit__investment_stochastic_structure/index.html index a8c0aaa322..1071988941 100644 --- a/dev/concept_reference/unit__investment_stochastic_structure/index.html +++ b/dev/concept_reference/unit__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/unit__investment_temporal_block/index.html b/dev/concept_reference/unit__investment_temporal_block/index.html index 41f9ad629d..28ae5522e7 100644 --- a/dev/concept_reference/unit__investment_temporal_block/index.html +++ b/dev/concept_reference/unit__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__investment_temporal_block is a two-dimensional relationship between a unit and a temporal_block. This relationship defines the temporal resolution and scope of a unit's investment decision. Note that in a decomposed investment problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no unit__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if unit__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified unit.

See also Investment Optimization
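The documented fallback can be sketched as a simple lookup; this is an illustrative toy, not SpineOpt code, and the block and unit names are invented:

```python
# Hypothetical sketch of the fallback rule: a unit-specific
# unit__investment_temporal_block relationship overrides
# model__default_investment_temporal_block, which applies otherwise.
def investment_temporal_block(unit, specific_relationships, model_default):
    return specific_relationships.get(unit, model_default)

specific = {"wind_farm": "monthly_block"}   # made-up unit-specific relationship
default = "yearly_block"                    # made-up model-wide default

print(investment_temporal_block("wind_farm", specific, default))  # monthly_block
print(investment_temporal_block("gas_plant", specific, default))  # yearly_block
```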

diff --git a/dev/concept_reference/unit__node__node/index.html b/dev/concept_reference/unit__node__node/index.html index 27f5dfb8b3..274225f470 100644 --- a/dev/concept_reference/unit__node__node/index.html +++ b/dev/concept_reference/unit__node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

While the relationships unit__to_node and unit__from_node take care of the automatic generation of the unit_flow variables, the unit__node__node relationships hold the information on how the different commodity flows of a unit interact. Only through this relationship and its associated parameters does the topology of a unit, i.e. which intakes lead to which products etc., become unambiguous.

In almost all cases, at least one of the ..._ratio_... parameters will be defined, e.g. to set a fixed ratio between the outgoing and incoming commodity flows of a unit (see also e.g. fix_ratio_out_in_unit_flow). Note that the parameters can also be defined on a relationship between groups of objects, e.g. to force a fixed ratio between groups of nodes. In the triggered constraints, this will lead to an aggregation of the individual unit flows.

diff --git a/dev/concept_reference/unit__to_node/index.html b/dev/concept_reference/unit__to_node/index.html index 40a72b9d8a..4f83a903f2 100644 --- a/dev/concept_reference/unit__to_node/index.html +++ b/dev/concept_reference/unit__to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit__to_node and unit__from_node unit relationships are core elements of SpineOpt. For each unit__to_node or unit__from_node, a unit_flow variable is automatically added to the model, i.e. a commodity flow of a unit to or from a specific node, respectively.

Various parameters can be defined on the unit__to_node relationship, in order to constrain the associated unit flows. In most cases a unit_capacity will be defined for an upper bound on the commodity flows. Apart from that, ramping abilities of a unit can be defined. For further details on ramps see Ramping.

To associate costs with a certain commodity flow, cost terms, such as fuel_costs and vom_costs, can be included for the unit__to_node relationship.

It is important to note that the parameters associated with the unit__to_node relationship can be defined either for a specific node or for a group of nodes. Grouping nodes for the described parameters will result in an aggregation of the unit flows in the triggered constraint, e.g. defining the unit_capacity on a group of nodes will result in an upper bound on the sum of all individual unit_flows.

diff --git a/dev/concept_reference/unit__to_node__unit_constraint/index.html b/dev/concept_reference/unit__to_node__unit_constraint/index.html index 42e6af550d..721ac64d4f 100644 --- a/dev/concept_reference/unit__to_node__unit_constraint/index.html +++ b/dev/concept_reference/unit__to_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__to_node__user_constraint is a three-dimensional relationship between a unit, a node, and a user_constraint. The relationship specifies that the unit_flow variable from the specified unit to the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific unit_flow variable. For example, the parameter unit_flow_coefficient defined on unit__to_node__user_constraint represents the coefficient on the specific unit_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/unit__unit_constraint/index.html b/dev/concept_reference/unit__unit_constraint/index.html index bb93ff3bfe..07623ec8b3 100644 --- a/dev/concept_reference/unit__unit_constraint/index.html +++ b/dev/concept_reference/unit__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__user_constraint is a two-dimensional relationship between a unit and a user_constraint. The relationship specifies that a variable or variables associated only with the unit (not a unit_flow, for example) are involved in the constraint. For example, the units_on_coefficient defined on unit__user_constraint specifies the coefficient of the unit's units_on variable in the specified user_constraint.

See also user_constraint
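The coefficient mechanism shared by the user-constraint relationships can be sketched as a weighted sum; this is an illustrative toy, not SpineOpt's constraint builder, and all names and values are invented:

```python
# Hypothetical sketch of a user_constraint left-hand side: each coefficient
# parameter (e.g. units_on_coefficient, unit_flow_coefficient) multiplies
# its variable, and the terms are summed.
def user_constraint_lhs(coefficients, variables):
    return sum(coefficients[name] * variables[name] for name in coefficients)

coefficients = {"units_on": 2.0, "unit_flow": 0.5}  # made-up coefficient values
variables = {"units_on": 1.0, "unit_flow": 80.0}    # made-up variable values

print(user_constraint_lhs(coefficients, variables))  # 42.0
```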

diff --git a/dev/concept_reference/unit_availability_factor/index.html b/dev/concept_reference/unit_availability_factor/index.html index 7342729cad..5997a77977 100644 --- a/dev/concept_reference/unit_availability_factor/index.html +++ b/dev/concept_reference/unit_availability_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To indicate that a unit is only available to a certain extent or at certain times of the optimization, the unit_availability_factor can be used. A typical use case could be an availability time series for a variable renewable energy source. By default, the availability factor is set to 1. The availability is, among others, used in the constraint_units_available.
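The effect of the factor can be sketched as scaling the usable capacity per period; this is an illustrative calculation, not the actual constraint, and the availability series is invented:

```python
# Hypothetical sketch: unit_availability_factor (default 1.0) scales the
# capacity that is usable in each period.
def max_available_flow(unit_capacity, unit_availability_factor=1.0):
    return unit_availability_factor * unit_capacity

wind_availability = [1.0, 0.5, 0.0]  # made-up availability time series
print([max_available_flow(50.0, a) for a in wind_availability])  # [50.0, 25.0, 0.0]
```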

diff --git a/dev/concept_reference/unit_capacity/index.html b/dev/concept_reference/unit_capacity/index.html index 2f802b287b..4e2668990f 100644 --- a/dev/concept_reference/unit_capacity/index.html +++ b/dev/concept_reference/unit_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/unit_conv_cap_to_flow/index.html b/dev/concept_reference/unit_conv_cap_to_flow/index.html index 26ca6da1af..25a27f96a7 100644 --- a/dev/concept_reference/unit_conv_cap_to_flow/index.html +++ b/dev/concept_reference/unit_conv_cap_to_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit_conv_cap_to_flow parameter, as defined for a unit__to_node or unit__from_node relationship, allows the user to align the unit_flow variables and the unit_capacity parameter when they are expressed in different units. An example would be when the unit_capacity is expressed in GWh, while the demand on the node is expressed in MWh. In that case, a unit_conv_cap_to_flow parameter of 1000 would be applicable.
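Working through the GWh/MWh example from the text (the capacity value itself is made up for illustration):

```python
# Worked example: unit_capacity given in GWh, flows measured in MWh,
# so unit_conv_cap_to_flow = 1000 converts the capacity into flow units.
unit_capacity_gwh = 2.5        # made-up capacity value in GWh
unit_conv_cap_to_flow = 1000   # 1 GWh = 1000 MWh

capacity_in_flow_units = unit_capacity_gwh * unit_conv_cap_to_flow
print(capacity_in_flow_units)  # 2500.0 (MWh)
```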

diff --git a/dev/concept_reference/unit_flow_coefficient/index.html b/dev/concept_reference/unit_flow_coefficient/index.html index 9616092ec6..6f3e9c99b8 100644 --- a/dev/concept_reference/unit_flow_coefficient/index.html +++ b/dev/concept_reference/unit_flow_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit_flow_coefficient is an optional parameter that can be used to include the unit_flow or unit_flow_op variables from or to a node in a user_constraint via the unit__from_node__user_constraint and unit__to_node__user_constraint relationships. Essentially, unit_flow_coefficient appears as a coefficient for the unit_flow and unit_flow_op variables from or to the node in the user constraint.

Note that the unit_flow_op variables are a bit of a special case, defined using the operating_points parameter.

diff --git a/dev/concept_reference/unit_investment_cost/index.html b/dev/concept_reference/unit_investment_cost/index.html index e81bea2e0c..f8a619a507 100644 --- a/dev/concept_reference/unit_investment_cost/index.html +++ b/dev/concept_reference/unit_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the unit_investment_cost parameter for a specific unit, a cost term will be added to the objective function whenever a unit investment is made during the current optimization window.

diff --git a/dev/concept_reference/unit_investment_lifetime/index.html b/dev/concept_reference/unit_investment_lifetime/index.html index 208e2affa0..e43418408f 100644 --- a/dev/concept_reference/unit_investment_lifetime/index.html +++ b/dev/concept_reference/unit_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Duration parameter that determines the minimum duration of unit investment decisions. Once a unit has been invested in, it must remain invested in for unit_investment_tech_lifetime. Note that unit_investment_tech_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimisation in each step; this may impact performance.

See also Investment Optimization and candidate_units

diff --git a/dev/concept_reference/unit_investment_variable_type/index.html b/dev/concept_reference/unit_investment_variable_type/index.html index 8e2b7ae014..d39d3c7f9a 100644 --- a/dev/concept_reference/unit_investment_variable_type/index.html +++ b/dev/concept_reference/unit_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investment problem, unit_investment_variable_type determines the unit investment decision variable type. Since the unit_flows will be limited to the product of the investment variable and the corresponding unit_capacity for each unit_flow, and since candidate_units represents the upper bound of the investment decision variable, unit_investment_variable_type thus determines what the investment decision represents. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a capacity, with unit_capacity being analogous to a scaling parameter. For example, if unit_investment_variable_type = integer, candidate_units = 4 and the unit_capacity for a particular unit_flow = 400 MW, then the investment decision is how many 400 MW units to build. If unit_investment_variable_type = continuous, candidate_units = 400 and the unit_capacity for a particular unit_flow = 1 MW, then the investment decision is how much capacity of this particular unit to build. Finally, if unit_investment_variable_type = integer, candidate_units = 10 and the unit_capacity for a particular unit_flow = 50 MW, then the investment decision is how many 50 MW blocks of capacity of this particular unit to build.

See also Investment Optimization and candidate_units

+- · SpineOpt.jl

Within an investment problem, unit_investment_variable_type determines the unit investment decision variable type. Since the unit_flows will be limited to the product of the investment variable and the corresponding unit_capacity for each unit_flow, and since candidate_units represents the upper bound of the investment decision variable, unit_investment_variable_type thus determines what the investment decision represents. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a capacity, with unit_capacity being analogous to a scaling parameter. For example, if unit_investment_variable_type = integer, candidate_units = 4 and the unit_capacity for a particular unit_flow = 400 MW, then the investment decision is how many 400 MW units to build. If unit_investment_variable_type = continuous, candidate_units = 400 and the unit_capacity for a particular unit_flow = 1 MW, then the investment decision is how much capacity of this particular unit to build. Finally, if unit_investment_variable_type = integer, candidate_units = 10 and the unit_capacity for a particular unit_flow = 50 MW, then the investment decision is how many 50 MW blocks of capacity of this particular unit to build.

See also Investment Optimization and candidate_units
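The arithmetic behind the three examples above can be sketched as follows (an illustrative Python sketch with hypothetical values, not SpineOpt code; the function name invested_capacity is made up for this example):

```python
# Illustrative sketch (not SpineOpt code): the capacity implied by an
# investment decision is the investment variable times unit_capacity.
def invested_capacity(units_invested, unit_capacity):
    """Capacity added by an investment decision, in MW."""
    return units_invested * unit_capacity

# integer variable: build 3 of up to 4 candidate 400 MW units
print(invested_capacity(3, 400))    # 1200
# continuous variable: build 250 MW of up to 400 "MW units" of 1 MW each
print(invested_capacity(250.0, 1))  # 250.0
# integer variable: build 7 of up to 10 candidate 50 MW blocks
print(invested_capacity(7, 50))     # 350
```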

diff --git a/dev/concept_reference/unit_investment_variable_type_list/index.html b/dev/concept_reference/unit_investment_variable_type_list/index.html index dc13d4877c..7b9cb44cc5 100644 --- a/dev/concept_reference/unit_investment_variable_type_list/index.html +++ b/dev/concept_reference/unit_investment_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit_investment_variable_type_list holds the possible values for the type of a unit's investment variable which may be chosen from integer, binary or continuous.

+- · SpineOpt.jl

unit_investment_variable_type_list holds the possible values for the type of a unit's investment variable which may be chosen from integer, binary or continuous.

diff --git a/dev/concept_reference/unit_online_variable_type_list/index.html b/dev/concept_reference/unit_online_variable_type_list/index.html index d3d1ca5d1f..f7f95645af 100644 --- a/dev/concept_reference/unit_online_variable_type_list/index.html +++ b/dev/concept_reference/unit_online_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit_online_variable_type_list holds the possible values for the type of a unit's commitment status variable which may be chosen from binary, integer, or linear.

+- · SpineOpt.jl

unit_online_variable_type_list holds the possible values for the type of a unit's commitment status variable which may be chosen from binary, integer, or linear.

diff --git a/dev/concept_reference/unit_start_flow/index.html b/dev/concept_reference/unit_start_flow/index.html index 06e30afee8..047de7f899 100644 --- a/dev/concept_reference/unit_start_flow/index.html +++ b/dev/concept_reference/unit_start_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to implement unit startup fuel consumption, where node 1 is assumed to be the input fuel and node 2 is assumed to be the output electrical energy. This is a flow from node 1 that is incurred when the value of the variable units_started_up is 1 in the corresponding time period. This flow does not result in additional output flow at node 2.

+- · SpineOpt.jl

Used to implement unit startup fuel consumption, where node 1 is assumed to be the input fuel and node 2 is assumed to be the output electrical energy. This is a flow from node 1 that is incurred when the value of the variable units_started_up is 1 in the corresponding time period. This flow does not result in additional output flow at node 2.
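As a rough illustration of the intent (not the actual SpineOpt constraint; the function name and values here are hypothetical), the fuel drawn from node 1 in a period can be pictured as the conversion fuel plus a fixed startup amount whenever the unit starts:

```python
# Illustrative sketch (hypothetical, not the SpineOpt formulation): total fuel
# drawn from node 1 is the conversion fuel plus the startup fuel whenever
# units_started_up is nonzero.
def fuel_consumed(conversion_fuel_flow, unit_start_flow, units_started_up):
    return conversion_fuel_flow + unit_start_flow * units_started_up

print(fuel_consumed(100.0, 20.0, 1))  # 120.0 in a period with a startup
print(fuel_consumed(100.0, 20.0, 0))  # 100.0 otherwise
```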

diff --git a/dev/concept_reference/units_invested_avaiable_coefficient/index.html b/dev/concept_reference/units_invested_avaiable_coefficient/index.html index 66722fb723..537217578e 100644 --- a/dev/concept_reference/units_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/units_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_invested_big_m_mga/index.html b/dev/concept_reference/units_invested_big_m_mga/index.html index c2ae0e082e..ba9ccb5a75 100644 --- a/dev/concept_reference/units_invested_big_m_mga/index.html +++ b/dev/concept_reference/units_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_units could suffice.)

+- · SpineOpt.jl

The units_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_units could suffice.)

diff --git a/dev/concept_reference/units_invested_coefficient/index.html b/dev/concept_reference/units_invested_coefficient/index.html index a4380d35ac..64567f8e79 100644 --- a/dev/concept_reference/units_invested_coefficient/index.html +++ b/dev/concept_reference/units_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_invested_mga/index.html b/dev/concept_reference/units_invested_mga/index.html index 20400d9a94..a8275cbcad 100644 --- a/dev/concept_reference/units_invested_mga/index.html +++ b/dev/concept_reference/units_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). As soon as the value of units_invested_mga is set to true, investment decisions in this unit, or group of units, will be included in the MGA algorithm.

+- · SpineOpt.jl

The units_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). As soon as the value of units_invested_mga is set to true, investment decisions in this unit, or group of units, will be included in the MGA algorithm.

diff --git a/dev/concept_reference/units_on__stochastic_structure/index.html b/dev/concept_reference/units_on__stochastic_structure/index.html index aef130a211..fcf9afd80c 100644 --- a/dev/concept_reference/units_on__stochastic_structure/index.html +++ b/dev/concept_reference/units_on__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_on__stochastic_structure relationship defines the stochastic_structure used by the units_on variable. Essentially, this relationship permits defining a different stochastic_structure for the online decisions regarding the units_on variable than what is used for the production unit_flow variables. A common use-case is using only one units_on variable across multiple stochastic_scenarios for the unit_flow variables. Note that only one units_on__stochastic_structure relationship can be defined per unit per model, as interpreted by the units_on__stochastic_structure and model__stochastic_structure relationships.

The units_on__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

+- · SpineOpt.jl

The units_on__stochastic_structure relationship defines the stochastic_structure used by the units_on variable. Essentially, this relationship permits defining a different stochastic_structure for the online decisions regarding the units_on variable than what is used for the production unit_flow variables. A common use-case is using only one units_on variable across multiple stochastic_scenarios for the unit_flow variables. Note that only one units_on__stochastic_structure relationship can be defined per unit per model, as interpreted by the units_on__stochastic_structure and model__stochastic_structure relationships.

The units_on__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

diff --git a/dev/concept_reference/units_on__temporal_block/index.html b/dev/concept_reference/units_on__temporal_block/index.html index acc2e9c448..36930b7477 100644 --- a/dev/concept_reference/units_on__temporal_block/index.html +++ b/dev/concept_reference/units_on__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

units_on__temporal_block is a relationship linking the units_on variable of a unit to a specific temporal_block object. As such, this relationship will determine which temporal block governs the on- and offline status of the unit. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

+- · SpineOpt.jl

units_on__temporal_block is a relationship linking the units_on variable of a unit to a specific temporal_block object. As such, this relationship will determine which temporal block governs the on- and offline status of the unit. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

diff --git a/dev/concept_reference/units_on_coefficient/index.html b/dev/concept_reference/units_on_coefficient/index.html index b41d6891a4..2bd848521d 100644 --- a/dev/concept_reference/units_on_coefficient/index.html +++ b/dev/concept_reference/units_on_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_on_cost/index.html b/dev/concept_reference/units_on_cost/index.html index 8191d85f4a..00f3ed1312 100644 --- a/dev/concept_reference/units_on_cost/index.html +++ b/dev/concept_reference/units_on_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the units_on_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit is online over the current optimization window. It can be used to represent an idling cost or any fixed cost incurred when a unit is online.

+- · SpineOpt.jl

By defining the units_on_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit is online over the current optimization window. It can be used to represent an idling cost or any fixed cost incurred when a unit is online.

diff --git a/dev/concept_reference/units_on_non_anticipativity_time/index.html b/dev/concept_reference/units_on_non_anticipativity_time/index.html index 2d55636ff4..1600488800 100644 --- a/dev/concept_reference/units_on_non_anticipativity_time/index.html +++ b/dev/concept_reference/units_on_non_anticipativity_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_on_non_anticipativity_time parameter defines the duration, starting from the beginning of the optimisation window, during which units_on variables need to be fixed to the result of the previous window.

This is intended to model "slow" units whose commitment decision needs to be taken in advance, e.g., in "day-ahead" mode, and cannot be changed afterwards.

+- · SpineOpt.jl

The units_on_non_anticipativity_time parameter defines the duration, starting from the beginning of the optimisation window, during which units_on variables need to be fixed to the result of the previous window.

This is intended to model "slow" units whose commitment decision needs to be taken in advance, e.g., in "day-ahead" mode, and cannot be changed afterwards.

diff --git a/dev/concept_reference/units_started_up_coefficient/index.html b/dev/concept_reference/units_started_up_coefficient/index.html index 9e9356332d..712043059d 100644 --- a/dev/concept_reference/units_started_up_coefficient/index.html +++ b/dev/concept_reference/units_started_up_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_unavailable/index.html b/dev/concept_reference/units_unavailable/index.html index 9180af3551..988fe0531b 100644 --- a/dev/concept_reference/units_unavailable/index.html +++ b/dev/concept_reference/units_unavailable/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For clustered units, defines how many members of that unit are out of service, generally, or at a particular time. This can be used, for example, to model maintenance outages. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and number_of_units, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable).

It is possible to allow the model to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 0.

+- · SpineOpt.jl

For clustered units, defines how many members of that unit are out of service, generally, or at a particular time. This can be used, for example, to model maintenance outages. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and number_of_units, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable).

It is possible to allow the model to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 0.
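One plausible reading of how these parameters combine (an illustrative Python sketch with hypothetical values and a made-up function name, not the actual SpineOpt constraint):

```python
import math

# Illustrative sketch (not the SpineOpt formulation): a possible bound on the
# units_on variable for a clustered unit, assuming the availability factor
# scales the installed members and unavailable members are subtracted.
def max_units_on(number_of_units, unit_availability_factor, units_unavailable):
    available = math.floor(number_of_units * unit_availability_factor)
    return max(0, available - units_unavailable)

# 10 members, 90 % availability, 2 on maintenance outage
print(max_units_on(10, 0.9, 2))  # 7
```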

diff --git a/dev/concept_reference/upward_reserve/index.html b/dev/concept_reference/upward_reserve/index.html index ffa301cca0..f37cce0aa5 100644 --- a/dev/concept_reference/upward_reserve/index.html +++ b/dev/concept_reference/upward_reserve/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

+- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

diff --git a/dev/concept_reference/user_constraint/index.html b/dev/concept_reference/user_constraint/index.html index ae648ecbe7..f5b2b7e3ca 100644 --- a/dev/concept_reference/user_constraint/index.html +++ b/dev/concept_reference/user_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The user_constraint is a generic data-driven custom constraint, which allows for defining constraints involving multiple units, nodes, or connections. The constraint_sense parameter changes the sense of the user_constraint, while the right_hand_side parameter allows for defining the constant terms of the constraint.

Coefficients for the different variables appearing in the user_constraint are defined using relationships, such as unit__from_node__user_constraint and connection__to_node__user_constraint for unit_flow and connection_flow variables, or unit__user_constraint and node__user_constraint for units_on, units_started_up, and node_state variables.

For more information, see the dedicated article on User Constraints

+- · SpineOpt.jl

The user_constraint is a generic data-driven custom constraint, which allows for defining constraints involving multiple units, nodes, or connections. The constraint_sense parameter changes the sense of the user_constraint, while the right_hand_side parameter allows for defining the constant terms of the constraint.

Coefficients for the different variables appearing in the user_constraint are defined using relationships, such as unit__from_node__user_constraint and connection__to_node__user_constraint for unit_flow and connection_flow variables, or unit__user_constraint and node__user_constraint for units_on, units_started_up, and node_state variables.

For more information, see the dedicated article on User Constraints

diff --git a/dev/concept_reference/variable_type_list/index.html b/dev/concept_reference/variable_type_list/index.html index 966e3a711b..644bd66d65 100644 --- a/dev/concept_reference/variable_type_list/index.html +++ b/dev/concept_reference/variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/vom_cost/index.html b/dev/concept_reference/vom_cost/index.html index 86a7ad7e75..7139305241 100644 --- a/dev/concept_reference/vom_cost/index.html +++ b/dev/concept_reference/vom_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the vom_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for the variable operation and maintenance costs associated with that unit over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the vom_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for the variable operation and maintenance costs associated with that unit over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/weight/index.html b/dev/concept_reference/weight/index.html index f537656fd2..706a969b83 100644 --- a/dev/concept_reference/weight/index.html +++ b/dev/concept_reference/weight/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The weight parameter, defined for a temporal_block object, can be used to assign different weights to the different temporal periods that are modeled. It essentially determines how important a certain temporal period is in the total cost, as it enters the objective function. The main use of this parameter is for representative periods, where each representative period represents a specific fraction of the year.

+- · SpineOpt.jl

The weight parameter, defined for a temporal_block object, can be used to assign different weights to the different temporal periods that are modeled. It essentially determines how important a certain temporal period is in the total cost, as it enters the objective function. The main use of this parameter is for representative periods, where each representative period represents a specific fraction of the year.

diff --git a/dev/concept_reference/weight_relative_to_parents/index.html b/dev/concept_reference/weight_relative_to_parents/index.html index b31b218ed5..a74d5a4055 100644 --- a/dev/concept_reference/weight_relative_to_parents/index.html +++ b/dev/concept_reference/weight_relative_to_parents/index.html @@ -5,4 +5,4 @@ # If not a root `stochastic_scenario` -weight(scenario) = sum([weight(parent) * weight_relative_to_parents(scenario)] for parent in parents)

The above calculation is performed starting from the roots, generation by generation, until the leaves of the stochastic DAG. Thus, the final weight of each stochastic_scenario is dependent on the weight_relative_to_parents Parameters of all its ancestors.

+weight(scenario) = sum([weight(parent) * weight_relative_to_parents(scenario)] for parent in parents)

The above calculation is performed starting from the roots, generation by generation, until the leaves of the stochastic DAG. Thus, the final weight of each stochastic_scenario is dependent on the weight_relative_to_parents Parameters of all its ancestors.
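The calculation above can be sketched in Python for a hypothetical stochastic DAG (the function name and scenario names are made up for illustration; this is not SpineOpt code):

```python
# Illustrative sketch of the weight propagation described above, for a DAG
# given as {scenario: list_of_parent_scenarios}.
def scenario_weights(parents, weight_relative_to_parents):
    weights = {}
    remaining = dict(parents)
    # process generation by generation, starting from the roots
    while remaining:
        for s, ps in list(remaining.items()):
            if all(p in weights for p in ps):
                if not ps:  # root scenario: weight equals its own relative weight
                    weights[s] = weight_relative_to_parents[s]
                else:       # child: sum over parents of parent weight * relative weight
                    weights[s] = sum(weights[p] * weight_relative_to_parents[s] for p in ps)
                del remaining[s]
    return weights

# root "realization" branches into "high" and "low"
w = scenario_weights(
    {"realization": [], "high": ["realization"], "low": ["realization"]},
    {"realization": 1.0, "high": 0.4, "low": 0.6},
)
print(w)  # {'realization': 1.0, 'high': 0.4, 'low': 0.6}
```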

diff --git a/dev/concept_reference/window_weight/index.html b/dev/concept_reference/window_weight/index.html index d57156ec3a..a1724bd789 100644 --- a/dev/concept_reference/window_weight/index.html +++ b/dev/concept_reference/window_weight/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The window_weight parameter, defined for a model object, is used in the Benders decomposition algorithm with representative periods. In this setup, the subproblem rolls over a series of possibly disconnected windows, corresponding to the representative periods. Each of these windows can have a different weight, for example, equal to the fraction of the full model horizon that it represents. Choosing a good weight can help make the solution more accurate.

To use weighted rolling representative periods Benders, do the following.

  • Specify roll_forward as an array of n duration values, so the subproblem rolls over representative periods.
  • Specify window_weight as an array of n + 1 floating point values, representing the weight of each window.

Note that if the problem rolls n times, then you have n + 1 windows.

+- · SpineOpt.jl

The window_weight parameter, defined for a model object, is used in the Benders decomposition algorithm with representative periods. In this setup, the subproblem rolls over a series of possibly disconnected windows, corresponding to the representative periods. Each of these windows can have a different weight, for example, equal to the fraction of the full model horizon that it represents. Choosing a good weight can help make the solution more accurate.

To use weighted rolling representative periods Benders, do the following.

  • Specify roll_forward as an array of n duration values, so the subproblem rolls over representative periods.
  • Specify window_weight as an array of n + 1 floating point values, representing the weight of each window.

Note that if the problem rolls n times, then you have n + 1 windows.
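For example (hypothetical values, not SpineOpt input syntax): if three representative periods cover the year, the subproblem rolls n = 2 times, so roll_forward holds 2 durations and window_weight holds 3 weights:

```python
# Illustrative sketch: three representative windows covering a 52-week year.
roll_forward = ["13W", "13W"]       # n duration values -> the problem rolls n = 2 times
window_weight = [13.0, 13.0, 26.0]  # n + 1 weights, here the weeks each window represents

# consistency check: n rolls give n + 1 windows
assert len(window_weight) == len(roll_forward) + 1
print(sum(window_weight))  # 52.0, the full horizon in weeks
```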

diff --git a/dev/concept_reference/write_lodf_file/index.html b/dev/concept_reference/write_lodf_file/index.html index fd599a6ecb..74c758a804 100644 --- a/dev/concept_reference/write_lodf_file/index.html +++ b/dev/concept_reference/write_lodf_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If this parameter value is set to true, a diagnostics file containing all the network line outage distribution factors in CSV format will be written to the current directory.

+- · SpineOpt.jl

If this parameter value is set to true, a diagnostics file containing all the network line outage distribution factors in CSV format will be written to the current directory.

diff --git a/dev/concept_reference/write_mps_file/index.html b/dev/concept_reference/write_mps_file/index.html index 7d42469d4c..91767560dc 100644 --- a/dev/concept_reference/write_mps_file/index.html +++ b/dev/concept_reference/write_mps_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is deprecated and will be removed in a future version.

This parameter controls when to write a diagnostic model file in MPS format. If set to write_mps_always, the model will always be written in MPS format to the current directory. If set to write_mps_on_no_solve, the MPS file will be written when the model solve terminates with a status of false. If set to write_mps_never, no file will be written.

+- · SpineOpt.jl

This parameter is deprecated and will be removed in a future version.

This parameter controls when to write a diagnostic model file in MPS format. If set to write_mps_always, the model will always be written in MPS format to the current directory. If set to write_mps_on_no_solve, the MPS file will be written when the model solve terminates with a status of false. If set to write_mps_never, no file will be written.

diff --git a/dev/concept_reference/write_mps_file_list/index.html b/dev/concept_reference/write_mps_file_list/index.html index 4ac65c86fa..81f7724caf 100644 --- a/dev/concept_reference/write_mps_file_list/index.html +++ b/dev/concept_reference/write_mps_file_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter value list is deprecated and will be removed in a future version.

Houses the different values for the write_mps_file parameter. Possible values include write_mps_always, write_mps_on_no_solve, and write_mps_never.

+- · SpineOpt.jl

This parameter value list is deprecated and will be removed in a future version.

Houses the different values for the write_mps_file parameter. Possible values include write_mps_always, write_mps_on_no_solve, and write_mps_never.

diff --git a/dev/concept_reference/write_ptdf_file/index.html b/dev/concept_reference/write_ptdf_file/index.html index 84049d228a..3a3b2382e4 100644 --- a/dev/concept_reference/write_ptdf_file/index.html +++ b/dev/concept_reference/write_ptdf_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If this parameter value is set to true, a diagnostics file containing all the network power transfer distribution factors in CSV format will be written to the current directory.

+- · SpineOpt.jl

If this parameter value is set to true, a diagnostics file containing all the network power transfer distribution factors in CSV format will be written to the current directory.

diff --git a/dev/getting_started/installation/index.html b/dev/getting_started/installation/index.html index b9be2103d6..e092b48e30 100644 --- a/dev/getting_started/installation/index.html +++ b/dev/getting_started/installation/index.html @@ -44,4 +44,4 @@ import Pkg Pkg.activate(path_environment) import PyCall -println(PyCall.pyprogramname)
Warning

You have to adjust this code for the correct paths, in particular the path to your Python environment for Spine Toolbox (as the installation instructions for Spine Toolbox may differ slightly from these instructions).

8. Configure Julia in the Spine Toolbox settings

If you want to use this SpineOpt package in Spine Toolbox, make sure that the settings in Spine Toolbox point to the correct julia executable (and environment folder) where you installed SpineOpt (File > Settings > Tools).

9. Install the SpineOpt plugin

Select the SpineOpt plugin to add a ribbon to Spine Toolbox with easy access to some basic tools for SpineOpt (including a template for a SpineOpt (spine) database and a tool to run SpineOpt).

Upgrade

To upgrade the Spine tools in this configuration, take the following steps:

  1. Git pull in each of the source folders
  2. Activate the julia environment and run the Pkg.update() command. (You may need to go through the steps with Pkg.instantiate() if there are new dependencies.)
  3. Reconfigure PyCall for good measure.
+println(PyCall.pyprogramname)
Warning

You have to adjust this code for the correct paths, in particular the path to your Python environment for Spine Toolbox (as the installation instructions for Spine Toolbox may differ slightly from these instructions).

8. Configure Julia in the Spine Toolbox settings

If you want to use this SpineOpt package in Spine Toolbox, make sure that the settings in Spine Toolbox point to the correct julia executable (and environment folder) where you installed SpineOpt (File > Settings > Tools).

9. Install the SpineOpt plugin

Select the SpineOpt plugin to add a ribbon to Spine Toolbox with easy access to some basic tools for SpineOpt (including a template for a SpineOpt (spine) database and a tool to run SpineOpt).

Upgrade

To upgrade the Spine tools in this configuration, take the following steps:

  1. Git pull in each of the source folders
  2. Activate the julia environment and run the Pkg.update() command. (You may need to go through the steps with Pkg.instantiate() if there are new dependencies.)
  3. Reconfigure PyCall for good measure.
diff --git a/dev/getting_started/recommended_workflow/index.html b/dev/getting_started/recommended_workflow/index.html index f89f7b450c..26a193f9eb 100644 --- a/dev/getting_started/recommended_workflow/index.html +++ b/dev/getting_started/recommended_workflow/index.html @@ -1,3 +1,3 @@ Recommended workflow · SpineOpt.jl

Recommended Workflow

Now that we've installed Spine Toolbox and SpineOpt, let's make sure that everything truly works by running an example. We'll be using an existing example to ensure that any issues we may encounter at this point are related to the installation. If you indeed encounter any problems, check the troubleshooting section. On the other hand, if you are able to successfully complete this example, you can continue to a first hands-on experience with the tutorials.

In short, we'll create a new project in Spine Toolbox where we'll set up a simple workflow for using SpineOpt. We'll create an input database with meaningful data in the SpineOpt format, run SpineOpt on that input and examine the output. The steps to take are:

  1. Open Spine Toolbox and create a new project: File > New Project
  2. Drag 2 Data Store items, the Load Template tool and the Run SpineOpt tool from the ribbon to the design view. Each time you drag an item to the design view you are prompted to choose a name for the item. The default names are ok but for clarity we'll name the Data Store items 'input' and 'output'.
  3. Connect the items with (yellow) arrows as follows: Load Template > input > Run SpineOpt > output image
  4. For each Data Store item
    1. select the Data Store item in the design view by a single click on the item; you should see a Data Store properties window (typically to the right of the design view).
    2. Choose the SQL database dialect (sqlite is a local file and works without a server).
    3. Click New Spine DB to create a new database (and save it, if it's sqlite).
    image
  5. For each tool
    1. select the tool in the design view by a single click on the tool; you should see a Tool properties window (typically to the right of the design view).
    2. Drag the available sources (i.e. the databases) to the tool arguments. The order matters. Make sure that the input is the first argument and the output is the second argument.
    image
  6. Select the Load Template tool and press the 'Run Selection' button in the ribbon (and wait until the process is done). image
  7. Download the data of an existing example
  8. Double click the input database to open the spine db editor.
    1. File > Import
    2. Navigate to the downloaded file and wait until Spine Toolbox indicates that it has imported the data
    3. Save the imported data by pressing the 'commit' button.
    4. Close the spine db editor
    image
  9. Select the 'Run SpineOpt' tool and press the 'Run Selection' button in the ribbon (or press the 'Run Project' button) image

The remainder of this section explains each of these steps in more detail with the aim to get more familiar with the use of Spine Toolbox and SpineOpt.

Create a workflow for SpineOpt in Spine Toolbox

To create a workflow, we first need to open Spine Toolbox and create a new project: File > New Project

Our workflow in this project is going to consist of 2 databases (Data Store) and 2 tools (Load Template and Run SpineOpt). Drag these items from the ribbon to the design view. Every time you drag a tool or database to the design view, Spine Toolbox will ask for a name for the item. We'll call one database 'input' and the other 'output'. For the tools we can accept the default names. In the design view it is possible to connect these tools and databases by dragging yellow arrows between them (click on the white square connections). Connect them as follows: Load Template > input > Run SpineOpt > output

image

image

Each of the databases needs to be initialised:

  1. Select the database; you should see the Data Store Properties window (typically on the right of the design view).
  2. Choose the SQL database dialect (sqlite is a local file and works without a server).
  3. Click New Spine DB to create a new database (and save it, if it's sqlite).

image

Each of the tools needs to be connected to the databases. The yellow arrows we drew before only make the connections available; we still need to explicitly tell each tool to use them.

  1. Select the tool; you should see the Tool Properties window (typically on the right of the design view). Now that you've initialised the databases, you should also see the available resources.
  2. Drag the available sources to the tool arguments. The order matters: make sure that the input is the first argument and the output is the second argument.

image

To summarize, we've created a workflow where we first format the spine database to a SpineOpt database by loading the SpineOpt template into that database. Remember that a SpineOpt database is a spine database, but a spine database is not necessarily a SpineOpt database. In the next section we'll manually intervene at this point to add meaningful data to this database. Once we have the input database with meaningful data in the SpineOpt format, we run SpineOpt. SpineOpt will then write its results to the output database.

Info

If no 'Data Store' is specified for the output, the results of SpineOpt will be written by default to the input 'Data Store'. However, it is generally preferable to define a separate output data store for results.

A meaningful input database for SpineOpt

To prepare the input database for SpineOpt we are going to do 2 things:

  1. We'll format the spine database to a SpineOpt database.
  2. We'll import and examine data from an existing example (i.e. the simple system tutorial).

To format the spine database, select and execute the Load template tool. To execute the tool, we do not need to run the entire project; instead we can run the selection. (No worries if you accidentally ran the entire project. The Run SpineOpt tool may fail or produce a meaningless output database, but that will resolve itself in the next steps.)

image

Note that the Load template tool makes use of SpineOpt. This is therefore the first place where we may run into trouble if SpineOpt is not installed correctly. If we select the tool, we can see its console output (typically on the lower right) and follow along with what it is doing.

Warning

SpineOpt is written in Julia. In every Julia session, i.e. every time you run a Julia tool in Spine Toolbox for the first time since Spine Toolbox was started, Julia needs to compile the SpineOpt package before the tool actually runs. That compilation process takes time. It is possible to precompile SpineOpt into a Julia system image, but that is quite advanced.

During compilation, SpineOpt displays some warnings that can be ignored, i.e.:

WARNING: using JuMP.parameter_value in module SpineInterface conflicts with an existing identifier.
WARNING: using JuMP.Parameter in module SpineInterface conflicts with an existing identifier.

Now it is time to import meaningful data into the input database. To that end we'll first need to get a file with the data. We can find that file in the SpineOpt repository on GitHub: there is an examples folder with functioning examples. Let's take the simple system tutorial. It does not matter where you save this file on your system, but it is possible to place it in the folder of your Spine project.

Info

These example files are part of our tests for the master branch so they should always work correctly.
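For instance, one way to obtain a local copy of the examples folder is to clone the repository; the repository URL and folder layout below are assumptions, so adapt them to wherever the SpineOpt repository actually lives:

```
git clone https://github.com/spine-tools/SpineOpt.jl.git
ls SpineOpt.jl/examples
```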

To import a '.json' file to a spine database, take the following steps:

  1. Double click on the input database to open the spine db editor.
  2. In the spine db editor go to: File > Import
  3. Navigate to your file and wait until Spine Toolbox indicates that it has imported the data
  4. Save the imported data by pressing the 'commit' button.

image

Now, let's examine what we see in the spine db editor. Typically you'll see a list of entities on the left, a table of parameters in the middle and alternatives/scenarios on the right. Something that will also be particularly useful (in the beginning) is the graph view. Click the 'graph' button in the ribbon on top to open the graph view. The view only shows what has been selected (as well as what is connected to that selection), so we won't see much yet. If you select 'root' in the list of entities on the left, you'll see everything, including entities that are normally hidden and clutter up the view, so that is not particularly useful. Instead, select 'unit'.

In the graph view you should now see a small energy system with a fuel node, two power plants and an electricity node. As you would expect, this is a system which chooses the cheapest power plant to supply the demand in the electricity node. As we can see in the parameters for the unit, each plant has a different variable operation cost (vom) and capacity. In other words, the system will choose the plant with the lowest vom cost until it is limited by its capacity. Then the system will choose the other plant.

image

Besides the data for the system, the database also contains data for the optimization model. To view that data select 'model' in the list of entities.

In the graph view you should now see a model at the center and different structures attached to it. The model entity contains information on, e.g., the solver to be used by the model. SpineOpt has a flexible temporal and stochastic structure. These are specified through the respective entities. If you select each of these entities, you can see that they are only connected to the model. That means that for this system, all entities use the same temporal structure by default. But if we want, we can add a specific temporal structure for a specific entity. The same holds for the stochastic structure. The stochastic structure manages the scenarios it is connected to. Here, there is only one scenario implying we are using a deterministic system. Finally, the model is also connected to a report. The report determines what is written to the output database when SpineOpt runs. In particular, any output entity connected to the report will appear in the output database.

image

Warning

There is a difference between scenarios in SpineOpt and Spine Toolbox! Spine Toolbox scenarios are built from 'alternatives', whereas SpineOpt scenarios are regular 'entities' in the database.

The Spine Toolbox scenarios can be used for, e.g., a Monte Carlo analysis. To use the Spine Toolbox scenarios, create a new alternative in the (typically) right panel and add it to a scenario (typically in the panel below the alternatives). You can then access these alternatives in the alternative field of a parameter.

The SpineOpt scenarios are used for robust solutions of the SpineOpt model. To use the SpineOpt scenarios, create a new scenario entity and connect it to a stochastic structure entity. You can then use a map of scenarios and values in the value field of a parameter.

For the latter there will be a tutorial later on.
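As a rough sketch of the latter, a scenario-indexed parameter value stored as a map might look like this in the Spine JSON value format; the scenario names and numbers are placeholders, and the exact schema should be checked against the Spine database documentation:

```json
{
  "type": "map",
  "index_type": "str",
  "data": {
    "scenario_a": 10.0,
    "scenario_b": 12.0
  }
}
```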

Info

For more information about creating and managing Spine Toolbox databases, see the documentation for the Spine database editor.

Run SpineOpt

Back in the Spine Toolbox workflow, we now have a meaningful input database for SpineOpt. We can therefore run SpineOpt. Select the 'Run SpineOpt' tool and press the 'Run Selection' button in the ribbon.

While the tool runs, you can keep an eye on the console (typically the lower right panel). A lot of information is displayed here, amongst others on building the model and on the optimality of the solution. Any errors will also appear here.
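For reference, the same run can also be launched from a plain Julia session instead of the toolbox tool; a minimal sketch, assuming sqlite databases at these placeholder paths:

```julia
using SpineOpt

# Read the input Spine database, build and solve the model,
# and write the results to the output database.
run_spineopt("sqlite:///input.sqlite", "sqlite:///output.sqlite")
```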

image

Warning

This process takes a while, not only due to compilation time but also due to the time needed to build the model, because the flexible structure is quite complex. SpineOpt is therefore less suited to simple models, but should perform well for more complex models.

Examine the output of SpineOpt

Finally, we can also take a look at the output of SpineOpt. You can view the data the same way as you do for the input data: by double clicking the output database you open the spine db editor. There is not much to see in the graph view, so we'll look at the table of parameters. This time we can select the root (on the left) and we'll see all the output of the database in the table (in the middle).

For this example, we see the flows throughout the system. We can look at the production of each power plant. We may need to scroll, but eventually we see that the values are 'time series'. By double clicking on these values we get the values of this time series as well as a plot of the values.

Indeed, the power plant with lower vom cost is used at its maximum and the other plant is used whenever necessary.

If the installation fails with a PowerShell error like the following:

[System.Net.ServicePointManager]:: <<<< SecurityProtocol =
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : PropertyAssignmentException
...

The solution:

  1. Install .NET 4.5 from here: https://www.microsoft.com/en-US/download/details.aspx?id=30653.

  2. Install Windows management framework 3 or later, from here https://docs.microsoft.com/en-us/powershell/scripting/windows-powershell/wmf/overview?view=powershell-7.1.

  3. Try to install SpineOpt again.


How to change the solver

If you want to change the solver for your optimization problem in SpineOpt, here is some guidance:

  • You can change the solvers in your input datastore using the db_lp_solver and db_mip_solver parameter values of the model object.
  • You can specify solver options via the db_lp_solver_options and db_mip_solver_options parameters, respectively. These are map parameters where the first key is the solver name (exactly as the db_mip_solver or db_lp_solver name), the second key is the solver option name, and the value is the option value.
  • You can get a head start by copying the default map values for db_lp_solver_options and db_mip_solver_options. You can access the default values by clicking on the 'Object parameter definition' tab.
  • Changing the solver via the arguments to run_spineopt() is not the recommended way and will soon be deprecated.
  • The solver name corresponds to the name of the Julia package that you will need to install. Some, like HiGHS.jl, are self-contained and include the binaries. For others, like CPLEX.jl and Gurobi.jl, you will need to point the package to your locally installed binaries; the Julia packages include instructions for doing this.

The first option is the easiest. The more advanced way of using the solver options is illustrated below.

Set the model parameter values to choose the solvers and set the solver options:

image

This is what the solver options map parameter value looks like:

image
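In raw JSON form, such a nested map could be sketched as follows. The solver name key must match db_mip_solver/db_lp_solver exactly, while the option names and values here (a HiGHS time limit and MIP gap) are illustrative assumptions; check your solver's documentation for valid options:

```json
{
  "type": "map",
  "index_type": "str",
  "data": {
    "HiGHS.jl": {
      "type": "map",
      "index_type": "str",
      "data": {
        "time_limit": 300.0,
        "mip_rel_gap": 0.01
      }
    }
  }
}
```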

To get a head start with solver options, you can copy their default map values from the parameter definition tab like this:

image


How to compile into a Julia system image

Sometimes it can be useful to 'compile' SpineOpt into a so-called system image. A system image is a binary library that, roughly speaking, 'stores' all the compilation work from a previous Julia session. If you start Julia with a system image, then Julia doesn't need to redo all that work and your code will be fast the first time you run it.

However if you upgrade your version of SpineOpt, any system images you might have created will not reflect that change - you will need to re-generate them.

To compile SpineOpt into a system image just do the following:

  1. Install PackageCompiler.jl.

  2. Create a file with precompilation statements for SpineOpt:

    a. Start julia with --trace-compile=file.jl.

    b. Call run_spineopt(url...) with a nice DB - one that triggers most of the SpineOpt functionality you need.

    c. Quit julia.

  3. Create the sysimage using the precompilation statements file:

    a. Start julia normally.

    b. Create the sysimage with PackageCompiler:

    using PackageCompiler
    create_sysimage(; sysimage_path="SpineOpt.dll", precompile_statements_file="file.jl")
  4. Start Julia with --sysimage=SpineOpt.dll to use the generated image.
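Put together, the steps above can be sketched as a command sequence. The database URLs are placeholder assumptions, and on Linux or macOS you would name the image SpineOpt.so or SpineOpt.dylib instead of SpineOpt.dll:

```
# 1. Record precompilation statements while exercising SpineOpt:
julia --trace-compile=file.jl -e 'using SpineOpt; run_spineopt("sqlite:///input.sqlite", "sqlite:///output.sqlite")'

# 2. Build the system image from those statements:
julia -e 'using PackageCompiler; create_sysimage(; sysimage_path="SpineOpt.dll", precompile_statements_file="file.jl")'

# 3. Start Julia with the generated image:
julia --sysimage=SpineOpt.dll
```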


    How to define an efficiency

    relationships between the inputs and outputs of a unit

    The image below shows an overview of the possible relationships between the inputs and outputs of a unit.

    image

    image

    The key capability requirements are:

    • Easily define arbitrary numbers of input and output flows
    • Easily create piecewise affine linear relationships between any two flows
    • Anything more complicated can be done via user_constraints

    unit__node__node relationship

    image

    The unit__node__node relationship allows you to constrain the flows of two nodes to each other via a number of different parameters:

    • fix_ratio_in_out_unit_flow: equivalent to an (incremental) heat rate. input_flow = fix_ratio_in_out_unit_flow * output_flow + fix_units_on_coefficient_in_out * units_on. It can be piecewise linear when used in conjunction with operating_points, with monotonically increasing coefficients (not enforced). fix_units_on_coefficient_in_out triggers a fixed flow when the unit is online, and unit_start_flow triggers a flow on a unit start (start fuel consumption).
    • fix_ratio_out_in_unit_flow: equivalent to an efficiency. output_flow = fix_ratio_out_in_unit_flow * input_flow + fix_units_on_coefficient_out_in * units_on. The ordering of the nodes in the unit__node__node relationship matters: the first node will be treated as the output flow and the second node as the input flow (consistent with the out_in in the parameter name). A units_on coefficient is added with fix_units_on_coefficient_out_in.
    • In addition to fix_ratio_in_out_unit_flow and fix_ratio_out_in_unit_flow you have [constraint]_ratio_[direction1]_[direction2]_unit_flow, where constraint can be min, max or fix and determines the sense of the constraint (max: <, min: >, fix: =), while direction1 and direction2 give the direction of the flows involved: in signifies an input flow to the unit, while out signifies an output flow from the unit. For each of these parameters, there is a corresponding [constraint]_[direction1]_[direction2]_units_on_coefficient. For example, max_ratio_in_out_unit_flow creates the following constraint:

    input_flow < max_ratio_in_out_unit_flow * output_flow + max_units_on_coefficient_in_out * units_on
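As a quick numeric illustration of the fix_ratio_out_in_unit_flow case above (all numbers are placeholders, not SpineOpt defaults):

```julia
# A unit with 40% efficiency and no online coefficient:
fix_ratio_out_in_unit_flow = 0.4       # output per unit of input
fix_units_on_coefficient_out_in = 0.0  # no flow tied to being online
input_flow = 100.0
units_on = 1
output_flow = fix_ratio_out_in_unit_flow * input_flow +
              fix_units_on_coefficient_out_in * units_on
# output_flow == 40.0
```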

    real world example: Compressed Air Energy Storage

    To give a feeling for why these functionalities are useful, consider the following real world example for Compressed Air Energy Storage:

    image

    known issues

    That does not mean that this implementation is perfect; there are some known issues:

    • Multiple ways to do the same thing (kind of)
    • The ordering of nodes in unit__node__node relationship matters and this can be confusing
    • When specifying a unit__node__node relationship, Spine Toolbox currently doesn't restrict the user to nodes that are connected to the unit, so it's possible to create a unit__node__node relationship between a unit and nodes where there are no flows. What we actually need is a relationship between two flows, which is really a relationship between two unit__[to/from]_node relationships.
    • There is a long list of parameters (24 in total) [fix/max/min]_ratio_[in/out]_[in/out]_[unit_flow/units_on_coefficient]

    How to impose renewable energy targets

    This advanced concept illustrates how renewable targets can be realized in SpineOpt.

    Imposing lower limits on renewable production

    Imposing a lower bound on the cumulated flow of a unit group by an absolute value

    In the current landscape of energy systems modeling, especially in investment models, it is a common idea to implement a lower limit on the amount of electricity that is generated by renewable sources. SpineOpt allows the user to implement such restrictions by means of the min_total_cumulated_unit_flow_to_node parameter, which triggers the creation of the constraint_total_cumulated_unit_flow.

    To impose a limit on overall renewable generation over the entire optimization horizon, the following objects, relationships, and parameters are relevant:

    1. unit: In this case, a unit represents a process (e.g. electricity generation from wind), where one or multiple unit_flows are associated with renewable generation.
    2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing electricity demand. (Note: to distinguish e.g. between regions, there can also be more than one electricity node.)
    3. unit__to_node: To associate electricity flows with a unit, the relationship between the unit and the electricity node needs to be defined, to trigger the generation of an electricity unit_flow variable.
    4. min_total_cumulated_unit_flow_to_node: This parameter triggers a lower bound on all cumulated flows from a unit (or a group of units), e.g. the group of all renewable generators, to a node (or node group).

    Let's take a look at a simple example to see how this works. Suppose that we have a system with only one node, which represents the demand for electricity, and two units: a wind farm, and a conventional gas unit. To connect the wind farm to the electricity node, the unit__to_node relationship has to be defined.

    One can then simply define the min_total_cumulated_unit_flow_to_node parameter for the unit__to_node relationship between the wind farm and the electricity node, to impose a lower bound on the total generation originating from the wind farm.

    Note that the value of this parameter is expected to be given as an absolute value, thus care has to be taken to make sure that the units match with the ones used for the unit_flow variable.

    The main source of flexibility in the use of this constraint lies in the possibility to define the parameter for relationships that link node groups and/or unit groups. For example, by grouping multiple units that are considered renewable sources (e.g. PV and wind), targets can be implemented across multiple renewable sources. Similarly, by defining multiple electricity nodes, generation targets can be spatially disaggregated.

    Limiting the cumulated flow of a unit group by a share of the demand

    For convenience, we would like to be able to define min_total_cumulated_unit_flow_to_node, when used to set a renewable target, as a share of the demand. At the moment an absolute lower bound needs to be provided by the user, but we want to automate this preprocessing in SpineOpt (to be implemented).
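Until that preprocessing exists, the conversion is a one-liner the user can do by hand; a sketch with placeholder numbers:

```julia
# Convert a renewable share target into the absolute value expected by
# min_total_cumulated_unit_flow_to_node (same units as the unit_flow variable).
renewable_share_target = 0.3   # desired share of demand
total_demand = 8760.0          # cumulated demand over the whole horizon
min_total_cumulated_unit_flow_to_node = renewable_share_target * total_demand
# == 2628.0
```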

    Imposing an upper limit on carbon emissions

    Imposing an upper limit on carbon emissions over the entire optimization horizon

    To impose a limit on overall carbon emissions over the entire optimization horizon, the following objects, relationships and parameters are relevant:

    1. unit: In this case, a unit represents a process (e.g. conversion of Gas to Electricity), where one

    or multiple unit_flows are associated with carbon emissions

    1. node: Besides from nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. (Note: To distinguish e.g. between regions there can also be more than one carbon node)
    2. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be imposed, to trigger the generation of a carbon-unit_flow variable.
    3. unit__node__node and **fix_ratio_out_out **: Ratio between e.g. output and output unit flows; e.g. how carbon intensive an electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (gasplant, Carbon, Electricity). (Note: For a full list of possible ratios, see also unit__node__node and associated parameters)
    4. max_total_cumulated_unit_flow_to_node (and unit__to_node): This parameter triggers a limit on all flows from a unit (or a group of units), e.g. the group of all conventional generators, to a node (or node groups), e.g. considering the atmosphere as a fictive CO2 node, over the entire modelling horizon (e.g. a carbon budget). For example this could be defined on a relationship between a gasplant and a Carbon node, but can also be defined a unit group of all conventional generators and a carbon node. See also: constraint_total_cumulated_unit_flow

    Imposing an upper bound on the cumulated flows of a unit group for a specific period of time (advanced method)

    If the desired functionality is not to cap emissions over the entire modelling horizon, but rather for specific periods of time (e.g., to impose decreasing carbon caps over time), an alternative method can be used, which will be described in the following.

    To illustrate this functionality, we will assume that there is a ficticious cap of 100 for a period of time 2025-2030, and a cap of 50 for the period of time 2030-2035. In this simple example, we will assume that one carbon-emitting unit carbon_unit is present with two outgoing commodity flows, e.g. here electricity and carbon.

    Three nodes are required to represent this system: an electricity node, a carbon_cap_1 node (with has_state=true and node_state_cap=100), and a carbon_cap_2 node (with has_state=true and node_state_cap=50).

    Further we introduce the unit__node__node relationships between carbon_unit__carbon_cap1__electricity and carbon_unit__carbon_cap2__electricity. On these relationships, we will define the ratio between emissions and electricity production. In this fictious example, we will assume 0.5 units of emissions per unit of electricity.

    The fix_ratio_out_out parameter will now be defined as a time varying parameter in the following way (simplified representation of TimeSeries parameter):

    fix_ratio_out_out(carbon_unit__carbon_cap1__electricity) = [2025: 0.5; 2030: 0] fix_ratio_out_out(carbon_unit__carbon_cap2__electricity) = [2025: 0; 2030: 0.5]

    This way the first emission-cap node carbon_cap1 can only be "filled" during the 2025-2030, while carbon_cap2 can only be "filled" during the second period 2030-2035.

    Note that it would also be possible to have, e.g., one node with time-varying node_state_cap. However, in this case, "unused" carbon emissions in the first period of time would be availble for the second period of time.

    Imposing a carbon tax

    To include carbon pricing in a model, the following objects, relationships and parameters are relevant:

    1. unit: In this case, a unit represents a process (e.g. conversion of Gas to Electricity), where one

    or multiple unit_flows are associated with carbon emissions

    1. node and tax_in_unit_flow: Besides from nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. To associate a carbon-tax with all incoming unit_flows, the tax_in_unit_flow parameter can be defined on this node (Note: To distinguish e.g. between regions there can also be more than one carbon node)
    2. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be imposed, to trigger the generation of a carbon-unit_flow variable.
    3. unit__node__node and **fix_ratio_out_out **: Ratio between e.g. output and output unit flows; e.g. how carbon intensive an electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (Gasplant, Carbon, Electricity). (Note: For a full list of possible ratios, see also unit__node__node and associated parameters)

    How to impose renewable energy targets

    This advanced concept illustrates how renewable targets can be realized in SpineOpt.

    Imposing lower limits on renewable production

    Imposing a lower bound on the cumulated flow of a unit group by an absolute value

In the current landscape of energy systems modeling, especially in investment models, it is common to impose a lower limit on the amount of electricity that is generated by renewable sources. SpineOpt allows the user to implement such restrictions by means of the min_total_cumulated_unit_flow_to_node parameter, which triggers the creation of the constraint_total_cumulated_unit_flow.

    To impose a limit on overall renewable generation over the entire optimization horizon, the following objects, relationships, and parameters are relevant:

1. unit: In this case, a unit represents a process (e.g. electricity generation from wind), where one or multiple unit_flows are associated with renewable generation.
2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing electricity demand. (Note: to distinguish e.g. between regions, there can also be more than one electricity node.)
3. unit__to_node: To associate electricity flows with a unit, the relationship between the unit and the electricity node needs to be defined, which triggers the generation of an electricity unit_flow variable.
4. min_total_cumulated_unit_flow_to_node: This parameter triggers a lower bound on all cumulated flows from a unit (or a group of units), e.g. the group of all renewable generators, to a node (or node group).

    Let's take a look at a simple example to see how this works. Suppose that we have a system with only one node, which represents the demand for electricity, and two units: a wind farm, and a conventional gas unit. To connect the wind farm to the electricity node, the unit__to_node relationship has to be defined.

One can then simply define the min_total_cumulated_unit_flow_to_node parameter for the wind_farm__to_electricity_node relationship to impose a lower bound on the total generation originating from the wind farm.

    Note that the value of this parameter is expected to be given as an absolute value, thus care has to be taken to make sure that the units match with the ones used for the unit_flow variable.

The main source of flexibility in the use of this constraint lies in the possibility to define the parameter for relationships that link node groups and/or unit groups. For example, by grouping multiple units that are considered renewable sources (e.g. PV and wind), targets can be implemented across multiple renewable sources. Similarly, by defining multiple electricity nodes, generation targets can be spatially disaggregated.
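As a numerical illustration of the resulting lower-bound constraint, the sketch below uses hypothetical flow values in plain Python (this is not SpineOpt code; the names mirror the parameters discussed above):

```python
# Schematic check of constraint_total_cumulated_unit_flow (lower-bound form).
# Hypothetical unit_flow values from the wind farm to the electricity node.
wind_flow = [30.0, 45.0, 10.0, 55.0, 60.0, 20.0]  # e.g. MWh per time step

# Absolute bound, in the same units as the unit_flow variable.
min_total_cumulated_unit_flow_to_node = 200.0

cumulated = sum(wind_flow)
satisfied = cumulated >= min_total_cumulated_unit_flow_to_node
print(cumulated, satisfied)  # 220.0 True
```

Note that the bound applies to the sum over the whole optimization horizon, not to any individual time step.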

    Limiting the cumulated flow of a unit group by a share of the demand

For convenience, we would like to be able to define the min_total_cumulated_unit_flow_to_node, when used to set a renewable target, as a share of the demand. At the moment an absolute lower bound needs to be provided by the user, but we aim to automate this preprocessing in SpineOpt (to be implemented).

    Imposing an upper limit on carbon emissions

    Imposing an upper limit on carbon emissions over the entire optimization horizon

    To impose a limit on overall carbon emissions over the entire optimization horizon, the following objects, relationships and parameters are relevant:

1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be defined, which triggers the generation of a carbon unit_flow variable.
4. unit__node__node and fix_ratio_out_out: the ratio between two output unit flows, e.g. how carbon-intensive the electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see also unit__node__node and the associated parameters.)
5. max_total_cumulated_unit_flow_to_node (defined on unit__to_node): This parameter imposes an upper limit on all cumulated flows from a unit (or a group of units), e.g. the group of all conventional generators, to a node (or node group), e.g. a fictive CO2 node representing the atmosphere, over the entire modelling horizon (i.e. a carbon budget). For example, it could be defined on the relationship between a gas plant and a carbon node, but also on a unit group of all conventional generators and a carbon node. See also constraint_total_cumulated_unit_flow.
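The interplay of fix_ratio_out_out and the carbon budget can be sketched numerically as follows (hypothetical numbers in plain Python, not SpineOpt code):

```python
# Schematic carbon-budget check: the carbon unit_flow is tied to the
# electricity unit_flow via fix_ratio_out_out, and the cumulated carbon
# flow is bounded by max_total_cumulated_unit_flow_to_node.
electricity_flow = [100.0, 80.0, 120.0, 90.0]  # e.g. MWh per time step
fix_ratio_out_out = 0.5                        # tCO2 per MWh of electricity

carbon_flow = [fix_ratio_out_out * e for e in electricity_flow]
total_emissions = sum(carbon_flow)

max_total_cumulated_unit_flow_to_node = 200.0  # carbon budget on the CO2 node
print(total_emissions, total_emissions <= max_total_cumulated_unit_flow_to_node)
# 195.0 True
```

If the budget were violated, the optimizer would instead have to reduce the electricity output (or shift it to less carbon-intensive units).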

    Imposing an upper bound on the cumulated flows of a unit group for a specific period of time (advanced method)

    If the desired functionality is not to cap emissions over the entire modelling horizon, but rather for specific periods of time (e.g., to impose decreasing carbon caps over time), an alternative method can be used, which will be described in the following.

To illustrate this functionality, we will assume a fictitious cap of 100 for the period 2025-2030, and a cap of 50 for the period 2030-2035. In this simple example, we assume one carbon-emitting unit carbon_unit with two outgoing commodity flows, here electricity and carbon.

    Three nodes are required to represent this system: an electricity node, a carbon_cap_1 node (with has_state=true and node_state_cap=100), and a carbon_cap_2 node (with has_state=true and node_state_cap=50).

Further, we introduce the unit__node__node relationships carbon_unit__carbon_cap1__electricity and carbon_unit__carbon_cap2__electricity. On these relationships, we define the ratio between emissions and electricity production. In this fictitious example, we assume 0.5 units of emissions per unit of electricity.

    The fix_ratio_out_out parameter will now be defined as a time varying parameter in the following way (simplified representation of TimeSeries parameter):

fix_ratio_out_out(carbon_unit__carbon_cap1__electricity) = [2025: 0.5; 2030: 0]
fix_ratio_out_out(carbon_unit__carbon_cap2__electricity) = [2025: 0; 2030: 0.5]

This way, the first emission-cap node carbon_cap1 can only be "filled" during the period 2025-2030, while carbon_cap2 can only be "filled" during the second period, 2030-2035.

Note that it would also be possible to have, e.g., one node with a time-varying node_state_cap. However, in this case, "unused" carbon emissions from the first period of time would be available in the second period of time.
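The period-wise routing of emissions can be sketched numerically; the example below uses hypothetical yearly electricity outputs in plain Python (not SpineOpt code) to show how the time-varying ratios fill the two cap nodes:

```python
# Schematic illustration of the two-period carbon caps: the time-varying
# fix_ratio_out_out routes emissions to carbon_cap1 in 2025-2030 and to
# carbon_cap2 in 2030-2035.
electricity_by_year = {2025: 40.0, 2027: 50.0, 2030: 30.0, 2033: 60.0}

def ratio_cap1(year):
    return 0.5 if year < 2030 else 0.0  # [2025: 0.5; 2030: 0]

def ratio_cap2(year):
    return 0.0 if year < 2030 else 0.5  # [2025: 0; 2030: 0.5]

state_cap1 = sum(ratio_cap1(y) * e for y, e in electricity_by_year.items())
state_cap2 = sum(ratio_cap2(y) * e for y, e in electricity_by_year.items())

# node_state_cap bounds: 100 for carbon_cap1, 50 for carbon_cap2.
print(state_cap1 <= 100, state_cap2 <= 50)  # True True
```

Because each cap node only accumulates emissions during its own period, any headroom left in carbon_cap1 after 2030 cannot be carried over to carbon_cap2.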

    Imposing a carbon tax

    To include carbon pricing in a model, the following objects, relationships and parameters are relevant:

1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
2. node and tax_in_unit_flow: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. To associate a carbon tax with all incoming unit_flows, the tax_in_unit_flow parameter can be defined on this node. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be defined, which triggers the generation of a carbon unit_flow variable.
4. unit__node__node and fix_ratio_out_out: the ratio between two output unit flows, e.g. how carbon-intensive the electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (Gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see also unit__node__node and the associated parameters.)
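The cost term contributed by tax_in_unit_flow can be sketched numerically (hypothetical numbers in plain Python, not SpineOpt code):

```python
# Schematic carbon-tax cost term: tax_in_unit_flow on the carbon node
# prices every unit_flow entering that node.
carbon_flow = [12.0, 8.0, 10.0]  # tCO2 entering the carbon node per time step
tax_in_unit_flow = 25.0          # cost per unit of carbon flow

tax_cost = sum(tax_in_unit_flow * f for f in carbon_flow)
print(tax_cost)  # 750.0
```

Unlike the hard budget of the previous section, the tax does not bound emissions; it only penalizes them in the objective function.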

    How to manage Output Data

    Once a model is created and successfully run, it will hopefully produce results and output data. This section covers how the writing of output data is controlled and managed.

    Specifying Your Output Data Store

In your workflow (for more details see Setting up a workflow for SpineOpt in Spine Toolbox) you will normally have an output datastore connected to your RunSpineOpt workflow tool. This is where your output data will be written. If no output datastore is specified, the results will be written by default to the input datastore. However, it is generally preferable to define a separate output datastore for results. See Setting up a workflow for SpineOpt in Spine Toolbox for the steps to add an output datastore to your workflow.

    Specifying Outputs to Write

Outputting of results to the output datastore is controlled using the output and report object classes. To output a specific variable to the output datastore, we need to create an output object of the same name. For example, to output the unit_flow variable, we must create an output object named unit_flow. The SpineOpt template contains output objects for most problem variables, and importing or re-importing the SpineOpt template will add these to your input datastore, so these output objects will probably already exist in your input datastore. Once the output objects exist in your model, they must then be added to a report object by creating a report__output relationship.

    Creating Reports

Reports are essentially a collection of outputs that can be written to an output datastore. Any number of report objects can be created. We add output items to a report by creating report__output relationships between the output objects we want included and the desired report object. Finally, to write a specific report to the output database, we must create a model__report relationship for each report object we want included in the output datastore.

    Reporting of Input Parameters

    In addition to writing results as outputs to a datastore, SpineOpt can also report input parameter data. To allow specific input parameters to be included in a report, they must be first added as output objects with a name corresponding exactly to the parameter name. For example, to allow the demand parameter to be included in a report, there must be a correspondingly named output object called demand. Similarly to outputs, to include an input parameter in a report, we must create a report__output relationship between the output object representing the input parameter (e.g. demand) and the desired report object.

    Reporting of Dual Values

    To report the dual of a constraint, one can add an output item with the corresponding constraint name (e.g. constraint_nodal_balance) and add that to a report. This will cause the corresponding constraint's marginal value to be reported in the output DB. When adding a constraint name as an output we need to preface the actual constraint name with constraint_ to avoid ambiguity with variable names (e.g. units_available). So to report the marginal value of units_available we add an output object called constraint_units_available.

To report the reduced_cost() of a variable, which is the marginal value of the associated active bound or fix constraints on that variable, one can add an output object with the variable name prepended by bound_. So, to report the units_on reduced cost value, one would create an output item called bound_units_on. If added to a report, this will cause the reduced cost of units_on in the final fixed LP to be written to the output DB. Finally, if any constraint duals or reduced cost values are requested via a report, calculate_duals is set to true and the final fixed LP solve is triggered.
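The naming convention above can be summarized in a tiny helper; this function is purely illustrative (it is not part of SpineOpt), but it captures the constraint_/bound_ prefixing rule:

```python
# Illustrative helper for the dual/reduced-cost output naming convention:
# constraints get a constraint_ prefix, variables get a bound_ prefix.
def dual_output_name(name, kind):
    if kind == "constraint":  # report the constraint's marginal value
        return name if name.startswith("constraint_") else "constraint_" + name
    if kind == "variable":    # report the variable's reduced cost
        return "bound_" + name
    raise ValueError(f"unknown kind: {kind}")

print(dual_output_name("units_available", "constraint"))  # constraint_units_available
print(dual_output_name("units_on", "variable"))           # bound_units_on
```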

    Output Data Temporal Resolution

To control the resolution of report data (both output data and input data appearing in reports), we use the output_resolution output parameter. For the specific output (or input), this indicates the resolution at which the values should be reported. If output_resolution is null (the default), results are reported at the highest available resolution following from the temporal structure of the model. If output_resolution is a duration value, then the average value over that duration is reported.
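The averaging behaviour can be sketched as follows (hypothetical series in plain Python; for simplicity, the duration is expressed as a number of underlying time steps rather than a SpineOpt duration value):

```python
# Sketch of output_resolution averaging: a fine-resolution series reported
# at a coarser output_resolution becomes the per-window averages.
hourly = [2.0, 4.0, 6.0, 1.0, 3.0, 5.0]  # e.g. 1h-resolution results
resolution = 3                            # report every 3 underlying steps

averaged = [sum(hourly[i:i + resolution]) / resolution
            for i in range(0, len(hourly), resolution)]
print(averaged)  # [4.0, 3.0]
```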

    Output Data Structure

    The structure of the output data will follow the structure of the input data with the inclusion of additional dimensions as described below:

    • The report object to which the output data items belong will be added as a dimension
    • The relevant stochastic scenario will be added as a dimension to all output data items. This allows for stochastic data to be written to the output datastore. However, in deterministic models, the single deterministic scenario will still appear as an additional dimension
    • For unit flows, the flow direction is added as a dimension to the output.

    Example: unit_flow

For example, consider the unit_flow optimisation variable. This variable is dimensioned on the unit__to_node and unit__from_node relationships. In the output datastore, the report, stochastic_scenario and flow direction are added as additional dimensions. Therefore, unit__to_node values will appear in the output datastore as timeseries parameters associated with the report__unit__node__direction__stochastic_scenario relationship as shown below.

    image

To view the data, simply double-click on the timeseries value.
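The dimension expansion can be pictured as a simple key expansion; the sketch below is purely illustrative (the object names are hypothetical, and this is not SpineOpt code):

```python
# A unit__to_node entry is re-indexed in the output datastore as
# report__unit__node__direction__stochastic_scenario.
input_key = ("power_plant_a", "electricity_node")  # (unit, node)
report = "report1"
direction = "to_node"
scenario = "realization"

output_key = (report, *input_key, direction, scenario)
print(output_key)
# ('report1', 'power_plant_a', 'electricity_node', 'to_node', 'realization')
```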

    Example: units_on

Consider the units_on optimisation variable. This variable is dimensioned on the unit object class. In the output datastore, the report and stochastic_scenario are added as additional dimensions. Therefore, units_on values will appear in the output datastore as timeseries parameters associated with the report__unit__stochastic_scenario relationship as shown below.

    image

To view the data, simply double-click on the timeseries value.

    Alternatives and Multiple Model Runs

• All outputs from a single run of a model will be tagged with a unique "alternative". Alternatives allow multiple values to be specified for the same parameter. If a model is run multiple times, the results will be appended to the output datastore with a new alternative which uniquely identifies the scenario and model run. This is convenient as it allows results from multiple runs and from multiple scenarios to be viewed and compared simultaneously. If a specific alternative is not selected (the default condition), the results for all alternatives will be visible. If a single alternative or multiple alternatives are selected in the alternative tree, then only the results for the selected alternatives will be shown.

In the example below, the relationship class report__unit__stochastic_scenario is selected in the relationship tree, therefore results for that relationship class are shown in the relationship parameter pane. Furthermore, in the alternative tree, the alternative 10h TP Load _Reun SpineOpt... is selected, meaning only results for that alternative are being displayed.

    image

    Output Writing Summary

• We need an output object in our input datastore for each variable or marginal value we want included in a report.
• Input data can also be reported. As above, we need to create an output object named after the input parameter we want reported.
• We need to create a report object to contain our desired outputs (or input parameters), which are added to our report via report__output relationships.
• We need to create a model__report relationship to write a specific report to the output datastore.
• The temporal resolution of outputs (which may also be input parameters) is controlled by the output_resolution output duration parameter. If null, the highest available resolution is reported; otherwise the average over the desired duration is reported.
• Additional dimensions are added to the output data, such as the report object, the stochastic_scenario and, in the case of unit_flow, the flow direction.
• Model outputs are tagged with alternatives that are unique to the model run and scenario that generated them.

    How to model hydro power coupling

This how-to demonstrates how we can model a hydropower system in Spine (SpineOpt.jl and Spine Toolbox) with different assumptions and goals. It starts off by setting up a simple model of a system of two hydropower plants and gradually introduces additional features.

    Info

    In each of the next sections, we perform incremental changes to the initial simple hydropower model. If you want to keep the database that you created, you can duplicate the database file (right-click on the input database and select Duplicate and duplicate files) and perform the changes in the new database. You need to configure the workflow accordingly in order to run the database you want (please check the Simple System tutorial for how to do that).

    Context

The goal of the model is to capture the combined operation of two hydropower plants (Språnget and Fallet) that operate on the same river, as shown in the picture below. Each power plant has its own reservoir and generates electricity by discharging water. The plants might need to spill water, i.e., release water from their reservoirs without generating electricity, for various reasons. The water discharged or spilled by the upstream power plant follows the river route and becomes available to the downstream power plant.

    A system of two hydropower plants.

    Setting up a Basic Hydro power Model

The picture below shows how such a system of hydro power plants translates to a SpineOpt model. It looks quite daunting at first glance, but we'll break it down into smaller parts in the following subsections.

    Two hydro power plants in SpineOpt

    Parameters for the two hydro power plants in SpineOpt

    Model

Before we can create the hydro power system, we'll have to define a model, a temporal structure and a stochastic structure. The basic model in the templates will do, though we'll change the temporal resolution to 6h and set the model start/end to span 1 day as an example.

    As for the report, we are typically interested in the outputs node_state, unit_flows and connection_flows.

    Nodes and commodities

Nodes are at the center of a SpineOpt system, so let's start with those. There are other ways to model hydro power plants, but here we represent each hydro power plant with 2 nodes: an 'upper' node to represent the water arriving at each plant and a 'lower' node to represent the water that is discharged and becomes available to the next plant. The general idea of splitting these into 2 nodes is to be able to simulate a time delay between the entrance and the exit (although in this tutorial we will not go into detail on this time delay).

    Additionally we need a node for electricity.

    Optionally, we can indicate that we are dealing with water flows and electricity production through commodities. Note that commodities are only indicative and are not strictly necessary. As in the picture below, we define a 'water' and an 'electricity' commodity and we connect these to the nodes with node__commodity relations.

    Two hydro power plants in SpineOpt

    Flows by means of connections

    We'll ensure a correct flow between the nodes through connections. The flows include:

    • local inflows in the reservoirs,
    • internal flows in the hydro power plants (between the 'upper' and 'lower' nodes),
    • the discharge flow that exits the Språnget hydro power plant at the lower node and flows to the upper node of the Fallet hydro power plant,
    • the spill flow that bypasses the Språnget hydro power plant at the upper node and flows to the upper node of the Fallet hydro power plant,
    • the discharge flow that exits the Fallet hydro power plant at the lower node and flows to the downstream river,
    • the spill flow that bypasses the Fallet hydro power plant at the upper node and flows to the downstream river.

For the local inflows in the reservoirs, we actually do not need a connection. Instead, we can model each inflow as a negative demand on one of the nodes of the power plant. For example, consider an inflow of -112 for Språnget and one of -2 for Fallet.

    local inflows Spranget local inflows Fallet

    The flow within each hydro power plant, i.e. the discharge flow between the 'upper' and the 'lower' node to generate the electricity, will also not be handled by the connection but by the units. In fact, anything that happens between the 'upper' and 'lower' nodes will be handled by the units.

    For each of the remaining flows we create a connection entity. These connections need to be connected to nodes to function properly. To that end we'll use the connection__from_node. As the name suggests, we connect the connection to the node where the flow comes from, e.g. the 'lower' node of the Språnget hydro power plant to the connection between the 'lower' node of the Språnget and the 'upper' node of the Fallet hydro power plant. The result is shown in the picture below.

    connection__from_node

    As the flows are unbound by default, we also need to define the relation between the nodes and the flows with the connection__node_node entities. We need one between the 'upper' node of the Språnget hydro power plant and the 'upper' node of the Fallet hydro power plant for the corresponding spill connection. We also need one between the 'lower' node of the Språnget hydro power plant and the 'upper' node of the Fallet hydro power plant for the corresponding discharge connection.

    connection__node_node

We bind the flows by setting the fix_ratio_out_in_connection_flow parameter to 1.0.

    connection__node_node parameters
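Conceptually, setting this ratio to 1.0 makes each connection lossless: the flow out of the connection must equal the flow into it. A minimal sketch of the enforced relation (plain Python, not SpineOpt code; the function name is ours):

```python
# Sketch of the constraint imposed by fix_ratio_out_in_connection_flow = 1.0
# on a connection__node_node: flow_out == ratio * flow_in.
def connection_balanced(flow_in, flow_out, fix_ratio_out_in=1.0, tol=1e-9):
    """Check whether a pair of connection flows satisfies the fixed ratio."""
    return abs(flow_out - fix_ratio_out_in * flow_in) <= tol

print(connection_balanced(50.0, 50.0))  # True: all water entering also leaves
print(connection_balanced(50.0, 45.0))  # False: 5 units would vanish
```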

    The result should look like this:

    Flows through connections

    Energy conversion by means of units

    Each hydro power plant uses a unit to convert the flow of water to electricity. These units are connected to the 'upper' and 'lower' nodes of the hydro power plants and the 'electricity' node. Water enters the 'upper' node, so the 'upper' node is connected to the unit through the unit__from_node. Water is then discharged, so the 'lower' node is connected to the unit through the unit__to_node. As the water discharges, electricity is produced, so the 'electricity' node is also connected to the unit through the unit__to_node. Below is a figure of these units. There is another unit connected to the electricity node but we'll get back to that later.

    units

    Through the relations between the units and the nodes, we can set the capacity for the water flow and the electricity generation.

For example, the capacity of the water flow from the 'upper' node to the unit is 115 for Språnget and 165 for Fallet.

    flow capacity parameters

Similarly, the capacity of the electricity production from the unit to the 'electricity' node is 69 for Språnget and 112.2 for Fallet.

    electric capacity parameters

    Additionally, we add a unit to represent the income from selling the electricity production in the electricity market. The electricity price will be represented by a negative variable operation and maintenance (VOM) cost. That parameter needs to be set at the unit__from_node between the electricity node and the unit. Any (negative) value is fine, but we show an example below.

    electricity price parameter

    electricity price

    Again, by default the flows are unbound, so we have to bind them with unit__node_node entities. The discharge flow from the 'upper' node flows in its entirety to the 'lower' node. As such the unit__node_node relation between the hydro power plant unit and the 'upper' and 'lower' node gets a value of 1.0 for the fix_ratio_out_in_unit_flow.

    unit flow capacity

    For the conversion from water flow to electricity, we need to take the conversion efficiency of the plant into account. For example, the fix_ratio_out_in_unit_flow for the unit__node_node entity between the unit, the 'upper' node and the 'electricity' node is 0.6 for the Språnget hydro power plant and 0.68 for the Fallet hydro power plant.

    unit efficiency
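Note that the numbers above are consistent: the electric capacity of each unit equals its water-flow capacity times its efficiency. A quick arithmetic check (plain Python, using only the figures given in this tutorial):

```python
# Electric capacity = water-flow capacity * fix_ratio_out_in_unit_flow,
# using the capacities and efficiencies from this tutorial.
plants = {
    "Spranget": {"water_cap": 115, "efficiency": 0.60, "electric_cap": 69.0},
    "Fallet": {"water_cap": 165, "efficiency": 0.68, "electric_cap": 112.2},
}

for name, p in plants.items():
    derived = p["water_cap"] * p["efficiency"]
    assert abs(derived - p["electric_cap"]) < 1e-9, name
    print(name, round(derived, 6))
```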

    The result should look like this:

    units

    Storage in nodes

To model the reservoirs of each hydropower plant, we leverage the state feature that a node can have to represent storage capability. We only need to do this for one of the two nodes that we have used to model each plant, and we choose the 'upper' node. To activate the storage functionality of a node, we set the parameter has_state to true (be careful not to set it as a string; select the boolean true value). Then, we set the capacity of the reservoir through the node_state_cap parameter value.

Depending on the constraints of your hydro power plant, you can also fix the initial and final values of the reservoir by setting the parameter fix_node_state to the respective values (use nan values for the time steps where you don't want to impose such constraints). When fixing the initial value of a reservoir, fix it at 't-1' instead of 't0': the initial value of a reservoir is its level just before the first time step.

    storage Spranget storage Fallet
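Under these settings, SpineOpt enforces a water balance on the storage node. A toy simulation of that balance (plain Python; the inflow of 112 is the tutorial's Språnget figure, all other numbers are made up for illustration):

```python
# Reservoir balance on a node with has_state = true:
#   state[t] = state[t-1] + inflow[t] - discharge[t] - spill[t]
# with 0 <= state[t] <= node_state_cap.
def simulate_reservoir(initial_state, inflows, discharges, spills, capacity):
    state = initial_state  # level at t-1, i.e. just before the first step
    trajectory = []
    for inflow, discharge, spill in zip(inflows, discharges, spills):
        state = state + inflow - discharge - spill
        assert 0 <= state <= capacity, "reservoir bounds violated"
        trajectory.append(state)
    return trajectory

# Four 6-hour steps: inflow 112 per step (as for Spranget), illustrative rest.
print(simulate_reservoir(1000, [112] * 4, [110] * 4, [0] * 4, 2000))
# [1002, 1004, 1006, 1008]
```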

    Examining the results

    At this point the model should be ready to run and you can examine the results in the output database with the Spine DB editor.

    Maximisation of Stored Water

Instead of fixing the water content of the reservoirs at the end of the planning period, we can consider that the remaining water in the reservoirs has a value, and then maximise that value along with the revenues for producing electricity within the planning horizon. This objective term is often called the value of stored water, and we can approximate it by assuming that this water will be used to generate electricity in the future and sold at a forecasted price. The water stored in the upstream hydropower plant will also become available to the downstream plant, and this should be taken into account.

    To model the value of stored water we need to make some additions and modifications to the initial model.

    • First, add a new node (see adding nodes) and give it a name (e.g., stored_water). This node will accumulate the water stored in the reservoirs at the end of the planning horizon. Associate the node with the water commodity (see node__commodity).

• Add three more units (see adding units); two will transfer the water remaining at the end of the planning horizon into the new node that we just added (e.g., Språnget_stored_water, Fallet_stored_water), and one will be used as a sink introducing the value of stored water in the objective function (e.g., value_stored_water).

    • To establish the topology of the new units and nodes (see adding unit relationships):

  • add one unit__from_node relationship between the value_stored_water unit and the stored_water node, another one between the Språnget_stored_water unit and the Språnget_upper node, and one between Fallet_stored_water and Fallet_upper,
      • add one unit__node__node relationship between the Språnget_stored_water unit with the stored_water and Språnget_upper nodes and another one for Fallet_stored_water unit with the stored_water and Fallet_upper nodes,
      • add a unit__to_node relationship between the Fallet_stored_water and the stored_water node and another one between the Språnget_stored_water unit and the stored_water node.
    • Now we need to make some changes in object parameter values.

      • Extend the planning horizon of the model by one time step
  • Remove the fix_node_state parameter values for the end of the optimization horizon as shown in the following figure: double-click on the value cell of the Språnget_upper and Fallet_upper nodes, select the third data row, right-click, select Remove rows, and click OK.
      • Add an electricity price for the extra time step. Enter the parameter vom_cost on the unit__from_node relationship between the electricity_node and the electricity_load and set 0 as the price of electricity for the last time step. The price is set to zero to ensure no electricity is sold during this hour.
    • Finally, we need to add some relationship parameter values for the new units:

  • Add a vom_cost parameter value on a value_stored_water|stored_water instance of a unit__from_node relationship, as you see in the figure below. For the timeseries we impose a zero cost over the whole optimisation horizon, while we use an assumed future electricity value for the additional time step at the end of the horizon.

      Adding vom_cost parameter value on the value_stored_water unit.

  • Add two fix_ratio_out_in_unit_flow parameter values as you see in the figure below. The efficiency of Fallet_stored_water is the same as that of Fallet_pwr_plant, as the water in Fallet's reservoir will be used to produce electricity by the Fallet plant only. On the other hand, the water from Språnget's reservoir will be used by both the Språnget and Fallet plants, therefore we use the sum of the two efficiencies in the parameter value of Språnget_stored_water.

      Adding fix_ratio_out_in_unit_flow parameter values on the Språnget_stored_water and Fallet_stored_water units.

    You can now commit your changes in the database, execute the project and examine the results! As an exercise, try to retrieve the value of stored water as it is calculated by the model.
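The arithmetic behind those ratios can be sketched as follows (plain Python; the efficiencies come from the tutorial, while the stored volumes and future price are made-up placeholders):

```python
# Value-of-stored-water ratios: Fallet's water runs only through Fallet,
# Spranget's water runs through both plants, hence the sum of efficiencies.
eff_spranget, eff_fallet = 0.60, 0.68
ratio_spranget_stored = eff_spranget + eff_fallet  # 1.28
ratio_fallet_stored = eff_fallet                   # 0.68

future_price = 25.0                                        # assumed price
stored = {"Spranget_upper": 500.0, "Fallet_upper": 300.0}  # assumed volumes

value = (stored["Spranget_upper"] * ratio_spranget_stored
         + stored["Fallet_upper"] * ratio_fallet_stored) * future_price
print(round(value, 6))  # 21100.0
```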

    Spillage Constraints - Minimisation of Spilt Water

It might be the case that we need to impose certain limits on the amount of water that is spilt at each time step of the planning horizon; e.g., for environmental reasons, there can be a minimum and a maximum spillage level. At the same time, to avoid wasting water that could be used for producing electricity, we can explicitly add spillage minimisation to the objective function.

    • Add one unit (see adding units) to impose the spillage constraints to each plant and name it (for example Språnget_spill).

• Remove the Språnget_to_Fallet_spill connection (in the Object tree expand the connection class, right-click on Språnget_to_Fallet_spill, and then click Remove).

    • To establish the topology of the unit (see adding unit relationships):

      • Add a unit__from_node relationship, between the Språnget_spill unit from the Språnget_upper node,
      • add a unit__node__node relationship between the Språnget_spill unit with the Fallet_upper and Språnget_upper nodes,
      • add a unit__to_node relationship between the Språnget_spill and the Fallet_upper node,
    • Add the relationship parameter values for the new units:

      • Set the unit_capacity (to apply a maximum), the minimum_operating_point (defined as a percentage of the unit_capacity) to impose a minimum, and the vom_cost to penalise the water that is spilt:

      Setting minimum (the minimal value is defined as percentage of capacity), maximum, and spillage penalty.

    • For the Språnget_spill unit define the fix_ratio_out_in_unit_flow parameter value of the min_spillage|Fallet_upper|Språnget_upper relationship to 1 (see adding unit relationships).

Commit your changes in the database, execute the project and examine the results! As an exercise, you can perform this process for the Fallet plant as well (you would also need to add another water node, downstream of Fallet).
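The bounds these parameters impose on the spill flow can be sketched as follows (plain Python; the numbers are illustrative, not from the tutorial):

```python
# unit_capacity caps the spill flow; minimum_operating_point, given as a
# fraction of unit_capacity, sets its floor.
def spill_bounds(unit_capacity, minimum_operating_point):
    """Return (minimum, maximum) admissible spill flow."""
    return minimum_operating_point * unit_capacity, unit_capacity

low, high = spill_bounds(unit_capacity=20.0, minimum_operating_point=0.1)
print(low, high)  # 2.0 20.0
```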

    Follow Contracted Load Curve

    It is often the case that a system of hydropower plants should follow a given production profile. To model this in the given system, all we have to do is set a demand in the form of a timeseries to the electricity_node.
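As a quick sanity check before running, the contracted load should not exceed what the two plants can jointly produce (plain Python; the demand series is a made-up placeholder, the capacities are the tutorial's):

```python
# Combined electric capacity of Spranget (69) and Fallet (112.2).
max_production = 69.0 + 112.2

demand = [150.0, 170.0, 181.0, 160.0]  # assumed 6-hour contracted load
assert all(d <= max_production for d in demand), "load curve is infeasible"
print(round(max_production, 1))  # 181.2
```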

    optimize=false
)
write_model_file(m; file_name="<path-with-file-name>")

The resulting file has the extension *.so_model in the specified path.

    Note

If running the previous code gives you an error, please try replacing the last line with SpineOpt.write_model_file(m; file_name="<path-with-file-name>"). This error might appear in older versions of SpineOpt, where write_model_file was not exported as part of the SpineOpt package.

In either case, here are some tips if you are using this file for debugging. The file can be very large, so it is often helpful to create a minimum example of your model with only one or two timesteps. In addition, in the call to run_spineopt() you can add the keyword argument optimize=false, as in the example above, so it will just build the model and not attempt to solve it.

The function write_model_file formats the file nicely for readability. However, if the model is too large, it limits the number of rows it prints. If you still want the complete file, you can use the JuMP function write_to_file to print the model. For more details on that function, please visit the JuMP package documentation.

using JuMP
JuMP.write_to_file(m, "<path-with-file-name>")

    How to set up representative days for investment problems

    Assuming you already have an investment model with a certain temporal structure that works, you can turn it into a representative periods model with the following steps.

    Info

Note that representative days often limit the ability to properly account for seasonal storage. SpineOpt, however, takes this into account and still allows for seasonal storage.

1. Select the representative periods. For example, if you are modelling a year, you can select a few weeks (one in summer, one in winter, and one in mid-season).
    2. For each representative period, create a temporal_block specifying block_start, block_end and resolution.
    3. Associate these temporal_blocks to some nodes and units in your system, via node__temporal_block and units_on__temporal_block relationships.
    4. Finally, for each original temporal_block associated to the nodes and units above, specify the value of the representative_periods_mapping parameter. This should be a map where each entry associates a date-time to the name of one of the representative-period temporal_blocks created in step 2. More specifically, an entry with t as the key and b as the value means that time slices from the original block starting at t are 'represented' by time slices from the b block. In other words, time slices between t and t plus the duration of b are represented by b.
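    As an illustrative sketch (the block names below are hypothetical, and the exact shape of a Spine map parameter may differ), the representative_periods_mapping value could associate period start times to representative block names like this:

    ```julia
    # Hypothetical sketch: each key is the start of a period in the original block,
    # each value names the representative temporal_block that represents it.
    representative_periods_mapping = Dict(
        "2023-01-02T00:00:00" => "rep_week_winter",     # this week is represented by the winter week
        "2023-04-03T00:00:00" => "rep_week_midseason",
        "2023-07-03T00:00:00" => "rep_week_summer",
    )
    ```

    In a full model there would be one entry per period of the original block, so that every time slice is mapped to some representative block.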

    In SpineOpt, this will be interpreted in the following way:

    • For each node and unit associated to any of your representative temporal_blocks, the operational variables (with the exception of node_state) will be created only for the representative periods. For the non-representative periods, SpineOpt will use the variable of the corresponding representative period according to the value of the representative_periods_mapping parameter.
    • The node_state variable and the investment variables will be created for all periods, representative and non-representative.

    The SpinePeriods.jl package provides an alternative, perhaps simpler way to set up a representative periods model, based on the automatic selection and ordering of periods.


    To deactivate the functionality, just remove the code and replace the tags in your .md file.

    It is also possible to introduce this feature over time. Whenever you want to add the documentation of a constraint to its docstring, you need to follow a few steps:

    1. For the docstring
      1. add @doc raw before the docstring (this allows writing LaTeX in the docstring)
    2. For the .md file
      1. cut the description and mathematical formulation and paste them in the corresponding function's docstring
      2. add the tag to pull the above from the docstring

    Examples of both the docstring and the instruction file have already been shown above.

    Drag and drop

    There is also a drag-and-drop feature for select chapters (e.g. the how to section). For those chapters you can simply add your markdown file to the folder of the chapter and it will be automatically added to the documentation. To allow both manually composed chapters and automatically generated chapters, the functionality is only activated for empty chapters (of the structure "chapter name" => []).

    The drag-and-drop function assumes a specific structure for the documentation files.
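    For illustration, in a typical Documenter.jl pages layout an empty chapter of the structure described above could look like this (the chapter names and file names here are examples only):

    ```julia
    # Hypothetical Documenter.jl pages layout: the empty chapter activates drag and drop.
    pages = [
        "Getting Started" => ["installation.md", "recommended_workflow.md"],  # manually composed
        "How to" => [],  # empty: markdown files in this chapter's folder are added automatically
    ]
    ```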


    How does the model update itself after rolling?

    In SpineOpt, constraints, objective and bounds update themselves automatically whenever the model rolls. To picture this, imagine you have a rolling model with two windows, corresponding to the first and second days of 2023, and daily resolution. (In other words, each window consists of a single time-slice that covers the entire day.) Also, imagine you have a node where the demand is a time-series defined as follows:

    timestamp      value
    2023-01-01     5
    2023-01-02     10

    To simplify things, let's say the nodal balance constraint in SpineOpt has the following form:

    sum of flows entering the node - sum of flows leaving the node == node's demand
    (for each t in the current window)

    You would expect the rhs of this constraint to be 5 for the first window, and 10 for the second window. That is indeed the case, but the way this works under the hood is quite 'magical', so to speak.

    In SpineOpt, the rhs of the above constraint would be written (roughly) using the following Julia expression:

    demand[(node=n, t=t, more arguments...)]

    Notice the brackets ([]) around the named-tuple with the arguments. Without these (i.e., demand(node=n, t=t, more arguments...)) the expression would evaluate to a number, and the constraint would be static (non-self-updating). But with the brackets, instead of a number, the expression evaluates to a special object of type Call. The important thing about the Call is it remembers the arguments, including the t.

    Right before the constraint is passed to the solver, SpineOpt 'realizes' the Call with the current value of t, and computes the actual rhs. So for the first window, where t is the first day in 2023, it will be 5.

    Now, whenever SpineOpt rolls forward to solve the next window, it updates the value of t by adding the roll_forward value. (This allows SpineOpt to reuse the same time-slices in all the windows.) But when this happens, the Call is also checked to see if it would return something different now that t has been rolled. And if that's the case, the constraint is automatically updated to reflect the change. In our example, the rhs would become 10 because t is now the second day.

    In sum, without the brackets, the constraint would be lhs == 5 (and it would never change), whereas with the brackets, the constraint becomes lhs == the demand at the current value of t.

    And the above is valid not only for rhs, but also for any coefficient in any constraint or objective, and for any variable bound.
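    The mechanism can be pictured with a toy sketch; the types and functions below are made up for illustration and are not SpineInterface's actual API:

    ```julia
    # Toy illustration (hypothetical types, not SpineInterface's real ones).
    mutable struct ToyTimeSlice
        start::Int  # day index; rolled forward between windows
    end

    struct ToyCall
        f::Function
        t::ToyTimeSlice
    end

    demand = Dict(1 => 5, 2 => 10)

    # With 'brackets': build a Call that remembers t instead of evaluating now.
    demand_call(t::ToyTimeSlice) = ToyCall(ts -> demand[ts.start], t)

    # 'Realize' the Call with the current value of t.
    realize(c::ToyCall) = c.f(c.t)

    t = ToyTimeSlice(1)
    c = demand_call(t)
    realize(c)    # 5: first window
    t.start += 1  # roll forward
    realize(c)    # 10: same Call, now reflects the rolled t
    ```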

    To see how all this is actually implemented, we suggest you look at the code of SpineInterface. The starting point is the implementation of Base.getindex for the Parameter type, so that writing, e.g., demand[...arguments...] returns a Call that remembers the arguments. From there, we proceed to extend JuMP.jl to handle our Call objects within constraints and the objective. The last bit is perhaps the most complex, and consists of storing callbacks inside TimeSlice objects whenever they are used to retrieve the value of a Parameter to build a model. The callbacks are carefully crafted to update a specific part of that model (e.g., a variable coefficient, a variable bound, a constraint rhs). Whenever the TimeSlice rolls, depending on how much it rolls, the appropriate callbacks are called, resulting in the model being properly updated. That's roughly it! Hopefully this brief introduction helps (but please contact us if you need more guidance).

    my_unit_flow_capacity(unit = pwrplant, node = elec, direction = to_node, t = 2023-01-01T07:00~>2023-01-01T08:00, t_next = 2023-01-01T08:00~>2023-01-01T09:00, s_path = Object[realisation, forecast2]) : 0 = 0
    my_unit_flow_capacity(unit = pwrplant, node = fuel, direction = from_node, t = 2023-01-01T00:00~>2023-01-01T02:00, t_next = 2023-01-01T02:00~>2023-01-01T04:00, s_path = Object[realisation]) : 0 = 0
    my_unit_flow_capacity(unit = pwrplant, node = fuel, direction = from_node, t = 2023-01-01T02:00~>2023-01-01T04:00, t_next = 2023-01-01T04:00~>2023-01-01T06:00, s_path = Object[realisation]) : 0 = 0

    Which looks like we're on to something. Indeed, on the fuel side, s_path is always just [realisation], because both the fuel node and the pwrplant unit have the one_stage stochastic_structure. But on the elec side, at the beginning we have [realisation] and then we start getting [realisation, forecast1] and [realisation, forecast2]. The turning point is exactly at 2023-01-01T06:00, where realisation ends according to the stochastic_scenario_end parameter.

    So it's all good!

    The function that generates the constraint

    Congratulations, you have made it this far. Now we will finally start writing our constraint expression.

    Note

    I will grab a coffee and be right back.


    How does SpineOpt perceive time?

    This section answers the following questions:

    1. What are time slices?
    2. What are time slice convenience functions?
    3. How can they be used?

    What are time slices?

    A TimeSlice is simply a slice of time with a start and an end. We use them in SpineOpt to represent the temporal dimension.

    More specifically, we build the model using TimeSlices for the temporal indices. This happens in the run_spineopt function and it's done in two steps:

    1. Generate the temporal structure for the model:
      1. Translate the temporal_blocks in the input DB to a set of TimeSlice objects.
      2. Create relationships between these TimeSlice objects:
        • Relationships between two consecutive time slices (t_before ending right when t_after starts).
        • Relationship between overlapping time slices (t_short contained in t_long).
      3. Store all the above within m.ext[:spineopt].temporal_structure.
    2. Build the model:
      1. Query m.ext[:spineopt].temporal_structure to collect generated TimeSlice objects and relationships.
      2. Use them for indexing variables and generating constraints and objective.

    To translate the temporal_blocks into TimeSlice objects, we basically look at the values of model_start and model_end for the model object, as well as the value of the resolution for the different temporal_block objects. Then we build as many TimeSlices as needed to cover the period between model_start and model_end at each resolution.
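    A simplified sketch of that translation (a hypothetical helper, not SpineOpt's actual code) could look like:

    ```julia
    using Dates

    # Toy stand-in for SpineOpt's TimeSlice.
    struct ToySlice
        start::DateTime
        stop::DateTime
    end

    # Cover [model_start, model_end) with consecutive slices at the given resolution.
    function build_slices(model_start::DateTime, model_end::DateTime, resolution::Period)
        slices = ToySlice[]
        t = model_start
        while t < model_end
            push!(slices, ToySlice(t, min(t + resolution, model_end)))
            t += resolution
        end
        slices
    end

    slices = build_slices(DateTime(2023, 1, 1), DateTime(2023, 1, 2), Hour(6))
    length(slices)  # 4 six-hour slices covering the day
    ```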

    Note

    m is the JuMP.Model object that SpineOpt builds and solves using JuMP. It has a field called ext which is a Dict where one can store custom data. m.ext[:spineopt].temporal_structure is just another Dict where we store data related to the temporal structure.

    What are the time slice convenience functions?

    To facilitate querying the temporal structure, we have developed a number of convenience functions, such as t_in_t and t_overlaps_t (both used below).

    Note

    To further figure out what the time slice convenience functions do, you can play around with them. To do so, you first need to make a database (e.g. in Spine Toolbox). Then you can call run_spineopt with that database and collect the model m. If you are impatient, you do not even need to solve the model: just pass optimize=false as a keyword argument to run_spineopt. Then you can start calling the time slice convenience functions with m (e.g. t_in_t).

    How can the time slice convenience functions be used?

    When building constraints you typically want to know which TimeSlices come after/before another, overlap another, or contain/are contained in another. You can obtain this type of info by calling the above convenience functions.

    For example, say you're generating a constraint at a 3-hour resolution. This means you have a TimeSlice in your constraint index, and that TimeSlice covers 3 hours. Now, say you want to sum a certain variable over those 3 hours in your constraint expression. You need to know all the TimeSlices contained in the one from your constraint index. You can find this out by calling t_in_t with it.
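    For instance, a containment check in the spirit of t_in_t can be sketched with plain tuples (this is not the actual TimeSlice type or the real t_in_t signature):

    ```julia
    using Dates

    # A short (start, stop) slice is contained in a long one if it lies fully inside it.
    in_slice(short, long) = long[1] <= short[1] && short[2] <= long[2]

    long_t = (DateTime(2023, 1, 1, 0), DateTime(2023, 1, 1, 3))  # the 3-hour constraint index
    hour_ts = [(DateTime(2023, 1, 1, h), DateTime(2023, 1, 1, h + 1)) for h in 0:5]
    contained = filter(s -> in_slice(s, long_t), hour_ts)
    length(contained)  # 3 one-hour slices to sum over
    ```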

    More information can be found in the Write a constraint for SpineOpt section.

    Note

    A foolproof way of writing a constraint - which may not be the most efficient - is to always take the highest resolution among the overlapping TimeSlices to generate the constraint indices. The other TimeSlices can then be obtained from t_overlaps_t.


    Introduction

    SpineOpt.jl is an integrated energy systems optimization model, striving towards adaptability for a multitude of modelling purposes. The data-driven model structure allows for highly customizable energy system descriptions, as well as flexible temporal and stochastic structures, without the need to alter the model source code directly. The methodology is based on mixed-integer linear programming (MILP), and SpineOpt relies on JuMP.jl for interfacing with the different solvers.

    While, in principle, it is possible to run SpineOpt by itself, it has been designed to be used through Spine Toolbox, taking maximum advantage of the data and modelling workflow management tools therein. Thus, we highly recommend installing Spine Toolbox as well, as outlined in the Installation guide.

    Important remark on spine entities

    The documentation often refers to objects and relationships. These are both entities in a Spine database (technically, entities with one dimension and with multiple dimensions, respectively). The terminology serves to distinguish more clearly between the physical unit/node (an entity with one dimension, or object) and the flows between units and/or nodes (entities with multiple dimensions, or relationships).

    In this documentation the two naming structures (objects/relationships or entities) will be used interchangeably, though in upcoming versions of the documentation the naming will gravitate more towards entities.

    How the documentation is structured

    Having a high-level overview of how this documentation is structured will help you know where to look for certain things.

    The documentation is implicitly structured in 3 parts (documenter.jl does not explicitly support parts).

    Part 1 aims to get new users started as quickly as possible. It contains installation instructions (including troubleshooting), tutorials for basic usage, and explanations of how to do some high-level things (e.g. define an efficiency).

    • Getting Started contains guides for starting to use SpineOpt.jl. The Installation section explains different ways to install SpineOpt.jl on your computer. To ensure that the installation has been done correctly, the Recommended workflow section provides a guide to set up a minimal working example of SpineOpt.jl in Spine Toolbox. Some SpineOpt concepts will already be explained in this example but more information is provided in the Concept Reference chapter. Regardless, any issues during this example will most likely be due to the installation. If any problems are encountered, you can start with the Trouble shooting section.

    • Tutorials provides guided examples for a set of basic use-cases, either as videos, written text and/or example files. The SpineOpt.jl repository includes a folder examples for ready-made example models. Each example is its own sub-folder, where the input data is provided as .json or .sqlite files. This way, you can easily get a feel for how SpineOpt works with pre-made datasets, either through Spine Toolbox, or directly from the Julia REPL.

    Warning

    Although these examples are part of the unit tests (and should therefore be up to date), they do rely on migration scripts for their updates. This means a parameter could be missing without triggering an error, as long as the example does not use it. Therefore it is not recommended to rely on these example files for building your own models.

    • How to provides explanations on how to do specific high-level things that might involve multiple elements (e.g. how to print the model).

    Part 2 explains the core principles, features and design decisions of SpineOpt without getting lost in the details.

    • Database structure lists and explains all the important data and model structure related concepts to understand in SpineOpt.jl. From a mathematical modelling point of view, see the Mathematical Formulation chapter instead. The Basics of the model structure section briefly explains the general purpose of the most important concepts, like Object Classes and Relationship Classes.

    • Standard model framework covers the temporal and stochastic framework present in every SpineOpt model. The Temporal Framework section explains how defining time works in SpineOpt.jl, and how it can be used for different purposes. The Stochastic Framework section details how different stochastic structures can be defined, how they interact with each other, and how this impacts writing Constraints in SpineOpt.jl.

    • Standard model features covers the features of the SpineOpt model. The Investment Optimization section explains how to include investment variables in your models. The Unit commitment section explains how clustered unit-commitment is defined, while the Ramping and Reserves sections explain how to enable these operational details in your model. The User Constraints section details how to include generic data-driven custom constraints. The remaining sections, namely PTDF-Based Powerflow, Pressure driven gas transfer, Lossless nodal DC power flows, explain various use-case specific modelling approaches supported by SpineOpt.jl.

    • Algorithms are alternative options to the standard model. The Decomposition section explains the Benders decomposition implementation included in SpineOpt.jl, as well as how to use it. There are also sections on Modelling to generate alternatives and multi-stage optimisation.

    Part 3 contains all the detailed information you need when you are looking for something specific (e.g. a parameter name or the formulation of a constraint).

    • SpineOpt Template contains a list of all the entities and parameters as you see them in the Spine Toolbox db editor. The Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections contain detailed explanations of each and every aspect of SpineOpt.jl, organized into the respective sections for clarity.

    • Mathematical Formulation provides the mathematical view of SpineOpt.jl, as some of the methodology-related aspects of the model are more easily understood as math than Julia code. The Variables section explains the purpose of each variable in the model, as well as how the variables are related to the different Object Classes and Relationship Classes. The Objective section explains the default objective function used in SpineOpt.jl. The Constraints section contains the mathematical formulation of each constraint, as well as explanations of their purpose and how they are controlled via different Parameters.

    • Implementation details explains some parts of the code (for those who are interested in how things work under the hood). Note that this chapter is particularly sensitive to changes in the code and as such might get out of sync. If you do notice a discrepancy, please create an issue on GitHub. That is also the place to go if you don't find what you are looking for in this documentation.

    +Introduction · SpineOpt.jl

    Introduction

    SpineOpt.jl is an integrated energy systems optimization model, striving towards adaptability for a multitude of modelling purposes. The data-driven model structure allows for highly customizable energy system descriptions, as well as flexible temporal and stochastic structures, without the need to alter the model source code directly. The methodology is based on mixed-integer linear programming (MILP), and SpineOpt relies on JuMP.jl for interfacing with the different solvers.

    While, in principle, it is possible to run SpineOpt by itself, it has been designed to be used through the Spine toolbox, and take maximum advantage of the data and modelling workflow management tools therein. Thus, we highly recommend installing Spine Toolbox as well, as outlined in the Installation guide.

    Important remark on spine entities

    The documentation often refers to objects and relationships. These are actually both entities in a spine database (technically they are entities with one dimension and multiple dimensions respectively). The distinction here is to make a more clear distinction between the physical unit/node (entity with one dimension or object) and the flows between units and/or nodes (entities with multiple dimensions or relationships).

    In this documentation the two naming structures (object/relationships or entities) will be used interchangeably. Though, in upcoming versions of the documentation, the naming structure will gravitate more towards entities.

    How the documentation is structured

    Having a high-level overview of how this documentation is structured will help you know where to look for certain things.

    The documentation is implicitly structured in 3 parts (documenter.jl does not explicitly support parts).

    Part 1 aims to get new users started as quick as possible. It contains installation instructions (including trouble shooting), tutorials for basic usage and explains how to do some high-level things (e.g define an efficiency).

    • Getting Started contains guides for starting to use SpineOpt.jl. The Installation section explains different ways to install SpineOpt.jl on your computer. To ensure that the installation has been done correctly, the Recommended workflow section provides a guide to set up a minimal working example of SpineOpt.jl in Spine Toolbox. Some SpineOpt concepts will already be explained in this example but more information is provided in the Concept Reference chapter. Regardless, any issues during this example will most likely be due to the installation. If any problems are encountered, you can start with the Trouble shooting section.

• Tutorials provides guided examples for a set of basic use-cases, either as videos, written text, and/or example files. The SpineOpt.jl repository includes an examples folder with ready-made example models. Each example is its own sub-folder, where the input data is provided as .json or .sqlite files. This way, you can easily get a feel for how SpineOpt works with pre-made datasets, either through Spine Toolbox or directly from the Julia REPL.

    Warning

Although these examples are part of the unit tests (and should therefore be up to date), they rely on migration scripts for their updates. This means an example may lack a parameter that it does not use, without triggering an error. It is therefore not recommended to rely on these example files as a basis for building your own models.
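As a rough sketch of running one of the example models from the Julia REPL (the file paths below are placeholders — point them at an actual sub-folder of the examples folder and at your own output database):

```julia
using SpineOpt

# Placeholder paths: substitute an actual example database from the
# `examples` folder and wherever you want the results written.
input_url = "sqlite:///path/to/examples/some_example/some_example.sqlite"
output_url = "sqlite:///path/to/output_db.sqlite"

run_spineopt(input_url, output_url)
```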

    • How to provides explanations on how to do specific high-level things that might involve multiple elements (e.g. how to print the model).

    Part 2 explains the core principles, features and design decisions of SpineOpt without getting lost in the details.

• Database structure lists and explains all the important data and model structure related concepts to understand in SpineOpt.jl. From a mathematical modelling point of view, see the Mathematical Formulation chapter instead. The Basics of the model structure section briefly explains the general purpose of the most important concepts, like Object Classes and Relationship Classes.

• Standard model framework covers the temporal and stochastic framework present in every SpineOpt model. The Temporal Framework section explains how defining time works in SpineOpt.jl, and how it can be used for different purposes. The Stochastic Framework section details how different stochastic structures can be defined, how they interact with each other, and how this impacts writing Constraints in SpineOpt.jl.

    • Standard model features covers the features of the SpineOpt model. The Investment Optimization section explains how to include investment variables in your models. The Unit commitment section explains how clustered unit-commitment is defined, while the Ramping and Reserves sections explain how to enable these operational details in your model. The User Constraints section details how to include generic data-driven custom constraints. The remaining sections, namely PTDF-Based Powerflow, Pressure driven gas transfer, Lossless nodal DC power flows, explain various use-case specific modelling approaches supported by SpineOpt.jl.

• Algorithms are alternative options to the standard model. The Decomposition section explains the Benders decomposition implementation included in SpineOpt.jl, as well as how to use it. There are also sections on Modelling to generate alternatives and multi-stage optimisation.

    Part 3 contains all the detailed information you need when you are looking for something specific (e.g. a parameter name or the formulation of a constraint).

    • SpineOpt Template contains a list of all the entities and parameters as you see them in the Spine Toolbox db editor. The Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections contain detailed explanations of each and every aspect of SpineOpt.jl, organized into the respective sections for clarity.

• Mathematical Formulation provides the mathematical view of SpineOpt.jl, as some of the methodology-related aspects of the model are more easily understood as math than Julia code. The Variables section explains the purpose of each variable in the model, as well as how the variables are related to the different Object Classes and Relationship Classes. The Objective section explains the default objective function used in SpineOpt.jl. The Constraints section contains the mathematical formulation of each constraint, as well as explanations of their purpose and how they are controlled via different Parameters.

• Implementation details explains some parts of the code (for those who are interested in how things work under the hood). Note that this chapter is particularly sensitive to changes in the code and as such might get out of sync. If you do notice a discrepancy, please create an issue on GitHub. That is also the place to go if you don't find what you are looking for in this documentation.

    run_spineopt(f, url_in, url_out; <keyword arguments>)

    Same as run_spineopt(url_in, url_out; kwargs...) but call function f with the SpineOpt model as argument right after its creation (but before building and solving it).

    This is intended to be called using do block syntax.

    run_spineopt(url_in, url_out) do m
         # Do something with m after its creation
end  # Building and solving begins after quitting this block
    source
    SpineOpt.prepare_spineoptFunction
    prepare_spineopt(url_in; <keyword arguments>)

    A SpineOpt model from the contents of url_in - ready to be passed to run_spineopt!. The argument url_in must be either a String pointing to a valid Spine database, or a Dict (e.g. manually created or parsed from a json file).

    Arguments

    • log_level
    • upgrade
    • filters
    • templates
    • mip_solver
    • lp_solver
    • use_direct_model

    See run_spineopt for the description of the keyword arguments.

    source
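Based on the docstrings here, the two-step workflow of preparing a model and then running it might be sketched as follows (database URLs are placeholders):

```julia
using SpineOpt

# Prepare a SpineOpt model from the input database (placeholder URL)...
m = prepare_spineopt("sqlite:///path-to-input-db"; log_level=3)

# ...then build and solve it, writing reports to the output database.
run_spineopt!(m, "sqlite:///path-to-output-db")
```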
    SpineOpt.run_spineopt!Function
    run_spineopt!(m, url_out; <keyword arguments>)

    Build SpineOpt on the given m and solve it; write report(s) to url_out. A new Spine database is created at url_out if one doesn't exist.

    Arguments

    • log_level
    • optimize
    • update_names
    • alternative
    • write_as_roll
    • log_file_path
    • resume_file_path

    See run_spineopt for the description of the keyword arguments.

    source
    SpineOpt.create_modelFunction
    create_model(mip_solver, lp_solver, use_direct_model)

    A JuMP.Model extended to be used with SpineOpt. mip_solver and lp_solver are 'optimizer factories' to be passed to JuMP.Model or JuMP.direct_model; use_direct_model is a Bool indicating whether JuMP.Model or JuMP.direct_model should be used.

    source
    SpineOpt.build_model!Function
    build_model!(m; log_level)

    Build given SpineOpt model:

    • create temporal and stochastic structures
    • add variables
    • add expressions
    • add constraints
    • set objective
    • initialize outputs

    Arguments

    • log_level::Int: an integer to control the log level.
    source
    SpineOpt.solve_model!Function
    solve_model!(m; <keyword arguments>)

    Solve given SpineOpt model and save outputs.

    Arguments

    • log_level::Int=3: an integer to control the log level.
    • update_names::Bool=false: whether or not to update variable and constraint names after the model rolls (expensive).
    • write_as_roll::Int=0: if greater than 0 and the run has a rolling horizon, then write results every that many windows.
    • resume_file_path::String=nothing: only relevant in rolling horizon optimisations with write_as_roll greater or equal than one. If the file at given path contains resume data from a previous run, start the run from that point. Also, save resume data to that same file as the model rolls and results are written to the output database.
    • calculate_duals::Bool=false: whether or not to calculate duals after the model solve.
    • output_suffix::NamedTuple=(;): to add to the outputs.
    • log_prefix::String="": to prepend to log messages.
    source
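Taken together, the functions above allow splitting a run into explicit steps, which can be handy when you want to inspect or modify the model in between. A sketch, with placeholder URLs:

```julia
using SpineOpt

m = prepare_spineopt("sqlite:///path-to-input-db")
build_model!(m; log_level=3)   # structures, variables, constraints, objective
solve_model!(m; log_level=3, calculate_duals=true)
write_report(m, "sqlite:///path-to-output-db")
```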
    SpineOpt.add_event_handler!Function
    add_event_handler!(fn, m, event)

Add an event handler for given model. event must be a Symbol corresponding to an event. fn must be a function callable with the arguments corresponding to that event. Below is a table of events, arguments, and when they fire.

event | arguments | when it fires
:model_built | m | Right after model m is built.
:model_about_to_solve | m | Right before model m is solved.
:model_solved | m | Right after model m is solved.
:window_about_to_solve | (m, k) | Right before window k for model m is solved.
:window_solved | (m, k) | Right after window k for model m is solved.
    Example

    run_spineopt("sqlite:///path-to-input-db", "sqlite:///path-to-output-db") do m
         add_event_handler!(println, m, :model_built)  # Print the model right after it's built
end
    source
    SpineOpt.generate_temporal_structure!Function
    generate_temporal_structure!(m)

    Create the temporal structure for the given SpineOpt model. After this, you can call the following functions to query the generated structure:

    • time_slice
    • t_before_t
    • t_in_t
    • t_in_t_excl
    • t_overlaps_t
    • to_time_slice
    • current_window
    source
    SpineOpt.roll_temporal_structure!Function
    roll_temporal_structure!(m[, window_number=1]; rev=false)

    Roll the temporal structure of given SpineOpt model forward a period of time equal to the value of the roll_forward parameter. If roll_forward is an array, then window_number can be given either as an Integer or a UnitRange indicating the position or successive positions in that array.

    If rev is true, then the structure is rolled backwards instead of forward.

    source
    SpineOpt.rewind_temporal_structure!Function
    rewind_temporal_structure!(m)

    Rewind the temporal structure of given SpineOpt model back to the first window.

    source
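A minimal sketch of how these functions fit together (assuming m is a SpineOpt model whose roll_forward parameter is set):

```julia
generate_temporal_structure!(m)   # create the temporal structure
w = current_window(m)             # TimeSlice covering the first window
roll_temporal_structure!(m)       # roll forward by one roll_forward period
rewind_temporal_structure!(m)     # and back to the first window
```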
    SpineOpt.time_sliceFunction
    time_slice(m; temporal_block=anything, t=anything)

    An Array of TimeSlices in model m.

    Arguments

    • temporal_block::Union{Object,Vector{Object}}: only return TimeSlices in these blocks.
    • t::Union{TimeSlice,Vector{TimeSlice}}: only return TimeSlices that are also in this collection.
    source
    SpineOpt.t_before_tFunction
    t_before_t(m; t_before=anything, t_after=anything)

    An Array where each element is a Tuple of two consecutive TimeSlices in model m, i.e., the second starting when the first ends.

    Arguments

    • t_before: if given, return an Array of TimeSlices that start when t_before ends.
    • t_after: if given, return an Array of TimeSlices that end when t_after starts.
    source
    SpineOpt.t_in_tFunction
    t_in_t(m; t_short=anything, t_long=anything)

    An Array where each element is a Tuple of two TimeSlices in model m, the second containing the first.

    Keyword arguments

    • t_short: if given, return an Array of TimeSlices that contain t_short.
    • t_long: if given, return an Array of TimeSlices that are contained in t_long.
    source
    SpineOpt.t_in_t_exclFunction
    t_in_t_excl(m; t_short=anything, t_long=anything)

Same as t_in_t but exclude tuples of the same TimeSlice.

    Keyword arguments

    • t_short: if given, return an Array of TimeSlices that contain t_short (other than t_short itself).
    • t_long: if given, return an Array of TimeSlices that are contained in t_long (other than t_long itself).
    source
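To illustrate the query functions above (tb here is an assumed temporal_block Object, and m a model with a generated temporal structure):

```julia
ts = time_slice(m; temporal_block=tb)   # all TimeSlices in block tb
t1 = first(ts)
t_before_t(m; t_before=t1)              # TimeSlices starting when t1 ends
t_in_t(m; t_short=t1)                   # TimeSlices containing t1
t_in_t_excl(m; t_short=t1)              # same, but excluding t1 itself
```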
    SpineOpt.t_overlaps_tFunction
    t_overlaps_t(m; t)

    An Array of TimeSlices in model m that overlap the given t, where t must be in m.

    source
    SpineOpt.to_time_sliceFunction
    to_time_slice(m; t)

    An Array of TimeSlices in model m overlapping the given TimeSlice (where t may not be in m).

    source
    SpineOpt.current_windowFunction
    current_window(m)

    A TimeSlice corresponding to the current window of given model.

    source
    SpineOpt.generate_stochastic_structure!Function
generate_stochastic_structure!(m::Model)

    Generate the stochastic structure for given SpineOpt model.

    The stochastic structure is a directed acyclic graph (DAG) where the vertices are the stochastic_scenario objects, and the edges are given by the parent_stochastic_scenario__child_stochastic_scenario relationships.

    After this, you can call active_stochastic_paths to slice the generated structure.

    source
    SpineOpt.active_stochastic_pathsFunction
    active_stochastic_paths(
         m; stochastic_structure::Union{Object,Vector{Object}}, t::Union{TimeSlice,Vector{TimeSlice}}
)

    An Array of stochastic paths, where each path is itself an Array of stochastic_scenario Objects.

    The paths are obtained as follows.

    1. Start with the stochastic DAG associated to model m.
    2. Remove all the scenarios that are not in the given stochastic_structure.
    3. Remove scenarios that don't overlap the given t.
    4. Return all the paths from root to leaf in the remaining sub-DAG.
    source
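A usage sketch of the above (ss is an assumed stochastic_structure Object):

```julia
paths = active_stochastic_paths(m; stochastic_structure=ss, t=current_window(m))
for path in paths
    # each path is an Array of stochastic_scenario Objects, from root to leaf
    println(path)
end
```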
    SpineOpt.write_model_fileFunction
    write_model_file(m; file_name="model")

    Write model file for given model.

    source
    SpineOpt.write_reportFunction
    write_report(m, url_out; <keyword arguments>)

    Write report(s) from given SpineOpt model to url_out. A new Spine database is created at url_out if one doesn't exist.

    Arguments

    • alternative::String="": if non empty, write results to the given alternative in the output DB.

    • log_level::Int=3: an integer to control the log level.

    source
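For instance, writing the results of a solved model to a named alternative could look like this (the URL and alternative name are placeholders):

```julia
write_report(m, "sqlite:///path-to-output-db"; alternative="my_run", log_level=3)
```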
    SpineOpt.write_report_from_intermediate_resultsFunction
    write_report_from_intermediate_results(intermediate_results_folder, default_url; <keyword arguments>)

    Collect results generated on a previous, unsuccessful SpineOpt run from intermediate_results_folder, and write the corresponding report(s) to url_out. A new Spine database is created at url_out if one doesn't exist.

    Arguments

    • alternative::String="": if non empty, write results to the given alternative in the output DB.

    • log_level::Int=3: an integer to control the log level.

    source
    SpineOpt.master_modelFunction
    master_model(m)

    The Benders master model for given model.

    source
    SpineOpt.stage_modelFunction
    stage_model(m, stage_name)

    A stage model associated to given model.

    source



    Sets

    ind(*parameter*)

    Tuple of all objects, for which the parameter is defined

    t_before_t(t_after=t')

    Set of timeslices that are directly before timeslice t'.

    t_before_t(t_before=t')

    Set of timeslices that are directly after timeslice t'.

    t_in_t(t_short=t')

    Set of timeslices that contain timeslice t'

    t_in_t(t_long=t')

    Set of timeslices that are contained in timeslice t'

    t_overlaps_t(t')

    Set of timeslices that overlap with timeslice t'

    full_stochastic_paths

    Set of all possible scenario branches

    active_stochastic_paths(s)

    Set of all active scenario branches, based on active scenarios s


    Variables

    binary_gas_connection_flow

    Math symbol: $v^{binary\_gas\_connection\_flow}$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: binary_gas_connection_flow_indices

Binary variable indexed over node $n$, connection $conn$, direction $to\_node$, stochastic scenario $s$, and timestep $t$, indicating whether the gas flow of a pressure-driven gas transfer runs in the indicated direction.

    connection_flow

    Math symbol: $v^{connection\_flow }$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: connection_flow_indices

    Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

    connection_intact_flow

    Math symbol: $v^{connection\_intact\_flow}$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: connection_intact_flow_indices

    ???

    connections_decommissioned

    Math symbol: $v^{connections\_decommissioned}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

Number of decommissioned connections $conn$ for the stochastic scenario $s$ at timestep $t$

    connections_invested

    Math symbol: $v^{connections\_invested}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

Number of connections $conn$ invested in at timestep $t$ for the stochastic scenario $s$

    connections_invested_available

    Math symbol: $v^{connections\_invested\_available}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

Number of invested connections $conn$ that are still available in the stochastic scenario $s$ at timestep $t$

    mp_objective_lowerbound_indices

    Math symbol: $v^{mp\_objective\_lowerbound\_indices}$

    Indices: (t=t)

    Indices function: mp_objective_lowerbound_indices

Updated lower bound for the master problem of the Benders decomposition

    node_injection

    Math symbol: $v^{node\_injection}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_injection_indices

    Commodity injections at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_pressure

    Math symbol: $v^{node\_pressure}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_pressure_indices

Pressure at node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_pressure

    node_slack_neg

    Math symbol: $v^{node\_slack\_neg}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_slack_indices

Negative slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_slack_pos

    Math symbol: $v^{node\_slack\_pos}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_slack_indices

Positive slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_state

    Math symbol: $v^{node\_state}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_state_indices

    Storage state at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_voltage_angle

    Math symbol: $v^{node\_voltage\_angle}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_voltage_angle_indices

    Voltage angle at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_voltage_angle

    nonspin_units_shut_down

    Math symbol: $v^{nonspin\_units\_shut\_down}$

    Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

    Indices function: nonspin_units_shut_down_indices

    Number of units $u$ held available for non-spinning downward reserve provision via shutdown to node $n$ for the stochastic scenario $s$ at timestep $t$

    nonspin_units_started_up

    Math symbol: $v^{nonspin\_units\_started\_up}$

    Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

    Indices function: nonspin_units_started_up_indices

    Number of units $u$ held available for non-spinning upward reserve provision via startup to node $n$ for the stochastic scenario $s$ at timestep $t$

    storages_decommissioned

    Math symbol: $v^{storages\_decommissioned}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of decomissioned storage nodes $n$ for the stochastic scenario $s$ at timestep $t$

    storages_invested

    Math symbol: $v^{storages\_invested}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of storage nodes $n$ invested in at timestep $t$ for the stochastic scenario $s$

    storages_invested_available

    Math symbol: $v^{storages\_invested\_available}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of invested storage nodes $n$ that are available still the stochastic scenario $s$ at timestep $t$

    unit_flow

    Math symbol: $v^{unit\_flow}$

    Indices: (unit=u, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: unit_flow_indices

    Commodity flow associated with node $n$ over the unit $u$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

    unit_flow_op

    Math symbol: $v^{unit\_flow\_op}$

    Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

    Indices function: unit_flow_op_indices

    Contribution of the unit flow assocaited with operating point $i$

    unit_flow_op_active

    Math symbol: $v^{unit\_flow\_op\_active}$

    Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

    Indices function: unit_flow_op_indices

    Control the activation of operating point $i$ of units

    units_invested

    Math symbol: $v^{units\_invested}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of units $u$ for the stochastic scenario $s$ invested in at timestep $t$

    units_invested_available

    Math symbol: $v^{units\_invested\_available}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of invested units $u$ that are available still the stochastic scenario $s$ at timestep $t$

    units_mothballed

    Math symbol: $v^{units\_mothballed}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of units $u$ for the stochastic scenariocenario $s$ mothballed at timestep $t$

    units_on

    Math symbol: $v^{units\_on}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of online units $u$ for the stochastic scenario $s$ at timestep $t$

    units_shut_down

    Math symbol: $v^{units\_shut\_down}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of units $u$ for the stochastic scenario $s$ that switched to offline status at timestep $t$

    units_started_up

    Math symbol: $v^{units\_started\_up}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of units $u$ for the stochastic scenario $s$ that switched to online status at timestep $t$

    +Variables · SpineOpt.jl

    Variables

    binary_gas_connection_flow

    Math symbol: $v^{binary\_gas\_connection\_flow}$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: binary_gas_connection_flow_indices

    Binary variable, indexed by node $n$, connection $conn$, direction $to\_node$, stochastic scenario $s$ and timestep $t$, indicating whether the gas flow of a pressure-driven gas transfer is in the indicated direction.

    connection_flow

    Math symbol: $v^{connection\_flow}$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: connection_flow_indices

    Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

    connection_intact_flow

    Math symbol: $v^{connection\_intact\_flow}$

    Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: connection_intact_flow_indices

    Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$, assuming that all candidate connections are in place (the intact network).

    connections_decommissioned

    Math symbol: $v^{connections\_decommissioned}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

    Number of decommissioned connections $conn$ for the stochastic scenario $s$ at timestep $t$

    connections_invested

    Math symbol: $v^{connections\_invested}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

    Number of connections $conn$ invested in at timestep $t$ for the stochastic scenario $s$

    connections_invested_available

    Math symbol: $v^{connections\_invested\_available}$

    Indices: (connection=conn, stochastic_scenario=s, t=t)

    Indices function: connections_invested_available_indices

    Number of invested connections $conn$ still available in the stochastic scenario $s$ at timestep $t$

    mp_objective_lowerbound_indices

    Math symbol: $v^{mp\_objective\_lowerbound\_indices}$

    Indices: (t=t)

    Indices function: mp_objective_lowerbound_indices

    Lower bound on the objective of the master problem in Benders decomposition

    node_injection

    Math symbol: $v^{node\_injection}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_injection_indices

    Commodity injections at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_pressure

    Math symbol: $v^{node\_pressure}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_pressure_indices

    Pressure at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_pressure

    node_slack_neg

    Math symbol: $v^{node\_slack\_neg}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_slack_indices

    Negative slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_slack_pos

    Math symbol: $v^{node\_slack\_pos}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_slack_indices

    Positive slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_state

    Math symbol: $v^{node\_state}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_state_indices

    Storage state at node $n$ for the stochastic scenario $s$ at timestep $t$

    node_voltage_angle

    Math symbol: $v^{node\_voltage\_angle}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: node_voltage_angle_indices

    Voltage angle at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_voltage_angle

    nonspin_units_shut_down

    Math symbol: $v^{nonspin\_units\_shut\_down}$

    Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

    Indices function: nonspin_units_shut_down_indices

    Number of units $u$ held available for non-spinning downward reserve provision via shutdown to node $n$ for the stochastic scenario $s$ at timestep $t$

    nonspin_units_started_up

    Math symbol: $v^{nonspin\_units\_started\_up}$

    Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

    Indices function: nonspin_units_started_up_indices

    Number of units $u$ held available for non-spinning upward reserve provision via startup to node $n$ for the stochastic scenario $s$ at timestep $t$

    storages_decommissioned

    Math symbol: $v^{storages\_decommissioned}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of decommissioned storage nodes $n$ for the stochastic scenario $s$ at timestep $t$

    storages_invested

    Math symbol: $v^{storages\_invested}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of storage nodes $n$ invested in at timestep $t$ for the stochastic scenario $s$

    storages_invested_available

    Math symbol: $v^{storages\_invested\_available}$

    Indices: (node=n, stochastic_scenario=s, t=t)

    Indices function: storages_invested_available_indices

    Number of invested storage nodes $n$ still available in the stochastic scenario $s$ at timestep $t$

    unit_flow

    Math symbol: $v^{unit\_flow}$

    Indices: (unit=u, node=n, direction=d, stochastic_scenario=s, t=t)

    Indices function: unit_flow_indices

    Commodity flow associated with node $n$ over the unit $u$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

    unit_flow_op

    Math symbol: $v^{unit\_flow\_op}$

    Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

    Indices function: unit_flow_op_indices

    Contribution of the unit flow associated with operating point $i$

    unit_flow_op_active

    Math symbol: $v^{unit\_flow\_op\_active}$

    Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

    Indices function: unit_flow_op_indices

    Binary variable that controls the activation of operating point $i$ of a unit

    units_invested

    Math symbol: $v^{units\_invested}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of units $u$ for the stochastic scenario $s$ invested in at timestep $t$

    units_invested_available

    Math symbol: $v^{units\_invested\_available}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of invested units $u$ still available in the stochastic scenario $s$ at timestep $t$

    units_mothballed

    Math symbol: $v^{units\_mothballed}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_invested_available_indices

    Number of units $u$ for the stochastic scenario $s$ mothballed at timestep $t$

    units_on

    Math symbol: $v^{units\_on}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of online units $u$ for the stochastic scenario $s$ at timestep $t$

    units_shut_down

    Math symbol: $v^{units\_shut\_down}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of units $u$ for the stochastic scenario $s$ that switched to offline status at timestep $t$

    units_started_up

    Math symbol: $v^{units\_started\_up}$

    Indices: (unit=u, stochastic_scenario=s, t=t)

    Indices function: units_on_indices

    Number of units $u$ for the stochastic scenario $s$ that switched to online status at timestep $t$

    diff --git a/dev/tutorial/capacity_planning/index.html b/dev/tutorial/capacity_planning/index.html index bb937a964b..b8c2543930 100644 --- a/dev/tutorial/capacity_planning/index.html +++ b/dev/tutorial/capacity_planning/index.html @@ -1,2 +1,2 @@ -Capacity planning · SpineOpt.jl

    Capacity Planning Tutorial

    This tutorial provides a step-by-step guide to including investment decisions for capacity planning in a simple energy system, using Spine Toolbox with SpineOpt. More information can be found in the documentation on investment optimization. To get the most out of this tutorial, we suggest first completing the Simple System tutorial.

    Overview

    In this tutorial we will:

    • start from the simple system tutorial,
    • change the temporal structure from days to months,
    • add a temporal block for investments,
    • and add investment related parameters for the units.

    We end the tutorial with a guide on multi-year investments.

    Spine Toolbox

    Create a new workflow in Spine Toolbox, as you did for the simple system tutorial. In the input database, we import the simple system tutorial (File > import).

    Temporal structure

    For the investment optimization, let us consider a more appropriate time horizon, e.g. 2030-2035. We set the model_start and model_end parameters accordingly to 2030-01-01 and 2036-01-01.

    We'll consider seasonal operation (to reduce the number of entries later on), so we'll set the resolution of the existing temporal block to 4M. For clarity we also change its name from flat to operation.

    For the investment period we'll have to add another temporal block called investment. We connect it to the model entity with the model__temporal_block and model__default_investment_temporal_block entities, and set its resolution to 5Y.
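The temporal set-up so far can be summarised as plain data (an illustrative sketch only — in practice these values are entered in the Spine Toolbox database, and the names mirror the entities used above):

```python
# Illustrative summary of the temporal structure described above.
# These values are normally entered in the Spine Toolbox database.
model = {
    "model_start": "2030-01-01",
    "model_end": "2036-01-01",  # exclusive end, so the horizon is 2030-2035
}
temporal_blocks = {
    "operation": {"resolution": "4M"},   # seasonal operational resolution
    "investment": {"resolution": "5Y"},  # one investment decision per 5 years
}
# The investment block is linked to the model via model__temporal_block
# and model__default_investment_temporal_block.
assert model["model_start"] < model["model_end"]
```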

    Info

    Instead of a default connection to the model entity, we can also make the investment temporal block specific to a part of the energy system, e.g. with the unit__investment_temporal_block entity.

    In principle we also need to define the default investment stochastic structure. To that end, we can simply connect the existing stochastic structure to the model entity using the model__default_investment_stochastic_structure entity.

    image

    Unit investment parameters

    With the infrastructure for investments in place, we can now prepare the units for the investment optimization. For both power plants:

    • Set the number_of_units parameter to zero so that the unit is unavailable unless invested in.
    • Set the initial_units_invested_available to zero as well for a similar reason.
    • Set the candidate_units parameter for the unit to 1 to specify that a maximum of 1 new unit of this type may be invested in by the model.
    • Set the unit's investment cost by setting the unit_investment_cost parameter to 1000.0. Note that you should normally use the discounted cost: in this example, the costs in 2030 and in 2035 should be discounted to the discount year, i.e. you would define a time-varying cost to reflect the economic representation.
    • Set the unit_investment_tech_lifetime of the unit to, say, 10 years (duration 10Y): the minimum amount of time a new unit must remain in existence after being invested in.
    • Specify the unit_investment_econ_lifetime to automatically adjust the investment costs. Let's set it equal to the technical lifetime here.
    • Specify the unit_investment_variable_type as unit_investment_variable_type_integer to make this a discrete unit investment decision. By default it is continuous, in which case we would see an investment of 0.25 units for power plant b in the solution. That also shows that the unit size is set by the unit_capacity parameter of the unit__to_node entity: for power plant b the unit capacity is 200, and multiplied by the investment of 0.25 units we obtain 50, which equals the flow from power plant b.
    • Specify the units_on_cost to apply a cost to units that are on. Sometimes this is necessary to ensure that the units_on variables are created, which are needed for the proper functioning of the constraints. Even a value of 0.0 is sufficient to trigger these variables, and that is what we do here.
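The arithmetic behind the continuous-versus-integer point above can be checked directly (a sketch using the tutorial's numbers; this is not SpineOpt code):

```python
import math

# Continuous investment variable: a fraction of a unit can be built.
unit_capacity = 200.0   # unit_capacity on unit__to_node for power plant b
units_invested = 0.25   # continuous solution reported in the tutorial
flow = units_invested * unit_capacity
assert flow == 50.0     # equals the flow from power plant b

# Integer investment variable: only whole units can be built
# (0 or 1 here, since candidate_units = 1).
integer_units = math.ceil(units_invested)
assert integer_units == 1
```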

    image

    Info

    Investments in storage and connections are very similar. Note that storage is implemented through nodes.

    Examine output

    To see the investments in the results, we'll have to add some more output entities to the report entity, i.e. units_invested and units_on. Commit the changes to the input database and run the SpineOpt tool. In the output you should now also find the investments; the value should be equal to 1.0 unit.

    Multi-year investments

    Multi-year investments refer to making investment decisions at different points in time, such that a pathway of investments can be modeled. This is particularly useful when long-term scenarios are modeled but modeling each year is not practical, or when, as in a business case, investment decisions are made in different years, which has an impact on the cash flow.

    In this tutorial, we consider two investment points, at the start of the modeling horizon (2030), and 5 years later (2035). Operation is assumed to be every 4 months, but only in 2030 and 2035. In other words, we only model 2030 and 2035 as milestone years for the pathway 2030 - 2035.

    To make this work, some adjustments are needed to:

    • the temporal structure,
    • the demand,
    • and the units.

    For the temporal structure, we need a separate operation temporal block for 2030 and 2035 (each with a resolution of 4M). To obtain a gap between the years in the model, we set the block_start and block_end to the start and end of the respective years. Note that the temporal block for 2035 already starts in the last season of the previous year, so that the boundary conditions for that block can be set.
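The two operation blocks described above can be sketched as follows (the block names are illustrative, and exclusive block ends are assumed; note the early start of the 2035 block for the boundary conditions):

```python
from datetime import date

# Two discontinuous operation blocks; the years 2031-2034 are not modelled.
# The 2035 block starts one season early (2034-09-01) so the boundary
# conditions for that block can be set.
blocks = {
    "operation_2030": {"block_start": date(2030, 1, 1), "block_end": date(2031, 1, 1)},
    "operation_2035": {"block_start": date(2034, 9, 1), "block_end": date(2036, 1, 1)},
}
gap = blocks["operation_2035"]["block_start"] - blocks["operation_2030"]["block_end"]
assert gap.days > 0  # there is a genuine gap between the two blocks
```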

    image

    Warning

    It is important to delete the temporal blocks that are not used, and only leave the used ones. Otherwise, the temporal structure may be wrong.

    Info

    Discontinuous (or non-consecutive) time slices in SpineOpt need to be initialized in order for SpineOpt to correctly generate constraints. For the first time slice that is done through the initialization parameters as has been done before in the tutorials. For the other time slices, an additional preceding time slice is needed. The boundary conditions for, e.g., initial storage level or online status of units can be set in that preceding time slice.

    Note that we do not yet support linking the boundary conditions with the previous operation temporal block. This additional definition means that we will also have results for it, which is redundant and should be ignored when post-processing.

    The demand data is seasonal (4M). We assume that the demand increases over the years: let's take a demand of 100 for all seasons in 2030 and a demand of 400 for all seasons in 2035. In the input database this means we have to change the constant demand value to a time series with variable resolution, and then enter the values for each season. For the initial conditions of the second time slice, we add a 0 at 2034-09-01.

    image

    We will allow investments for power_plant_a in both 2030 and 2035, and for power_plant_b only in 2035. This is realised through the definition of candidate_units as a time series with variable resolution.

    • power_plant_a: [2030-01-01: 1, 2035-01-01: 2]. Note that this means that in 2030 one unit can be invested in, and in 2035 another one (not two) can be invested in. In other words, this parameter includes the previously available units.
    • power_plant_b: [2030-01-01: 0, 2035-01-01: 1].
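Since candidate_units includes the previously available units, the number of new investments allowed for power_plant_a in 2035 follows by subtraction (a sketch, not SpineOpt code):

```python
# candidate_units is cumulative: it caps the total number of invested
# units available, not the additions per investment period.
candidate_units = {2030: 1, 2035: 2}  # power_plant_a
invested_by_2030 = 1                  # one unit already invested in 2030
new_in_2035 = candidate_units[2035] - invested_by_2030
assert new_in_2035 == 1  # only one additional unit can be invested in 2035
```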

    image image

    We can check the results for power_plant_a first. The below pictures show that in 2030, there is 1 investment, and in 2035, there is another investment. In 2035, there are 2 units on.

    Note that we see a drop between the two periods for operation variables (units_on in this case); this is the redundant result mentioned above and should be ignored.

    image image

    We also get 1 investment for power_plant_b in 2035.

    image image

    Debugging

    For debugging purposes, consider an overview of all the parameters in this tutorial below.

    image

    diff --git a/dev/tutorial/multi-year_investment/index.html b/dev/tutorial/multi-year_investment/index.html index 1bb22d9ed0..ec47e34a82 100644 --- a/dev/tutorial/multi-year_investment/index.html +++ b/dev/tutorial/multi-year_investment/index.html @@ -1,2 +1,2 @@ -Multi-year investments using economic parameters · SpineOpt.jl

    Multi-year Investments Using Pre-defined Internal Parameters Tutorial

    The basics of how to set up a capacity planning model are covered in the Capacity planning tutorial, and multi-year investments in Multi-year investments. With that information, you should already be able to model multi-year investments with your own parameters. However, the correct representation of costs across years can be tricky. To make this more user-friendly, SpineOpt incorporates some pre-defined economic parameters internally, and the goal of this tutorial is to walk you through the set-up for using these parameters.

    Info

    The details of the formulation and economic parameters are given in the concept references.

    Overview

    In this tutorial, we will

    • simplify the simple system tutorial by only using one power_plant,
    • show the necessary parameters to activate and use pre-defined internal parameters,
    • show you how to use these economic parameters,
    • show you how to use milestone years.

    Set-up

    To avoid repetition, we only consider one unit instead of the two units from the simple system tutorial. The easiest way to do this is to import the simple system (File > import) and remove one of the two power plants. To remove a power plant, go to the graph view and use ctrl+click on each of the relevant entities connected to power_plant_b (except for the nodes, as we still need those for power_plant_a). Then right-click and select 'Remove'. A confirmation box will show an overview of all the entities that you are about to remove.

    Since we are working with investments, we are going to make a distinction between investments and operation in the time blocks. We retain the original time block but adjust the resolution to 4 months ('4M'). Additionally we add an investment time block with a resolution of 5 years ('5Y') between 2000 and 2006. We have to adjust the time horizon of the model entity accordingly.

    Once we have our setup, we can take a look at the economic representation in SpineOpt. Below is a list of parameters you would need:

    • use_economic_represention: if set to true, it means the model will use its internally-calculated parameters for discounting investment and operation costs. The default value is false.
    • use_milestone_years: this parameter is used to discount operation costs. If set to false (default), we use continuous operational temporal blocks, and the operation cost is discounted every year. Otherwise, it is discounted using the investment temporal block.
    • discount_rate: the rate you would like to discount your costs with.
    • discount_year: the year you would like to discount your costs to.
    • unit_investment_tech_lifetime: using units as an example, this is the technical lifetime of the unit.
    • unit_investment_econ_lifetime: using units as an example, this is the economic lifetime of the unit which is used to calculate the economic parameters.
    • [optional] unit_discount_rate_technology_specific: using units as an example, this is used if you would like to have a specific discount rate different from discount_rate.
    • [optional] unit_lead_time: if not specified, the default lead time is 0.
    • unit_investment_cost: using units as an example, this is the investment cost for the investment year. If use_economic_represention is set to false, this cost is not discounted at all; if set to true, SpineOpt discounts it to the discount_year using the discount_rate.
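As a sketch of the discounting just described (assuming a single payment in the investment year; SpineOpt's internal formulation additionally accounts for the economic lifetime and lead time):

$$C^{disc}_{y} = \frac{C^{inv}}{(1 + r)^{\,y - y_0}}$$

where $C^{inv}$ is the unit_investment_cost, $r$ the discount_rate, $y$ the investment year and $y_0$ the discount_year. For example, with $r = 0.05$, $y = 2000$ and $y_0 = 1990$, the discount factor is $1/1.05^{10} \approx 0.61$.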

    To be able to see the values of the economic parameters after a run, you have to add them to the report.

    image

    Not using economic parameters

    We start with the case if use_economic_represention is set to false, which means SpineOpt will not create and use its internally-calculated parameters for discounting investment and operation costs. A unit_investment_cost of 100 and a vom_cost of 25 are not discouted at all. See the set-up below.

    image

    Using economic parameters but not using milestone years

    Now we only change use_economic_represention to true while still keep use_milestone_years as false (default). This set-up indicates that we will use the internally-calculated parameters and continous operational temporal blocks. Now the unit_investment_cost and the vom_cost are discounted to 1990 using a discount_rate of 0.05.

    unit_discounted_duration is used to discount operation costs so it has the resolution of the operational temporal block. However, since we only discount per year, this parameter value is constant within a year.

    image

    The rest is for discounting investment costs with the resolution of the investment temporal block.

    image

    Using economic parameters and using milestone years

    Now we also change use_milestone_years to true. This indicates that we want operational temporal block to be discontinous and use the same milestone years as the investment temporal block. In this case, we need to change the definition of temporal blocks, see below picture.

    Info

    If you get confused why the temporal blocks are defined this way, I recommend going back to Multi-year investments for details.

    image

    The values for the parameter unit_discounted_duration are shown below. Note now in 2000, the value becomes 2.79. This parameter value acts as a weight taking into account the discount per year and the resolution of the milestone years. In order words, now the operation costs for the in-between years have also been included.

    image

Multi-year investments using economic parameters · SpineOpt.jl

    Multi-year Investments Using Pre-defined Internal Parameters Tutorial

The basics of setting up a capacity planning model are covered in the Capacity planning Tutorial, and multi-year investments in Multi-year investments. With that information, you should already be able to set up multi-year investments with your own parameters. However, representing costs correctly across years can be tricky. To make this more user-friendly, SpineOpt incorporates some pre-defined economic parameters internally, and the goal of this tutorial is to walk you through the set-up for using them.

    Info

    The details of the formulation and economic parameters are given in the concept references.

    Overview

    In this tutorial, we will

    • simplify the simple system tutorial by only using one power_plant,
    • show the necessary parameters to activate and use pre-defined internal parameters,
    • show you how to use these economic parameters,
    • show you how to use milestone years.

    Set-up

To avoid repetition, we only consider one unit instead of the two units from the simple system tutorial. The easiest way to do this is to import the simple system (File > Import) and to remove one of the two power plants. To remove a power plant, go to the graph view. Use ctrl+click on each of the relevant entities connected to power_plant_b (except for the nodes, as we still need those for power_plant_a). Then right click and select 'remove'. A confirmation box will appear with an overview of all the entities that you are about to remove.

Since we are working with investments, we distinguish between investment and operation in the time blocks. We retain the original time block but adjust its resolution to 4 months ('4M'). Additionally, we add an investment time block with a resolution of 5 years ('5Y') between 2000 and 2006. We have to adjust the time horizon of the model entity accordingly.

    Once we have our setup, we can take a look at the economic representation in SpineOpt. Below is a list of parameters you would need:

    • use_economic_represention: if set to true, it means the model will use its internally-calculated parameters for discounting investment and operation costs. The default value is false.
    • use_milestone_years: this parameter is used to discount operation costs. If set to false (default), we use continuous operational temporal blocks, and thus the operation cost is discounted every year. Otherwise, it is discounted using the investment temporal block.
    • discount_rate: the rate you would like to discount your costs with.
    • discount_year: the year you would like to discount your costs to.
    • unit_investment_tech_lifetime: using units as an example, this is the technical lifetime of the unit.
    • unit_investment_econ_lifetime: using units as an example, this is the economic lifetime of the unit, which is used to calculate the economic parameters.
    • [optional] unit_discount_rate_technology_specific: using units as an example, this is used if you would like to have a specific discount rate different from discount_rate.
    • [optional] unit_lead_time: if not specified, the default lead time is 0.
    • unit_investment_cost: using units as an example, this is the investment cost for the investment year. If you set use_economic_represention to false, this cost is not discounted at all. If you set it to true, SpineOpt discounts it to the discount_year using discount_rate.

    To be able to see the values of the economic parameters after a run, you have to add them to the report.

    image

    Not using economic parameters

We start with the case where use_economic_represention is set to false, meaning SpineOpt will not create or use its internally-calculated parameters for discounting investment and operation costs. A unit_investment_cost of 100 and a vom_cost of 25 are not discounted at all. See the set-up below.

    image

    Using economic parameters but not using milestone years

Now we only change use_economic_represention to true while keeping use_milestone_years as false (the default). This set-up means we use the internally-calculated parameters with continuous operational temporal blocks. The unit_investment_cost and the vom_cost are now discounted to 1990 using a discount_rate of 0.05.
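As a rough sanity check on these numbers, standard present-value discounting multiplies a cost incurred in year t by 1/(1 + r)^(t - discount_year). Below is a minimal Python sketch, assuming simple annual compounding with the tutorial's discount_rate of 0.05 and discount_year of 1990; SpineOpt's internal calculation may differ in detail, so treat it as illustration only:

```python
# Present-value discount factor: a value in `year` expressed in `discount_year` money.
# Illustrative only; assumes simple annual compounding.
def discount_factor(year, discount_year=1990, discount_rate=0.05):
    return 1.0 / (1.0 + discount_rate) ** (year - discount_year)

# A cost of 100 incurred in 2000, discounted back to 1990:
factor = discount_factor(2000)
print(round(factor, 3))        # 0.614
print(round(100 * factor, 1))  # 61.4
```

So a cost paid ten years after the discount year keeps only about 61% of its nominal value at a 5% discount rate.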

unit_discounted_duration is used to discount operation costs, so it has the resolution of the operational temporal block. However, since costs are only discounted per year, the parameter value is constant within a year.

    image

The remaining parameters are used to discount investment costs and have the resolution of the investment temporal block.

    image

    Using economic parameters and using milestone years

Now we also change use_milestone_years to true. This indicates that we want the operational temporal blocks to be discontinuous and to use the same milestone years as the investment temporal block. In this case, we need to change the definition of the temporal blocks, as shown in the picture below.

    Info

If you are unsure why the temporal blocks are defined this way, we recommend going back to Multi-year investments for details.

    image

The values for the parameter unit_discounted_duration are shown below. Note that in 2000 the value becomes 2.79. This value acts as a weight that accounts for both the yearly discounting and the resolution of the milestone years. In other words, the operation costs of the in-between years are now also included.
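The 2.79 value can be reproduced by hand under the assumption that the milestone-year weight sums the yearly discount factors of the years it represents. With discount_year 1990, discount_rate 0.05, and milestone year 2000 standing for 2000–2004 (the 5-year resolution):

```python
discount_rate = 0.05
discount_year = 1990

# Sum the per-year discount factors for the five years represented
# by milestone year 2000 (2000..2004), discounted back to 1990.
weight = sum(1.0 / (1.0 + discount_rate) ** (y - discount_year)
             for y in range(2000, 2005))
print(round(weight, 2))  # 2.79
```

This matches the 2.79 shown for 2000, supporting the reading of the parameter as a combined discount-and-duration weight.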

    image

Ramping constraints · SpineOpt.jl

    Ramping definition tutorial

This tutorial provides a step-by-step guide to including ramping constraints in a simple energy system, using Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding ramping constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

A ramping limit refers to the maximum rate at which a power unit can increase or decrease its output flow over time. These limits are typically put in place to prevent sudden, destabilizing shifts in a unit's output. However, they may also represent any other physical limitation of a unit related to changes in its output flow over time.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 3-hour time series instead of a unique value
    • The power_plant_a has the following parameters:
      • Ramp limit of 10% for both up and down
      • Minimum operating point of 10% of its total capacity
      • Startup capacity limit of 10% of its total capacity
      • Shutdown capacity limit of 10% of its total capacity

This tutorial provides a step-by-step guide to entering these parameters in SpineOpt and analyzing the resulting ramping behaviour.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Alt + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add ramping constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 3-hour time series.

    Editing demand value

    • Still in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter, which should have a value of 150 from the first Simple System run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below.
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

Notice that there are demand values only from 2000-01-01T00:00:00 to 2000-01-01T02:00:00. Therefore, we need to update the start and end of the model. But first, let's change the temporal block.

    Editing the temporal block

You may or may not have noticed that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h to make the results easier to follow.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [model] class, and select the simple from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter, which should have a value of 1D from the first Simple System run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Editing the model start and end

Since the default resolution of the Simple System was 1D, the start and end dates of the model also need to be changed.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, select the model_start parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • Repeat the procedure for the model_end parameter, but now with the value 2000-01-01T03:00:00. The final values should look like the image below.

    image

It's important to note that the model must finish in the third hour to account for all the periods of demand in the input data, which go until 2000-01-01T02:00:00.

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

The image above shows the electricity flow results for both power plants. As expected, power_plant_a (i.e., the cheapest unit) covers the demand in every hour, while power_plant_b (i.e., the more expensive unit) has zero production. This is the most economical dispatch since the problem has no extra constraints (so far!).

    Step 2 - Include the ramping limit

Let's consider input data where power_plant_a has a ramping limit of 10% in both directions (i.e., up and down), meaning that the change between two time steps can't be greater than 10MW (since plant 'a' has a unit capacity of 100MW). The ramping constraints also need the following parameters for their definition: minimum operating point, startup limit, and shutdown limit. For more details, please visit the mathematical formulation in the following link
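To build intuition for what the constraint enforces, here is a small hypothetical Python check (not SpineOpt code): it flags any flow profile whose step changes exceed the limits, with ramp_up/ramp_down given as fractions of capacity per time step, mirroring the ramp_up_limit/ramp_down_limit parameters:

```python
def ramp_feasible(flows, capacity, ramp_up=0.1, ramp_down=0.1):
    """True if every step change in `flows` respects the ramp limits
    (limits given as fractions of capacity per time step)."""
    for prev, curr in zip(flows, flows[1:]):
        if curr - prev > ramp_up * capacity:
            return False  # ramp-up violation
        if prev - curr > ramp_down * capacity:
            return False  # ramp-down violation
    return True

# With 100 MW capacity and 10% limits, at most 10 MW change per hour:
print(ramp_feasible([50, 40, 30], 100))  # True
print(ramp_feasible([50, 35, 30], 100))  # False (15 MW drop in one step)
```

SpineOpt applies this kind of limit inside the optimization itself; the sketch only illustrates the feasibility rule.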

    Adding the new parameters

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table:

      • Select the ramp_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping up limit for power_plant_a.

      • Select the ramp_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping down limit for power_plant_a.

      • Select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point for power_plant_a.

      • Select the start_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the startup capacity limit for power_plant_a.

      • Select the shut_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the shutdown capacity limit for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results with ramp limits

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

The image above shows the electricity flow results for both power plants. As expected, the output of power_plant_a (i.e., the cheapest unit) is limited by its ramp limits, so it can't follow the demand changes as before. For instance, the unit's power output is 45MW in the first hour, lower than the previous result of 50MW in the same hour. This is because the unit needs to gradually decrease its power output to reach 25MW in the last hour, and due to the imposed ramp-down limit of 10MW it cannot start from 50MW as before. Therefore, power_plant_b (i.e., the more expensive unit) must produce to cover the demand that plant 'a' can't serve due to its ramping limitations. As shown here, ramping limits may lead to higher costs in power systems compared to the previous case.
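The 45MW figure can be checked with simple backward arithmetic: to reach 25MW in the third hour while never dropping more than 10MW per hour, the first-hour output can be at most:

```python
capacity = 100
ramp_down_mw = 0.1 * capacity  # 10 MW allowed decrease per hour
final_output = 25              # MW required in the last (third) hour
steps_back = 2                 # hourly steps between the first and third hour

# Highest feasible first-hour output that can still ramp down to 25 MW:
max_first_hour = final_output + steps_back * ramp_down_mw
print(max_first_hour)  # 45.0
```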

But... there is something more here... Can you tell what?

    It is important to note that the optimal solution we have calculated assumes that the unit 'a' was already producing electricity before the model_start parameter. This is because we have not defined an initial condition for the flow of the unit. Therefore, the flow at the first hour is the most cost-effective solution under this assumption. However, what if we changed this assumption and assumed that the unit had not produced any flow before the model_start parameter? If you are curious to know the answer, join me in the next section.

Step 3 - Include an initial condition for the flow

    Adding the initial flow

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table, select the initial_unit_flow parameter and the Base alternative, and enter the value 0.0 as seen in the image below. This will set the initial flow for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits with initial conditions

    You know the drill! ;)

    Examining the results with ramp limits with initial conditions

Create the Pivot table with the latest results. It will look something like the image below.

    image

Here, we can see the impact of the initial condition; the unit can no longer change its flow by more than its ramp-up limit in the first hour. Therefore, the optimal solution under this assumption changes compared to the previous section.
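A rough sketch of why the first hours change, assuming the flow must now ramp up from the zero initial_unit_flow at no more than 10MW per hour (the actual SpineOpt constraints also involve the start-up limit and the unit's online status, so this is an approximation):

```python
capacity = 100
ramp_up_mw = 0.1 * capacity  # 10 MW allowed increase per hour
flow = 0.0                   # the initial_unit_flow we just set

# Per-hour upper bounds on the unit's output over the three modelled hours:
upper_bounds = []
for _ in range(3):
    flow = min(flow + ramp_up_mw, capacity)
    upper_bounds.append(flow)
print(upper_bounds)  # [10.0, 20.0, 30.0]
```

With the unit capped this tightly at the start, the more expensive plant must pick up the remaining demand.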

    This example highlights the importance of considering initial conditions as a crucial assumption in energy system modelling optimization.

    +Ramping constraints · SpineOpt.jl

    Ramping definition tutorial

    This tutorial provides a step-by-step guide to include ramping constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding ramping constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    The ramping constraint limit refers to the maximum rate at which a power unit can increase or decrease its output flow over time. These limits are typically put in place to prevent sudden and destabilizing shifts in power units. However, they may also represent any other physical limitations that a unit may have that is related to changes over time in its output flow.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 3-hour time series instead of a unique value
    • The power_plant_a has the following parameters:
      • Ramp limit of 10% for both up and down
      • Minimum operating point of 10% of its total capacity
      • Startup capacity limit of 10% of its total capacity
      • Shutdown capacity limit of 10% of its total capacity

    This tutorial includes a step-by-step guide to include the parameters to help analyze the results in SpineOpt and the ramping constraints concepts.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Alt + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add ramping constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 3-hour time series.

    Editing demand value

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below.
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Notice that there is only demand values for 2000-01-01T00:00:00 and 2000-01-01T02:00:00. Therefore, we need to update the start and end of the model. But first, let's change the temporal block.

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h to make easy to follow the results.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [model] class, and select the simple from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Editing the model start and end

    Since the default resolution of the Simple System was 1D, the start and end date of the model needs also to be changed.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, select the model_start parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • Repeat the procedure for the model_end parameter, but now the value is 2000-01-01T03:00:00. The final values should look like that the image below.

    image

    It's important to note that the model must finish in the third hour to account for all the periods of demand in input data, which goes until 2000-01-01T02:00:00.

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand in both hours, and then the power_plant_b (i.e., the more expensive unit) has zero production. This is the most economical dispatch since the problem has no extra constraints (so far!).

    Step 2 - Include the ramping limit

    Let's consider the input data where the power_plant_a has a ramping limit of 10% in both directions (i.e., up and down), meaning that the change between two time steps can't be greater than 10MW (since the plant 'a' has a unit capacity of 100MW). The ramping constraints need the following parameters for their definition: minimum operating point, startup limit, and shutdown limit. For more details, please visit the mathematical formulation in the following link

    Adding the new parameters

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table:

      • Select the ramp_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping up limit for power_plant_a.

      • Select the ramp_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping down limit for power_plant_a.

      • Select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point for power_plant_a.

      • Select the start_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the startup capacity limit for power_plant_a.

      • Select the shut_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the shutdown capacity limit for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results with ramp limits

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) output is limited to its ramps limits, therefore it can't follow the demand changes as before. For instance, the unit's power output is 45MW in the first hour, which is lower than the previous result of 50MW in the same hour. This is because the unit needs to gradually decrease its power output and reach 25MW in the last hour. However, due to the imposed ramp-down limit of 10MW, it cannot start from 50MW as before. Therefore, the power_plant_b (i.e., the more expensive unit) must produce to cover the demand that plant 'a' can't due to its ramping limitations. As shown here, the ramping limits might lead to a higher costs in power systems compared to the previous case.

    But...there is something more here...Can you tell what? :anguished:

    It is important to note that the optimal solution we have calculated assumes that the unit 'a' was already producing electricity before the model_start parameter. This is because we have not defined an initial condition for the flow of the unit. Therefore, the flow at the first hour is the most cost-effective solution under this assumption. However, what if we changed this assumption and assumed that the unit had not produced any flow before the model_start parameter? If you are curious to know the answer, join me in the next section.

    Step 3 - Include a initial condition to the flow

    Adding the initial flow

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table, select the initial_unit_flow parameter and the Base alternative, and enter the value 0.0 as seen in the image below. This will set the initial flow for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits with initial conditions

    You know the drill! ;)

    Examining the results with ramp limits with initial conditions

    Create a the Pivot table with the latest results. It will look something like the image below.

    image

    Here, we can see the impact of the initial condition: the unit can no longer change its flow by more than its ramp-up limit during the first hour. Therefore, the optimal solution under this assumption changes compared to the previous section.

    This example highlights the importance of considering initial conditions as a crucial assumption in energy system modelling optimization.
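    The effect of the initial condition can be sketched as a simple bound on the first-hour flow (plain Python, illustrative only; the 10 MW ramp-up value is an assumption here, use whatever limit was set earlier in the tutorial):

    ```python
    # Sketch of how an initial condition bounds the first-hour flow.
    initial_unit_flow = 0.0  # MW, as entered for power_plant_a
    ramp_up = 10.0           # MW per hour (assumed value for illustration)

    # Starting from zero, the unit can ramp up by at most `ramp_up` in hour 1.
    max_first_hour_flow = initial_unit_flow + ramp_up
    print(max_first_hour_flow)
    ```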


    Reserve definition tutorial

    This tutorial provides a step-by-step guide to include reserve requirements in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding a new reserve node in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Reserves refer to the capacity or energy that is kept as a backup to ensure the power system's reliability. This reserve capacity can be brought online automatically or manually in the event of unforeseen system disruptions such as generation failure, transmission line failure, or a sudden increase in demand. Operating reserves are essential to ensure that there is always enough generation capacity available to meet demand, even in the face of unforeseen system disruptions.

    Model assumptions

    • The reserve node has a requirement of 20MW for upwards reserve
    • Power plants 'a' and 'b' can both provide reserve to this node

    image

    Guide

    Entering input data

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add a new reserve node to the Simple System.

    Creating objects

    • Still in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Right click on the [node] class, and select Add objects from the context menu. The Add objects dialog will pop up.
    • Enter the name for the new reserve node as seen in the image below, then press Ok. This will create a new object of class node, called upward_reserve_node.

    image

    • Right click on the node class, and select Add object group from the context menu. The Add object group dialog will pop up. In the Group name field, write upward_reserve_group. Then, add the nodes electricity_node and upward_reserve_node as members of the group, as shown in the image below; then press Ok.
    Note

    In SpineOpt, groups of nodes allow the user to create constraints that involve variables from its members. Later in this tutorial, the group named upward_reserve_group will help to link the flow variables for electricity production and reserve procurement.

    image
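    In spirit, the group lets SpineOpt impose a shared capacity constraint over its members' flows. The following is an illustrative sketch (plain Python, not SpineOpt's API; the 100 value matches the unit_capacity entered for power_plant_a later in this tutorial):

    ```python
    # Illustrative sketch of the shared-capacity idea behind node groups.
    unit_capacity_group = 100.0  # MW, total capacity of power_plant_a in the group

    def group_constraint_satisfied(flow_electricity, flow_reserve):
        """Electricity output plus reserve procurement must fit within capacity."""
        return flow_electricity + flow_reserve <= unit_capacity_group

    print(group_constraint_satisfied(90.0, 10.0))  # True: 100 <= 100
    print(group_constraint_satisfied(95.0, 10.0))  # False: 105 > 100
    ```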

    Establishing relationships

    • Still in the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the unit__to_node class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Select the names of the two units and their receiving nodes, as seen in the image below; then press Ok. This will establish that both power_plant_a and power_plant_b release energy into both the upward_reserve_node and the upward_reserve_group.

    image

    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    • Enter report1 under report, and variable_om_costs under output. Repeat the same procedure on the second line to add res_proc_costs under output, as seen in the image below; then press Ok. This will write the total VOM cost and reserve procurement cost components of the objective function to the output database as part of report1.

    image

    Specifying object parameter values

    • Back to Object tree, expand the node class and select upward_reserve_node.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, select the following parameters as seen in the image below:
      • demand parameter and the Base alternative, and enter the value 20. This will establish that there's a demand of '20' at the reserve node.
      • is_reserve_node parameter and the Base alternative, and enter the value True. This will establish that it is a reserve node.
      • upward_reserve parameter and the Base alternative, then right-click on the value cell, select 'Edit...' in the context menu, and select the option True. This will establish that the direction of the reserve is upwards.
      • nodal_balance_sense parameter and the Base alternative, and enter the value $\geq$. This will establish that the total reserve procurement must be greater than or equal to the reserve demand.

    image
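    The effect of setting nodal_balance_sense to $\geq$ can be sketched as follows (plain Python, illustrative only): procured reserve may exceed the requirement, but never fall short of it.

    ```python
    # Sketch of the >= nodal balance at the reserve node.
    reserve_demand = 20.0  # MW, as entered above

    def reserve_balance_ok(procured_flows):
        """Total reserve procured across units must cover the requirement."""
        return sum(procured_flows) >= reserve_demand

    print(reserve_balance_ok([20.0]))        # True: exactly meets the requirement
    print(reserve_balance_ok([15.0, 10.0]))  # True: 25 >= 20
    print(reserve_balance_ok([15.0]))        # False: under-procured
    ```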

    • Select upward_reserve_group in the Object tree.

    • In the Object parameter table, select the balance_type parameter and the Base alternative, and enter the value balance_type_none as seen in the image below. This will establish that there is no need to create an extra balance between the members of the group.

    image

    Specifying relationship parameter values

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the unit_capacity parameter and the Base alternative, and enter the value 100 as seen in the image below. This will set the capacity to provide reserve for power_plant_a.

    Note

    The value is equal to the unit capacity defined for the electricity node. However, the value can be lower if the unit cannot provide reserves with its total capacity.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the unit_capacity parameter and the Base alternative, and enter the value 200 as seen in the image below. This will set the capacity to provide reserve for power_plant_b.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_group.

    • In the Relationship parameter table (typically at the bottom-center), select the following parameter as seen in the image below:

      • unit_capacity parameter and the Base alternative, and enter the value 100. This will set the total capacity for power_plant_a in the group.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_group.

    • In the Relationship parameter table (typically at the bottom-center), select the following parameter as seen in the image below:

      • unit_capacity parameter and the Base alternative, and enter the value 200. This will set the total capacity for power_plant_b in the group.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    As anticipated, power_plant_b is supplying the necessary reserve thanks to its surplus capacity, while power_plant_a is operating at full capacity. Additionally, in this model, we have not assigned a cost to reserve procurement. One way to double-check this is by selecting report__model under Relationship tree and looking at the costs in the Pivot table, see image below.

    image
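    The dispatch described above can be verified with quick arithmetic (illustrative sketch; the numbers come from the Simple System tutorial plus this reserve setup):

    ```python
    # Quick arithmetic check of the dispatch described above.
    demand_electricity = 150.0  # MWh at the electricity node
    reserve_requirement = 20.0  # MW upward reserve
    cap_a, cap_b = 100.0, 200.0 # unit capacities

    flow_a_elec = 100.0                             # power_plant_a at full capacity
    flow_b_elec = demand_electricity - flow_a_elec  # remainder from power_plant_b
    flow_b_reserve = reserve_requirement            # reserve from power_plant_b

    # Each unit's electricity plus reserve must fit its capacity in the group.
    assert flow_a_elec <= cap_a
    assert flow_b_elec + flow_b_reserve <= cap_b
    print(flow_b_elec, flow_b_reserve)  # 50.0 20.0
    ```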

    So, is it possible to assign costs to this reserve procurement in SpineOpt? Yes, it is indeed possible.

    Specifying a reserve procurement cost value

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the reserve_procurement_cost parameter and the Base alternative, and enter the value 5 as seen in the image below. This will set the cost of providing reserve for power_plant_a.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the reserve_procurement_cost parameter and the Base alternative, and enter the value 35 as seen in the image below. This will set the cost of providing reserve for power_plant_b.

    image

    Don't forget to commit the new changes to the database!

    Executing the workflow and examining the results again

    • Go back to Spine Toolbox's main window, and hit again the Execute project button as before.

    • Select the output data store and open the Spine DB editor. You can inspect results as before, which should look like the image below.

    image

    Since reserve procurement is much cheaper for power_plant_a than for power_plant_b, the optimal solution is now to reduce the electricity production of power_plant_a and provide the reserve with this unit, rather than with power_plant_b as before. By looking at the total costs, we can see that the reserve procurement costs are no longer zero.

    image
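    The cost difference driving this shift is easy to quantify (illustrative sketch; in the full optimization the solver also weighs the VOM costs of shifting electricity production to power_plant_b):

    ```python
    # Comparing reserve procurement costs between the two units.
    reserve_mw = 20.0
    cost_a = 5.0   # euro/MW for power_plant_a, as entered above
    cost_b = 35.0  # euro/MW for power_plant_b

    procure_from_a = reserve_mw * cost_a  # 100.0
    procure_from_b = reserve_mw * cost_b  # 700.0
    print(procure_from_a < procure_from_b)  # True: plant 'a' is preferred
    ```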


    Simple System tutorial

    Welcome to Spine Toolbox's Simple System tutorial.

    This tutorial provides a step-by-step guide to set up a simple energy system with Spine Toolbox for SpineOpt. Spine Toolbox is used to create a workflow with databases and tools, and SpineOpt is the tool that simulates/optimizes the energy system.

    Info

    If you haven't yet installed the tools or you are not sure whether you have the latest version, please follow the installation/upgrade guides:

    About the simple system

    In the simple system:

    • Two power plants take fuel from a source node and release electricity to another node in order to supply a demand.
    • Power plant 'a' has a capacity of 100 MWh, a variable operating cost of 25 euro/fuel unit, and generates 0.7 MWh of electricity per unit of fuel.
    • Power plant 'b' has a capacity of 200 MWh, a variable operating cost of 50 euro/fuel unit, and generates 0.8 MWh of electricity per unit of fuel.
    • The demand at the electricity node is 150 MWh.
    • The fuel node is able to provide infinite energy.

    image
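    From the description above we can already anticipate the dispatch SpineOpt should find. The following is a rough merit-order sketch (plain Python, not SpineOpt code; all numbers come from the system description above):

    ```python
    # A rough merit-order sketch of the expected dispatch.
    demand = 150.0  # MWh at the electricity node
    plants = {
        # name: (capacity MWh, VOM euro/fuel unit, MWh electricity per fuel unit)
        "power_plant_a": (100.0, 25.0, 0.7),
        "power_plant_b": (200.0, 50.0, 0.8),
    }

    # Cost per MWh of electricity = VOM per fuel unit / efficiency.
    cost_per_mwh = {n: vom / eff for n, (cap, vom, eff) in plants.items()}

    dispatch, remaining = {}, demand
    for name in sorted(plants, key=cost_per_mwh.get):  # cheapest first
        cap = plants[name][0]
        dispatch[name] = min(cap, remaining)
        remaining -= dispatch[name]

    print(dispatch)  # {'power_plant_a': 100.0, 'power_plant_b': 50.0}
    ```

    Plant 'a' is cheaper per MWh of electricity (25/0.7 ≈ 35.7 euro/MWh versus 50/0.8 = 62.5 euro/MWh), so it runs at full capacity and plant 'b' covers the rest.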

    Spine Toolbox workflow

    The workflow for this tutorial is quite simple: A SpineOpt tool that reads data from an input database, executes the simulation/optimization and writes the results to an output database.

    Creating the workflow is almost as simple as dragging these items (i.e. Data Store and Run SpineOpt) to the Design View and connecting them by dragging arrows between the blocks, but there are some things that need to be configured:

    • The databases need to be initialised. Once you select a database you see the properties panel. Select the dialect of the database. Here we choose sqlite. Then press the button 'new spine db' to create and save the database on your computer (Spine Toolbox will suggest a good folder).

    • Connecting tools with (yellow) arrows in the Toolbox does not mean that the tools will use these items; the arrows only make the items (databases) available. To let SpineOpt know we want to use them, we need to go to the properties panel of Run SpineOpt and drag the available items to the tool arguments. The order of the items matters: first the input, then the output. See below for how the property window should look.

    image

    • (optional) The Spine data stores are quite generic. In order for SpineOpt to be able to read the input database, we need to change its format from the Spine format to the SpineOpt format. Luckily, we can use templates for this. One of these templates is available as an item in Spine Toolbox: Load template. The other option is to load templates into the database using the DB editor. The templates can also be used to pre-populate the database with some basic components. Here we briefly explain the use of the Load template block; later we show how to import a template and basic components with the Spine DB editor. To use the Load template block, drag it to the view and connect it to the input database. Just like for the Run SpineOpt block, we need to drag the available input database to the tool arguments.

    The result should look similar to this (+/- the Load template block):

    image

    That is it for the workflow. Now we can enter the data for the setup of the simple system into the input database, run the workflow and view the results in the output database.

    Entering input data

    To enter the necessary data for the simple system, we'll use the Spine DB editor. The Spine DB editor is a dedicated interface within Spine Toolbox for visualizing and managing Spine databases. The default view shows tables (see below) but for viewing energy system configurations it is nice to see a graph. Press the graph button in the toolbar. The graph view only shows what you select in the root menu and what your selected entities are connected to.

    To open the editor:

    • Double click the input Data Store item (or select the 'input' Data Store item in the Design View, go to Data Store Properties and hit Open editor).

    image

    In the following we enter the input data for the simple system.

    Importing the SpineOpt database template

    A SpineOpt database is a Spine database, but a Spine database is not necessarily a SpineOpt database. Therefore, we first need to format the database as a SpineOpt database using the SpineOpt template. The SpineOpt template contains the fundamental entity classes and parameter definitions that SpineOpt recognizes and expects. One option to load the template is to use the 'Load template' tool as mentioned before. Another option is to import the template with the Spine DB editor. To that end:

    • Download the SpineOpt database template (right click on the link, then select Save link as...)

    • To import the template to the database, click on File -> Import..., and then select the template file you previously downloaded (spineopt_template.json). The contents of that file will be imported into the current database, and you should then see classes like 'commodity', 'connection' and 'model' under the root menu.

    • To save our changes, press the Commit button in the toolbar. Enter a commit message, e.g. 'Import SpineOpt template', in the popup dialog and click Commit.

    image

    Model settings

    A typical SpineOpt database has two parts: the model settings and the physical system.

    The model settings that we need for this tutorial are also available as a template that we can import. The SpineOpt basic model template contains some predefined entities for a common deterministic model with a 'flat' temporal structure.

    • Download the basic SpineOpt model (right click on the link, then select Save link as...)

    • Import the template to the database through File -> Import..., and then select the template file you previously downloaded (basic_model_template.json).

    • Commit (save) the changes through the Commit button in the toolbar.

    One of the predefined entities is the report. The report determines which variables of the SpineOpt model show up in the results later on. Currently, there is no output connected to the report. We'll have to do that manually:

    • Locate the Entity tree in the Spine DB editor (typically at the top-left).

    • Press the '+' next to the report__output class. The Add entities dialog will pop up.

    • We'll have to fill in the fields for the report and the output. Double click each field to see the options: for the 'report' field select 'report1' and for the 'output' field select 'unit_flow', as seen in the image below; then press Ok. This will tell SpineOpt to write the value of the unit_flow optimization variable to the output database, as part of report1.

    • Commit (save) the changes through the Commit button in the toolbar.

    image

    The resulting model structure can then be seen in the picture below (by selecting the model, the stochastic structure and the report in the root menu).

    image

    Creating nodes and units

    As for the physical system, we start with creating nodes and units. As shown before, the simple system contains 2 nodes and 2 units.

    Info

    In SpineOpt, nodes are points where an energy balance takes place, whereas units are energy conversion devices that can take energy from nodes, and release energy to nodes.

    To create the nodes:

    • Locate the Entity tree in the Spine DB editor (typically at the top-left).

    • Right click on the [node] class, and select Add objects from the context menu (or press the '+' icon next to it). The Add entities dialog will pop up.

    • Enter the names for the system nodes as seen in the image below, then press Ok. This will create two entities of class node, called fuel and electricity.

    image

    To create the units we do the same thing:

    • Press '+' next to the unit class, and add two units called power_plant_a and power_plant_b.

    image

    Info

    To modify an object after you enter it, right click on it and select Edit... from the context menu.

    Creating relationships between the nodes and units

    For the simple system we need to link the nodes and the units. Intuitively, we know that we need to make flows from the 'fuel' node to the units and to the 'electricity' node. Additionally, we'll have to add a unit__node__node entity to be able to attach data about the relation between the input and the output of the units.

    For the flow from the 'fuel' node to the units:

    • Press '+' next to the unit__from_node class, you'll see a 'unit' field and a 'node' field.

    • Double click the unit or node field to see the options.

    • Select each unit once and the 'fuel' node twice, resulting in the combinations 'power_plant_a'-'fuel' and 'power_plant_b'-'fuel'.

    Info

    Alternatively, right click an object in the graph view; 'Add relationships' will show the available relationships. You can then make the desired relationships visually. Note that this only works when the involved units/nodes/... are visible in the graph view. To make an object visible, simply click on it in the list of objects/object classes. You can select multiple objects with ctrl or shift.

    image

    For the flow from the units to the 'electricity' node, we do the same:

    • Press '+' next to the unit__to_node class and choose each unit once and the 'electricity' node twice, resulting in the combinations 'power_plant_a'-'electricity' and 'power_plant_b'-'electricity'

    image

    These flows so far only determine what happens between the node and the unit. However, we also need to determine what happens between the input and output of the unit. As there can be multiple inputs and outputs, we'll have to define which flows exactly contribute to the input/output behaviour. To that end we use a unit__node__node class.

    • Press '+' next to the unit__node__node class and choose the unit, its output node and its input node, resulting in the combinations 'power_plant_a'-'electricity'-'fuel' and 'power_plant_b'-'electricity'-'fuel'

    image

    Info

    The unit__node__node relationship is necessary to limit the flow (flows are unbound by default) and to define an efficiency. The order of the nodes is important for that definition (see later on). It may seem unintuitive to define an efficiency through a three-way relationship instead of a property of a unit, but this approach allows you to define efficiencies between any flow(s) coming in and out of the unit (e.g. CHP).
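    The role of the efficiency defined on unit__node__node can be sketched with the numbers from the system description (illustrative Python, not SpineOpt code):

    ```python
    # Sketch of the input/output relation defined on unit__node__node.
    def fuel_needed(electricity_mwh, efficiency):
        """Fuel units required to produce a given electricity output."""
        return electricity_mwh / efficiency

    print(round(fuel_needed(100.0, 0.7), 1))  # ~142.9 fuel units for power_plant_a
    print(fuel_needed(50.0, 0.8))             # 62.5 fuel units for power_plant_b
    ```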

    The resulting system can be seen in the picture below (by selecting the node in the root menu).

    image

    Adding parameter values

    With the system in place, we can now enter the data as described in the beginning of this tutorial, i.e. the capacities, efficiencies, demand, etc. To enter the data we'll be using the table (typically in the center or below the graph view).

    Info

    The table view has three tabs below the table. We use the parameter value tab to enter values and the parameter definition tab to get an overview of the available parameters and their default values.

    Let's start with adding an electricity demand of 150 at the electricity node.

    • Select the 'electricity' node in the root menu, in the graph view or in the list after double clicking the entity_by_name field in the table.

    • Double click the parameter_name field and select demand.

    • Double click the alternativet_name field and select Base.

    • Double click the value field and enter 150.

    image

    Info

    The alternative name is not optional. If you don't select Base (or another name) you will not be able to save your data. Speaking of which, when is the last time you saved/committed?

    For the fuel node we want an infinite supply. Since the default behaviour of a node is to balance all incoming and outgoing flows, we'll have to take that balance away.

    In the table, select

    • entity_by_name: 'fuel' node

    • parameter_name: balance_type

    • alternative_name: Base

    • value: balance_type_none

    image

    For the power plants we want to specify the variable operation and maintenance (VOM) cost, the capacity and the efficiency. Each of these parameters are defined in different parts of the system. That is, again, because it is possible to define multiple inputs and outputs. To pinpoint the correct flows, the parameters are therefore related to the flows rather than the unit. In particular, the VOM cost is related to the input flow and as such to unit__from_node between the unit and the 'fuel' node. The capacity is related to the output flow and as such to unit__to_node between the unit and the 'electricity' node. The efficiency is related to the relation between the input and the output and as such to unit__node_node between the unit, the 'electricity' node and the 'fuel' node.

    We enter these values again in the table.

    For the VOM cost of the power plants:

    • select the unit__from_node entity class

    • entity_by_name: 'power_plant_a|fuel'

    • parameter_name: vom_cost

    • alternative_name: Base

    • value: 25.0

    • Do the same for 'power_plant_b' with a value of 50.0

    image

    For the capacity of the power plants:

    • select the unit__to_node entity class

    • entity_by_name: 'power_plant_a|electricity'

    • parameter_name: unit_capacity

    • alternative_name: Base

    • value: 100.0

    • Do the same for 'power_plant_b' with a value of 200.0

    image

    For the efficiency of the power plants:

    • select the unit__node_node entity class

    • entity_by_name: 'power_plant_a|electricity|fuel'

    • parameter_name: fix_ratio_out_in_unit_flow

    • alternative_name: Base

    • value: 0.7

    • Do the same for 'power_plant_b' with a value of 0.8

    image

    Info

    The order of the nodes is important for the fix_ratio_out_in_unit_flow parameter. If you have swapped the nodes or inverted the efficiency values, the Run SpineOpt tool will run into errors.

    When you're ready, save/commit all changes to the database.

    Select the root in the entity tree to see an overview of all parameters in the table.

    image

    Executing the workflow

    With the input database ready, we are ready to run SpineOpt.

    • Go back to Spine Toolbox's main window, and hit the Execute project button from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console (after clicking the object activity control in older versions).

    Examining the results

    If everything went well, the output should be written to the output database. Opening the output database in the Spine DB editor, we can inspect its values. Note that the entity tree looks different as there is no SpineOpt template loaded here. Regardless, the output is available in the displayed tables.

    By default all runs are shown in the tables. By selecting a specific run in the the alternatives (typically on the right), you can instead view the results of a single run.

    Typically there will be Time Series in the values. Double click these to view the values.

    For 'power_plant_a' you should see a value of 100 and for 'power_plant_b' a value of 50.

Simple system · SpineOpt.jl

    Simple System tutorial

    Welcome to Spine Toolbox's Simple System tutorial.

This tutorial provides a step-by-step guide to setting up a simple energy system with Spine Toolbox for SpineOpt. Spine Toolbox is used to create a workflow with databases and tools, and SpineOpt is the tool that simulates/optimizes the energy system.

    Info

If you haven't yet installed the tools or you are not sure whether you have the latest version, please first follow the installation/upgrade guides.

    About the simple system

    In the simple system:

    • Two power plants take fuel from a source node and release electricity to another node in order to supply a demand.
    • Power plant 'a' has a capacity of 100 MWh, a variable operating cost of 25 euro/fuel unit, and generates 0.7 MWh of electricity per unit of fuel.
    • Power plant 'b' has a capacity of 200 MWh, a variable operating cost of 50 euro/fuel unit, and generates 0.8 MWh of electricity per unit of fuel.
    • The demand at the electricity node is 150 MWh.
    • The fuel node is able to provide infinite energy.
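The expected dispatch can already be worked out by hand with a small merit-order sketch. This is hypothetical helper code for illustration, not part of SpineOpt; the numbers come from the system description above.

```python
# Merit-order sketch of the expected dispatch (hypothetical, not SpineOpt code).
plants = {
    "power_plant_a": {"capacity": 100.0, "vom_cost": 25.0, "efficiency": 0.7},
    "power_plant_b": {"capacity": 200.0, "vom_cost": 50.0, "efficiency": 0.8},
}
demand = 150.0

def dispatch(plants, demand):
    # Cost per MWh of electricity = VOM cost per fuel unit / efficiency.
    order = sorted(plants, key=lambda p: plants[p]["vom_cost"] / plants[p]["efficiency"])
    out, remaining = {}, demand
    for p in order:
        out[p] = min(plants[p]["capacity"], remaining)
        remaining -= out[p]
    return out

print(dispatch(plants, demand))  # {'power_plant_a': 100.0, 'power_plant_b': 50.0}
```

Plant 'a' is cheaper per MWh of electricity (25/0.7 ≈ 35.7 vs. 50/0.8 = 62.5), so it runs at full capacity and plant 'b' covers the remainder, which is exactly what the optimization result at the end of this tutorial shows.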

    image

    Spine Toolbox workflow

    The workflow for this tutorial is quite simple: A SpineOpt tool that reads data from an input database, executes the simulation/optimization and writes the results to an output database.

Creating the workflow is almost as simple as dragging the items (i.e. two Data Store items and the Run SpineOpt tool) to the Design View and connecting them by dragging arrows between the blocks, but a few things still need to be configured:

    • The databases need to be initialised. Once you select a database you see the properties panel. Select the dialect of the database. Here we choose sqlite. Then press the button 'new spine db' to create and save the database on your computer (Spine Toolbox will suggest a good folder).

    • Connecting tools with (yellow) arrows in the Toolbox does not mean that the tools will automatically use these items. The arrows in the Toolbox view only make items (databases) available. To let SpineOpt know we want to use these items, we need to go to the properties panel of Run SpineOpt and drag the available items to the tool arguments. The order of the items matters: first the input, then the output. See below for how the property window should look.

    image

    • (optional) The Spine data stores are quite generic. In order for SpineOpt to be able to read the input database, we need to change its format from the Spine format to the SpineOpt format. Luckily we can use templates for this. One of those templates is made available as an item in Spine Toolbox: Load template. The other option is to load templates into the database using the db editor. The templates can also be used to pre-populate the database with some basic components. Here we briefly explain the use of the Load template block and later we show how to import a template and basic components with the spine db editor. To use the Load template block, drag it to the view and connect it to the input database. Just like the Run SpineOpt block we need to drag the available input database to the tool argument.

    The result should look similar to this (+/- the Load template block):

    image

    That is it for the workflow. Now we can enter the data for the setup of the simple system into the input database, run the workflow and view the results in the output database.

    Entering input data

    To enter the necessary data for the simple system, we'll use the Spine DB editor. The Spine DB editor is a dedicated interface within Spine Toolbox for visualizing and managing Spine databases. The default view shows tables (see below) but for viewing energy system configurations it is nice to see a graph. Press the graph button in the toolbar. The graph view only shows what you select in the root menu and what your selected entities are connected to.

    To open the editor:

    • Double click the input Data Store item (or select the 'input' Data Store item in the Design View, go to Data Store Properties and hit Open editor).

    image

    In the following we enter the input data for the simple system.

    Importing the SpineOpt database template

    A SpineOpt database is a spine database but a spine database is not necessarily a SpineOpt database. Therefore we first need to format the database to a SpineOpt database with the SpineOpt template. The SpineOpt template contains the fundamental entity classes and parameter definitions that SpineOpt recognizes and expects. One option to load the template is to use the 'Load template' tool as mentioned before. Another option is to import the template with the Spine DB editor. To that end:

    • Download the SpineOpt database template (right click on the link, then select Save link as...)

    • To import the template to the database, click on File -> Import..., and then select the template file you previously downloaded (spineopt_template.json). The contents of that file will be imported into the current database, and you should then see classes like 'commodity', 'connection' and 'model' under the root menu.

    • To save our changes, press the Commit button in the toolbar. Enter a commit message, e.g. 'Import SpineOpt template', in the popup dialog and click Commit.

    image

    Model settings

    A typical SpineOpt database has two parts: the model settings and the physical system.

    The model settings that we need for this tutorial are also available as a template that we can import. The SpineOpt basic model template contains some predefined entities for a common deterministic model with a 'flat' temporal structure.

    • Download the basic SpineOpt model (right click on the link, then select Save link as...)

    • Import the template to the database through File -> Import..., and then select the template file you previously downloaded (basic_model_template.json).

    • Commit (save) the changes through the Commit button in the toolbar.

    One of the predefined entities is the report. The report determines which variables of the SpineOpt model show up in the results later on. Currently, there is no output connected to the report. We'll have to do that manually:

    • Locate the Entity tree in the Spine DB editor (typically at the top-left).

    • Press the '+' next to the report__output class. The Add entities dialog will pop up.

    • We'll have to fill in the fields for the report and the output. Double click each field to see the options: enter report1 under report and unit_flow under output, as seen in the image below. This will tell SpineOpt to write the value of the unit_flow optimization variable to the output database, as part of report1.

    • Press Ok.

    • Commit (save) the changes through the Commit button in the toolbar.

    image

    The resulting model structure can then be seen in the picture below (by selecting the model, the stochastic structure and the report in the root menu).

    image

    Creating nodes and units

    As for the physical system, we start with creating nodes and units. As shown before, the simple system contains 2 nodes and 2 units.

    Info

    In SpineOpt, nodes are points where an energy balance takes place, whereas units are energy conversion devices that can take energy from nodes, and release energy to nodes.
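The node balance can be sketched with the flow values this tutorial ends up with. This is a hypothetical illustration of the concept, not SpineOpt code:

```python
# Sketch of the balance SpineOpt enforces at the 'electricity' node:
# sum of incoming flows - demand must equal zero (no storage, no outgoing flows).
flows_in = {"power_plant_a": 100.0, "power_plant_b": 50.0}  # unit flows into the node
demand = 150.0
balance = sum(flows_in.values()) - demand
print(balance)  # 0.0
```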

    To create the nodes:

    • Locate the Entity tree in the Spine DB editor (typically at the top-left).

    • Right click on the node class, and select Add objects from the context menu (or press the '+' icon next to it). The Add entities dialog will pop up.

    • Enter the names for the system nodes as seen in the image below, then press Ok. This will create two entities of class node, called fuel and electricity.

    image

    To create the units we do the same thing:

    • Press '+' next to the unit class, and add two units called power_plant_a and power_plant_b.

    image

    Info

    To modify an object after you enter it, right click on it and select Edit... from the context menu.

    Creating relationships between the nodes and units

    For the simple system we need to link the nodes and the units. Intuitively, we know that we need to make flows from the 'fuel' node to the units and from the units to the 'electricity' node. Additionally, we'll have to add a unit__node__node entity to be able to attach properties to the relation between the input and the output of the units.

    For the flow from the 'fuel' node to the units:

    • Press '+' next to the unit__from_node class; you'll see a 'unit' field and a 'node' field.

    • Double click the unit or node field to see the options.

    • Select each unit once and the 'fuel' node twice, resulting in the combinations 'power_plant_a'-'fuel' and 'power_plant_b'-'fuel'.

    Info

    Alternatively, right click an object in the graph view; add relationships will show the available relationships, and you can then make the desired relations visually. Note that this only works when the involved units/nodes/... are visible in the graph view. To make an object visible, simply click on it in the list of objects/object classes. You can select multiple objects with ctrl or shift.

    image

    For the flow from the units to the 'electricity' node, we do the same:

    • Press '+' next to the unit__to_node class and choose each unit once and the 'electricity' node twice, resulting in the combinations 'power_plant_a'-'electricity' and 'power_plant_b'-'electricity'.

    image

    These flows so far only determine what happens between the node and the unit. However, we also need to determine what happens between the input and output of the unit. As there can be multiple inputs and outputs, we'll have to define which flows exactly contribute to the input/output behaviour. To that end we use a unit__node__node class.

    • Press '+' next to the unit__node__node class and choose the unit, its output node and its input node, resulting in the combinations 'power_plant_a'-'electricity'-'fuel' and 'power_plant_b'-'electricity'-'fuel'.

    image

    Info

    The unit__node__node relationship is necessary to limit the flow (flows are unbound by default) and to define an efficiency. The order of the nodes is important for that definition (see later on). It may seem unintuitive to define an efficiency through a three-way relationship instead of a property of a unit, but this approach allows you to define efficiencies between any flow(s) coming in and out of the unit (e.g. CHP).
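What the efficiency parameter does can be sketched numerically (a hypothetical illustration, not SpineOpt code): fix_ratio_out_in_unit_flow fixes output flow = ratio × input flow for the chosen node pair.

```python
# For power_plant_a the ratio (efficiency) is 0.7, so producing electricity_out
# MWh of electricity requires electricity_out / 0.7 units of fuel.
ratio = 0.7
electricity_out = 100.0
fuel_in = electricity_out / ratio
print(round(fuel_in, 2))  # 142.86
```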

    The resulting system can be seen in the picture below (by selecting the node in the root menu).

    image

    Adding parameter values

    With the system in place, we can now enter the data as described in the beginning of this tutorial, i.e. the capacities, efficiencies, demand, etc. To enter the data we'll be using the table (typically in the center or below the graph view).

    Info

    The table view has three tabs below the table. We use the parameter value tab to enter values and the parameter definition tab to get an overview of the available parameters and their default values.

    Let's start with adding an electricity demand of 150 at the electricity node.

    • Select the 'electricity' node in the root menu, in the graph view or in the list after double clicking the entity_by_name field in the table.

    • Double click the parameter_name field and select demand.

    • Double click the alternative_name field and select Base.

    • Double click the value field and enter 150.

    image

    Info

    The alternative name is not optional. If you don't select Base (or another name) you will not be able to save your data. Speaking of which, when is the last time you saved/committed?

    For the fuel node we want an infinite supply. Since the default behaviour of a node is to balance all incoming and outgoing flows, we'll have to take that balance away.

    In the table, select

    • entity_by_name: 'fuel' node

    • parameter_name: balance_type

    • alternative_name: Base

    • value: balance_type_none

    image

    For the power plants we want to specify the variable operation and maintenance (VOM) cost, the capacity and the efficiency. Each of these parameters is defined on a different part of the system. That is, again, because it is possible to define multiple inputs and outputs. To pinpoint the correct flows, the parameters are therefore attached to the flows rather than to the unit:

    • The VOM cost is related to the input flow, and as such to the unit__from_node entity between the unit and the 'fuel' node.

    • The capacity is related to the output flow, and as such to the unit__to_node entity between the unit and the 'electricity' node.

    • The efficiency relates the input to the output, and as such belongs to the unit__node__node entity between the unit, the 'electricity' node and the 'fuel' node.

    We enter these values again in the table.

    For the VOM cost of the power plants:

    • select the unit__from_node entity class

    • entity_by_name: 'power_plant_a|fuel'

    • parameter_name: vom_cost

    • alternative_name: Base

    • value: 25.0

    • Do the same for 'power_plant_b' with a value of 50.0

    image

    For the capacity of the power plants:

    • select the unit__to_node entity class

    • entity_by_name: 'power_plant_a|electricity'

    • parameter_name: unit_capacity

    • alternative_name: Base

    • value: 100.0

    • Do the same for 'power_plant_b' with a value of 200.0

    image

    For the efficiency of the power plants:

    • select the unit__node__node entity class

    • entity_by_name: 'power_plant_a|electricity|fuel'

    • parameter_name: fix_ratio_out_in_unit_flow

    • alternative_name: Base

    • value: 0.7

    • Do the same for 'power_plant_b' with a value of 0.8

    image

    Info

    The order of the nodes is important for the fix_ratio_out_in_unit_flow parameter. If you have swapped the nodes or inverted the efficiency values, the Run SpineOpt tool will run into errors.

    When you're ready, save/commit all changes to the database.

    Select the root in the entity tree to see an overview of all parameters in the table.

    image

    Executing the workflow

    With the input database ready, we are ready to run SpineOpt.

    • Go back to Spine Toolbox's main window, and hit the Execute project button from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console (after clicking the object activity control in older versions).

    Examining the results

    If everything went well, the output should be written to the output database. Opening the output database in the Spine DB editor, we can inspect its values. Note that the entity tree looks different as there is no SpineOpt template loaded here. Regardless, the output is available in the displayed tables.

    By default all runs are shown in the tables. By selecting a specific run in the alternatives (typically on the right), you can instead view the results of a single run.

    Typically there will be Time Series in the values. Double click these to view the values.

    For 'power_plant_a' you should see a value of 100 and for 'power_plant_b' a value of 50.

    Stochastic structure · SpineOpt.jl

    Stochastic structure tutorial

    Welcome to Spine Toolbox's Stochastic System tutorial.

    This tutorial provides a step-by-step guide to get started with the stochastic structure. More information can be found in the documentation on the stochastic structure. It is recommended to make sure you are able to get the simple system tutorial working first.

    In this tutorial we will take a look at independent scenarios and stochastic paths.

    Info

    In theory it is also possible to have different stochastic structures in different parts of your system. In practice this is error-prone, and much of the functionality of different stochastic structures can be achieved with a clever DAG, so it is recommended to work with a single stochastic structure at all times.

    Setup starting from simple system tutorial

    We create a new Spine Toolbox project and start from the simple system tutorial.

    For the Spine Toolbox project

    • Open Spine Toolbox
    • Create a new Spine Toolbox project
    • Add two data store items (input and output)
      • set the dialect to sqlite
      • push the new database button
    • Add the run SpineOpt tool
      • connect the databases to the SpineOpt tool
      • in the properties pane of the SpineOpt tool, move the available resources to the tool arguments

    For the simple system tutorial

    • Download the simple system database (json file) from the examples folder in the SpineOpt repository (you can save the json file in your Spine Toolbox project folder)

    • Enter the input database such that you are in the spine db editor
    • Go to the hamburger menu (Alt+F) and select import
    • Locate the downloaded file to import the simple system
    • We save our results when we commit to the database, so go again to the hamburger menu and select commit. The commit message can be something like: import simple system tutorial.

    Note

    The graph view is not always enabled by default. If you want to see the simple system, go to the hamburger menu and select graph.

    Independent scenarios

    Recall from the simple system tutorial that there actually already is a stochastic structure present. Let us take a closer look at that structure.

    image

    The scenarios are the labels that are available to the user to label their data; we'll come back to that later. Here, there is currently one scenario: realization.

    The scenarios are managed by the stochastic structure. Foremost, the stochastic structure is connected to the model with the model__stochastic_structure relationship. The stochastic structure is also connected to different parts of the energy system to manage the stochastic behaviour in these parts. With the model__default_stochastic_structure relationship we can connect the stochastic structure to the entire energy system. Here, there is one stochastic structure, deterministic, which is also the system's default.

    It is quite simple to add an independent scenario to this existing stochastic structure.

    • Add a scenario object and call it 'independent'
    • Add a stochastic_structure__stochastic_scenario relationship between independent and deterministic, either from the tree view (right click -> new relationship) or from the graph view (right click -> add relationship)

    image

    Now we can use these labels in the values for the energy system.

    • Change the demand parameter at the electricity_node from 150.0 to a map (right click -> edit, parameter type map)
    • For the x column we can use our scenario labels, for the Value column we can choose our values
    • Choose realization 150.0 and independent 100.0
    • Save/Commit the results

    image

    That is it! We can now run the model and the output database will show the results for both scenarios. In the realization scenario power plant b produces an output of 50. In the independent scenario power plant b does not produce anything as the demand is low enough for power plant a to produce all the necessary energy.
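The outcome in both scenarios follows from the same merit order as in the simple system tutorial. A hypothetical sketch, not SpineOpt code:

```python
# power_plant_a is cheaper per MWh of electricity (25 / 0.7 < 50 / 0.8), so it
# runs first and power_plant_b only covers whatever demand remains.
cap_a = 100.0
demand = {"realization": 150.0, "independent": 100.0}
b_output = {s: max(d - cap_a, 0.0) for s, d in demand.items()}
print(b_output)  # {'realization': 50.0, 'independent': 0.0}
```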

    Stochastic path

    SpineOpt always works with stochastic paths. A stochastic path describes which scenario is active at each time step. There can be multiple stochastic paths in parallel. The stochastic structure collects the stochastic paths in a directed acyclic graph (DAG).

    But let's make that clearer with an example. We can continue from the previous structure, but let's rename the structure and scenarios (this step is optional).

    • Right click the object (either in the tree view or the graph view) and select edit
    • Rename the stochastic structure from deterministic to DAG
    • Rename the realization scenario to base
    • Rename the independent scenario to forecast1

    Perhaps from the name you already guessed it, we are going to add some scenarios.

    • Add two scenario objects forecast2 and forecast3
    • Connect the two scenarios to the stochastic structure

    And we need to adjust the map for the electricity demand accordingly.

    • Edit the map and provide a value for each scenario (see image below)

    image

    All these scenarios are independently available to the stochastic structure but now we want to define the underlying relationships to make a stochastic path. In particular, we want to start from a base scenario and later split in the forecast scenarios. For SpineOpt that means that the base scenario is the parent scenario and the following forecast scenarios are the child scenarios.

    • Add the parent_stochastic_scenario__child_stochastic_scenario relationship for each forecast scenario and select the base scenario as its parent (the first scenario is the parent and the second is its child)

    image

    We also need to tell SpineOpt what the probability is that we end up in a certain child. That information is stored in the stochastic structure so you'll find the corresponding parameter in the stochastic_structure__stochastic_scenario relationship. Here we assume that each forecast is equally likely to happen.

    • For each DAG | forecast relationship, add a value for the weight_relative_to_parent parameter; the sum needs to be equal to 1
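The intended DAG and weights can be sketched as follows. This is a hypothetical representation for illustration, not SpineOpt data: base branches into three equally likely forecasts, whose weight_relative_to_parent values sum to 1.

```python
# Scenario DAG sketch: each forecast has base as its parent.
parent = {"forecast1": "base", "forecast2": "base", "forecast3": "base"}
# Equal weights for the three children, summing to 1.
weight_relative_to_parent = {s: 1 / 3 for s in parent}
total = sum(weight_relative_to_parent.values())
print(abs(total - 1.0) < 1e-9)  # True
```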

    image

    That results in the stochastic structure below.

    image

    We can run the SpineOpt tool on this database but we will only see the values for the base scenario. That is because SpineOpt assumes that a scenario runs forever. So, we need to tell SpineOpt when the base scenario ends.

    • The current resolution of the system is 1D, but we need a higher resolution if we want to switch scenarios. So, set the resolution parameter of the temporal block flat to 1h.

    • To end the base scenario after 6 h, we go to the DAG | base relationship and set the parameter stochastic_scenario_end to a 6h duration value (to obtain a duration value, right click the value field and select the parameter type duration).

    Do not forget to save/commit from time to time.

    When we run the model now, we will obtain values for all scenarios.

    Note

    For the sake of completeness, we will also tell you what to do when you want to converge the forecasts into an end scenario.

    • add a scenario called end
    • map the end scenario for the electricity demand to the value 200.0
    • connect the end scenario to the stochastic structure
    • connect the end scenario to each of the forecasts, where the forecasts are considered the parents

    • set the weight of the end scenario to 1
    • let the forecast scenarios end after a duration of 16 hours

    image

    Warning

    The stochastic_scenario_end parameter starts counting from the start of the simulation! In the examples above, when the base scenario has a duration of 6h and the forecast scenarios have a duration of 16h, the forecast scenarios will only be active for 10 hours between hour 6 and hour 16!
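The timing in the warning above can be sketched numerically (a hypothetical illustration of how the durations combine, not SpineOpt code):

```python
# stochastic_scenario_end counts from the start of the simulation, not from the
# moment a scenario becomes active (values from the example above).
base_end = 6       # base is active during hours 0..6
forecast_end = 16  # forecasts also end 16 h after the simulation start
forecast_active_hours = forecast_end - base_end
print(forecast_active_hours)  # 10
```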

    +Stochastic structure · SpineOpt.jl

    Stochastic structure tutorial

    Welcome to Spine Toolbox's Stochastic System tutorial.

    This tutorial provides a step-by-step guide to get started with the stochastic structure. More information can be found in the documentation on the stochastic structure. It is recommended to make sure you are able to get the simple system tutorial working first.

    In this tutorial we will take a look at independent scenarios and stochastic paths.

    Info

    In theory it is also possible to have different stochastic structures in different parts of your system. In practice that is very much prone to errors. As much of the functionality of different stochastic structures can be accomplished with a clever DAG, it is recommended to work with a single stochastic structure at all times.

    Setup starting from simple system tutorial

    We create a new Spine Toolbox project and start from the simple system tutorial.

    For the Spine Toolbox project

    • Open Spine Toolbox
    • Create a new Spine Toolbox project
    • Add two data store items (input and output)
      • set the dialect to sqlite
      • push the new database button
    • Add the run SpineOpt tool
      • connect the databases to the SpineOpt tool
      • in the properties pane of the SpineOpt tool,
      move the available resources to the tool arguments

    For the simple system tutorial

    • Download the simple system database (json file)

    from the examples folder in the SpineOpt repository (you can save the json file in your Spine Toolbox project folder)

    • Enter the input database such that you are in the spine db editor
    • Go to the hamburger menu (Alt+F) and select import
    • Locate the downloaded file to import the simple system
    • We save our results when we commit to the database,

    so go again to the hamburger menu and select commit. The update message can be something like this: import simple system tutorial.

    Note

    The graph view is not always enabled by default. If you want to see the simple system, go to the hamburger menu and select graph.

    Independent scenarios

    Recall from the simple system tutorial that there actually already is a stochastic structure present. Let us take a closer look at that structure.

    image

    The scenarios are the labels that are available to the user to label their data. Don't worry, we'll come back to that later. Here, there is currently one scenario realization.

    The scenarios are managed by the stochastic structure. Foremost, the stochastic structure is connected to the model with the model__stochastic_structure relationship. The stochastic structure is also connected to different parts of the energy system to manage the stochastic structure in these parts. With the model__defaultstochasticstructure relationship we can connect the scenario to the entire energy system. Here, there is one stochastic structure deterministic which is also the systems default.

    It is quite simple to add an independent scenario to this existing stochastic structure.

    • Add a scenario object and call it 'independent'
    • Add a stochastic_structure__stochastic_scenario relationship between independent and deterministic

    either from the tree view (right click -> new relationship) or from the graph view (right click -> add relationship)

    image

    Now we can use these labels in the values for the energy system.

    • Change the demand parameter at the electricity_node from 150.0 to a map (right click -> edit, parameter type map)
    • For the x column we can use our scenario labels, for the Value column we can choose our values
    • Choose realization 150.0 and independent 100.0
    • Save/Commit the results

    image

    That is it! We can now run the model and the output database will show the results for both scenarios. In the realization scenario power plant b produces an output of 50. In the independent scenario power plant b does not produce anything as the demand is low enough for power plant a to produce all the necessary energy.

    Stochastic path

    SpineOpt always works with stochastic paths. A stochastic path describes which scenario is active at each time step. There can be multiple stochastic paths in parallel. The stochastic structure collects the stochastic paths in a directed acyclic graph (DAG).

    But let's make that clearer with an example. We can continue from the previous structure, but let's rename the structure and scenarios (optional step).

    • Right click the object (either in the tree view or the graph view) and select edit
    • Rename the stochastic structure from deterministic to DAG
    • Rename the realization scenario to base
    • Rename the independent scenario to forecast1

    Perhaps from the name you already guessed it, we are going to add some scenarios.

    • Add two scenario objects forecast2 and forecast3
    • Connect the two scenarios to the stochastic structure (with stochastic_structure__stochastic_scenario relationships)

    And we need to adjust the map for the electricity demand accordingly.

    • Edit the map and provide a value for each scenario

    (see image below)

    image

    All these scenarios are independently available to the stochastic structure, but now we want to define the underlying relationships to make a stochastic path. In particular, we want to start from a base scenario and later split into the forecast scenarios. For SpineOpt that means that the base scenario is the parent scenario and the following forecast scenarios are the child scenarios.

    • Add the parent_stochastic_scenario__child_stochastic_scenario relationship for each forecast scenario and select the base scenario as its parent (the first scenario is the parent scenario and the second scenario is its child)

    image

    We also need to tell SpineOpt what the probability is that we end up in a certain child. That information is stored in the stochastic structure so you'll find the corresponding parameter in the stochastic_structure__stochastic_scenario relationship. Here we assume that each forecast is equally likely to happen.

    • For each DAG | forecast relationship, add a value for the weight_relative_to_parent parameter; the weights of the children need to sum to 1

    image

    That results in the stochastic structure below.

    image
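    The way these weights combine can be sketched as follows: the probability of a stochastic path is the product of the weight_relative_to_parent values of the scenarios along it. This is an illustrative Python sketch (not SpineOpt code), using the scenario names from above:

    ```python
    # Illustrative sketch, not SpineOpt code: the weight of a stochastic path is
    # the product of the weight_relative_to_parent values of the scenarios on it.
    parents = {"forecast1": "base", "forecast2": "base", "forecast3": "base"}
    weight_relative_to_parent = {
        "base": 1.0, "forecast1": 1 / 3, "forecast2": 1 / 3, "forecast3": 1 / 3,
    }

    def path_weight(scenario):
        """Multiply weights from the scenario up to the root of the DAG."""
        w = weight_relative_to_parent[scenario]
        while scenario in parents:
            scenario = parents[scenario]
            w *= weight_relative_to_parent[scenario]
        return w

    for s in ("forecast1", "forecast2", "forecast3"):
        print(s, path_weight(s))  # each forecast path carries weight 1/3
    ```

    With three equally weighted children of base, each base-to-forecast path has probability 1/3, and the path weights sum to 1.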

    We can run the SpineOpt tool on this database but we will only see the values for the base scenario. That is because SpineOpt assumes that a scenario runs forever. So, we need to tell SpineOpt when the base scenario ends.

    • The current resolution of the system is 1D, but we need a higher resolution if we want to switch scenarios. So, set the resolution parameter of the temporal block flat to 1h.
    • To end the base scenario after 6 h, go to the DAG | base relationship and set the parameter stochastic_scenario_end to a 6h duration value (to obtain a duration value, right click the value field and select the parameter type duration).

    Do not forget to save/commit from time to time.

    When we run the model now, we will obtain values for all scenarios.

    Note

    For the sake of completeness, we will also tell you what to do when you want to converge the forecasts into an end scenario.

    • add a scenario called end
    • map the end scenario for the electricity demand to the value 200.0
    • connect the end scenario to the stochastic structure
    • connect the end scenario to each of the forecasts, where the forecasts are considered the parents

    • set the weight of the end scenario to 1
    • let the forecast scenarios end after a duration of 16 hours

    image

    Warning

    The stochastic_scenario_end parameter starts counting from the start of the simulation! In the examples above, when the base scenario has a duration of 6h and the forecast scenarios have a duration of 16h, the forecast scenarios will only be active for 10 hours between hour 6 and hour 16!
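    That timing rule can be sketched as follows: a child scenario is only active from its parent's end time to its own end time, both measured from the simulation start. This is an illustrative Python sketch; the helper and dictionaries are hypothetical, not SpineOpt API:

    ```python
    # Illustrative sketch, not SpineOpt API: stochastic_scenario_end counts from
    # the start of the simulation, so a child scenario is only active from its
    # parent's end time to its own end time.
    scenario_end = {"base": 6, "forecast1": 16}  # hours from simulation start
    parents = {"forecast1": "base"}

    def active_window(scenario):
        start = scenario_end[parents[scenario]] if scenario in parents else 0
        return (start, scenario_end[scenario])

    print(active_window("base"))       # (0, 6)
    print(active_window("forecast1"))  # (6, 16): active for 10 hours, not 16
    ```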

    `Value type: Duration`\
    `Value: 1, 1, 1, 2`

    The array should look like this:

    image

    In the Entity tree window, add the entity:

    `model: simple`\
     `temporal_block: not_flat`

    This tells the model that not_flat is a valid temporal block for the simple model.

    image

    Now you have seen how to define a varying temporal resolution. You could give "not_flat" the model__default_temporal_block relationship to change the entire model to this variable resolution - but instead we're going to assign it to a specific entity to show how you can mix resolutions in the same model.

    Assigning an entity a unique resolution

    In the Entity tree window:

    `node: fuel_node`\
    `temporal_block: not_flat`

    This sets the fuel node's temporal resolution to "not_flat" instead of the default of "flat"

    image

    Running the model & viewing results

    See how the yellow line (fuel demand of Powerplant A) now ends at a value of 50, which is equal to the last two demand values averaged over the 2hr window (70 + 30) / 2 = 50.

    image
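    The averaging behind that number can be sketched as follows. This is an illustrative Python sketch; the hourly demand values are made up except for the last two (70 and 30), which match the tutorial:

    ```python
    # Illustrative sketch: averaging an hourly series into the variable-resolution
    # blocks of the not_flat temporal block. Demand values are made up except the
    # last two (70 and 30), which match the tutorial.
    resolution = [1, 1, 1, 2]              # block durations in hours
    hourly_demand = [40, 55, 60, 70, 30]   # one value per hour

    blocks, i = [], 0
    for dur in resolution:
        window = hourly_demand[i:i + dur]
        blocks.append(sum(window) / dur)   # average over the block
        i += dur

    print(blocks[-1])  # (70 + 30) / 2 = 50.0
    ```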


    Unit commitment constraints tutorial

    This tutorial provides a step-by-step guide to include unit commitment constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding unit commitment constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 24-hour time series instead of a unique value
    • The power_plant_b has new parameters to account for the unit commitment constraints, such as minimum operating point, minimum uptime, and minimum downtime
    • The optimization is done as a mixed-integer program (MIP) to account for the binary nature of the unit commitment decision variables

    This tutorial provides a step-by-step guide to adding these parameters and analyzing the resulting unit commitment behaviour in SpineOpt.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add unit commitment constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 24-hour time series.

    Editing demand value

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below. You can copy and paste the values from the file: ucelectricitynode_demand.csv
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h since our unit commitment case study is for a day-ahead dispatch of 24 hours.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Establishing new output relationships

    Since we will have the new unit commitment variables, we want to see the results of these variables and their total cost in the objective function. So, we will create new relationships to report these results:

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Enter report1 under report, and units_on under output. Repeat the same procedure for the following outputs as seen in the image below; then press OK.
    • This will write the unit commitment variable values and costs in the objective function to the output database as a part of report1.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand first until its maximum capacity, and then the power_plant_b (i.e., the more expensive unit) covers the demand that is left. This is the most economical dispatch since the problem has no extra constraints (so far!).

    image

    To explore the cost results, the pivot table view shows a more user-friendly option to analyze the results. Remember that you can find a description of how to create the pivot table view in the Simple System tutorial here. The cost components in the objective function are shown in the image below. As expected, all the costs are associated with the variable_om_costs since we haven't included the unit-commitment constraints yet.

    image

    Step 2 - Include the minimum operating point

    Let's assume that the power_plant_b has a minimum operating point of 10% of its 200MW capacity, meaning that if the power plant is on, it must produce at least 20MW.

    Adding the minimum operating point

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • In Relationship tree, expand the unit__to_node class and select power_plant_b | electricity_node.
    • In the Relationship parameter table (typically at the bottom-center), select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point of power_plant_b when producing electricity.

    image
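    The minimum operating point couples the unit's flow to its commitment status. A minimal sketch of the resulting bounds (illustrative Python, not SpineOpt code; the 200MW capacity of power_plant_b comes from the Simple System tutorial):

    ```python
    # Illustrative sketch, not SpineOpt code: with a minimum operating point, the
    # flow of the unit is bounded by its commitment status:
    #   min_op * capacity * units_on <= flow <= capacity * units_on
    capacity, minimum_operating_point = 200.0, 0.1

    def flow_is_feasible(flow, units_on):
        return minimum_operating_point * capacity * units_on <= flow <= capacity * units_on

    print(flow_is_feasible(0.0, 0))   # True: unit off, no flow allowed or needed
    print(flow_is_feasible(20.0, 1))  # True: exactly the minimum operating point
    print(flow_is_feasible(10.0, 1))  # False: below 20MW while committed
    ```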

    Adding the unit commitment costs and initial states

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the following parameter as seen in the image below:
      • online_variable_type parameter and the Base alternative, and select the value unit_online_variable_type_binary. This will define that the unit commitment variables will be binary. SpineOpt identifies this situation from the input data and internally changes the model from LP to MIP.
      • shut_down_cost parameter and the Base alternative, and enter the value 7. This will establish that there's a cost of '7' EUR per shutdown.
      • start_up_cost parameter and the Base alternative, and enter the value 5. This will establish that there's a cost of '5' EUR per startup.
      • units_on_cost parameter and the Base alternative, and enter the value 3. This will establish that there's a cost of '3' EUR per units on (e.g., idling cost).
      • initial_units_on parameter and the Base alternative, and enter the value 0. This will establish that there are no units 'on' before the first time step.

    image
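    How these cost parameters enter the objective can be sketched for a given on/off schedule. This is illustrative Python, not SpineOpt internals; the schedule values are made up:

    ```python
    # Illustrative sketch, not SpineOpt internals: how the commitment cost
    # parameters contribute to the objective for a given (made-up) schedule.
    start_up_cost, shut_down_cost, units_on_cost = 5.0, 7.0, 3.0

    units_on = [0, 0, 1, 1, 1, 0]  # hourly on/off schedule; initial_units_on = 0
    starts = [max(units_on[t] - (units_on[t - 1] if t else 0), 0) for t in range(len(units_on))]
    stops = [max((units_on[t - 1] if t else 0) - units_on[t], 0) for t in range(len(units_on))]

    commitment_cost = (start_up_cost * sum(starts)
                       + shut_down_cost * sum(stops)
                       + units_on_cost * sum(units_on))
    print(commitment_cost)  # 5*1 + 7*1 + 3*3 = 21.0
    ```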

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum operating point

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    • Do you notice something different in your solver log? Depending on the solver, the output might change, but you should be able to see that the solver is using MIP to solve the problem. For instance, if you are using the solver HiGHS (i.e., the default solver in SpineOpt), then you will see something like "Solving MIP model with:" and the Branch and Bound (B&B) tree solution. Since this is a tiny problem, sometimes the solver can find the optimal solution from the presolve step, avoiding going into the B&B step.

    Examining the results including the minimum operating point

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Any difference? What happened to the flows in power_plant_a and power_plant_b?

    image

    • Let's take a look at the units_on and units_started_up results in the image below to get a wider perspective.

    image

    • So, since power_plant_b needs to produce at least 20MW when it is 'on', power_plant_a needs to reduce its output even though it has the lower variable cost, making the total system cost (i.e., objective function) more expensive than in the previous run. The image below shows the cost components, where we can see the costs of having the power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to flow changes.

    image

    Step 3 - Include the minimum uptime

    Let's assume that the power_plant_b also has a minimum uptime of 8 hours, meaning that if the power plant starts up, it must stay on for at least eight hours.

    Adding the minimum uptime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_up_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter_type as Duration and enter the value 8h. This will establish that minimum uptime is eight hours.

    image
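    The standard minimum-uptime condition can be sketched as follows: at every hour, the unit must be on if it started up within the last min_up_time hours. This is an illustrative Python check, not SpineOpt code; the variable names mirror the units_on and units_started_up outputs:

    ```python
    # Illustrative sketch of the standard minimum-uptime condition, not SpineOpt
    # code: at every hour t, the unit must be on if it started up within the
    # last min_up_time hours.
    min_up_time = 8

    def respects_min_up_time(units_on, units_started_up):
        return all(
            units_on[t] >= sum(units_started_up[max(0, t - min_up_time + 1):t + 1])
            for t in range(len(units_on))
        )

    units_on         = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # on for 8 hours
    units_started_up = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one start at hour 2
    print(respects_min_up_time(units_on, units_started_up))  # True
    ```

    A schedule that switches off earlier than 8 hours after the start would violate the condition.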

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum uptime

    You know the drill, go ahead :wink:

    Examining the results including the minimum uptime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Interesting. Don't you think?

    image

    • Let's take another look at the units_on and units_started_up in the image below.

    image

    • So, since power_plant_b needs to produce at least 20MW when it is 'on' and must stay 'on' for at least 8h each time it starts, power_plant_b starts even before the demand exceeds the capacity of power_plant_a. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the costs of having the power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to flow changes.

    image

    Step 4 - Include the minimum downtime

    Let's assume that the power_plant_b also has a minimum downtime of 8 hours, meaning that if the power plant shuts down, it must stay off for at least eight hours.

    Adding the minimum downtime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_down_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter_type as Duration and enter the value 8h. This will establish that minimum downtime is eight hours.

    image
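    Minimum downtime is the mirror image of minimum uptime: at every hour, the unit must stay off if it shut down within the last min_down_time hours. An illustrative Python check, not SpineOpt code:

    ```python
    # Illustrative sketch of the symmetric minimum-downtime condition, not
    # SpineOpt code: at every hour t, the unit must stay off if it shut down
    # within the last min_down_time hours.
    min_down_time = 8

    def respects_min_down_time(units_on, units_shut_down):
        return all(
            1 - units_on[t] >= sum(units_shut_down[max(0, t - min_down_time + 1):t + 1])
            for t in range(len(units_on))
        )

    units_on        = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # off for 8 hours
    units_shut_down = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one shutdown at hour 2
    print(respects_min_down_time(units_on, units_shut_down))  # True
    ```

    Restarting earlier than 8 hours after the shutdown would violate the condition, which is why the plant prefers to stay on.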

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum downtime

    One last time, don't give up!

    Examining the results including the minimum downtime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Wow! This result is even more interesting :stuck_out_tongue_winking_eye:. Do you know what happened?

    image

    • Let's take a look again to the units_on and units_started_up in the image below. Instead of two start-ups, the power_plant_b only starts once. Why?

    image

    • Since power_plant_b needs to produce at least 20MW when it is 'on', must stay 'on' for at least 8h each time it starts, and must stay 'off' for at least 8h if it shuts down, power_plant_b never shuts down and stays 'on' after it starts because that is the only way to fulfil the unit commitment constraints. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the costs of having the power_plant_b on, its start-up cost, its shutdown cost (which is zero this time since it never shuts down), and the increase in the variable_om_costs due to flow changes.

    image

    If you have completed this tutorial, congratulations! You have mastered the basic concepts of unit commitment using SpineToolbox and SpineOpt. Keep up the good work!

    +Unit Commitment · SpineOpt.jl

    Unit commitment constraints tutorial

    This tutorial provides a step-by-step guide to include unit commitment constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding unit commitment constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 24-hour time series instead of a unique value
    • The power_plant_b has new parameters to account for the unit commitment constraints, such as minimum operating point, minimum uptime, and minimum downtime
    • The optimization is done a mixed-integer programming (MIP) to account for the binary nature of the unit commitment decision variables

    This tutorial includes a step-by-step guide to include the parameters to help analyze the results in SpineOpt and the unit commitment concepts.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add unit commitment constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 24-hour time series.

    Editing demand value

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below. You can copy and paste the values from the file: ucelectricitynode_demand.csv
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h since our unit commitment case study is for a day-ahead dispatch of 24 hours.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Establishing new output relationships

    Since we will have the new unit commitment variables, we want to see the results of these variables and their total cost in the objective function. So, we will create new relationships to report these results:

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Enter report1 under report, and units_on under output. Repete the same procedure for the following outputs as seen in the image below; then press OK.
    • This will write the unit commitment variable values and costs in the objective function to the output database as a part of report1.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand first until its maximum capacity, and then the power_plant_b (i.e., the more expensive unit) covers the demand that is left. This is the most economical dispatch since the problem has no extra constraints (so far!).

    image

    To explore the cost results, the pivot table view shows a more user-friendly option to analyze the results. Remember that you can find a description of how to create the pivot table view in the Simple System tutorial here. The cost components in the objective function are shown in the image below. As expected, all the costs are associated with the variable_om_costs since we haven't included the unit-commitment constraints yet.

    image

    Step 2 - Include the minimum operating point

    Let's assume that the power_plant_b has a minimum operating point of 10%, meaning that if the power plant is on, it must produce at least 20MW.

    Adding the minium operating point

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • In Relationship tree, expand the unit__to_node class and select power_plant_b | electricity_node.
    • In the Relationship parameter table (typically at the bottom-center), select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point of power_plant_b when producing electricity.

    image

    Adding the unit commitment costs and initial states

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the following parameter as seen in the image below:
      • online_variable_type parameter and the Base alternative, and select the value unit_online_variable_type_binary. This will define that the unit commitment variables will be binary. SpineOpt identifies this situation from the input data and internally changes the model from LP to MIP.
      • shut_down_cost parameter and the Base alternative, and enter the value 7. This will establish that there's a cost of '7' EUR per shutdown.
      • start_up_cost parameter and the Base alternative, and enter the value 5. This will establish that there's a cost of '5' EUR per startup.
      • units_on_cost parameter and the Base alternative, and enter the value 3. This will establish that there's a cost of '3' EUR per units on (e.g., idling cost).
      • initial_units_on parameter and the Base alternative, and enter the value 0. This will establish that there are no units 'on' before the first time step.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum operating point

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    • Do you notice something different in your solver log? Depending on the solver, the output might change, but you should be able to see that the solver is using MIP to solve the problem. For instance, if you are using the solver HiGHS (i.e., the default solver in SpineOpt), then you will see something like "Solving MIP model with:" and the Branch and Bound (B&B) tree solution. Since this is a tiny problem, sometimes the solver can find the optimal solution from the presolve step, avoiding going into the B&B step.

    Examining the results including the minimum operating point

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Do you notice any difference? What happened to the flows in power_plant_a and power_plant_b?

    image

    • Let's take a look at the units_on and units_started_up in the image below to get a wider perspective.

    image

    • So, since power_plant_b must produce at least 20MW when it is 'on', power_plant_a has to reduce its output even though it has the lower variable cost, making the total system cost (i.e., the objective function) more expensive than in the previous run. The image below shows the cost components, where we can see the cost of having power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to the flow changes.

    image
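
The interaction between the minimum operating point and the commitment status can be pictured as a pair of bounds on the flow variable. The snippet below is an illustrative sketch, not SpineOpt's internal formulation; the 100 MW capacity and 0.2 minimum operating point are the values assumed for power_plant_b in this tutorial.

```python
def flow_bounds(units_on, capacity=100.0, minimum_operating_point=0.2):
    """Per-time-step (lower, upper) bounds in MW on the unit's flow,
    as implied by its on/off status."""
    return [(minimum_operating_point * capacity * u, capacity * u)
            for u in units_on]

# Off: flow is fixed at 0 MW. On: flow must lie between 20 and 100 MW.
print(flow_bounds([0, 1]))  # [(0.0, 0.0), (20.0, 100.0)]
```

This is why the cheaper power_plant_a backs off whenever power_plant_b is committed: the 20 MW lower bound must be absorbed somewhere.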

    Step 3 - Include the minimum uptime

    Let's assume that the power_plant_b also has a minimum uptime of 8 hours, meaning that if the power plant starts up, it must stay on for at least eight hours.
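
A simple way to picture this constraint is to check a candidate on/off schedule against the minimum uptime. The helper below is an illustrative Python sketch (the name satisfies_min_up_time is not part of SpineOpt's API):

```python
def satisfies_min_up_time(units_on, min_up_time=8):
    """Check that every contiguous 'on' run in a binary schedule
    lasts at least min_up_time time steps."""
    run = 0
    for u in list(units_on) + [0]:  # trailing 0 closes a run at the end
        if u:
            run += 1
        elif run:
            if run < min_up_time:
                return False
            run = 0
    return True

print(satisfies_min_up_time([0] * 4 + [1] * 8 + [0] * 4))  # True
print(satisfies_min_up_time([0] * 4 + [1] * 5 + [0] * 7))  # False
```

Note that this sketch also rejects an 'on' run cut short by the end of the schedule; how SpineOpt treats the horizon boundary is a separate modelling question not covered here.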

    Adding the minimum uptime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_up_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter_type as Duration and enter the value 8h. This will establish that the minimum uptime is eight hours.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum uptime

    You know the drill, go ahead :wink:

    Examining the results including the minimum uptime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Interesting. Don't you think?

    image

    • Let's take a look again at the units_on and units_started_up in the image below.

    image

    • So, since power_plant_b must produce at least 20MW when it is 'on' and must stay 'on' for at least 8h each time it starts, power_plant_b starts even before the demand exceeds the capacity of power_plant_a. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the cost of having power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to the flow changes.

    image

    Step 4 - Include the minimum downtime

    Let's assume that the power_plant_b also has a minimum downtime of 8 hours, meaning that if the power plant shuts down, it must stay off for at least eight hours.
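
Mirroring the uptime check above, the downtime rule can be sketched as a check on the 'off' gaps between two 'on' periods. Again, this is an illustrative Python helper, not SpineOpt code:

```python
def satisfies_min_down_time(units_on, min_down_time=8):
    """Check that every 'off' period between two 'on' periods
    lasts at least min_down_time time steps."""
    seen_on = False
    off_run = 0
    for u in units_on:
        if u:
            if seen_on and 0 < off_run < min_down_time:
                return False  # unit restarted too soon after a shutdown
            seen_on = True
            off_run = 0
        else:
            off_run += 1
    return True

print(satisfies_min_down_time([1] * 4 + [0] * 8 + [1] * 4))  # True
print(satisfies_min_down_time([1] * 4 + [0] * 3 + [1] * 9))  # False
```

Idle time before the first start-up does not count as a violation in this sketch, since no shutdown has occurred yet.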

    Adding the minimum downtime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_down_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter_type as Duration and enter the value 8h. This will establish that the minimum downtime is eight hours.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum downtime

    One last time, don't give up!

    Examining the results including the minimum downtime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Wow! This result is even more interesting :stuck_out_tongue_winking_eye:. Do you know what happened?

    image

    • Let's take a look again at the units_on and units_started_up in the image below. Instead of two start-ups, power_plant_b only starts once. Why?

    image

    • Since power_plant_b must produce at least 20MW when it is 'on', must stay 'on' for at least 8h each time it starts, and must stay 'off' for at least 8h if it shuts down, power_plant_b never shuts down and stays 'on' after it starts, because that is the only way to fulfil the unit commitment constraints. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the cost of having power_plant_b on, its start-up and shutdown costs (which are zero this time), and the increase in the variable_om_costs due to the flow changes.

    image

    If you have completed this tutorial, congratulations! You have mastered the basic concepts of unit commitment using SpineToolbox and SpineOpt. Keep up the good work!
