From 0512bef8e40fb04798bf2d1074548aecd45e0378 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 27 Mar 2024 08:15:13 +0000 Subject: [PATCH] build based on 8b2d728 --- dev/.documenter-siteinfo.json | 2 +- dev/advanced_concepts/Lossless_DC_power_flow/index.html | 2 +- .../cumulated_flow_restrictions/index.html | 2 +- dev/advanced_concepts/decomposition/index.html | 2 +- dev/advanced_concepts/investment_optimization/index.html | 2 +- dev/advanced_concepts/mga/index.html | 2 +- dev/advanced_concepts/multi_stage/index.html | 2 +- dev/advanced_concepts/powerflow/index.html | 2 +- .../pressure_driven_gas_transfer/index.html | 2 +- dev/advanced_concepts/ramping/index.html | 2 +- .../representative_days_w_seasonal_storage/index.html | 2 +- dev/advanced_concepts/reserves/index.html | 2 +- dev/advanced_concepts/stochastic_framework/index.html | 2 +- dev/advanced_concepts/temporal_framework/index.html | 2 +- dev/advanced_concepts/unit_commitment/index.html | 2 +- dev/advanced_concepts/user_constraints/index.html | 2 +- dev/concept_reference/Object Classes/index.html | 2 +- dev/concept_reference/Parameter Value Lists/index.html | 2 +- dev/concept_reference/Parameters/index.html | 2 +- dev/concept_reference/Relationship Classes/index.html | 2 +- dev/concept_reference/_example/index.html | 2 +- dev/concept_reference/balance_type/index.html | 2 +- dev/concept_reference/balance_type_list/index.html | 2 +- dev/concept_reference/big_m/index.html | 2 +- dev/concept_reference/block_end/index.html | 2 +- dev/concept_reference/block_start/index.html | 2 +- dev/concept_reference/boolean_value_list/index.html | 2 +- dev/concept_reference/candidate_connections/index.html | 2 +- dev/concept_reference/candidate_storages/index.html | 2 +- dev/concept_reference/candidate_units/index.html | 2 +- dev/concept_reference/commodity/index.html | 2 +- dev/concept_reference/commodity_lodf_tolerance/index.html | 2 +- dev/concept_reference/commodity_physics/index.html | 2 +- 
.../commodity_physics_duration/index.html | 2 +- dev/concept_reference/commodity_physics_list/index.html | 2 +- dev/concept_reference/commodity_ptdf_threshold/index.html | 2 +- dev/concept_reference/compression_factor/index.html | 2 +- dev/concept_reference/connection/index.html | 2 +- dev/concept_reference/connection__from_node/index.html | 2 +- .../connection__from_node__unit_constraint/index.html | 2 +- .../index.html | 2 +- .../connection__investment_temporal_block/index.html | 2 +- dev/concept_reference/connection__node__node/index.html | 2 +- dev/concept_reference/connection__to_node/index.html | 2 +- .../connection__to_node__unit_constraint/index.html | 2 +- .../connection_availability_factor/index.html | 2 +- dev/concept_reference/connection_capacity/index.html | 2 +- dev/concept_reference/connection_contingency/index.html | 2 +- .../connection_conv_cap_to_flow/index.html | 2 +- .../connection_emergency_capacity/index.html | 2 +- .../connection_flow_coefficient/index.html | 2 +- dev/concept_reference/connection_flow_cost/index.html | 2 +- dev/concept_reference/connection_flow_delay/index.html | 2 +- .../connection_investment_cost/index.html | 2 +- .../connection_investment_lifetime/index.html | 2 +- .../connection_investment_variable_type/index.html | 2 +- .../connection_investment_variable_type_list/index.html | 2 +- .../connection_linepack_constant/index.html | 2 +- dev/concept_reference/connection_monitored/index.html | 2 +- dev/concept_reference/connection_reactance/index.html | 2 +- .../connection_reactance_base/index.html | 2 +- dev/concept_reference/connection_resistance/index.html | 2 +- dev/concept_reference/connection_type/index.html | 2 +- dev/concept_reference/connection_type_list/index.html | 2 +- .../connections_invested_avaiable_coefficient/index.html | 2 +- .../connections_invested_big_m_mga/index.html | 2 +- .../connections_invested_coefficient/index.html | 2 +- dev/concept_reference/connections_invested_mga/index.html | 2 +- 
dev/concept_reference/constraint_sense/index.html | 2 +- dev/concept_reference/constraint_sense_list/index.html | 2 +- dev/concept_reference/curtailment_cost/index.html | 2 +- dev/concept_reference/cyclic_condition/index.html | 2 +- dev/concept_reference/db_lp_solver/index.html | 2 +- dev/concept_reference/db_lp_solver_list/index.html | 2 +- dev/concept_reference/db_lp_solver_options/index.html | 2 +- dev/concept_reference/db_mip_solver/index.html | 2 +- dev/concept_reference/db_mip_solver_list/index.html | 2 +- dev/concept_reference/db_mip_solver_options/index.html | 2 +- dev/concept_reference/demand/index.html | 2 +- dev/concept_reference/demand_coefficient/index.html | 2 +- dev/concept_reference/diff_coeff/index.html | 2 +- dev/concept_reference/downward_reserve/index.html | 2 +- dev/concept_reference/duration_unit/index.html | 2 +- dev/concept_reference/duration_unit_list/index.html | 2 +- .../fix_binary_gas_connection_flow/index.html | 2 +- dev/concept_reference/fix_connection_flow/index.html | 2 +- .../fix_connection_intact_flow/index.html | 2 +- dev/concept_reference/fix_connections_invested/index.html | 2 +- .../fix_connections_invested_available/index.html | 2 +- dev/concept_reference/fix_node_pressure/index.html | 2 +- dev/concept_reference/fix_node_state/index.html | 2 +- dev/concept_reference/fix_node_voltage_angle/index.html | 2 +- .../fix_nonspin_units_shut_down/index.html | 2 +- .../fix_nonspin_units_started_up/index.html | 2 +- .../fix_ratio_in_in_unit_flow/index.html | 2 +- .../fix_ratio_in_out_unit_flow/index.html | 2 +- .../fix_ratio_out_in_connection_flow/index.html | 2 +- .../fix_ratio_out_in_unit_flow/index.html | 2 +- .../fix_ratio_out_out_unit_flow/index.html | 2 +- dev/concept_reference/fix_storages_invested/index.html | 2 +- .../fix_storages_invested_available/index.html | 2 +- dev/concept_reference/fix_unit_flow/index.html | 2 +- dev/concept_reference/fix_unit_flow_op/index.html | 2 +- dev/concept_reference/fix_units_invested/index.html | 
2 +- .../fix_units_invested_available/index.html | 2 +- dev/concept_reference/fix_units_on/index.html | 2 +- .../fix_units_on_coefficient_in_in/index.html | 2 +- .../fix_units_on_coefficient_in_out/index.html | 2 +- .../fix_units_on_coefficient_out_in/index.html | 2 +- .../fix_units_on_coefficient_out_out/index.html | 2 +- .../fixed_pressure_constant_0/index.html | 2 +- .../fixed_pressure_constant_1/index.html | 2 +- dev/concept_reference/fom_cost/index.html | 2 +- dev/concept_reference/frac_state_loss/index.html | 2 +- dev/concept_reference/fractional_demand/index.html | 2 +- dev/concept_reference/fuel_cost/index.html | 2 +- dev/concept_reference/graph_view_position/index.html | 2 +- dev/concept_reference/has_binary_gas_flow/index.html | 2 +- dev/concept_reference/has_pressure/index.html | 2 +- dev/concept_reference/has_state/index.html | 2 +- dev/concept_reference/has_voltage_angle/index.html | 2 +- dev/concept_reference/investment_group/index.html | 2 +- dev/concept_reference/is_active/index.html | 2 +- dev/concept_reference/is_non_spinning/index.html | 2 +- dev/concept_reference/is_renewable/index.html | 2 +- dev/concept_reference/is_reserve_node/index.html | 2 +- .../max_cum_in_unit_flow_bound/index.html | 2 +- dev/concept_reference/max_gap/index.html | 2 +- dev/concept_reference/max_iterations/index.html | 2 +- dev/concept_reference/max_mga_iterations/index.html | 2 +- dev/concept_reference/max_mga_slack/index.html | 2 +- dev/concept_reference/max_node_pressure/index.html | 2 +- .../max_ratio_in_in_unit_flow/index.html | 2 +- .../max_ratio_in_out_unit_flow/index.html | 2 +- .../max_ratio_out_in_connection_flow/index.html | 2 +- .../max_ratio_out_in_unit_flow/index.html | 2 +- .../max_ratio_out_out_unit_flow/index.html | 2 +- .../max_total_cumulated_unit_flow_from_node/index.html | 2 +- .../max_total_cumulated_unit_flow_to_node/index.html | 2 +- .../max_units_on_coefficient_in_in/index.html | 2 +- .../max_units_on_coefficient_in_out/index.html | 2 +- 
.../max_units_on_coefficient_out_in/index.html | 2 +- .../max_units_on_coefficient_out_out/index.html | 2 +- dev/concept_reference/max_voltage_angle/index.html | 2 +- dev/concept_reference/mga_diff_relative/index.html | 2 +- dev/concept_reference/min_capacity_margin/index.html | 2 +- .../min_capacity_margin_penalty/index.html | 2 +- dev/concept_reference/min_down_time/index.html | 2 +- dev/concept_reference/min_node_pressure/index.html | 2 +- .../min_ratio_in_in_unit_flow/index.html | 2 +- .../min_ratio_in_out_unit_flow/index.html | 2 +- .../min_ratio_out_in_connection_flow/index.html | 2 +- .../min_ratio_out_in_unit_flow/index.html | 2 +- .../min_ratio_out_out_unit_flow/index.html | 2 +- .../min_scheduled_outage_duration/index.html | 2 +- .../min_total_cumulated_unit_flow_from_node/index.html | 2 +- .../min_total_cumulated_unit_flow_to_node/index.html | 2 +- .../min_units_on_coefficient_in_in/index.html | 2 +- .../min_units_on_coefficient_in_out/index.html | 2 +- .../min_units_on_coefficient_out_in/index.html | 2 +- .../min_units_on_coefficient_out_out/index.html | 2 +- dev/concept_reference/min_up_time/index.html | 2 +- dev/concept_reference/min_voltage_angle/index.html | 2 +- dev/concept_reference/minimum_operating_point/index.html | 2 +- .../minimum_reserve_activation_time/index.html | 2 +- dev/concept_reference/model/index.html | 2 +- .../index.html | 2 +- .../model__default_investment_temporal_block/index.html | 2 +- .../model__default_stochastic_structure/index.html | 2 +- .../model__default_temporal_block/index.html | 2 +- dev/concept_reference/model__report/index.html | 2 +- .../model__stochastic_structure/index.html | 2 +- dev/concept_reference/model__temporal_block/index.html | 2 +- dev/concept_reference/model_end/index.html | 2 +- dev/concept_reference/model_start/index.html | 2 +- dev/concept_reference/model_type/index.html | 2 +- dev/concept_reference/model_type_list/index.html | 2 +- .../mp_min_res_gen_to_demand_ratio/index.html | 2 +- .../index.html 
| 2 +- dev/concept_reference/nodal_balance_sense/index.html | 2 +- dev/concept_reference/node/index.html | 2 +- dev/concept_reference/node__commodity/index.html | 2 +- .../node__investment_stochastic_structure/index.html | 2 +- .../node__investment_temporal_block/index.html | 2 +- dev/concept_reference/node__node/index.html | 2 +- .../node__stochastic_structure/index.html | 2 +- dev/concept_reference/node__temporal_block/index.html | 2 +- dev/concept_reference/node__unit_constraint/index.html | 2 +- dev/concept_reference/node_opf_type/index.html | 2 +- dev/concept_reference/node_opf_type_list/index.html | 2 +- dev/concept_reference/node_slack_penalty/index.html | 2 +- dev/concept_reference/node_state_cap/index.html | 2 +- dev/concept_reference/node_state_coefficient/index.html | 2 +- dev/concept_reference/node_state_min/index.html | 2 +- dev/concept_reference/number_of_units/index.html | 2 +- dev/concept_reference/online_variable_type/index.html | 2 +- dev/concept_reference/operating_cost/index.html | 2 +- dev/concept_reference/operating_points/index.html | 2 +- dev/concept_reference/ordered_unit_flow_op/index.html | 2 +- dev/concept_reference/outage_variable_type/index.html | 2 +- dev/concept_reference/output/index.html | 2 +- dev/concept_reference/output_db_url/index.html | 2 +- dev/concept_reference/output_resolution/index.html | 2 +- .../overwrite_results_on_rolling/index.html | 2 +- .../index.html | 2 +- dev/concept_reference/ramp_down_limit/index.html | 2 +- dev/concept_reference/ramp_up_limit/index.html | 2 +- dev/concept_reference/report/index.html | 2 +- dev/concept_reference/report__output/index.html | 2 +- .../representative_periods_mapping/index.html | 2 +- dev/concept_reference/reserve_procurement_cost/index.html | 2 +- dev/concept_reference/resolution/index.html | 2 +- dev/concept_reference/right_hand_side/index.html | 2 +- dev/concept_reference/roll_forward/index.html | 2 +- dev/concept_reference/shut_down_cost/index.html | 2 +- 
dev/concept_reference/shut_down_limit/index.html | 2 +- dev/concept_reference/start_up_cost/index.html | 2 +- dev/concept_reference/start_up_limit/index.html | 2 +- dev/concept_reference/state_coeff/index.html | 2 +- dev/concept_reference/stochastic_scenario/index.html | 2 +- dev/concept_reference/stochastic_scenario_end/index.html | 2 +- dev/concept_reference/stochastic_structure/index.html | 2 +- .../stochastic_structure__stochastic_scenario/index.html | 2 +- dev/concept_reference/storage_investment_cost/index.html | 2 +- .../storage_investment_lifetime/index.html | 2 +- .../storage_investment_variable_type/index.html | 2 +- .../storages_invested_avaiable_coefficient/index.html | 2 +- .../storages_invested_big_m_mga/index.html | 2 +- .../storages_invested_coefficient/index.html | 2 +- dev/concept_reference/storages_invested_mga/index.html | 2 +- dev/concept_reference/tax_in_unit_flow/index.html | 2 +- dev/concept_reference/tax_net_unit_flow/index.html | 2 +- dev/concept_reference/tax_out_unit_flow/index.html | 2 +- dev/concept_reference/temporal_block/index.html | 2 +- dev/concept_reference/the_basics/index.html | 2 +- dev/concept_reference/unit/index.html | 2 +- dev/concept_reference/unit__commodity/index.html | 2 +- dev/concept_reference/unit__from_node/index.html | 2 +- .../unit__from_node__unit_constraint/index.html | 2 +- .../unit__investment_stochastic_structure/index.html | 2 +- .../unit__investment_temporal_block/index.html | 2 +- dev/concept_reference/unit__node__node/index.html | 2 +- dev/concept_reference/unit__to_node/index.html | 2 +- .../unit__to_node__unit_constraint/index.html | 2 +- dev/concept_reference/unit__unit_constraint/index.html | 2 +- dev/concept_reference/unit_availability_factor/index.html | 2 +- dev/concept_reference/unit_capacity/index.html | 2 +- dev/concept_reference/unit_conv_cap_to_flow/index.html | 2 +- dev/concept_reference/unit_flow_coefficient/index.html | 2 +- dev/concept_reference/unit_idle_heat_rate/index.html | 2 +- 
.../unit_incremental_heat_rate/index.html | 2 +- dev/concept_reference/unit_investment_cost/index.html | 2 +- dev/concept_reference/unit_investment_lifetime/index.html | 2 +- .../unit_investment_variable_type/index.html | 2 +- .../unit_investment_variable_type_list/index.html | 2 +- .../unit_online_variable_type_list/index.html | 2 +- dev/concept_reference/unit_start_flow/index.html | 2 +- .../units_invested_avaiable_coefficient/index.html | 2 +- dev/concept_reference/units_invested_big_m_mga/index.html | 2 +- .../units_invested_coefficient/index.html | 2 +- dev/concept_reference/units_invested_mga/index.html | 2 +- .../units_on__stochastic_structure/index.html | 2 +- dev/concept_reference/units_on__temporal_block/index.html | 2 +- dev/concept_reference/units_on_coefficient/index.html | 2 +- dev/concept_reference/units_on_cost/index.html | 2 +- .../units_on_non_anticipativity_time/index.html | 2 +- .../units_started_up_coefficient/index.html | 2 +- dev/concept_reference/units_unavailable/index.html | 2 +- dev/concept_reference/upward_reserve/index.html | 2 +- dev/concept_reference/user_constraint/index.html | 2 +- dev/concept_reference/variable_type_list/index.html | 2 +- dev/concept_reference/vom_cost/index.html | 2 +- dev/concept_reference/weight/index.html | 2 +- .../weight_relative_to_parents/index.html | 2 +- dev/concept_reference/window_weight/index.html | 2 +- dev/concept_reference/write_lodf_file/index.html | 2 +- dev/concept_reference/write_mps_file/index.html | 2 +- dev/concept_reference/write_mps_file_list/index.html | 2 +- dev/concept_reference/write_ptdf_file/index.html | 2 +- dev/getting_started/archetypes/index.html | 2 +- dev/getting_started/creating_your_own_model/index.html | 2 +- dev/getting_started/installation/index.html | 2 +- dev/getting_started/output_data/index.html | 2 +- dev/getting_started/setup_workflow/index.html | 2 +- dev/how_to/change_the_solver/index.html | 2 +- dev/how_to/define_an_efficiency/index.html | 2 +- 
dev/how_to/print_the_model/index.html | 2 +- dev/implementation_details/documentation/index.html | 2 +- .../how_does_the_model_update_itself/index.html | 2 +- .../how_to_write_a_constraint/index.html | 2 +- dev/implementation_details/time_slices/index.html | 2 +- dev/index.html | 2 +- dev/library/index.html | 8 ++++---- dev/mathematical_formulation/constraints/index.html | 2 +- .../constraints_automatically_generated/index.html | 2 +- .../objective_function/index.html | 2 +- dev/mathematical_formulation/sets/index.html | 2 +- dev/mathematical_formulation/variables/index.html | 2 +- dev/tutorial/case_study_a5/index.html | 2 +- dev/tutorial/ramping/index.html | 2 +- dev/tutorial/reserves/index.html | 2 +- dev/tutorial/simple_system/index.html | 2 +- dev/tutorial/temporal_resolution/index.html | 2 +- dev/tutorial/tutorialTwoHydro/index.html | 2 +- dev/tutorial/unit_commitment/index.html | 2 +- dev/tutorial/webinars/index.html | 2 +- 306 files changed, 309 insertions(+), 309 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index e4331fe4e7..6d508b101b 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-27T07:25:47","documenter_version":"1.3.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-27T08:15:02","documenter_version":"1.3.0"}} \ No newline at end of file diff --git a/dev/advanced_concepts/Lossless_DC_power_flow/index.html b/dev/advanced_concepts/Lossless_DC_power_flow/index.html index e258f3cc4e..29810662e1 100644 --- a/dev/advanced_concepts/Lossless_DC_power_flow/index.html +++ b/dev/advanced_concepts/Lossless_DC_power_flow/index.html @@ -1,2 +1,2 @@ -Lossless nodal DC power flows · SpineOpt.jl

Lossless nodal DC power flows

Currently, there are two different methods to represent lossless DC power flows. In the following, the implementation of the nodal model is presented, based on node voltage angles.

Key concepts

The following describes how to set up a connection to represent a nodal lossless DC power flow network, introducing the key object classes, relationship classes, and parameters.

  1. connection: A connection represents the electricity line being modelled. A physical property of a connection is its connection_reactance, which is defined on the connection object. Furthermore, if the reactance is given on a p.u. base different from the standard one used (e.g. p.u. = 100 MVA), the parameter connection_reactance_base can be used to perform this conversion.
  2. node: In a lossless DC power flow model, nodes correspond to buses. To use voltage angles for the representation of a lossless DC model, the has_voltage_angle parameter needs to be true for these nodes (which will trigger the generation of the node_voltage_angle variable). Limits on the voltage angle can be enforced through the max_voltage_angle and min_voltage_angle parameters. The reference node of the system should have a voltage angle equal to zero, assigned through the parameter fix_node_voltage_angle.
  3. connection__to_node and connection__from_node : These relationships need to be introduced between the connection and each node, in order to allow power flows (i.e. connection_flow). Furthermore, a capacity limit on the connection line can be introduced on these relationships through the parameter connection_capacity.
  4. connection__node__node: To ensure energy conservation across the power line, a fixed ratio between incoming and outgoing flows should be given. The fix_ratio_out_in_connection_flow parameter enforces a fixed ratio between outgoing flows (i.e. to_node) and incoming flows (i.e. from_node). This parameter should be defined for both flow directions.

The mathematical formulation of the lossless DC power flow model using voltage angles is fully described here.
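The angle-based flow relation described above can be illustrated with a small standalone sketch. This is plain Python rather than SpineOpt code; the helper name and the exact placement of the reactance base are assumptions for illustration, and the sign convention in SpineOpt's formulation may differ.

```python
# Illustrative sketch of lossless DC power flow: the flow on a connection is
# proportional to the voltage-angle difference across it and inversely
# proportional to its reactance. This mirrors the roles of connection_reactance
# and connection_reactance_base described above (hypothetical helper, not
# SpineOpt API).

def dc_flow(theta_from, theta_to, reactance, reactance_base=1.0):
    """Lossless DC flow over a line: angle difference over p.u. reactance."""
    return (theta_from - theta_to) / (reactance * reactance_base)

# Two-bus example: the reference bus has its voltage angle fixed to zero
# (cf. fix_node_voltage_angle); the other bus sits at a small positive angle.
flow = dc_flow(theta_from=0.05, theta_to=0.0, reactance=0.25)
print(flow)  # 0.2
```

Because the line is lossless, the flow leaving one node equals the flow arriving at the other, which is exactly what the fix_ratio_out_in_connection_flow ratio of 1 enforces in the model.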

diff --git a/dev/advanced_concepts/cumulated_flow_restrictions/index.html b/dev/advanced_concepts/cumulated_flow_restrictions/index.html index 562ab3f816..247cfa33bf 100644 --- a/dev/advanced_concepts/cumulated_flow_restrictions/index.html +++ b/dev/advanced_concepts/cumulated_flow_restrictions/index.html @@ -1,2 +1,2 @@ -Imposing renewable energy targets · SpineOpt.jl

Imposing renewable energy targets

This advanced concept illustrates how renewable targets can be realized in SpineOpt.

Imposing lower limits on renewable production

Imposing a lower bound on the cumulated flow of a unit group by an absolute value

In the current landscape of energy systems modeling, especially in investment models, it is a common idea to implement a lower limit on the amount of electricity that is generated by renewable sources. SpineOpt allows the user to implement such restrictions by means of the min_total_cumulated_unit_flow_to_node parameter, which triggers the creation of the constraint_total_cumulated_unit_flow.

To impose a limit on overall renewable generation over the entire optimization horizon, the following objects, relationships, and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. electricity generation from wind), where one or multiple unit_flows are associated with renewable generation.
  2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing electricity demand. (Note: to distinguish e.g. between regions, there can also be more than one electricity node.)
  3. unit__to_node: To associate electricity flows with a unit, the relationship between the unit and the electricity node needs to be imposed, to trigger the generation of an electricity-unit_flow variable.
  4. min_total_cumulated_unit_flow_to_node: This parameter triggers a lower bound on all cumulated flows from a unit (or a group of units), e.g. the group of all renewable generators, to a node (or node group).

Let's take a look at a simple example to see how this works. Suppose that we have a system with only one node, which represents the demand for electricity, and two units: a wind farm, and a conventional gas unit. To connect the wind farm to the electricity node, the unit__to_node relationship has to be defined.

One can then simply define the min_total_cumulated_unit_flow_to_node parameter for the 'wind_farm__to_electricity_node' relationship to impose a lower bound on the total generation originating from the wind farm.

Note that the value of this parameter is expected to be given as an absolute value, thus care has to be taken to make sure that the units match with the ones used for the unit_flow variable.

The main source of flexibility in the use of this constraint lies in the possibility to define the parameter for relationships that link node groups and/or unit groups. For example, by grouping multiple units that are considered renewable sources (e.g. PV and wind), targets can be implemented across multiple renewable sources. Similarly, by defining multiple electricity nodes, generation targets can be spatially disaggregated.
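Numerically, the grouped constraint just compares the cumulated flow of all group members against the absolute bound. A minimal sketch, with hypothetical unit and node names (plain Python, not SpineOpt):

```python
# Cumulated flow from a hypothetical unit group of renewables to the
# electricity node, checked against min_total_cumulated_unit_flow_to_node.
flows = {
    ("wind_farm", "electricity_node"): [30.0, 45.0, 25.0],  # per time step
    ("pv_plant", "electricity_node"): [10.0, 20.0, 5.0],
}
renewables = {"wind_farm", "pv_plant"}  # hypothetical unit group
min_total = 120.0  # absolute bound, same units as the unit_flow variable

total = sum(
    sum(series) for (unit, node), series in flows.items() if unit in renewables
)
print(total, total >= min_total)  # 135.0 True
```

Grouping more units simply adds more terms to the left-hand sum; the bound itself stays a single absolute number.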

Limiting the cumulated flow of a unit group by a share of the demand

For convenience, we want to be able to define the min_total_cumulated_unit_flow_to_node, when used to set a renewable target, as a share of the demand. At the moment an absolute lower bound needs to be provided by the user, but we want to automate this preprocessing in SpineOpt. (to be implemented)
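Until that preprocessing exists in SpineOpt, the conversion is a one-line calculation the user can do themselves. A minimal sketch, assuming a demand time series and a desired renewable share (illustrative values, not SpineOpt code):

```python
# Convert a renewable share of demand into the absolute bound expected by
# min_total_cumulated_unit_flow_to_node (manual preprocessing; the automated
# version described above is not yet implemented in SpineOpt).
demand = [100.0, 120.0, 80.0]  # demand per time step, hypothetical values
renewable_share = 0.5          # e.g. 50 % of total demand from renewables

min_total_cumulated_unit_flow_to_node = renewable_share * sum(demand)
print(min_total_cumulated_unit_flow_to_node)  # 150.0
```

Care must again be taken that the demand series and the unit_flow variable use the same units, since the result is an absolute value.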

Imposing an upper limit on carbon emissions

Imposing an upper limit on carbon emissions over the entire optimization horizon

To impose a limit on overall carbon emissions over the entire optimization horizon, the following objects, relationships and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
  2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
  3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be imposed, to trigger the generation of a carbon-unit_flow variable.
  4. unit__node__node and fix_ratio_out_out: Ratio between two output unit flows; e.g. how carbon intensive an electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see also unit__node__node and associated parameters.)
  5. max_total_cumulated_unit_flow_to_node (and unit__to_node): This parameter triggers a limit on all flows from a unit (or a group of units), e.g. the group of all conventional generators, to a node (or node group), e.g. considering the atmosphere as a fictive CO2 node, over the entire modelling horizon (e.g. a carbon budget). For example, this could be defined on a relationship between a gasplant and a Carbon node, but it can also be defined on a unit group of all conventional generators and a carbon node. See also: constraint_total_cumulated_unit_flow.
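Putting the pieces above together: the fix_ratio_out_out intensity turns each electricity flow into a carbon flow, and the budget caps the cumulated carbon. A minimal sketch with illustrative numbers (plain Python, not SpineOpt):

```python
# Carbon-budget logic: electricity output times the fix_ratio_out_out carbon
# intensity gives the carbon unit_flow, whose cumulated sum is capped by
# max_total_cumulated_unit_flow_to_node (names and values hypothetical).
electricity_flow = [40.0, 60.0, 50.0]  # unit_flow to the electricity node
fix_ratio_out_out = 0.5                # carbon per unit of electricity
max_total = 100.0                      # carbon budget over the whole horizon

carbon_flow = [fix_ratio_out_out * f for f in electricity_flow]
cumulated = sum(carbon_flow)
print(cumulated, cumulated <= max_total)  # 75.0 True
```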

Imposing an upper bound on the cumulated flows of a unit group for a specific period of time (advanced method)

If the desired functionality is not to cap emissions over the entire modelling horizon, but rather for specific periods of time (e.g., to impose decreasing carbon caps over time), an alternative method can be used, which will be described in the following.

To illustrate this functionality, we will assume that there is a fictitious cap of 100 for the period 2025-2030, and a cap of 50 for the period 2030-2035. In this simple example, we will assume that one carbon-emitting unit, carbon_unit, is present with two outgoing commodity flows, here electricity and carbon.

Three nodes are required to represent this system: an electricity node, a carbon_cap_1 node (with has_state=true and node_state_cap=100), and a carbon_cap_2 node (with has_state=true and node_state_cap=50).

Further, we introduce the unit__node__node relationships carbon_unit__carbon_cap_1__electricity and carbon_unit__carbon_cap_2__electricity. On these relationships, we will define the ratio between emissions and electricity production. In this fictitious example, we will assume 0.5 units of emissions per unit of electricity.

The fix_ratio_out_out parameter will now be defined as a time varying parameter in the following way (simplified representation of TimeSeries parameter):

fix_ratio_out_out(carbon_unit__carbon_cap_1__electricity) = [2025: 0.5; 2030: 0]
fix_ratio_out_out(carbon_unit__carbon_cap_2__electricity) = [2025: 0; 2030: 0.5]

This way, the first emission-cap node carbon_cap_1 can only be "filled" during 2025-2030, while carbon_cap_2 can only be "filled" during the second period, 2030-2035.

Note that it would also be possible to have, e.g., one node with a time-varying node_state_cap. However, in this case, "unused" carbon emissions from the first period would be available for the second period.
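The period-specific caps above can be sketched numerically: the time-varying ratio routes emissions into whichever cap node is active in that period, and each node's state stays below its node_state_cap. This is an illustrative Python sketch of the accounting, not SpineOpt code; values are the fictitious ones used above.

```python
# Time-varying fix_ratio_out_out: emissions go to carbon_cap_1 only during
# 2025-2030 and to carbon_cap_2 only during 2030-2035.
ratio_cap_1 = {2025: 0.5, 2030: 0.0}  # active in the first period only
ratio_cap_2 = {2025: 0.0, 2030: 0.5}  # active in the second period only

def ratio_at(ratio_ts, year):
    """Step-wise lookup: latest ratio defined at or before the given year."""
    return ratio_ts[max(y for y in ratio_ts if y <= year)]

electricity = {2026: 120.0, 2028: 60.0, 2031: 80.0}  # hypothetical output
state_1 = sum(ratio_at(ratio_cap_1, y) * f for y, f in electricity.items())
state_2 = sum(ratio_at(ratio_cap_2, y) * f for y, f in electricity.items())
print(state_1, state_2)  # 90.0 40.0
```

Here state_1 = 90 respects node_state_cap = 100 for the first period, and state_2 = 40 respects node_state_cap = 50 for the second, with no carry-over between periods.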

Imposing a carbon tax

To include carbon pricing in a model, the following objects, relationships and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
  2. node and tax_in_unit_flow: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. To associate a carbon tax with all incoming unit_flows, the tax_in_unit_flow parameter can be defined on this node. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
  3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be imposed, to trigger the generation of a carbon-unit_flow variable.
  4. unit__node__node and fix_ratio_out_out: Ratio between two output unit flows; e.g. how carbon intensive an electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (Gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see also unit__node__node and associated parameters.)
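The resulting cost term is simply the tax rate times every unit_flow into the taxed node. A minimal sketch with illustrative numbers (plain Python, not SpineOpt; the objective-function term in SpineOpt also sums over scenarios and time slice durations):

```python
# tax_in_unit_flow prices every incoming unit_flow at the carbon node
# (hypothetical values for illustration).
carbon_flow = [20.0, 30.0, 25.0]  # unit_flow into the carbon node, per step
tax_in_unit_flow = 40.0           # cost per unit of carbon

tax_cost = tax_in_unit_flow * sum(carbon_flow)
print(tax_cost)  # 3000.0
```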
+Imposing renewable energy targets · SpineOpt.jl

Imposing renewable energy targets

This advanced concept illustrates how renewable targets can be realized in SpineOpt.

Imposing lower limits on renewable production

Imposing a lower bound on the cumulated flow of a unit group by an absolute value

In the current landscape of energy systems modeling, especially in investment models, it is a common idea to implement a lower limit on the amount of electricity that is generated by renewable sources. SpineOpt allows the user to implement such restrictions by means of the min_total_cumulated_unit_flow_to_node parameter. Which will trigger the creation of the constraint_total_cumulated_unit_flow.

To impose a limit on overall renewable generation over the entire optimization horizon, the following objects, relationships, and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. electricity generation from wind), where one or multiple unit_flows are associated with renewable generation.
  2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing electricity demand. (Note: to distinguish e.g. between regions, there can also be more than one electricity node.)
  3. unit__to_node: To associate electricity flows with a unit, the relationship between the unit and the electricity node needs to be defined, which triggers the generation of an electricity unit_flow variable.
  4. min_total_cumulated_unit_flow_to_node: This parameter triggers a lower bound on all cumulated flows from a unit (or a group of units), e.g. the group of all renewable generators, to a node (or node group).

Let's take a look at a simple example to see how this works. Suppose that we have a system with only one node, which represents the demand for electricity, and two units: a wind farm, and a conventional gas unit. To connect the wind farm to the electricity node, the unit__to_node relationship has to be defined.

One can then simply define the min_total_cumulated_unit_flow_to_node parameter for this relationship between the wind farm and the electricity node to impose a lower bound on the total generation originating from the wind farm.

Note that the value of this parameter is expected to be given as an absolute value, thus care has to be taken to make sure that the units match with the ones used for the unit_flow variable.

The main source of flexibility in the use of this constraint lies in the possibility to define the parameter for relationships that link node groups and/or unit groups. For example, by grouping multiple units that are considered renewable sources (e.g. PV and wind), targets can be implemented across multiple renewable sources. Similarly, by defining multiple electricity nodes, generation targets can be spatially disaggregated.
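The logic of the resulting constraint can be illustrated with toy numbers (a sketch only; all values and names below are invented for illustration, not SpineOpt code):

```python
# Toy check of the lower-bound logic behind constraint_total_cumulated_unit_flow:
# the flow from the unit (group) to the node (group), cumulated over the whole
# optimization horizon, must reach the parameter value.
hourly_flow_mwh = [30, 0, 45, 60, 25]   # invented wind output per hour
min_total_cumulated_mwh = 150           # hypothetical parameter value

total = sum(hourly_flow_mwh)            # cumulated flow over the horizon
assert total >= min_total_cumulated_mwh  # the renewable target is met
```

Note the units: the parameter is an absolute value, so here both sides are in MWh, matching the units assumed for the unit_flow variable.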

Limiting the cumulated flow of a unit group by a share of the demand

For convenience, we would like the min_total_cumulated_unit_flow_to_node parameter, when used to set a renewable target, to be definable as a share of the demand. At the moment an absolute lower bound needs to be provided by the user; automating this preprocessing in SpineOpt is planned. (to be implemented)

Imposing an upper limit on carbon emissions

Imposing an upper limit on carbon emissions over the entire optimization horizon

To impose a limit on overall carbon emissions over the entire optimization horizon, the following objects, relationships and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
  2. node: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
  3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be defined, which triggers the generation of a carbon unit_flow variable.
  4. unit__node__node and fix_ratio_out_out: Ratio between two output unit flows, e.g. how carbon intensive the electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see unit__node__node and associated parameters.)
  5. max_total_cumulated_unit_flow_to_node (and unit__to_node): This parameter triggers an upper limit on all cumulated flows from a unit (or a group of units), e.g. the group of all conventional generators, to a node (or node group), e.g. treating the atmosphere as a fictive CO2 node, over the entire modelling horizon (i.e. a carbon budget). For example, the parameter could be defined on a relationship between a gas plant and a carbon node, but it can also be defined on a unit group of all conventional generators and a carbon node. See also: constraint_total_cumulated_unit_flow.

Imposing an upper bound on the cumulated flows of a unit group for a specific period of time (advanced method)

If the desired functionality is not to cap emissions over the entire modelling horizon, but rather for specific periods of time (e.g., to impose decreasing carbon caps over time), an alternative method can be used, which will be described in the following.

To illustrate this functionality, we will assume a fictitious cap of 100 for the period 2025-2030, and a cap of 50 for the period 2030-2035. In this simple example, we will assume that one carbon-emitting unit carbon_unit is present, with two outgoing commodity flows, here electricity and carbon.

Three nodes are required to represent this system: an electricity node, a carbon_cap_1 node (with has_state=true and node_state_cap=100), and a carbon_cap_2 node (with has_state=true and node_state_cap=50).

Further, we introduce the unit__node__node relationships carbon_unit__carbon_cap1__electricity and carbon_unit__carbon_cap2__electricity. On these relationships, we define the ratio between emissions and electricity production. In this fictitious example, we assume 0.5 units of emissions per unit of electricity.

The fix_ratio_out_out parameter will now be defined as a time varying parameter in the following way (simplified representation of TimeSeries parameter):

fix_ratio_out_out(carbon_unit__carbon_cap1__electricity) = [2025: 0.5; 2030: 0]
fix_ratio_out_out(carbon_unit__carbon_cap2__electricity) = [2025: 0; 2030: 0.5]

This way the first emission-cap node carbon_cap1 can only be "filled" during the period 2025-2030, while carbon_cap2 can only be "filled" during the second period, 2030-2035.

Note that it would also be possible to have, e.g., one node with a time-varying node_state_cap. However, in this case, "unused" carbon emissions from the first period of time would be available in the second period.
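The accounting behind the two cap nodes can be checked with toy numbers (an illustrative sketch, not SpineOpt code; it also simplifies the node state to a plain cumulative sum, and all yearly outputs are invented):

```python
# Toy check of the two-node carbon-cap trick: the time-varying
# fix_ratio_out_out routes emissions to cap node 1 before 2030
# and to cap node 2 from 2030 onwards.
electricity = {2026: 90, 2028: 90, 2031: 60, 2034: 30}  # invented yearly output

ratio_cap1 = lambda year: 0.5 if year < 2030 else 0.0
ratio_cap2 = lambda year: 0.0 if year < 2030 else 0.5

filled_1 = sum(out * ratio_cap1(y) for y, out in electricity.items())
filled_2 = sum(out * ratio_cap2(y) for y, out in electricity.items())
assert filled_1 <= 100   # node_state_cap of carbon_cap_1
assert filled_2 <= 50    # node_state_cap of carbon_cap_2
```

With these numbers, carbon_cap1 ends up filled to 90 of its cap of 100 and carbon_cap2 to 45 of its cap of 50, so both period caps hold.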

Imposing a carbon tax

To include carbon pricing in a model, the following objects, relationships and parameters are relevant:

  1. unit: In this case, a unit represents a process (e.g. conversion of gas to electricity), where one or multiple unit_flows are associated with carbon emissions.
  2. node and tax_in_unit_flow: Besides the nodes required to denote e.g. a fuel node or a supply node, at least one node should be introduced representing carbon emissions. To associate a carbon tax with all incoming unit_flows, the tax_in_unit_flow parameter can be defined on this node. (Note: to distinguish e.g. between regions, there can also be more than one carbon node.)
  3. unit__to_node: To associate carbon flows with a unit, the relationship between the unit and the carbon node needs to be defined, which triggers the generation of a carbon unit_flow variable.
  4. unit__node__node and fix_ratio_out_out: Ratio between two output unit flows, e.g. how carbon intensive the electricity flow of a unit is. The parameter is defined on a unit__node__node relationship, for example (gasplant, Carbon, Electricity). (Note: for a full list of possible ratios, see unit__node__node and associated parameters.)

Decomposition

Decomposition approaches take advantage of certain problem structures to separate them into multiple related problems which are each more easily solved. Decomposition also allows us to do the inverse, which is to combine independent problems into a single problem, where each can be solved separately but with communication between them (e.g. investments and operations problems).

Decomposition thus allows us to do a number of things:

  • Solve larger problems which are otherwise intractable
  • Include more detail in problems which otherwise need to be simplified
  • Combine related problems (e.g. investments/operations) in a more scientific way (rather than ad-hoc).
  • Employ parallel computing methods to solve multiple problems simultaneously.

High-level Decomposition Algorithm

The high-level algorithm is described below. For a more detailed description, please see Benders decomposition.

  • Model initialisation (preprocess the data structure, generate temporal structures, etc.)
  • For each benders_iteration
    • Solve master problem
    • Process master-problem solution:
      • set units_invested_bi(unit=u) equal to the investment variables solution from the master problem
    • Solve operations problem loop
    • Process operations sub-problem
      • set units_on_mv(unit=u) equal to the marginal value of the units_on bound constraint
    • Test for convergence
    • Update master problem
    • Rewind operations problem
    • Next benders iteration
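The iteration above can be sketched on a deliberately tiny problem (illustrative Python, not SpineOpt's implementation; every number is invented). The master picks an integer number of units to build, the operations sub-problem dispatches them, and the sub-problem's marginal value yields a Benders cut for the next master solve:

```python
def subproblem(x):
    """Operations cost of serving 35 MW with x units of 10 MW each,
    plus a subgradient of that cost with respect to x."""
    served = min(35, 10 * x)
    cost = 5 * served + 50 * (35 - served)  # fuel 5/MWh, unserved energy 50/MWh
    slope = -450 if 10 * x < 35 else 0      # one more unit saves (50 - 5) * 10
    return cost, slope

def theta(x, cuts):
    """Master's estimate of the operations cost: max over Benders cuts."""
    return max((f + s * (x - xk) for f, s, xk in cuts), default=0)

cuts, upper, lower = [], float("inf"), -float("inf")
while upper - lower > 1e-6:                  # test for convergence (Benders gap)
    # Master problem: brute-force the 6 integer choices (stands in for a MIP solve).
    x = min(range(6), key=lambda k: 100 * k + theta(k, cuts))  # 100 = invest cost
    lower = 100 * x + theta(x, cuts)         # master objective = lower bound
    cost, slope = subproblem(x)              # operations sub-problem solve
    upper = min(upper, 100 * x + cost)       # feasible total cost = upper bound
    cuts.append((cost, slope, x))            # update master problem with a cut
```

On this toy instance the loop converges in three iterations to 4 units at a total cost of 575, with the cuts playing the role that the units_on_mv marginal values play in SpineOpt's master-problem update.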

Duals and reduced costs calculation for decomposition

The marginal values above are computed as the reduced costs of the relevant optimisation variables. However, the dual solution to a MIP problem is not well defined. The standard approach to obtaining marginal values from a MIP model is to relax the integer variables, fix them to their last solution value, and re-solve the problem as an LP. This is the standard approach in energy system modelling to obtain energy prices, but it needs to be used with caution: the main hazard of inferring duals in this way is that the impact of an investment on costs may be overstated. However, since these duals are used in Benders decomposition to obtain a lower bound on costs (i.e. the maximum potential value of an investment), this is acceptable and can be "corrected" in the next iteration. Finally, the Benders gap tells us how close the decomposed problem is to the globally optimal solution.

Reporting dual values and reduced costs

To report the dual of a constraint, one can add an output item with the corresponding constraint name (e.g. constraint_nodal_balance) and add it to a report. This will cause the corresponding constraint's relaxed-problem marginal value to be reported in the output DB. When adding a constraint name as an output, we need to preface the actual constraint name with constraint_ to avoid ambiguity with variable names (e.g. units_available). So, to report the marginal value of units_available, we add an output object called constraint_units_available.

To report the reduced cost of a variable, which is the marginal value of the associated active bound or fix constraints on that variable, one can add an output object with the variable name prepended by bound_. So, to report the units_on reduced cost, one would create an output item called bound_units_on. If added to a report, this will cause the reduced cost of units_on in the final fixed LP to be written to the output DB.

Using Decomposition

Assuming one has set up a conventional investments problem as described in Investment Optimization the following additional steps are required to utilise the decomposition framework:

  • Set the model_type parameter for your model to spineopt_benders.
  • Specify the max_gap parameter for your model - this determines the master-problem convergence criterion for the relative Benders gap. A value of 0.05 represents a relative Benders gap of 5%.
  • Specify the max_iterations parameter for your model - this determines the master-problem convergence criterion for the number of iterations. A value of 10 could be appropriate, but this is highly dependent on the size and nature of the problem.

Once the above is set, all investment decisions in the model are automatically decomposed and optimised in a Benders master problem. This behaviour may change in the future to allow some investment decisions to be optimised in the operations problem and some optimised in the master problem as desired.


Investment Optimization

SpineOpt offers numerous ways to optimise investment decisions in energy system models and, in particular, offers a number of methodologies for capturing increased detail in investment models while containing the impact on run time. The basic principles of investments are discussed first, followed by more advanced approaches.

Key concepts for investments

Investment Decisions

These are the investment decisions that SpineOpt currently supports. At a high level, this means that the activity of the entities in question is controlled by an investment decision variable. The current implementation supports investments in:

Investment Variable Types

In all cases, the capacity of the unit or connection, or the maximum node state of a node, is multiplied by the investment variable, which may be either continuous or integer. For units, this is determined by setting the unit_investment_variable_type parameter accordingly. Similarly, for connections and node storages, the connection_investment_variable_type and storage_investment_variable_type parameters are specified.

Identifying Investment Candidate Units, Connections and Storages

The parameter candidate_units represents the number of units of this type that may be invested in; it determines the upper bound of the investment variable, and setting it to a value greater than 0 identifies the unit as an investment candidate in the optimisation. If unit_investment_variable_type is set to :unit_investment_variable_type_integer, the investment variable can be interpreted as the number of discrete units that may be invested in. If, instead, unit_investment_variable_type is :unit_investment_variable_type_continuous and unit_capacity is set to unity, the investment decision variable can be interpreted as the capacity of the unit rather than the number of units, with candidate_units being the maximum capacity that can be invested in. Finally, we can invest in discrete blocks of capacity by setting unit_capacity to the size of the investment capacity blocks and unit_investment_variable_type to :unit_investment_variable_type_integer, with candidate_units representing the maximum number of capacity blocks that may be invested in. The key points here are:
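The three interpretations above can be made concrete with toy numbers (a sketch only, not SpineOpt code; the helper name and all values are invented). In every case the capacity seen by the model is the product of unit_capacity and the investment variable:

```python
# Illustrative only: how unit_capacity and the investment variable combine.
def available_capacity(unit_capacity, units_invested_available):
    return unit_capacity * units_invested_available

# 1) Integer variable, unit_capacity = 400: invest in whole 400 MW units.
assert available_capacity(400, 2) == 800
# 2) Continuous variable, unit_capacity = 1: the variable is the invested
#    capacity itself, and candidate_units bounds the maximum capacity.
assert available_capacity(1, 356.5) == 356.5
# 3) Integer variable, unit_capacity = 50: invest in discrete 50 MW blocks,
#    with candidate_units bounding the number of blocks.
assert available_capacity(50, 7) == 350
```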

Investment Costs

Investment costs are specified by setting the appropriate *_investment_cost parameter. The investment cost for units is specified by setting the unit_investment_cost parameter. This is currently interpreted as the full cost over the investment period for the unit (see the section below on the temporal structure of investments for setting the investment period). If the investment period is 1 year, then the corresponding unit_investment_cost is the annualised investment cost. For connections and storages, the investment cost parameters are connection_investment_cost and storage_investment_cost, respectively.

Temporal and Stochastic Structure of Investment Decisions

SpineOpt's flexible stochastic and temporal structure extend to investments where individual investment decisions can have their own temporal and stochastic structure independent of other investment decisions and other model variables. A global temporal resolution for all investment decisions can be defined by specifying the relationship model__default_investment_temporal_block. If a specific temporal resolution is required for specific investment decisions, then one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_temporal_block.

Similarly, a global stochastic structure can be defined for all investment decisions by specifying the relationship model__default_investment_stochastic_structure. If a specific stochastic structure is required for specific investment decisions, then one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_stochastic_structure.

Impact of connection investments on network characteristics

The model parameter use_connection_intact_flow is available to control whether or not the impact of connection investments on the network characteristics should be captured. If set to true, then the model will use line outage distribution factors (LODF) to compute the impact of each connection investment over the flow across the network. Note that this introduces another variable, connection_intact_flow, representing the hypothetical flow on a connection in case all connection investments were in place. Also note that the impact of each connection is captured individually.

Creating an Investment Candidate Unit Example

If we have a model that is not currently set up for investments and we wish to create an investment candidate unit, we can take the following steps.

Model Reference

Variables for investments

Variable Name | Indices | Description
units_invested_available | unit, s, t | The number of invested-in units that are available at a given (s, t)
units_invested | unit, s, t | The point-in-time investment decision corresponding to the number of units invested in at (s, t)
units_mothballed | unit, s, t | "Instantaneous" decision variable to mothball a unit
connections_invested_available | connection, s, t | The number of invested-in connections that are available at a given (s, t)
connections_invested | connection, s, t | The point-in-time investment decision corresponding to the number of connections invested in at (s, t)
connections_decommissioned | connection, s, t | "Instantaneous" decision variable to decommission a connection
storages_invested_available | node, s, t | The number of invested-in storages that are available at a given (s, t)
storages_invested | node, s, t | The point-in-time investment decision corresponding to the number of storages invested in at (s, t)
storages_decommissioned | node, s, t | "Instantaneous" decision variable to decommission a storage

Relationships for investments

Relationship Name | Related Object Class List | Description
model__default_investment_temporal_block | model, temporal_block | Default temporal resolution for investment decisions, effective if unit__investment_temporal_block is not specified
model__default_investment_stochastic_structure | model, stochastic_structure | Default stochastic structure for investment decisions, effective if unit__investment_stochastic_structure is not specified
unit__investment_temporal_block | unit, temporal_block | Set the temporal resolution of investment decisions - overrides model__default_investment_temporal_block
unit__investment_stochastic_structure | unit, stochastic_structure | Set the stochastic structure for investment decisions - overrides model__default_investment_stochastic_structure

Parameters for investments

Parameter Name | Object Class List | Description
candidate_units | unit | The number of additional units of this type that can be invested in
unit_investment_cost | unit | The total overnight investment cost per candidate unit over the model horizon
unit_investment_lifetime | unit | The investment lifetime of the unit - once invested in, a unit must exist for at least this amount of time
unit_investment_variable_type | unit | Whether the units_invested_available variable is continuous, integer or binary
fix_units_invested | unit | Fix the value of units_invested
fix_units_invested_available | unit | Fix the value of units_invested_available
candidate_connections | connection | The number of additional connections of this type that can be invested in
connection_investment_cost | connection | The total overnight investment cost per candidate connection over the model horizon
connection_investment_lifetime | connection | The investment lifetime of the connection - once invested in, a connection must exist for at least this amount of time
connection_investment_variable_type | connection | Whether the connections_invested_available variable is continuous, integer or binary
fix_connections_invested | connection | Fix the value of connections_invested
fix_connections_invested_available | connection | Fix the value of connections_invested_available
candidate_storages | node | The number of additional storages of this type that can be invested in at the node
storage_investment_cost | node | The total overnight investment cost per candidate storage over the model horizon
storage_investment_lifetime | node | The investment lifetime of the storage - once invested in, a storage must exist for at least this amount of time
storage_investment_variable_type | node | Whether the storages_invested_available variable is continuous, integer or binary
fix_storages_invested | node | Fix the value of storages_invested
fix_storages_invested_available | node | Fix the value of storages_invested_available
Filename | Relative Path | Description
constraint_units_invested_available.jl | \constraints | Constrains units_invested_available to be less than candidate_units
constraint_units_invested_transition.jl | \constraints | Defines the relationship between units_invested_available, units_invested and units_mothballed. Analogous to units_on, units_started and units_shutdown
constraint_unit_lifetime.jl | \constraints | Once a unit is invested in, it must remain in existence for at least unit_investment_lifetime - analogous to min_up_time
constraint_units_available.jl | \constraints | Enforces that units_available is the sum of number_of_units and units_invested_available
constraint_connections_invested_available.jl | \constraints | Constrains connections_invested_available to be less than candidate_connections
constraint_connections_invested_transition.jl | \constraints | Defines the relationship between connections_invested_available, connections_invested and connections_decommissioned. Analogous to units_on, units_started and units_shutdown
constraint_connection_lifetime.jl | \constraints | Once a connection is invested in, it must remain in existence for at least connection_investment_lifetime - analogous to min_up_time
constraint_storages_invested_available.jl | \constraints | Constrains storages_invested_available to be less than candidate_storages
constraint_storages_invested_transition.jl | \constraints | Defines the relationship between storages_invested_available, storages_invested and storages_decommissioned. Analogous to units_on, units_started and units_shutdown
constraint_storage_lifetime.jl | \constraints | Once a storage is invested in, it must remain in existence for at least storage_investment_lifetime - analogous to min_up_time
+Investment Optimization · SpineOpt.jl

Investment Optimization

SpineOpt offers numerous ways to optimise investment decisions energy system models and in particular, offers a number of methologogies for capturing increased detail in investment models while containing the impact on run time. The basic principles of investments will be discussed first and this will be followed by more advanced approaches.

Key concepts for investments

Investment Decisions

These are the investment decisions that SpineOpt currently supports. At a high level, this means that the activity of the entities in question is controlled by an investment decision variable. The current implementation supports investments in:

Investment Variable Types

In all cases the capacity of the unit or connection or the maximum node state of a node is multiplied by the investment variable which may either be continuous or integer. This is determined, for units, by setting the unit_investment_variable_type parameter accordingly. Similary, for connections and node storages the connection_investment_variable_type and storage_investment_variable_type are specified.

Identiying Investment Candidate Units, Connections and Storages

The parameter candidate_units represents the number of units of this type that may be invested in. candidate_units determines the upper bound of the investment variable and setting this to a value greater than 0 identifies the unit as an investment candidate unit in the optimisation. If the unit_investment_variable_type is set to :unit_investment_variable_type_integer, the investment variable can be interpreted as the number of discrete units that may be invested in. However, if unit_investment_variable_type is :unit_investment_variable_type_continuous and the unit_capacity is set to unity, the investment decision variable can then be interpreted as the capacity of the unit rather than the number of units with candidate_units being the maximum capacity that can be invested in. Finally, we can invest in discrete blocks of capacity by setting unit_capacity to the size of the investment capacity blocks and have unit_investment_variable_type set to :unit_investment_variable_type_integer with candidate_units representing the maximum number of capacity blocks that may be invested in. The key points here are:

Investment Costs

Investment costs are specified by setting the appropriate *_investment\_cost parameter. The investment cost for units are specified by setting the unit_investment_cost parameter. This is currently interpreted as the full cost over the investment period for the unit. See the section below on investment temporal structure for setting the investment period. If the investment period is 1 year, then the corresponding unit_investment_cost is the annualised investment cost. For connections and storages, the investment cost parameters are connection_investment_cost and storage_investment_cost, respectively.

Temporal and Stochastic Structure of Investment Decisions

SpineOpt's flexible stochastic and temporal structure extend to investments where individual investment decisions can have their own temporal and stochastic structure independent of other investment decisions and other model variables. A global temporal resolution for all investment decisions can be defined by specifying the relationship model__default_investment_temporal_block. If a specific temporal resolution is required for specific investment decisions, then one can specify the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_temporal_block.

Similarly, a global stochastic structure can be defined for all investment decisions by specifying the relationship model__default_investment_stochastic_structure. If a specific stochastic structure is required for specific investment decisions, then one can specifying the following relationships:

Specifying any of the above relationships will override the corresponding model__default_investment_stochastic_structure.

Impact of connection investments on network characteristics

The model parameter use_connection_intact_flow is available to control whether or not the impact of connection investments on the network characteristics should be captured. If set to true, then the model will use line outage distribution factors (LODF) to compute the impact of each connection investment over the flow across the network. Note that this introduces another variable, connection_intact_flow, representing the hypothetical flow on a connection in case all connection investments were in place. Also note that the impact of each connection is captured individually.

Creating an Investment Candidate Unit Example

If we have model that is not currently set up for investments and we wish to create an investment candidate unit, we can take the following steps.

Model Reference

Variables for investments

| Variable Name | Indices | Description |
| --- | --- | --- |
| units_invested_available | unit, s, t | The number of invested-in units that are available at a given (s, t) |
| units_invested | unit, s, t | The point-in-time investment decision corresponding to the number of units invested in at (s, t) |
| units_mothballed | unit, s, t | "Instantaneous" decision variable to mothball a unit |
| connections_invested_available | connection, s, t | The number of invested-in connections that are available at a given (s, t) |
| connections_invested | connection, s, t | The point-in-time investment decision corresponding to the number of connections invested in at (s, t) |
| connections_decommissioned | connection, s, t | "Instantaneous" decision variable to decommission a connection |
| storages_invested_available | node, s, t | The number of invested-in storages that are available at a given (s, t) |
| storages_invested | node, s, t | The point-in-time investment decision corresponding to the number of storages invested in at (s, t) |
| storages_decommissioned | node, s, t | "Instantaneous" decision variable to decommission a storage |

Relationships for investments

| Relationship Name | Related Object Class List | Description |
| --- | --- | --- |
| model__default_investment_temporal_block | model, temporal_block | Default temporal resolution for investment decisions, effective if unit__investment_temporal_block is not specified |
| model__default_investment_stochastic_structure | model, stochastic_structure | Default stochastic structure for investment decisions, effective if unit__investment_stochastic_structure is not specified |
| unit__investment_temporal_block | unit, temporal_block | Sets the temporal resolution of investment decisions - overrides model__default_investment_temporal_block |
| unit__investment_stochastic_structure | unit, stochastic_structure | Sets the stochastic structure for investment decisions - overrides model__default_investment_stochastic_structure |

Parameters for investments

| Parameter Name | Object Class List | Description |
| --- | --- | --- |
| candidate_units | unit | The number of additional units of this type that can be invested in |
| unit_investment_cost | unit | The total overnight investment cost per candidate unit over the model horizon |
| unit_investment_lifetime | unit | The investment lifetime of the unit - once invested-in, a unit must exist for at least this amount of time |
| unit_investment_variable_type | unit | Whether the units_invested_available variable is continuous, integer or binary |
| fix_units_invested | unit | Fix the value of units_invested |
| fix_units_invested_available | unit | Fix the value of units_invested_available |
| candidate_connections | connection | The number of additional connections of this type that can be invested in |
| connection_investment_cost | connection | The total overnight investment cost per candidate connection over the model horizon |
| connection_investment_lifetime | connection | The investment lifetime of the connection - once invested-in, a connection must exist for at least this amount of time |
| connection_investment_variable_type | connection | Whether the connections_invested_available variable is continuous, integer or binary |
| fix_connections_invested | connection | Fix the value of connections_invested |
| fix_connections_invested_available | connection | Fix the value of connections_invested_available |
| candidate_storages | node | The number of additional storages of this type that can be invested in at the node |
| storage_investment_cost | node | The total overnight investment cost per candidate storage over the model horizon |
| storage_investment_lifetime | node | The investment lifetime of the storage - once invested-in, a storage must exist for at least this amount of time |
| storage_investment_variable_type | node | Whether the storages_invested_available variable is continuous, integer or binary |
| fix_storages_invested | node | Fix the value of storages_invested |
| fix_storages_invested_available | node | Fix the value of storages_invested_available |
| Filename | Relative Path | Description |
| --- | --- | --- |
| constraint_units_invested_available.jl | \constraints | Constrains units_invested_available to be no greater than candidate_units |
| constraint_units_invested_transition.jl | \constraints | Defines the relationship between units_invested_available, units_invested and units_mothballed. Analogous to units_on, units_started and units_shutdown |
| constraint_unit_lifetime.jl | \constraints | Once a unit is invested-in, it must remain in existence for at least unit_investment_lifetime - analogous to min_up_time |
| constraint_units_available.jl | \constraints | Enforces that units_available is the sum of number_of_units and units_invested_available |
| constraint_connections_invested_available.jl | \constraints | Constrains connections_invested_available to be no greater than candidate_connections |
| constraint_connections_invested_transition.jl | \constraints | Defines the relationship between connections_invested_available, connections_invested and connections_decommissioned. Analogous to units_on, units_started and units_shutdown |
| constraint_connection_lifetime.jl | \constraints | Once a connection is invested-in, it must remain in existence for at least connection_investment_lifetime - analogous to min_up_time |
| constraint_storages_invested_available.jl | \constraints | Constrains storages_invested_available to be no greater than candidate_storages |
| constraint_storages_invested_transition.jl | \constraints | Defines the relationship between storages_invested_available, storages_invested and storages_decommissioned. Analogous to units_on, units_started and units_shutdown |
| constraint_storage_lifetime.jl | \constraints | Once a storage is invested-in, it must remain in existence for at least storage_investment_lifetime - analogous to min_up_time |
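The availability, transition and count logic listed above can be sketched in equation form for units (indices simplified to (u, s, t); this is a sketch of the formulation, not the exact SpineOpt code):

```latex
\begin{aligned}
&\text{units\_invested\_available}(u,s,t) \le \text{candidate\_units}(u)\\
&\text{units\_invested\_available}(u,s,t) - \text{units\_invested}(u,s,t) + \text{units\_mothballed}(u,s,t)\\
&\qquad = \text{units\_invested\_available}(u,s,t-1)\\
&\text{units\_available}(u,s,t) = \text{number\_of\_units}(u) + \text{units\_invested\_available}(u,s,t)
\end{aligned}
```

Analogous constraints apply to connections (with connections_decommissioned) and to storages (with storages_decommissioned).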

Modelling to generate alternatives

Through modelling to generate alternatives (MGA for short), near-optimal solutions can be explored under certain conditions. Currently, SpineOpt supports two methods for MGA.

Modelling to generate alternatives: Maximally different portfolios

The idea is that an original problem is solved, and subsequently solved again under the condition that the realization of variables should be maximally different from the previous iteration(s), while keeping the objective function within a certain threshold (defined by max_mga_slack).

In SpineOpt, we choose units_invested_available, connections_invested_available, and storages_invested_available as variables that can be considered for the maximum-difference-problem. The implementation is based on Modelling to generate alternatives: A technique to explore uncertainty in energy-environment-economy models.

How to set up an MGA problem

  • model: In order to explore an MGA model, you will need one model of type spineopt_mga. You should also define the number of iterations (max_mga_iterations) and the maximum allowed deviation from the original objective function (max_mga_slack).
  • at least one investment candidate of type unit, connection, or node. For more details on how to set up an investment problem please see: Investment Optimization.
  • To include the investment decisions in the MGA difference maximization, the parameter units_invested_mga, connections_invested_mga, or storages_invested_mga, respectively, needs to be set to true.
  • The original MGA formulation is non-convex (maximization of an absolute-value function), but has been linearized through the big-M method. For this purpose, one should always make sure that units_invested_big_m_mga, connections_invested_big_m_mga, or storages_invested_big_m_mga, respectively, is sufficiently large to always be larger than the maximum possible difference per MGA iteration. (Typically the number of candidates will suffice.)
  • As outputs are used to intermediately store solutions from different MGA runs, it is important that units_invested, connections_invested, or storages_invested, respectively, are defined as output objects in your database.
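The maximum-difference problem described above can be sketched as follows, where x_i denotes an invested-available variable, x_i^(k) its value in an earlier iteration k, and M_i the corresponding *_invested_big_m_mga parameter (notation simplified; a sketch, not the exact SpineOpt formulation):

```latex
\max \sum_i d_i
\quad\text{s.t.}\quad
d_i \le \bigl(x_i - x_i^{(k)}\bigr) + M_i\, y_i,\qquad
d_i \le \bigl(x_i^{(k)} - x_i\bigr) + M_i\,(1 - y_i),\qquad
y_i \in \{0, 1\},
```

together with the slack constraint \(\text{cost}(x) \le (1 + \text{max\_mga\_slack}) \cdot \text{cost}^{\ast}\) on the original objective. This is why each \(M_i\) must exceed the largest possible difference per iteration: otherwise the binary choice of \(y_i\) could cut off the true absolute difference.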

Modelling to generate alternatives: Trade-offs between technology investments

The idea of this approach is to explore near-optimal solutions that maximize/minimize investment in a certain technology (or in multiple technologies simultaneously).

How to set up an MGA problem

  • model: In order to explore an MGA model, you will need one model of type spineopt_mga. The maximum allowed deviation from the original objective function should be defined via max_mga_slack. Note that for this method, we don't define an explicit number of iterations via the max_mga_iterations parameter (see also below).
  • at least one investment candidate of type unit, connection, or node. For more details on how to set up an investment problem please see: Investment Optimization.
  • To include the investment decisions in the MGA minimization/maximization, the parameter units_invested_mga, connections_invested_mga, or storages_invested_mga, respectively, needs to be set to true.
  • To explore near-optimal solutions using this methodology, the units_invested_mga_weight, connections_invested_mga_weight, and storages_invested_mga_weight parameters are used to define the near-optimal solutions. These parameters are defined as Arrays, giving the weight of the technology per iteration. Note that the length of these Arrays should be the same for all technologies, as it corresponds to the number of MGA iterations, i.e., the number of near-optimal solutions. To analyze the trade-off between two technology types, we can, e.g., define units_invested_mga_weight for unit group 1 as [-1,-0.5,0], while using the weights [0,-0.5,-1] for storage group 1. Note that a negative sign corresponds to a minimization of investments in the corresponding technology type, while a positive sign corresponds to a maximization. In the given example, we would hence first minimize the investments in unit group 1, then minimize both technologies simultaneously, and finally minimize only the investments in storage group 1.
  • As outputs are used to intermediately store solutions from different MGA runs, it is important that units_invested, connections_invested, or storages_invested, respectively, are defined as output objects in your database.
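Under the stated assumptions, iteration k of this method can be sketched as maximising a weighted sum of investments subject to the same slack constraint, where \(w_g^{(k)}\) is the k-th entry of the corresponding *_invested_mga_weight array (a sketch, not the exact SpineOpt formulation):

```latex
\max \;\sum_{g} w_g^{(k)} \sum_{t} \text{invested}_g(t)
\quad\text{s.t.}\quad
\text{cost}(x) \le (1 + \text{max\_mga\_slack}) \cdot \text{cost}^{\ast}
```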

Multi-stage optimisation

Note

This section describes how to run multi-stage optimisations with SpineOpt using the stage class - not to be confused with the rolling horizon optimisation technique described in Temporal Framework, nor the Benders decomposition algorithm described in Decomposition.

Warning

This feature is experimental. It may change in future versions without notice.

By default, SpineOpt is solved as a 'single-stage' optimisation problem. However, you can add additional stages to the optimisation by creating stage objects in your DB.

To motivate this discussion, say you want to model a storage over a year with hourly resolution. The model is large, so you would like to solve it using a rolling horizon of, say, one day - so it solves quickly (see roll_forward and the Temporal Framework section). But this wouldn't capture the long-term value of your storage!

To remedy this, you can introduce an additional 'stage' that solves the entire year at once at a lower temporal resolution (say, one day instead of one hour), and then fixes the storage level at certain points for your higher-resolution rolling horizon model. Both models, the year-long model at daily resolution and the rolling horizon model at hourly resolution, will solve faster than the year-long model at hourly resolution - hopefully much faster - leading to a good compromise between speed and accuracy.

So how do you do that? You use a stage.

The stage class

In SpineOpt, a stage is an additional optimisation model that fixes certain outputs for another set of models declared as their children.

The children of a stage are defined via stage__child_stage relationships (with the parent stage in the first dimension). If a stage has no stage__child_stage relationships as a parent, then it is assumed to have only one child: the model itself.

The outputs that a stage fixes for its children are defined via stage__output relationships. By default, the output is fixed at the end of each child's rolling window. However, you can fix it at other points in time by specifying the output_resolution parameter as a duration (or array of durations) relative to the start of the child's rolling window.

For example, if you specify an output_resolution of 1 day, then the output will be fixed at one day after the child's window start. If you specify something like [1 day, 2 days], then it will be fixed at one day after the window start, and then at two days after that (i.e., three days after the window start).
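The cumulative semantics just described can be sketched in a few lines (plain Python; the dates and variable names are arbitrary illustrations, not SpineOpt API):

```python
from datetime import datetime, timedelta

# Each entry of output_resolution is a duration relative to the previously
# computed fix point (the first one relative to the window start).
window_start = datetime(2030, 1, 1)
output_resolution = [timedelta(days=1), timedelta(days=2)]

fix_points, t = [], window_start
for d in output_resolution:
    t += d
    fix_points.append(t)

# fix_points is now [2030-01-02, 2030-01-04]:
# one day after the window start, then two days after that.
```

So an array of durations accumulates rather than repeating from the window start.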

The optimisation model that a stage solves is given by the stage_scenario parameter value, which must be a scenario in your DB.

And that's basically it!

Example

In case of the year-long storage model with hourly resolution, here is how you would do it.

First, the basic setup:

  1. Create your model.
  2. Specify model_start and model_end for your model to cover the year of interest.
  3. Specify roll_forward for your model as 1 day.
  4. Create a temporal_block called "flat".
  5. Specify resolution for your temporal_block as 1 hour.
  6. Create a model__default_temporal_block between your model and your temporal_block (to keep things simple, but of course you can use node__temporal_block, etc., as needed).
  7. Create the rest of your model (the storage node, etc.)

With the above, you will have a rolling-horizon model that would probably solve in reasonable time but wouldn't capture the long-term value of your storage.

Now, the 'stage' stuff:

  1. Create an alternative called "lt_storage_alt".
  2. Create a scenario called "lt_storage_scen" with the "lt_storage_alt" alternative in the highest rank.
  3. Create a stage called "lt_storage".
  4. (Don't create any stage__child_stage relationships - the only child is the model - plus you don't have/need other stages.)
  5. Create a stage__output between your stage and the "node_state" output.
  6. Don't specify output_resolution, so the output is fixed at the end of the model's rolling window.
  7. Specify roll_forward for your model in the "lt_storage_alt" alternative as nothing - so the model doesn't roll and the entire year is solved at once.
  8. Specify resolution for the "flat" temporal_block in the "lt_storage_alt" alternative as 1 day.
  9. Specify stage_scenario for the "lt_storage" stage as "lt_storage_scen".

Power transfer distribution factors (PTDF) based DC power flow

There are two main methodologies for directly including DC powerflow in unit commitment/energy system models. One method is to directly include the bus voltage angles as variables in the model. This method is described in Nodal lossless DC Powerflow.

Here we discuss the method of using power transfer distribution factors (PTDF) for DC power flow and line outage distribution factors (lodf) for security constrained unit commitment.

Warning

The calculations for investments using the PTDF method do not consider the mutual effect of multiple simultaneous investments. In other words, the results become increasingly incorrect as more of the invested-in lines interact with each other. Yet, this method remains useful for choosing between multiple simultaneous investments that are assumed non-interacting and/or multiple mutually exclusive investments.

On the other hand, investments using the angle based method work for multiple lines but this method is slower and does not take into account the N-1 rule.

Warning

Connecting AC lines through two DC lines is also not supported in our implementation of the PTDF method but it is possible to do this with our implementation of the angle based method.

Key concepts

  1. ptdf: The power transfer distribution factors are a property of the network reactances and their derivation may be found here. ptdf(n, c) represents the fraction of an injection at node n that will flow on connection c. The flow on connection c is then the sum over all nodes of ptdf(n, c)*net_injection(n). The advantage of this method is that it introduces no additional variables into the problem; instead, it introduces only one constraint for each connection whose flow we are interested in monitoring.
  2. lodf: Line outage distribution factors are a function of the network ptdfs and their derivation is also found here. lodf(c_contingency, c_monitored) represents the fraction of the pre-contingency flow on connection c_contingency that will flow on c_monitored if c_contingency is disconnected. The post-contingency flow on connection c_monitored is therefore the pre-contingency flow plus lodf(c_contingency, c_monitored) * pre_contingency_flow(c_contingency). Thus, consideration of N contingencies on M monitored lines introduces N x M constraints into the model. Usually one wishes to contain this number, and methods to achieve this are given below.
  3. Defining your network To identify the network for which ptdfs, lodfs and connection_flows will be calculated according to the ptdf method, one does the following:
    • Create node objects for each bus in the model.
    • Create connection objects representing each line of the network: for each connection, specify the connection_reactance parameter and the connection_type parameter. Setting connection_type=connection_type_lossless_bidirectional reduces the amount of data that needs to be specified for an electrical network. See connection_type for more details.
    • Set the connection__to_node and connection__from_node relationships to define the topology of each connection, along with the connection_capacity parameter on one or both of these relationships.
    • Set the connection_emergency_capacity parameter to define the post-contingency rating if lodf-based N-1 security constraints are to be included.
    • Create a commodity object and node__commodity relationships for all the nodes that comprise the electrical network for which PTDFs are to be calculated.
    • Specify the commodity_physics parameter for the commodity as :commodity_physics_ptdf if ptdf-based DC load flow is desired with no N-1 security constraints, or as :commodity_physics_lodf if it is desired to include lodf-based N-1 security constraints.
    • To identify the reference bus (node), specify the node_opf_type parameter for the appropriate node with the value node_opf_type_reference.
  4. Controlling problem size
    • The lines to be monitored are specified by setting the connection_monitored property for each connection for which a flow constraint is to be generated
    • The contingencies to be considered are specified by setting the connection_contingency property for the appropriate connections. For N contingencies and M monitored lines, N x M constraints will be generated.
    • If lodf(c_contingency, c_monitored) is very small, the outage of c_contingency has a small impact on the flow on c_monitored and there is little point in including this constraint in the model. Such constraints can be dropped by setting the commodity_lodf_tolerance commodity parameter. Contingency / monitored-line combinations with lodfs below this value will be ignored, reducing the size of the model.
    • If ptdf(n, c) is very small, an injection at n has a small impact on the flow on c and there is little point in considering it. Such coefficients can be dropped by setting the commodity_ptdf_threshold commodity parameter. Node / monitored-line combinations with ptdfs below this value will be ignored, reducing the number of coefficients in the model.
    • To more easily identify which connections are worth being monitored or which contingencies are worth being considered, you can add the contingency_is_binding output to any of your reports (via a report__output relationship). This will run the model without the security constraints, and instead write a parameter called contingency_is_binding to the output database for each pair of contingency and monitored connection. The value of the parameter will be a (possibly stochastic) time-series where a value of one will indicate that the corresponding security constraint is binding, and zero otherwise.
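The ptdf and lodf definitions above can be made concrete with a tiny worked example. The following sketch (plain Python; the 3-bus triangle network and all names are assumptions for illustration only, not SpineOpt code) computes both factors for a network of three identical lines:

```python
# Illustrative 3-bus DC power flow example of the ptdf and lodf quantities.

def solve2(B, rhs):
    """Solve the 2x2 linear system B @ theta = rhs via Cramer's rule."""
    (a, b), (c, d) = B
    det = a * d - b * c
    return ((rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - c * rhs[0]) / det)

# Buses 1, 2, 3 (bus 3 is the reference); lines (1,2), (2,3), (1,3),
# each with susceptance 1 p.u. Reduced nodal susceptance matrix for buses 1, 2:
B_reduced = [[2.0, -1.0],
             [-1.0, 2.0]]

def line_flows(p1, p2):
    """DC flows on lines 1-2, 2-3, 1-3 for injections p1, p2 at buses 1 and 2
    (balanced at the reference bus, whose angle is zero)."""
    th1, th2 = solve2(B_reduced, (p1, p2))
    return {"1-2": th1 - th2, "2-3": th2, "1-3": th1}

# ptdf(n=1, c): fraction of a 1 p.u. injection at bus 1 flowing on each line.
ptdf_bus1 = line_flows(1.0, 0.0)   # 1/3 on lines 1-2 and 2-3, 2/3 on line 1-3

# lodf(c_contingency=1-3, c_monitored=1-2), via the standard identity
# lodf = ptdf_monitored(from->to transfer of the outaged line)
#        / (1 - ptdf_outaged(same transfer)):
transfer = line_flows(1.0, 0.0)    # 1 p.u. transfer from bus 1 to bus 3
lodf_13_to_12 = transfer["1-2"] / (1.0 - transfer["1-3"])

# Post-contingency flow = pre-contingency flow + lodf * pre-flow on outaged line:
pre = line_flows(1.0, 0.0)
post_12 = pre["1-2"] + lodf_13_to_12 * pre["1-3"]  # all power reroutes via 1-2-3
```

Here the outage of line 1-3 reroutes its entire flow onto the 1-2-3 path, so the lodf equals 1; on real networks these factors are fractional, which is what makes the commodity_lodf_tolerance and commodity_ptdf_threshold cut-offs effective at shrinking the model.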
+PTDF-Based Powerflow · SpineOpt.jl

Power transfer distribution factors (PTDF) based DC power flow

There are two main methodologies for directly including DC powerflow in unit commitment/energy system models. One method is to directly include the bus voltage angles as variables in the model. This method is described in Nodal lossless DC Powerflow.

Here we discuss the method of using power transfer distribution factors (PTDF) for DC power flow and line outage distribution factors (lodf) for security constrained unit commitment.

Warning

The calculations for investments using the PTDF method do not consider the mutual effect of multiple simultaneous investments. In other words, the results are (increasingly more) incorrect for (more) lines that interact with each other. Yet, this method remains useful for choosing between multiple simultaneous investments that are assumed non-interacting and/or multiple mutually exclusive investments.

On the other hand, investments using the angle based method work for multiple lines but this method is slower and does not take into account the N-1 rule.

Warning

Connecting AC lines through two DC lines is also not supported in our implementation of the PTDF method but it is possible to do this with our implementation of the angle based method.

Key concepts

  1. ptdf: The power transfer distribution factors are a property of the network reactances and their derivation may be found here. ptdf(n, c) represents the fraction of an injection at node n that will flow on connection c. The flow on connection c is then the sum over all nodes of ptdf(n, c)*net_injection(c). The advantage of this method is that it introduces no additional variables into the problem and instead, introduces only one constraint for each connection whose flow we are interested in monitoring.
  2. lodf: Line outage distribution factors are a function of the network ptdfs and their derivation is also found here. lodf(c_contingency, c_monitored) represents the fraction of the pre-contingency flow on connection c_contingency that will flow on c_monitored if c_contingency is disconnected. Therefore, the post contingency flow on connection c_monitored is the pre_contingency flow plus lodf(c_contingency, c_monitored)\*pre_contingency_flow(c_contingency)). Therefore, consideration of N contingencies on M monitored lines introduces N x M constraints into the model. Usually one wishes to contain this number and methods are given below to achieve this.
  3. Defining your network To identify the network for which ptdfs, lodfs and connection_flows will be calculated according to the ptdf method, one does the following:
    • Create node objects for each bus in the model.
    • Create connection objects representing each line of the network: For each connection specify the connection_reactance parameter and the connection_type parameter. Setting connection_type=connection_type_lossless_bidirectional simplifies the amount of data that needs to be specified for an electrical network. See connection_type for more details.
    • Set the connection__to_node and connection__from_node relationships to define the topology of each connection along with the connection_capacity parameter on one or both of these relationships.
    • Set the connection_emergency_capacity parameter to define the post contingency rating if lodf-based N-1 security constraints are to be included.
    • Create a commodity object and node__commodity relationships for all the nodes that comprise the electrical network for which PTDFs are to be calculated.
    • Set the commodity_physics parameter of the commodity to :commodity_physics_ptdf if ptdf-based DC load flow with no N-1 security constraints is desired, or to :commodity_physics_lodf if lodf-based N-1 security constraints are to be included.
    • To identify the reference bus (node), specify the node_opf_type parameter for the appropriate node with the value node_opf_type_reference.
  4. Controlling problem size
    • The lines to be monitored are specified by setting the connection_monitored property for each connection for which a flow constraint is to be generated
    • The contingencies to be considered are specified by setting the connection_contingency property for the appropriate connections. For N contingencies and M monitored lines, N x M constraints will be generated.
    • If lodf(c_contingency, c_monitored) is very small, the outage of c_contingency has little impact on the flow on c_monitored, and there is little point in including this constraint in the model. Such constraints can be excluded by setting the commodity_lodf_tolerance commodity parameter: contingency / monitored line combinations with lodfs below this value will be ignored, reducing the size of the model.
    • If ptdf(n, c) is very small, an injection at n has little impact on the flow on c, and there is little point in considering it. Such coefficients can be excluded by setting the commodity_ptdf_threshold commodity parameter: node / connection combinations with ptdfs below this value will be ignored, reducing the number of coefficients in the model.
    • To more easily identify which connections are worth being monitored or which contingencies are worth being considered, you can add the contingency_is_binding output to any of your reports (via a report__output relationship). This will run the model without the security constraints, and instead write a parameter called contingency_is_binding to the output database for each pair of contingency and monitored connection. The value of the parameter will be a (possibly stochastic) time-series where a value of one will indicate that the corresponding security constraint is binding, and zero otherwise.
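The two factors described above combine numerically as follows. This is an illustrative sketch only (the ptdf and lodf values are invented, and the helper functions are not SpineOpt code):

```python
# Illustrative sketch only: toy ptdf/lodf values, not computed from a real network.

def connection_flow(ptdf, net_injection, c):
    """Flow on connection c: the ptdf-weighted sum of nodal net injections."""
    return sum(ptdf[(n, c)] * inj for n, inj in net_injection.items())

def post_contingency_flow(flow_monitored, flow_contingency, lodf, c_cont, c_mon):
    """Post-contingency flow on c_mon: its pre-contingency flow plus the lodf
    times the pre-contingency flow on the outaged connection c_cont."""
    return flow_monitored + lodf[(c_cont, c_mon)] * flow_contingency

ptdf = {("n1", "c1"): 0.5, ("n2", "c1"): -0.25}    # assumed values
injections = {"n1": 100.0, "n2": 50.0}
flow_c1 = connection_flow(ptdf, injections, "c1")  # 0.5*100 - 0.25*50 = 37.5

lodf = {("c2", "c1"): 0.5}                         # assumed value
flow_c1_post = post_contingency_flow(flow_c1, 30.0, lodf, "c2", "c1")  # 37.5 + 0.5*30 = 52.5
```

Note how each monitored connection contributes one such linear expression, which is why the constraint count grows as N x M.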

Pressure driven gas transfer

The generic formulation of SpineOpt is based on a trade-based model. However, network physics can differ depending on the traded commodity. This chapter specifically addresses the use of pressure driven gas transfer models and enabling linepack flexibility in SpineOpt. To date, investments in pressure driven pipelines are not yet supported within SpineOpt. The use of multiple feed-in nodes, e.g. to represent multiple commodity flows through a pipeline, is not yet supported either.

For the representation of pressure driven gas transfer, we use the MILP formulation described in Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling. Here, the non-linearities associated with the Weymouth equation are convexified through an outer approximation around fixed pressure points.

Key concept

Here, we briefly describe the key objects and relationships required to model pressure driven gas transfers in SpineOpt.

  1. connection: A connection represents the gas pipeline being modelled. Usually the direction of flow is not known a priori. To ensure that the flow through the gas pipeline is unidirectional, the parameter has_binary_gas_flow needs to be set to true.
  2. node: Nodes with different characteristics are used for the representation of pressure driven gas transfer.
    • For each connection, there will be two nodes representing the start and end point of the pipeline. Associated with these nodes are the following parameters: the has_pressure parameter, which needs to be set to true, in order to create the variable node_pressure; the max_node_pressure and min_node_pressure to constrain the pressure variable.
    • To leverage linepack flexibility, a third node is introduced representing the linepack storage of the pipeline. To trigger the storage linepack and hence, node_state variables, the has_state parameter needs to be set to true.
  3. connection__to_node and connection__from_node To enable flows through the pipeline and into the linepack storage, each node has to have both these relationships in common with the connection pipeline. These relationships will trigger the generation of connection_flow variables in all possible directions.
  4. connection__node__node This relationship is key to the pressure driven gas transfer, holding the information about the pipeline characteristics and bringing the elements into interaction.
    • The parameter connection_linepack_constant holds the linepack constant and triggers the generation of the line pack storage constraint. Note that the first node should be the linepack storage node, while the second node should be a node group containing both the start and the end node of the pipeline.
    • The linearization of the Weymouth equation through outer approximation relies on the use of fixed pressure points. For this purpose, the two parameters fixed_pressure_constant_1 and fixed_pressure_constant_0 hold the fixed pressure constants and trigger the generation of the constraint_fix_node_pressure_point. The constraint introduces the relationship between pressure and gas flows. Note that the pressure constants should be entered such that the first node represents the origin node and the second node the destination node. Each connection should have a connection__node__node to each combination of its start and end nodes (and associated parameters). (See Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling)
    • By default, pipelines are considered to be passive. However, a compression station between two pipeline pressure nodes can be represented by defining a compression_factor. The relationship should be defined such that the first node represents the sending node and the second node the receiving node, whose pressure must be smaller than or equal to the pressure at the sending node times the compression factor.
    • Lastly, to ensure the balance between incoming/outgoing flows and flows into the linepack, the ratio between the flows needs to be fixed. The average incoming flows of the node group (of the pressure start and end nodes) have to equal the flows into the linepack storage, and vice versa. Therefore, the fix_ratio_out_in_connection_flow needs to be set to a value (typically 1) for the (pressure group, linepack storage) node pair, and for the (linepack storage, pressure group) node pair.
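As a rough numerical illustration of the outer approximation and compression factor constraints just described, consider the sketch below. The constants and pressures are invented for illustration, and the helper functions are not SpineOpt code; the exact constraint forms are given in the mathematical formulation.

```python
# Sketch only: illustrative constants, not SpineOpt defaults.

def within_weymouth_outer_approximation(flow, p_origin, p_dest, fixed_points):
    """Each fixed pressure point contributes one linear upper bound on the flow:
    flow <= fixed_pressure_constant_1 * p_origin - fixed_pressure_constant_0 * p_dest."""
    return all(flow <= c1 * p_origin - c0 * p_dest for c1, c0 in fixed_points)

def within_compression_limit(p_sending, p_receiving, compression_factor):
    """The receiving node's pressure may not exceed the sending node's
    pressure times the compression factor."""
    return p_receiving <= compression_factor * p_sending

# (fixed_pressure_constant_1, fixed_pressure_constant_0) pairs, one per fixed point:
fixed_points = [(1.25, 0.75), (1.0, 0.5)]
within_weymouth_outer_approximation(10.0, 60.0, 50.0, fixed_points)  # True: 10 <= 37.5 and 10 <= 35.0
within_compression_limit(50.0, 70.0, 1.25)                           # False: 70 > 62.5
```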

A gas pipeline and its connected nodes are illustrated below. A complete mathematical formulation can be found here.

Illustration of gas pipeline


Ramping

To enable the representation of units with a high level of technical detail, the ramping capability of units can be constrained in SpineOpt. This means that the user has the freedom to impose restrictions on the change in the output (or input) of units over time, for online (spinning) units, units starting up and units shutting down. In this section, the concept of ramps in SpineOpt will be introduced.

Relevant objects, relationships and parameters

Everything that is related to ramping is defined in parameters of either the unit__to_node or unit__from_node relationship (where the node can be a group). Generally speaking, the ramping constraints will impose restrictions on the change in the unit_flow variable between two consecutive timesteps.

All parameters that limit the ramping abilities of a unit are expressed as a fraction of the unit capacity. This means that a value of 1 indicates the full capacity of a unit.

The discussion here will be conceptual. For the mathematical formulation, the reader is referred to the Ramping constraints section.

Constraining spinning up and down ramps

Constraining start up and shut down ramps

General principle and example use cases

The general principle of the Spine modelling ramping constraints is that all of these parameters can be defined separately for each unit. This allows the user to incorporate different units (which can either represent a single unit or a technology type) with different flexibility characteristics.

It should be noted that it is perfectly possible to omit all of the ramp constraining parameters mentioned above, or to specify only some of them. Anything that is omitted is interpreted as if it shouldn't be constrained. For example, if you only specify start_up_limit and ramp_down_limit, then only the flow increase during start up and the flow decrease during online operation will be constrained (but not any other flow increase or decrease).

Illustrative examples

Step 1: Simple case of unrestricted unit

When none of the ramping parameters mentioned above are specified, the unit is considered to have full ramping flexibility. This means that over any period of time, its flow can be any value between 0 and its capacity, regardless of what the flow of the unit was in previous timesteps, and regardless of the on- or offline status of the unit in previous timesteps (while still respecting, of course, the Unit commitment restrictions that are defined for this unit). This is equivalent to specifying the following:

  • shut_down_limit : 1
  • start_up_limit : 1
  • ramp_up_limit : 1
  • ramp_down_limit : 1

Step 2: Spinning ramp restriction

A unit which is only restricted in spinning ramping can be created by changing the ramp_up/down_limit parameters:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that, for a unit with a capacity of 200, the flow cannot increase by more than $0.2 * 200 = 40$ and cannot decrease by more than $0.4 * 200 = 80$ over a period of time equal to one duration_unit. For example, when the unit is running at an output of $100$ in some timestep $t$, its output over the next duration_unit must be somewhere in the interval $[20, 140]$ - unless it shuts down completely.
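The interval arithmetic above can be sketched as follows (a hypothetical helper, not part of SpineOpt, assuming the unit capacity of 200 used in this example):

```python
# Hypothetical helper: allowed flow interval over one duration_unit for a
# unit that stays online, given spinning ramp limits.
def spinning_ramp_interval(flow, capacity, ramp_up_limit, ramp_down_limit):
    lower = max(0.0, flow - ramp_down_limit * capacity)
    upper = min(capacity, flow + ramp_up_limit * capacity)
    return lower, upper

spinning_ramp_interval(100.0, 200.0, ramp_up_limit=0.2, ramp_down_limit=0.4)  # (20.0, 140.0)
```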

Step 3: Shutdown restrictions

By specifying the parameter shut_down_limit, an additional restriction is imposed on the maximum flow of the unit at the moment it goes offline:

  • shut_down_limit : 0.5
  • minimum_operating_point : 0.3

When the unit goes offline in a given timestep $t$, the output of the unit must be below $0.5 * 200 = 100$ in the timestep right before $t$ (and of course, above $0.3 * 200 = 60$ - the minimum operating point).

Step 4: Startup restrictions

The start up restrictions are very similar to the shut down restrictions, but of course apply to units that are starting up. They are activated by specifying start_up_limit:

  • start_up_limit : 0.4
  • minimum_operating_point : 0.2

When the unit goes online in a given timestep $t$, its output will be restricted to the interval $[40, 80]$.
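Both windows can be written out numerically (hypothetical helpers, not part of SpineOpt; the capacity of 200 is assumed from the running example):

```python
# Hypothetical helpers for the start-up and shut-down flow windows.
def startup_window(capacity, start_up_limit, minimum_operating_point):
    """Allowed flow interval in the first online timestep."""
    return (minimum_operating_point * capacity, start_up_limit * capacity)

def shutdown_window(capacity, shut_down_limit, minimum_operating_point):
    """Allowed flow interval in the last online timestep before going offline."""
    return (minimum_operating_point * capacity, shut_down_limit * capacity)

startup_window(200.0, start_up_limit=0.4, minimum_operating_point=0.2)    # (40.0, 80.0)
shutdown_window(200.0, shut_down_limit=0.5, minimum_operating_point=0.3)  # (60.0, 100.0)
```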

Using node groups to constrain aggregated flow ramps

SpineOpt allows the user to constrain ramping abilities of units that are linked to multiple nodes by defining node groups. When a node group is defined, ramping restrictions can be imposed both on the group level (thus for the unit as a whole) as well as for the individual nodes. For example, let's assume that we have one unit and two nodes in a model. The unit is linked via unit__to_node relationships to each node individually, and on top of that, it is linked to a node group containing both nodes.

If, for example, a ramp_up_limit is defined for the node group, the sum of upward ramping of the two nodes will be restricted by this parameter. However, it is still possible to limit the individual flows to the nodes as well. Let's say that our unit is capable of ramping up by 20% of its capacity and down by 40%. We might want to impose tighter restrictions on the flows towards one of the nodes (e.g. because the energy has to be provided in a shorter time than the duration_unit). One can then simply define an additional parameter for that unit__to_node relationship as follows.

  • ramp_up_limit : 0.15

This restricts the flow of the unit into that node to 15% of its capacity.

Please note that by default, node groups are balanced in the same way as individual nodes. So if you're using node groups for the sole purpose of constraining flow ramps, you should set the balance type of the group to balance_type_none.

Ramping with reserves

If a unit is set to provide reserves, then it should be able to provide that reserve within one duration_unit. For this reason, reserve provision must be accounted for within ramp constraints. Please see Reserves for details on how to set up a node as a reserve.

Examples

Let's assume that we have one unit and two nodes in a model, one for reserves and one for regular demand. The unit is then linked by the unit__to_node relationships to both the reserves and regular demand node.

Spinning ramp restriction

The unit can be restricted in spinning ramping by defining the ramp_up/down_limit parameters in the unit__to_node relationship for the regular demand node:

  • ramp_up_limit : 0.2
  • ramp_down_limit : 0.4

This parameter choice implies that the unit's flow to the regular demand node cannot increase more than $0.2 * 200 - upward\_reserve\_demand$ or decrease more than $0.4 * 200 - downward\_reserve\_demand$ over one duration_unit. For example, when the unit is running at an output of $100$ and there is an upward reserve demand of $10$, then its output over the next duration_unit must be somewhere in the interval $[20, 130]$.

It can be seen in this example that the demand for reserves is subtracted from the ramping capacity of the unit that is available for regular operation. This stems from the fact that in providing reserve capacity, the unit is expected to be able to provide the demanded reserve within one duration_unit as stated above.
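The example interval can be reproduced with a small sketch (a hypothetical helper, not part of SpineOpt; the capacity of 200 and a downward reserve demand of 0 are assumed):

```python
# Hypothetical helper: reserve demand is subtracted from the ramping room
# available for the regular flow.
def ramp_interval_with_reserves(flow, capacity, ramp_up_limit, ramp_down_limit,
                                upward_reserve_demand, downward_reserve_demand):
    lower = max(0.0, flow - (ramp_down_limit * capacity - downward_reserve_demand))
    upper = min(capacity, flow + ramp_up_limit * capacity - upward_reserve_demand)
    return lower, upper

ramp_interval_with_reserves(100.0, 200.0, 0.2, 0.4,
                            upward_reserve_demand=10.0,
                            downward_reserve_demand=0.0)  # (20.0, 130.0)
```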


Representative days with seasonal storages

In order to reduce the problem size, representative periods are often used in optimization models. However, this often limits the ability to properly account for seasonal storages.

In SpineOpt, we provide functionality to use representative days with seasonal storages.

General idea

The general idea is to mimic the seasonal effects throughout a non-representative period, e.g. a year of optimization, by introducing a specific sequence of the representative periods.

Usage of representative days and seasonal storages for investment problems

Assuming you already have an investment model with a certain temporal structure that works, you can turn it into a representative periods model with the following steps.

  1. Select the representative periods. For example, if you are modelling a year, you can select a few weeks (one in summer, one in winter, and one in mid-season).
  2. For each representative period, create a temporal_block specifying block_start, block_end and resolution.
  3. Associate these temporal_blocks to some nodes and units in your system, via node__temporal_block and units_on__temporal_block relationships.
  4. Finally, for each original temporal_block associated to the nodes and units above, specify the value of the representative_periods_mapping parameter. This should be a map where each entry associates a date-time with the name of one of the representative period temporal_blocks created in step 2. More specifically, an entry with t as the key and b as the value means that time slices from the original block starting at t are 'represented' by time slices from the b block. In other words, time slices between t and t plus the duration of b are represented by b.
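The mapping in the last step can be pictured as follows. The block names, dates and durations are invented for illustration, and in SpineOpt the mapping is a Map parameter value in the database, not Python code:

```python
from datetime import datetime, timedelta

# Illustrative only: an entry t -> b means time slices from t until
# t + duration(b) are represented by block b.
block_duration = {"winter_week": timedelta(days=7), "summer_week": timedelta(days=7)}
representative_periods_mapping = {
    datetime(2030, 1, 1): "winter_week",
    datetime(2030, 1, 8): "summer_week",
}

def representing_block(t):
    """The representative block whose mapped window contains t, if any."""
    for start, block in sorted(representative_periods_mapping.items()):
        if start <= t < start + block_duration[block]:
            return block
    return None

representing_block(datetime(2030, 1, 10))  # "summer_week"
```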

In SpineOpt, this will be interpreted in the following way:

  • For each node and unit associated to any of your representative temporal_blocks, the operational variables (with the exception of node_state) will be created only for the representative periods. For the non-representative periods, SpineOpt will use the variable of the corresponding representative period according to the value of the representative_periods_mapping parameter.
  • The node_state variable and the investment variables will be created for all periods, representative and non-representative.

The SpinePeriods.jl package provides an alternative, perhaps simpler way to set up a representative periods model based on the automatic selection and ordering of periods.

+Representative days with seasonal storages · SpineOpt.jl

Representative days with seasonal storages

In order to reduce the problem size, representative periods are often used in optimization models. However, this often limits the ability to properly account for seasonal storages.

In SpineOpt, we provide functionality to use representative days with seasonal storages.

General idea

The general idea is to mimick the seasonal effects throughout a non-representative period, e.g. a year of optimization, by introducing a specific sequence of the representative periods.

Usage of representative days and seasonal storages for investment problems

Assuming you already have an investment model with a certain temporal structure that works, you can turn it into a representative periods model with the following steps.

  1. Select the representative periods. For example if you are modelling a year, you can select a few weeks (one in summer, one in winder, and one in mid season).
  2. For each representative period, create a temporal_block specifying block_start, block_end and resolution.
  3. Associate these temporal_blocks to some nodes and units in your system, via node__temporal_block and units_on__temporal_block relationships.
  4. Finally, for each original temporal_block associated to the nodes and units above, specify the value of the representative_periods_mapping parameter. This should be a map where each entry associates a date-time to the name of one of the representative period temporal_blocks created in step 3. More specifically, an entry with t as the key and b as the value means that time slices from the original block starting at t, are 'represented' by time slices from the b block. In other words, time slices between t and t plus the duration of b are represented by b.

In SpineOpt, this will be interpreted in the following way:

  • For each node and unit associated to any of your representative temporal_blocks, the operational variables (with the exception of node_state) will be created only for the representative periods. For the non-representative periods, SpineOpt will use the variable of the corresponding representative period according to the value of the representative_periods_mapping parameter.
  • The node_state variable and the investment variables will be created for all periods, representative and non-representative.

The SpinePeriods.jl package provides an alternative, perhaps simpler way to set up a representative periods model based on the automatic selection and ordering of periods.


Reserves

SpineOpt provides a way to include reserve provision in a model by creating reserve nodes. Reserve provision is different from regular operations as it involves withholding capacity, rather than producing a certain commodity (e.g., energy).

This section covers the reserve concepts, but we highly recommend checking out the reserves tutorial for a more thorough understanding of how the model is set up.

Defining a reserve node

To define a reserve node, the following parameters have to be defined for the relevant node:

  • is_reserve_node : this boolean parameter indicates that this node is a reserve node.
  • upward_reserve : this boolean parameter indicates that the demand for reserve provision of this node concerns upward reserves.
  • downward_reserve : this boolean parameter indicates that the demand for reserve provision of this node concerns downward reserves.
  • reserve_procurement_cost: (optional) this parameter indicates the procurement cost of a unit for a certain reserve product and can be defined on a unit__to_node or unit__from_node relationship.

Defining a reserve group

The reserve group definition allows the creation of a unit flow capacity constraint where all the unit flows to different commodities, including the reserve provision, are considered to limit the maximum unit capacity.

The definition of the reserve group also allows the creation of minimum operating point, ramp up, and ramp down constraints, considering flows and reserve provisions.

The relationship between the unit and the node group (i.e., unit__to_node or unit__from_node) is essential to define the parameters needed for the constraints (e.g., unit_capacity, minimum_operating_point, ramp_up_limit, or ramp_down_limit).

Illustrative examples

In this example, we will consider a unit that can provide upward and downward reserves, along with producing electricity. Therefore, the model needs to consider both characteristics of electricity production and reserve provision in the constraints.

Let's take a look at the unit flow capacity constraint and the minimum operating point. For an illustrative example of ramping constraints and reserves, please visit the illustrative example of the ramping section.

Unit flow capacity constraint with reserve

Assuming the following parameters, we consider a fully flexible unit in the definition of the unit flow capacity constraint:

  • unit_capacity : 100
  • shut_down_limit: 1
  • start_up_limit : 1

The parameters indicate that the unit capacity is 100 (e.g., 100 MW) and the shutdown and startup limits are 1 p.u. This means that the unit can start up or shut down to its maximum capacity, making it a fully flexible unit.

Taking into account the constraint and the fact that the unit can provide upward reserve and generate electricity, a simplified version of the resulting constraint is:

$unit\_flow\_to\_electricity + upwards\_reserve \leq 100 \cdot units\_on$

Here, we can see that the flow to the electricity node depends on the unit's capacity and the upward reserve provision of the unit.

Minimum operating point constraint with reserve

We need to consider the following parameters for the minimum operating point constraint:

  • minimum_operating_point : 0.25

This value means that the unit's minimum operating point is 25% of its capacity (i.e., 25 MW). Therefore, the simplified version of the resulting constraint is:

$unit\_flow\_to\_electricity - downward\_reserve \geq 25 \cdot units\_on$

Here, the downward reserve limits the flow to the electricity node to ensure that the minimum operating point of the unit is fulfilled.
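The two simplified constraints above can be sanity-checked numerically. This is a hypothetical Python sketch mirroring the example's numbers (100 MW capacity, 25% minimum operating point), not SpineOpt code:

```python
def capacity_ok(flow, up_reserve, units_on, unit_capacity=100):
    # flow plus upward reserve must fit under the committed capacity
    return flow + up_reserve <= unit_capacity * units_on

def min_operating_ok(flow, down_reserve, units_on, min_point=0.25, unit_capacity=100):
    # flow minus downward reserve must stay above the minimum operating point
    return flow - down_reserve >= min_point * unit_capacity * units_on

print(capacity_ok(80, 20, 1))       # True: 80 + 20 <= 100
print(capacity_ok(85, 20, 1))       # False: 105 > 100
print(min_operating_ok(35, 10, 1))  # True: 35 - 10 >= 25
print(min_operating_ok(30, 10, 1))  # False: 20 < 25
```

Note how withholding upward reserve leaves less headroom for the electricity flow, while withholding downward reserve forces the flow further above the minimum operating point.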

# If not a root `stochastic_scenario`
weight(scenario) = sum(weight(parent) * weight_relative_to_parents(scenario) for parent in parents)
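The recursion above can be illustrated with a small sketch. This is hypothetical Python (not SpineOpt's implementation), using a made-up DAG where one realization branches into two forecasts that converge again:

```python
# Hypothetical stochastic DAG and relative weights for illustration only.
parents = {
    "realization": [],
    "forecast1": ["realization"],
    "forecast2": ["realization"],
    "converged": ["forecast1", "forecast2"],
}
weight_relative_to_parents = {
    "realization": 1.0,
    "forecast1": 0.5,
    "forecast2": 0.5,
    "converged": 1.0,
}

def weight(scenario):
    if not parents[scenario]:  # root scenario: weight given directly
        return weight_relative_to_parents[scenario]
    # Otherwise: sum the parents' weights, scaled by the relative weight.
    return sum(weight(p) * weight_relative_to_parents[scenario] for p in parents[scenario])

print(weight("converged"))  # 1.0: converging recovers the full probability mass
```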

Finally, with all the pieces in place, we'll need to connect the defined stochastic_structure objects to the desired objects in the Systemic object classes using the Structural relationship classes like node__stochastic_structure etc. Here, we essentially tell which parts of the modelled system use which stochastic_structure. Since creating each of these relationships individually can be a bit of a pain, there are a few Meta relationship classes like the model__default_stochastic_structure, that can be used to set model-wide defaults that are used if specific relationships are missing.

Example of deterministic stochastics

Here, we'll demonstrate step-by-step how to create the simplest possible stochastic structure: the fully deterministic one. See the Deterministic Stochastic Structure archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create a stochastic_scenario called e.g. realization and a stochastic_structure called e.g. deterministic.
  2. We can skip the parent_stochastic_scenario__child_stochastic_scenario relationship, since there isn't a stochastic DAG in this example, and the default behaviour of each stochastic_scenario being independent works for our purposes (only one stochastic_scenario anyhow).
  3. Create the stochastic_structure__stochastic_scenario relationship for (deterministic, realization), and set its weight_relative_to_parents parameter to 1. We don't need to define the stochastic_scenario_end parameter, as we want the realization to go on indefinitely.
  4. Relate the deterministic stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of branching stochastics

Here, we'll demonstrate step-by-step how to create a simple branching stochastic tree, where one scenario branches into three at a specific point in time. See the Branching Stochastic Tree archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Create four stochastic_scenario objects called e.g. realization, forecast1, forecast2, and forecast3, and a stochastic_structure called e.g. branching.
  2. Define the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (realization, forecast1), (realization, forecast2), and (realization, forecast3).
  3. Create the stochastic_structure__stochastic_scenario relationship for (branching, realization), (branching, forecast1), (branching, forecast2), and (branching, forecast3).
  4. Set the weight_relative_to_parents parameter to 1 and the stochastic_scenario_end parameter e.g. to 6h for the stochastic_structure__stochastic_scenario relationship (branching, realization). Now, the realization stochastic_scenario will end after 6 hours of time steps, and its children (forecast1, forecast2, and forecast3) will become active.
  5. Set the weight_relative_to_parents Parameters for the (branching, forecast1), (branching, forecast2), and (branching, forecast3) stochastic_structure__stochastic_scenario relationships to whatever you desire, e.g. 0.33 for equal probabilities across all forecasts.
  6. Relate the branching stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Example of converging stochastics

Here, we'll demonstrate step-by-step how to create a simple stochastic DAG, where both branching and converging occur. This example relies on the previous Example of branching stochastics, but adds another stochastic_scenario at the end, which is a child of the forecast1, forecast2, and forecast3 scenarios. See the Converging Stochastic Tree archetype for what the final data structure looks like, as well as how to connect this stochastic_structure to the rest of your model.

  1. Follow the steps 1-5 in the previous Example of branching stochastics, except call the stochastic_structure something different, e.g. converging.
  2. Create a new stochastic_scenario called e.g. converged_forecast.
  3. Alter the stochastic DAG by creating the parent_stochastic_scenario__child_stochastic_scenario relationships for (forecast1, converged_forecast), (forecast2, converged_forecast), and (forecast3, converged_forecast). Now all three forecasts will converge into a single converged_forecast.
  4. Add the stochastic_structure__stochastic_scenario relationship for (converging, converged_forecast), and set its weight_relative_to_parents parameter to 1. Now, all the probability mass in forecast1, forecast2, and forecast3 will be summed up back to the converged_forecast.
  5. Set the stochastic_scenario_end Parameters of the stochastic_structure__stochastic_scenario relationships (converging, forecast1), (converging, forecast2), and (converging, forecast3) to e.g. 1D, so that all three scenarios end at the same time and the converged_forecast becomes active.
  6. Relate the converging stochastic_structure to all the desired system objects using the appropriate Structural relationship classes, or use the model-level default Meta relationship classes.

Working with stochastic updating data

Now that we've discussed how to set up stochastics for SpineOpt, let's focus on stochastic data. The most complex form of input data SpineOpt can currently handle is both stochastic and updating, meaning that the values the parameter takes can depend on both the stochastic_scenario, and the analysis time (first time step) of each solve. However, just stochastic or just updating cases are supported as well, using the same input data format.

In SpineOpt, stochastic data uses the Map data type from SpineInterface.jl. Essentially, Maps are general indexed data containers, which SpineOpt tries to interpret as stochastic data. Every time SpineOpt calls a parameter, it passes the stochastic_scenario and analysis time as keyword arguments to the parameter, but depending on the parameter type, it doesn't necessarily do anything with that information. For Map type parameters, those keyword arguments are used for navigating the indices of the Map to try and find the corresponding value. If the Map doesn't include the stochastic_scenario index it's looking for, it assumes there's no stochastic information in the Map and carries on to search for analysis time indices. This logic is useful for defining both stochastic and updating data, as well as either case by itself, as shown in the following examples.
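The lookup fallback described above can be sketched roughly as follows. This is an illustrative Python approximation using plain dictionaries, not SpineInterface.jl's actual Map implementation:

```python
from datetime import datetime

def lookup(map_value, scenario, analysis_time):
    # If the scenario appears among the keys, descend into that branch first...
    if isinstance(map_value, dict) and scenario in map_value:
        return lookup(map_value[scenario], scenario, analysis_time)
    # ...otherwise assume DateTime keys: pick the latest entry not after analysis_time.
    if isinstance(map_value, dict):
        applicable = [t for t in map_value if t <= analysis_time]
        return lookup(map_value[max(applicable)], scenario, analysis_time)
    return map_value  # plain value: no stochastic or updating structure left

# Stochastic updating data: indexed by scenario, then analysis time.
data = {
    "scenario1": {datetime(2000, 1, 1, 0): "value1", datetime(2000, 1, 1, 12): "value2"},
    "scenario2": {datetime(2000, 1, 1, 0): "value3", datetime(2000, 1, 1, 12): "value4"},
}
print(lookup(data, "scenario1", datetime(2000, 1, 1, 6)))   # value1
print(lookup(data, "scenario2", datetime(2000, 1, 1, 18)))  # value4

# Updating-only data: no scenario index, so the scenario argument is ignored.
updating_only = {datetime(2000, 1, 1, 0): "value1", datetime(2000, 1, 1, 12): "value2"}
print(lookup(updating_only, "scenario1", datetime(2000, 1, 1, 6)))  # value1
```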

Example of stochastic data

By stochastic data, we mean parameter values that depend only on the stochastic_scenario. In such a case, the input data must be formatted as a Map with the following structure

stochastic_scenario | value
scenario1           | value1
scenario2           | value2

where stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 in scenario1, and value2 in scenario2. Note that since there's no analysis time index in this example, the values are used regardless of the analysis time.

Example of updating data

By updating data, we mean parameter values that depend only on the analysis time. In such a case, the input data must be formatted as a Map with the following structure

analysis time       | value
2000-01-01T00:00:00 | value1
2000-01-01T12:00:00 | value2

where the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00, and value2 if the first time step of the simulation is after 2000-01-01T12:00:00. Note that since there's no stochastic_scenario index in this example, the values are used regardless of the stochastic_scenario.

Example of stochastic updating data

By stochastic updating data, we mean parameter values that depend on both the stochastic_scenario and the analysis time. In such a case, the input data must be formatted as a Map with the following structure

stochastic_scenario | analysis time       | value
scenario1           | 2000-01-01T00:00:00 | value1
scenario1           | 2000-01-01T12:00:00 | value2
scenario2           | 2000-01-01T00:00:00 | value3
scenario2           | 2000-01-01T12:00:00 | value4

where the stochastic_scenario indices are simply Strings corresponding to the names of the stochastic_scenario objects, and the analysis time indices are DateTime values. The values can be whatever data types SpineInterface.jl supports, like Constants, DateTimes, Durations, or TimeSeries. In the above example, the parameter will take value1 if the first time step of the current simulation is between 2000-01-01T00:00:00 and 2000-01-01T12:00:00 and the parameter is called in scenario1, and value3 in scenario2. If the first time step of the current simulation is after 2000-01-01T12:00:00, the parameter will take value2 in scenario1, and value4 in scenario2.

Constraint generation with stochastic path indexing

Every time a constraint might refer to variables either on different time steps or on different stochastic scenarios (meaning different nodes or units), the constraint needs to use stochastic path indexing in order to be correctly generated for arbitrary stochastic DAGs. In practice, this means following the procedure outlined below:

  1. Identify all unique full stochastic paths, meaning all the possible ways of traversing the DAG. This is done while generating the stochastic structure, so it has no real impact on constraint generation.
  2. Find all the stochastic scenarios that are active on all the stochastic structures and time slices included in the constraint.
  3. Find all the unique stochastic paths by intersecting the set of active scenarios with the full stochastic paths.
  4. Generate constraints over each unique stochastic path found in step 3.
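A minimal sketch of steps 1-3, assuming a toy DAG and plain Python (not SpineOpt's actual indexing functions):

```python
# Hypothetical stochastic DAG, stored as child lists.
children = {
    "realization": ["forecast1", "forecast2"],
    "forecast1": [],
    "forecast2": [],
}

def full_paths(scenario, prefix=()):
    # Step 1: enumerate every root-to-leaf path through the DAG.
    path = prefix + (scenario,)
    if not children[scenario]:
        return [path]
    return [p for c in children[scenario] for p in full_paths(c, path)]

def active_paths(active, root="realization"):
    # Step 3: restrict each full path to the active scenarios; keep the unique, non-empty results.
    restricted = {tuple(s for s in p if s in active) for p in full_paths(root)}
    return sorted(p for p in restricted if p)

print(full_paths("realization"))
# [('realization', 'forecast1'), ('realization', 'forecast2')]
print(active_paths({"realization", "forecast1"}))
# [('realization',), ('realization', 'forecast1')]
```

The constraint would then be generated once per path returned by the second function (step 4).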

Steps 2 and 3 are the crucial ones, and are currently handled by separate constraint_<constraint_name>_indices functions. Essentially, these functions go through all the variables on all the time steps included in the constraint, collect the set of active stochastic_scenarios on each time step, and then determine the unique active stochastic paths on each time step. The functions pre-form the index set over which the constraint is then generated in the add_constraint_<constraint_name> functions.


Temporal Framework

Spine Model aims to provide a high degree of flexibility in the temporal dimension across different components of the created model. This means that the user has some freedom to choose how the temporal aspects of different components of the model are defined. This freedom increases the variety of problems that can be tackled in Spine: from very coarse, long term models, to very detailed models with a more limited horizon, or a mix of both. The choice of the user on how this flexibility is used will lead to the temporal structure of the model.

The main components of flexibility consist of the following parts:

  • The horizon that is modeled: end and start time
  • Temporal resolution
  • Possibility of a rolling optimization window
  • Support for commonly used methods such as representative days

Part of the temporal flexibility in Spine comes from the fact that the options mentioned above can be implemented differently across different components of the model, which can be very useful when different markets are coupled in a single model. The resolution and horizon of the gas market can, for example, be chosen differently from those of the electricity market. This documentation aims to give the reader insight into how these aspects are defined, and which objects are used for this.

We start by introducing the relevant objects with their parameters, and the relevant relationship classes for the temporal structure. Afterwards, we will discuss how this setting creates flexibility and will present some of the practical approaches to create a variety of temporal structures.

Objects, relationships, and their parameters

In this section, the objects and relationships will be discussed that form the temporal structure together.

Objects relevant for the temporal framework

For the objects, the relevant parameters will also be introduced, along with the type of values that are allowed, following the format below:

  • 'parameter_name' : "Allowed value type"

model object

Each model object holds general information about the model at hand. Here we only discuss the time related parameters:

These two parameters define the model horizon. Both take a DateTime value, directly marking the beginning and end of the modeled time horizon, respectively.

This parameter gives the unit of duration that is used in the model calculations. The default value for this parameter is 'minute'. For example, if the duration_unit is set to hour, a Duration of one minute is converted into 1/60 hours for the calculations.
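
What this conversion means can be sketched in plain Python (an illustrative sketch, not SpineOpt code; the helper name is hypothetical):

```python
# Illustrative sketch: expressing a duration in the model's duration_unit.
from datetime import timedelta

def to_duration_unit(delta, duration_unit):
    """Express a timedelta in the chosen duration_unit ('minute' or 'hour')."""
    seconds_per_unit = {"minute": 60, "hour": 3600}[duration_unit]
    return delta.total_seconds() / seconds_per_unit

print(to_duration_unit(timedelta(minutes=1), "hour"))   # one minute in hours: 1/60
print(to_duration_unit(timedelta(hours=2), "minute"))   # two hours in minutes: 120
```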

This parameter defines how much the optimization window rolls forward in a rolling horizon optimization and should be expressed as a duration. In the practical approaches presented below, the rolling window optimization will be explained in more detail.

temporal_block object

A temporal block defines the properties of the optimization that is to be solved in the current window. Most importantly, it holds the necessary information about the resolution and horizon of the optimization.

  • resolution (optional): "duration value" or "array of duration values"

This parameter specifies the resolution of the temporal block, or in other words: the length of the timesteps used in the optimization run.

  • block_start (optional): "duration value" or "DateTime value"

Indicates the start of this temporal block.

  • block_end (optional): "duration value" or "DateTime value"

Indicates the end of this temporal block.

Relationships relevant for the temporal framework

model__temporal_block relationship

In this relationship, a model instance is linked to a temporal block. If this relationship doesn't exist, the temporal block is disregarded in this optimization model.

model__default_temporal_block relationship

Defines the default temporal block used for model objects, which will be replaced when a specific relationship is defined for a model in model__temporal_block.

node__temporal_block relationship

This relationship will link a node to a temporal block.

units_on__temporal_block relationship

This relationship links the units_on variable of a unit to a temporal block and will therefore govern the time-resolution of the unit's online/offline status.

unit__investment_temporal_block relationship

This relationship sets the temporal dimensions for investment decisions of a certain unit. The separation between this relationship and units_on__temporal_block allows the user, for example, to give a much finer resolution to a unit's on- or offline status than to its investment decisions.

model__default_investment_temporal_block relationship

Defines the default temporal block used for investment decisions, which will be replaced when a specific relationship is defined for a unit in unit__investment_temporal_block.

General principle of the temporal framework

The general principle of the Spine modeling temporal structure is that different temporal blocks can be defined and linked to different objects in a model. This leads to great flexibility in the temporal structure of the model as a whole. To illustrate this, we will discuss some of the possibilities that arise in this framework.

One single temporal_block

Single solve with single block

The simplest case is a single solve of the entire time horizon (so roll_forward not defined) with a fixed resolution. In this case, only one temporal block has to be defined with a fixed resolution. Each node has to be linked to this temporal_block.

Alternatively, a variable resolution can be defined by choosing an array of durations for the resolution parameter. The sum of the durations in the array then has to match the length of the temporal block. The example below illustrates an optimization that spans one day, for which the resolution is hourly in the beginning and then gradually coarsens to 6h at the end.

  • temporal_block_1
    • block_start: 0h (Alternative DateTime: e.g. 2030-01-01T00:00:00)
    • block_end: 1D (Alternative DateTime: e.g. 2030-01-02T00:00:00)
    • resolution: [1h 1h 1h 1h 2h 2h 2h 4h 4h 6h]

Note that, as mentioned above, the block_start and block_end parameters can also be entered as absolute values, i.e. DateTime values.
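
The bookkeeping implied by a variable resolution can be sketched in plain Python (an illustrative sketch, not SpineOpt internals; all names are hypothetical):

```python
# Illustrative sketch: expanding a variable-resolution array into timestep
# boundaries, and checking that the durations cover the whole block.
from datetime import datetime, timedelta

def expand_resolution(block_start, block_end, resolution):
    """Return the (start, end) pairs implied by an array of durations."""
    total = sum(resolution, timedelta())
    if block_start + total != block_end:
        raise ValueError("durations must sum to the block length")
    steps, t = [], block_start
    for dt in resolution:
        steps.append((t, t + dt))
        t += dt
    return steps

h = lambda n: timedelta(hours=n)
# The resolution array from the example above: [1h 1h 1h 1h 2h 2h 2h 4h 4h 6h]
resolution = [h(1)] * 4 + [h(2)] * 3 + [h(4)] * 2 + [h(6)]
steps = expand_resolution(datetime(2030, 1, 1), datetime(2030, 1, 2), resolution)
print(len(steps))  # 10 timesteps covering one day
```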

Rolling window optimization with single block

A model with a single temporal_block can also be optimized in a rolling horizon framework. In this case, the roll_forward parameter has to be defined in the model object. The roll_forward parameter will then determine how much the optimization moves forward with every step, while the size of the temporal block will determine how large a time frame is optimized in each step. To see this more clearly, let's take a look at an example.

Suppose we want to model a horizon of one week, with a rolling window size of one day. The roll_forward parameter will then be a duration value of 1d. If we take the temporal_block parameters block_start and block_end to be the duration values 0h and 1d respectively, the model will optimize each day of the week separately. However, we could also take the block_end parameter to be 2d. Now the model will start by optimizing day 1 and day 2 together, after which it keeps only the values obtained for the first day, and moves forward to optimize the second and third day together.

Again, a variable resolution can be implemented for the rolling window optimization. The sum of the durations must in this case match the size of the optimized window.
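
How roll_forward, block_start and block_end carve the horizon into windows can be sketched as follows (plain Python, an illustrative sketch with hypothetical names, not SpineOpt code):

```python
# Illustrative sketch: the one-week example above with a 2-day block rolled
# forward one day at a time; only the first day of each solve is kept.
from datetime import datetime, timedelta

def rolling_windows(model_start, model_end, roll_forward, block_start, block_end):
    """List the (start, end) of each successive optimization window."""
    windows, t = [], model_start
    while t + block_start < model_end:
        windows.append((t + block_start, min(t + block_end, model_end)))
        t += roll_forward
    return windows

day = timedelta(days=1)
wins = rolling_windows(datetime(2030, 1, 1), datetime(2030, 1, 8),
                       roll_forward=day, block_start=timedelta(0), block_end=2 * day)
print(len(wins))  # 7 windows; the last one is clipped at the model end
```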

Advanced usage: multiple temporal_block objects

Single solve with multiple blocks

Disconnected time periods

Multiple temporal blocks can be used to optimize disconnected periods. Let's take a look at an example in which two temporal blocks are defined.

  • temporal_block_1
    • block_start: 0h
    • block_end: 4h
  • temporal_block_2
    • block_start: 12h
    • block_end: 16h

This example will lead to an optimization of the first four hours of the model horizon, and also of hour 12 to 16. By defining exactly the same relationships for the two temporal blocks, an optimization of disconnected periods is achieved for exactly the same model components. This leads to the possibility of implementing the widely used representative days method. If desired, it is possible to choose a different temporal resolution for the different temporal_blocks.

It is worth noting that dynamic variables like node_state and units_on merit special attention when using disconnected time periods. By default, when trying to access variables outside the defined temporal_blocks, SpineOpt.jl assumes such variables exist but allows them to take any values within specified bounds. If fixed initial conditions for the disconnected periods are desired, one needs to use parameters such as fix_node_state or fix_units_on.

Different regions/commodities in different resolutions

Multiple temporal blocks can also be used to model different regions or different commodities with a different resolution. This is especially useful when there is a certain region or commodity of interest, while other elements are connected to this but require less detail. For this kind of usage, the relationships that are defined for the temporal blocks will be different, as shown in the example below.

  • temporal_blocks
    • temporal_block_1
      • resolution: 1h
    • temporal_block_2
      • resolution: 2h
  • nodes
    • node_1
    • node_2
  • node__temporal_block relationships
    • node_1_temporal_block_1
    • node_2_temporal_block_2

Similarly, the on- and offline status of a unit can be modeled with a lower resolution than the actual output of that unit, by defining the units_on__temporal_block relationship with a different temporal block than the one used for the node__temporal_block relationship (of the node to which the unit is connected).

Rolling horizon with multiple blocks

Rolling horizon with different window sizes

Similar to what has been discussed above in Different regions/commodities in different resolutions, different commodities or regions can be modeled with a different resolution in the rolling horizon setting. The way to do it is completely analogous. Furthermore, when using the rolling horizon framework, a different window size can be chosen for the different modeled components, simply by using a different block_end parameter. However, using different block_end values, e.g. for interconnected regions, should be treated with care, as the variables for each region will only be generated for their respective temporal_block, which in most cases will lead to inconsistent linking constraints.

Putting it all together: rolling horizon with variable resolution that differs for different model components

Below is an example of an advanced use case in which a rolling horizon optimization is used, and different model components are optimized with a different resolution, by choosing the relevant parameters in the following way:

  • model
    • roll_forward: 4h
  • temporal_blocks
    • temporal_block_A
      • resolution: [1h 1h 2h 2h 2h 3h 3h]
      • block_end: 14h
    • temporal_block_B
      • resolution: [2h 2h 4h 6h]
      • block_end: 14h
  • nodes
    • node_1
    • node_2
  • node__temporal_block relationships
    • node_1_temporal_block_A
    • node_2_temporal_block_B

The two model components that are considered have a different resolution, and their own resolution is also varying within the optimization window. Note that in this case the two optimization windows have the same size, but this is not strictly necessary. The image below visualizes the first two window optimizations of this model.

temporal structure
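
As a quick sanity check of the example above, both resolution arrays cover the same 14h window even though their resolutions differ (plain Python sketch, not SpineOpt code):

```python
# Illustrative sketch: both temporal blocks span 14h, matching block_end.
from datetime import timedelta

h = lambda n: timedelta(hours=n)
resolution_A = [h(1), h(1), h(2), h(2), h(2), h(3), h(3)]
resolution_B = [h(2), h(2), h(4), h(6)]

span_A = sum(resolution_A, timedelta())
span_B = sum(resolution_B, timedelta())
print(span_A == span_B == h(14))  # both arrays sum to the 14h window
```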

diff --git a/dev/advanced_concepts/unit_commitment/index.html b/dev/advanced_concepts/unit_commitment/index.html index 988a199e98..5c39a7c9d9 100644 --- a/dev/advanced_concepts/unit_commitment/index.html +++ b/dev/advanced_concepts/unit_commitment/index.html @@ -1,2 +1,2 @@ -Unit Commitment · SpineOpt.jl

Unit commitment

To incorporate technical detail about (clustered) unit-commitment statuses of units, the online, started and shutdown status of units can be tracked and constrained in SpineOpt. In the following, relevant relationships and parameters are introduced and the general working principle is described.

Key concepts for unit commitment

Here, we briefly describe the key concepts involved in the representation of (clustered) unit commitment models:

  • units_on is an optimization variable that holds information about the on- or offline status of a unit. Unit commitment restrictions will govern how this variable can change through time.

  • units_on__temporal_block is a relationship linking the units_on variable of this unit to a specific temporal_block object. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

  • online_variable_type is a method parameter and can take the values unit_online_variable_type_binary, unit_online_variable_type_integer, unit_online_variable_type_linear. If the binary value is chosen, the unit's status is modelled as a binary variable (classic UC). For clustered unit commitment units, the integer type is applicable. Note that if the parameter is not defined, the default will be linear. If the unit's status is not crucial, this can reduce the computational burden.

  • number_of_units defines how many units of a certain unit type are available. Typically this parameter takes a binary (UC) or integer (clustered UC) value. To avoid confusion, the following distinction will be made in this document: unit identifies a Spine unit object, which can have multiple members. Together with the unit_availability_factor, this parameter determines the maximum number of members that can be online at any given time (thus restricting the units_on variable). The default value for this parameter is $1$. It is possible to allow the model to increase the number_of_units itself, through Investment Optimization.

  • unit_availability_factor: (number value or time series). Is the fraction of the time that this unit is considered to be available, by acting as a multiplier on the capacity. A time series can be used to indicate the intermittent character of renewable generation technologies.

  • min_up_time: (duration value). Sets the minimum time that a unit has to stay online after a startup. Inclusion of this parameter will trigger the creation of the constraint on Minimum up time (basic version)

  • min_down_time: (duration value). Sets the minimum time that a unit has to stay offline after a shutdown. Inclusion of this parameter will trigger the creation of the constraint on Minimum down time (basic version)

  • minimum_operating_point: (number value) limits the minimum value of the unit_flow variable for a unit which is currently online. Inclusion of this parameter will trigger the creation of the Constraint on minimum operating point

  • start_up_cost: "number value". Cost associated with starting up a unit.

  • shut_down_cost: "number value". Cost associated with shutting down a unit.

Illustrative unit commitment examples

Step 1: defining the number of members of a unit type

A Spine unit can represent multiple members. This can be incorporated in a model by setting the number_of_units parameter to a specific value. For example, if we define a single unit in a model as follows:

  • unit_1
    • number_of_units: 2

We then link the unit to a certain node_1 with a unit__to_node relationship:

  • unit_1_to__node_1

The single Spine unit defined here now represents two members. This means that a single unit_flow variable will be created for this unit, but the restrictions imposed by the Ramping and Reserves framework will be adapted to reflect the fact that there are two members present, thus doubling the total capacity.

Step 2: choosing the online_variable_type

Next, we have to decide the online_variable_type for this unit, which will restrict the kind of values that the units_on variable can take. This basically comes down to deciding if we are working in a classical UC framework (unit_online_variable_type_binary), a clustered UC framework (unit_online_variable_type_integer), or a relaxed clustered UC framework (unit_online_variable_type_linear), in which a non-integer number of units can be online.

The classical UC framework can only be applied when the number_of_units equals 1.

Step 3: imposing a minimum operating point

The output of an online unit to a specific node can be restricted to be above a certain minimum by choosing a value for the minimum_operating_point parameter. This parameter is defined for the unit__to_node relationship, and is given as a fraction of the unit_capacity. If we continue with the example above, and define the following objects, relationships, and parameters:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

It can be seen that in this case the unit_flow from unit_1 to node_1 must, for any timestep $t$, be larger than $units\_on(t) * 0.2 * 200$
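
The bounds this puts on the flow can be sketched in plain Python (a hypothetical helper for illustration, not SpineOpt code):

```python
# Illustrative sketch: flow bounds implied by minimum_operating_point = 0.2
# and unit_capacity = 200 for a given number of online members.
def flow_bounds(units_on, unit_capacity=200, minimum_operating_point=0.2,
                number_of_units=2):
    """Lower and upper bound on unit_flow given the online member count."""
    assert 0 <= units_on <= number_of_units
    return (units_on * minimum_operating_point * unit_capacity,
            units_on * unit_capacity)

print(flow_bounds(1))  # one member online: flow between 40 and 200
print(flow_bounds(2))  # both members online: flow between 80 and 400
```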

Step 4: imposing a minimum up or down time

Spine units can also be restricted in their commitment status with minimum up- or down times by choosing a value for the min_up_time or min_down_time respectively. These parameters are defined for the unit object, and should be duration values. We can continue the example and add a minimum up time for the unit:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
    • min_up_time: 2h
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

Whereas the units_on variable was restricted (before inclusion of the min_up_time parameter) to be smaller than or equal to the number_of_units for any timestep $t$, it now also has to be larger than or equal to the units_started_up summed over the timesteps between t - min_up_time and t. This implies that a unit which has started up has to stay online for at least the min_up_time.

To consider a simple example, let's assume that we have a model with a resolution of 1h. Suppose that before t, no member of the unit is online, and in timestep t -> t + 1h, one member starts up. Another member starts up in timestep t + 1h -> t + 2h. The first startup, along with the minimum up time of 2 hours, implies that the units_on variable of this unit has now changed to $1$ in timestep t -> t + 1h and cannot go back to $0$ in timestep t + 1h -> t + 2h. The second startup further restricts the number of units that are allowed to be online; it can be seen that the following restrictions apply when both startups are combined with the minimum up time of 2h:

  • t -> t + 1h: $units\_on = 1$
  • t + 1h -> t + 2h: $units\_on = 2$
  • t + 2h -> t + 3h: $units\_on \in \{1,2\}$
  • t + 3h -> t + 4h: $units\_on \in \{0,1,2\}$
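
The lower bound that the minimum up time places on units_on in this example can be sketched as follows (plain Python with hypothetical names, not SpineOpt code):

```python
# Illustrative sketch: every member started within the last min_up_time hours
# must still be online, which gives a lower bound on units_on.
def units_on_lower_bound(t, startups, min_up_time):
    """startups: dict mapping start hour -> members started in that timestep."""
    return sum(n for s, n in startups.items() if t - min_up_time < s <= t)

startups = {0: 1, 1: 1}  # one member starts at t, another at t + 1h
for hour in range(4):
    print(hour, units_on_lower_bound(hour, startups, min_up_time=2))
# lower bounds per hour: 1, 2, 1, 0 -- matching the list above
```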

The minimum down time restrictions operate in much the same way: units that have been shut down have to stay offline for the chosen period of time.

Step 5: allocating a cost to startups or shutdowns

Costs can be allocated to startups or shutdowns by choosing a value for the start_up_cost or shut_down_cost respectively.

Step 6: defining unit availabilities

By defining a unit_availability_factor, the fact that typical members are not available all the time can be reflected in the model.

Typically, units are not available $100$% of the time, due to scheduled maintenance, unforeseen outages, or other causes. This can be incorporated in the model by setting the unit_availability_factor to a fractional value. For each timestep in the model, an upper bound is then imposed on the units_on variable, equal to number_of_units $*$ unit_availability_factor. This parameter cannot be used when the online_variable_type is binary. It should also be noted that when the online_variable_type is of integer type, the aforementioned product must be integer as well, since it determines the value of the units_available parameter, which is restricted to integer values. The default value for this parameter is $1$.
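
The resulting upper bound, including the integrality check mentioned above, can be sketched in plain Python (a hypothetical helper for illustration, not SpineOpt code):

```python
# Illustrative sketch: per-timestep upper bound on units_on implied by
# number_of_units and unit_availability_factor.
def units_available(number_of_units, availability_factor):
    """Upper bound on units_on; must be integer for integer-type units."""
    bound = number_of_units * availability_factor
    if bound != int(bound):
        raise ValueError("product must be integer for integer online_variable_type")
    return int(bound)

print(units_available(4, 0.5))  # at most 2 of the 4 members may be online
```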

The unit_availability_factor can also be given as a time series. By allowing a different availability factor for each timestep in the model, it can be used to represent intermittent technologies whose output cannot be fully controlled.

+Unit Commitment · SpineOpt.jl

Unit commitment

To incorporate technical detail about (clustered) unit-commitment statuses of units, the online, started and shutdown status of units can be tracked and constrained in SpineOpt. In the following, relevant relationships and parameters are introduced and the general working principle is described.

Key concepts for unit commitment

Here, we briefly describe the key concepts involved in the representation of (clustered) unit commitment models:

  • units_on is an optimization variable that holds information about the on- or offline status of a unit. Unit commitment restrictions will govern how this variable can change through time.

  • units_on__temporal_block is a relationship linking the units_on variable of this unit to a specific temporal_block object. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

  • online_variable_type is a method parameter and can take the values unit_online_variable_type_binary, unit_online_variable_type_integer, unit_online_variable_type_linear. If the binary value is chosen, the units status is modelled as a binary (classic UC). For clustered unit commitment units, the integer type is applicable. Note that if the parameter is not defined, the default will be linear. If the units status is not crucial, this can reduce the computational burden.

  • number_of_units defines how many units of a certain unit type are available. Typically this parameter takes a binary (UC) or integer (clustered UC) value. To avoid confusion the following distinction will be made in this document: unit will be used to identify a Spine unit object, which can have multiple members. Together with the unit_availability_factor, this will determine the maximum number of members that can be online at any given time. (Thus restricting the units_on variable). The default value for this parameter is $1$. It is possible to allow the model to increase the number_of_units itself, through Investment Optimization

  • unit_availability_factor: (number value or time series). The fraction of the time that this unit is considered to be available, acting as a multiplier on the capacity. A time series can be used to reflect the intermittent character of renewable generation technologies.

  • min_up_time: (duration value). Sets the minimum time that a unit has to stay online after a startup. Inclusion of this parameter triggers the creation of the constraint on Minimum up time (basic version).

  • min_down_time: (duration value). Sets the minimum time that a unit has to stay offline after a shutdown. Inclusion of this parameter triggers the creation of the constraint on Minimum down time (basic version).

  • minimum_operating_point: (number value). Limits the minimum value of the unit_flow variable for a unit that is currently online. Inclusion of this parameter triggers the creation of the Constraint on minimum operating point.

  • start_up_cost: (number value). Cost associated with starting up a unit.

  • shut_down_cost: (number value). Cost associated with shutting down a unit.

Illustrative unit commitment examples

Step 1: defining the number of members of a unit type

A spine unit can represent multiple members. This can be incorporated in a model by setting the number_of_units parameter to a specific value. For example, if we define a single unit in a model as follows:

  • unit_1
    • number_of_units: 2

And we link the unit to a certain node_1 with a unit__to_node relationship.

  • unit_1_to__node_1

The single Spine unit defined here, now represents two members. This means that a single unit_flow variable will be created for this unit, but the restrictions as imposed by the Ramping and Reserves framework will be adapted to reflect the fact that there are two members present, thus doubling the total capacity.

Step 2: choosing the online_variable_type

Next, we have to decide the online_variable_type for this unit, which will restrict the kind of values that the units_on variable can take. This basically comes down to deciding if we are working in a classical UC framework (unit_online_variable_type_binary), a clustered UC framework (unit_online_variable_type_integer), or a relaxed clustered UC framework (unit_online_variable_type_linear), in which a non-integer number of units can be online.

The classical UC framework can only be applied when the number_of_units equals 1.
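The value domains implied by the three variable types can be pictured with the following plain-Python sketch (illustrative only, not SpineOpt code; the function name and return conventions are assumptions made for this example):

```python
# Illustrative sketch only; not part of SpineOpt. It pictures the values
# the units_on variable may take under each online_variable_type.
def units_on_domain(online_variable_type, number_of_units):
    if online_variable_type == "unit_online_variable_type_binary":
        # classical UC: a single member that is either on or off
        assert number_of_units == 1, "binary type requires number_of_units == 1"
        return {0, 1}
    if online_variable_type == "unit_online_variable_type_integer":
        # clustered UC: an integer number of members online
        return set(range(number_of_units + 1))
    # relaxed clustered UC: any fractional value between the bounds
    return (0.0, float(number_of_units))

print(units_on_domain("unit_online_variable_type_integer", 2))  # {0, 1, 2}
```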

Step 3: imposing a minimum operating point

The output of an online unit to a specific node can be restricted to be above a certain minimum by choosing a value for the minimum_operating_point parameter. This parameter is defined for the unit__to_node relationship, and is given as a fraction of the unit_capacity. If we continue with the example above, and define the following objects, relationships, and parameters:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

In this case, the unit_flow from unit_1 to node_1 must, for any timestep $t$, be at least $units\_on(t) \cdot 0.2 \cdot 200$.
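The bound above can be sketched in plain Python (illustrative only, not SpineOpt code; the helper name min_flow_bound is hypothetical):

```python
# Illustrative sketch only; not part of SpineOpt. The lower bound on
# unit_flow implied by minimum_operating_point for the example above.
def min_flow_bound(units_on, minimum_operating_point, unit_capacity):
    """Lower bound on unit_flow for the currently online members."""
    return units_on * minimum_operating_point * unit_capacity

# With minimum_operating_point = 0.2 and unit_capacity = 200:
print(min_flow_bound(1, 0.2, 200))  # one member online  -> 40.0
print(min_flow_bound(2, 0.2, 200))  # both members online -> 80.0
```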

Step 4: imposing a minimum up or down time

Spine units can also be restricted in their commitment status with minimum up or down times, by choosing a value for the min_up_time or min_down_time parameter, respectively. These parameters are defined for the unit object and should be duration values. We can continue the example and add a minimum up time for the unit:

  • unit_1
    • number_of_units: 2
    • unit_online_variable_type: "unit_online_variable_type_integer"
    • min_up_time: 2h
  • unit_1_to__node_1
    • minimum_operating_point: 0.2
    • unit_capacity: 200

Whereas the units_on variable was previously (before inclusion of the min_up_time parameter) only restricted to be smaller than or equal to the number_of_units for any timestep $t$, it now also has to be greater than or equal to the units_started_up summed over the timesteps in the window $(t - min\_up\_time, t]$. This implies that a unit that has started up has to stay online for at least the min_up_time.

To consider a simple example, let's assume that we have a model with a resolution of 1h. Suppose that before t no member of the unit is online, and in timestep t -> t + 1h one member starts up. Another member starts up in timestep t + 1h -> t + 2h. The first startup, along with the minimum up time of 2 hours, implies that the units_on variable of this unit has changed to $1$ in timestep t -> t + 1h and cannot go back to $0$ in timestep t + 1h -> t + 2h. The second startup further restricts the number of units that are allowed to be online. Combining both startups with the minimum up time of 2h yields the following restrictions:

  • t -> t + 1h: $units\_on = 1$
  • t + 1h -> t + 2h: $units\_on = 2$
  • t + 2h -> t + 3h: $units\_on \in \{1,2\}$
  • t + 3h -> t + 4h: $units\_on \in \{0,1,2\}$

The minimum down time restrictions operate in much the same way: they impose that units that have been shut down have to stay offline for the chosen period of time.
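The restrictions above can be sketched in plain Python (illustrative only, not SpineOpt code); the hypothetical helper below computes the lower bound that the start-up history imposes on units_on, assuming a 1h resolution so that min_up_time is expressed in steps:

```python
# Illustrative sketch only; not part of SpineOpt. Lower bound on units_on
# at step t: the sum of start-ups within the last min_up_time steps.
def min_units_on(started_up, t, min_up_time):
    """started_up: members started in each step; min_up_time: in steps."""
    window = started_up[max(0, t - min_up_time + 1): t + 1]
    return sum(window)

# Example from the text: one start in step 0, one in step 1, min_up_time = 2.
starts = [1, 1, 0, 0]
print([min_units_on(starts, t, 2) for t in range(4)])  # [1, 2, 1, 0]
```

These lower bounds match the example: units_on is forced to 1, then 2, then may fall back to 1 and finally 0.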

Step 5: allocating a cost to startups or shutdowns

Costs can be allocated to startups or shutdowns by choosing a value for the start_up_cost or shut_down_cost respectively.

Step 6: defining unit availabilities

By defining a unit_availability_factor, the fact that typical members are not available all the time can be reflected in the model.

Typically, units are not available $100$% of the time, due to scheduled maintenance, unforeseen outages, or other causes. This can be incorporated in the model by setting the unit_availability_factor to a fractional value. For each timestep in the model, an upper bound is then imposed on the units_on variable, equal to number_of_units $*$ unit_availability_factor. This parameter cannot be used when the online_variable_type is binary. It should also be noted that when the online_variable_type is of integer type, the aforementioned product must be integer as well, since it determines the value of the units_available parameter, which is restricted to integer values. The default value for this parameter is $1$.

The unit_availability_factor can also be given as a time series. Because this allows a different availability factor for each timestep in the model, it is well suited to representing intermittent technologies whose output cannot be fully controlled.
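The upper bound above can be sketched in plain Python (illustrative only, not SpineOpt code; the helper name units_available is reused here purely for illustration):

```python
# Illustrative sketch only; not part of SpineOpt. The per-timestep upper
# bound on units_on: number_of_units * unit_availability_factor.
def units_available(number_of_units, availability_factor):
    return number_of_units * availability_factor

# Scalar availability factor:
print(units_available(2, 0.5))  # 1.0

# Time series availability, e.g. an intermittent technology:
profile = [1.0, 0.5, 0.0, 0.5]
print([units_available(2, a) for a in profile])  # [2.0, 1.0, 0.0, 1.0]
```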


User Constraints

User constraints allow the user to define arbitrary linear constraints involving most of the problem variables. This section describes this function and how to use it.

Key User Constraint Concepts

  1. The basic principle: The basic steps involved in forming a user constraint are:
  • Creating a user constraint object: One creates a new user_constraint object which will be used as a unique handle for the specific constraint and on which constraint-level parameters will be defined.
  • Specify which variables are involved in the constraint: this generally involves creating a relationship involving the user_constraint object. For example, specifying the relationship unit__from_node__user_constraint specifies that the corresponding unit_flow variable is involved in the constraint. The table below contains a complete list of variables and the corresponding relationships to set.
  • Specify the variable coefficients: this will generally involve specifying a parameter named *_coefficient on the relationship defined above to specify the coefficient on that particular variable in the constraint. For example, to define the coefficient on the unit_flow variable, one specifies the unit_flow_coefficient parameter on the appropriate unit__from_node__user_constraint relationship. The table below contains a complete list of variables and the corresponding coefficient parameters to set.
  • Specify the right-hand-side constant term: The constraint should be formed in conventional form with all constant terms moved to the right-hand side. The right-hand-side constant term is specified by setting the right_hand_side user_constraint parameter.
    • Specify the constraint sense: this is done by setting the constraint_sense user_constraint parameter. The allowed values are ==, >= and <=.
    • Coefficients can be defined on some parameters themselves. For example, one may specify a coefficient on a node's demand parameter. This is done by specifying the relationship node__user_constraint and specifying the demand_coefficient parameter on that relationship
  2. Piecewise unit_flow coefficients: As described in operating_points, specifying that parameter decomposes the unit_flow variable into a number of sub operating segment variables named unit_flow_op in the model, with an additional index i for the operating segment. The intention of this functionality is to allow unit_flow coefficients to be defined individually per segment, so as to define a piecewise linear function. To accomplish this, the steps are as described above, with the exception that one must define operating_points on the appropriate unit__from_node or unit__to_node relationship as an array type whose dimension corresponds to the number of operating points, and then set the unit_flow_coefficient for the appropriate unit__from_node__user_constraint relationship, also as an array type with the same number of elements. Note that if operating_points is defined as an array type with more than one element, unit_flow_coefficient may be defined as either an array or non-array type. However, if operating_points is of non-array type, the corresponding unit_flow_coefficients must also be of non-array type.
  3. Variables, relationships and coefficient guide for user constraints: The table below provides guidance regarding which relationships and coefficients to set for the various problem variables and parameters.
| Problem variable / Parameter | Relationship | Coefficient parameter |
| --- | --- | --- |
| unit_flow (direction=from_node) | unit__from_node__user_constraint | unit_flow_coefficient (non-array type) |
| unit_flow (direction=to_node) | unit__to_node__user_constraint | unit_flow_coefficient (non-array type) |
| unit_flow_op (direction=from_node) | unit__from_node__user_constraint | unit_flow_coefficient (array type) |
| unit_flow_op (direction=to_node) | unit__to_node__user_constraint | unit_flow_coefficient (array type) |
| connection_flow (direction=from_node) | connection__from_node__user_constraint | connection_flow_coefficient |
| connection_flow (direction=to_node) | connection__to_node__user_constraint | connection_flow_coefficient |
| node_state | node__user_constraint | node_state_coefficient |
| storages_invested | node__user_constraint | storages_invested_coefficient |
| storages_invested_available | node__user_constraint | storages_invested_available_coefficient |
| demand | node__user_constraint | demand_coefficient |
| units_on | unit__user_constraint | units_on_coefficient |
| units_started_up | unit__user_constraint | units_started_up_coefficient |
| units_invested | unit__user_constraint | units_invested_coefficient |
| units_invested_available | unit__user_constraint | units_invested_available_coefficient |
| connections_invested | connection__user_constraint | connections_invested_coefficient |
| connections_invested_available | connection__user_constraint | connections_invested_available_coefficient |
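Conceptually, a user constraint combines the chosen variables, their coefficients, and the right-hand side into one linear expression. The following plain-Python sketch (illustrative only; the function and the example data are hypothetical, not SpineOpt API) shows how such a constraint is evaluated:

```python
# Illustrative sketch only; not part of SpineOpt. A user constraint has the
# form  sum(coefficient * variable)  <sense>  right_hand_side.
def check_user_constraint(coefficients, variables, sense, right_hand_side):
    lhs = sum(coefficients[name] * variables[name] for name in coefficients)
    if sense == "==":
        return lhs == right_hand_side
    if sense == ">=":
        return lhs >= right_hand_side
    return lhs <= right_hand_side

# Hypothetical example: 1.0 * unit_flow + 0.5 * connection_flow <= 100
coeffs = {"unit_flow": 1.0, "connection_flow": 0.5}
values = {"unit_flow": 60.0, "connection_flow": 40.0}
print(check_user_constraint(coeffs, values, "<=", 100.0))  # True
```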

Object Classes

commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

Related Parameters: commodity_lodf_tolerance, commodity_physics_duration, commodity_physics, commodity_ptdf_threshold, is_active, mp_min_res_gen_to_demand_ratio_slack_penalty and mp_min_res_gen_to_demand_ratio

Related Relationship Classes: node__commodity and unit__commodity

connection

A transfer of commodities between nodes. E.g. electricity line, gas pipeline...

Related Parameters: benders_starting_connections_invested, candidate_connections, connection_availability_factor, connection_contingency, connection_investment_cost, connection_investment_lifetime, connection_investment_variable_type, connection_monitored, connection_reactance_base, connection_reactance, connection_resistance, connection_type, connections_invested_big_m_mga, connections_invested_mga_weight, connections_invested_mga, fix_connections_invested_available, fix_connections_invested, forced_availability_factor, graph_view_position, has_binary_gas_flow, initial_connections_invested_available, initial_connections_invested, is_active and number_of_connections

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__investment_group, connection__investment_stochastic_structure, connection__investment_temporal_block, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node and connection__user_constraint

investment_group

A group of investments that need to be done together.

Related Parameters: equal_investments, maximum_capacity_invested_available, maximum_entities_invested_available, minimum_capacity_invested_available and minimum_entities_invested_available

Related Relationship Classes: connection__from_node__investment_group, connection__investment_group, connection__to_node__investment_group, node__investment_group, unit__from_node__investment_group, unit__investment_group and unit__to_node__investment_group

model

An instance of SpineOpt that specifies general parameters such as the temporal horizon.

Related Parameters: big_m, db_lp_solver_options, db_lp_solver, db_mip_solver_options, db_mip_solver, duration_unit, is_active, max_gap, max_iterations, max_mga_iterations, max_mga_slack, min_iterations, model_end, model_start, model_type, roll_forward, use_connection_intact_flow, window_duration, window_weight, write_lodf_file, write_mps_file and write_ptdf_file

Related Relationship Classes: model__default_investment_stochastic_structure, model__default_investment_temporal_block, model__default_stochastic_structure, model__default_temporal_block and model__report

node

A universal aggregator of commodity flows over units and connections, with storage capabilities.

Related Parameters: balance_type, benders_starting_storages_invested, candidate_storages, demand, downward_reserve, fix_node_pressure, fix_node_state, fix_node_voltage_angle, fix_storages_invested_available, fix_storages_invested, frac_state_loss, fractional_demand, graph_view_position, has_pressure, has_state, has_voltage_angle, initial_node_pressure, initial_node_state, initial_node_voltage_angle, initial_storages_invested_available, initial_storages_invested, is_active, is_non_spinning, is_reserve_node, max_node_pressure, max_voltage_angle, min_capacity_margin_penalty, min_capacity_margin, min_node_pressure, min_voltage_angle, minimum_reserve_activation_time, nodal_balance_sense, node_opf_type, node_slack_penalty, node_state_cap, node_state_min, number_of_storages, state_coeff, storage_investment_cost, storage_investment_lifetime, storage_investment_variable_type, storages_invested_big_m_mga, storages_invested_mga_weight, storages_invested_mga, tax_in_unit_flow, tax_net_unit_flow, tax_out_unit_flow and upward_reserve

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, node__commodity, node__investment_group, node__investment_stochastic_structure, node__investment_temporal_block, node__node, node__stochastic_structure, node__temporal_block, node__user_constraint, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint and unit__to_node

output

A variable name from SpineOpt whose value can be included in a report.

Related Parameters: is_active and output_resolution

Related Relationship Classes: report__output and stage__output

report

A results report from a particular SpineOpt run, including the value of specific variables.

Related Parameters: is_active and output_db_url

Related Relationship Classes: model__report and report__output

settings

Internal SpineOpt settings. We kindly advise not to mess with this one.

Related Parameters: version

stage

An additional stage in the optimisation problem (EXPERIMENTAL)

Related Parameters: is_active and stage_scenario

Related Relationship Classes: stage__child_stage and stage__output

stochastic_scenario

A scenario for stochastic optimisation in SpineOpt.

Related Parameters: is_active

Related Relationship Classes: parent_stochastic_scenario__child_stochastic_scenario and stochastic_structure__stochastic_scenario

stochastic_structure

A group of stochastic scenarios that represent a structure.

Related Parameters: is_active

Related Relationship Classes: connection__investment_stochastic_structure, model__default_investment_stochastic_structure, model__default_stochastic_structure, node__investment_stochastic_structure, node__stochastic_structure, stochastic_structure__stochastic_scenario, unit__investment_stochastic_structure and units_on__stochastic_structure

temporal_block

A length of time with a particular resolution.

Related Parameters: block_end, block_start, is_active, representative_periods_mapping, resolution and weight

Related Relationship Classes: connection__investment_temporal_block, model__default_investment_temporal_block, model__default_temporal_block, node__investment_temporal_block, node__temporal_block, unit__investment_temporal_block and units_on__temporal_block

unit

A conversion of one or many commodities between nodes.

Related Parameters: benders_starting_units_invested, candidate_units, curtailment_cost, fix_units_invested_available, fix_units_invested, fix_units_on, fix_units_out_of_service, fom_cost, forced_availability_factor, graph_view_position, initial_units_invested_available, initial_units_invested, initial_units_on, initial_units_out_of_service, is_active, is_renewable, min_down_time, min_up_time, number_of_units, online_variable_type, outage_variable_type, scheduled_outage_duration, shut_down_cost, start_up_cost, unit_availability_factor, unit_investment_cost, unit_investment_lifetime, unit_investment_variable_type, units_invested_big_m_mga, units_invested_mga_weight, units_invested_mga, units_on_cost, units_on_non_anticipativity_margin, units_on_non_anticipativity_time and units_unavailable

Related Relationship Classes: unit__commodity, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__investment_group, unit__investment_stochastic_structure, unit__investment_temporal_block, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint, unit__to_node, unit__user_constraint, units_on__stochastic_structure and units_on__temporal_block

user_constraint

A generic data-driven custom constraint.

Related Parameters: constraint_sense, is_active, right_hand_side and user_constraint_slack_penalty

Related Relationship Classes: connection__from_node__user_constraint, connection__to_node__user_constraint, connection__user_constraint, node__user_constraint, unit__from_node__user_constraint, unit__to_node__user_constraint and unit__user_constraint

+Object Classes · SpineOpt.jl

Object Classes

commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

Related Parameters: commodity_lodf_tolerance, commodity_physics_duration, commodity_physics, commodity_ptdf_threshold, is_active, mp_min_res_gen_to_demand_ratio_slack_penalty and mp_min_res_gen_to_demand_ratio

Related Relationship Classes: node__commodity and unit__commodity

A good or product that can be consumed, produced, traded. E.g., electricity, oil, gas, water...

connection

A transfer of commodities between nodes. E.g. electricity line, gas pipeline...

Related Parameters: benders_starting_connections_invested, candidate_connections, connection_availability_factor, connection_contingency, connection_investment_cost, connection_investment_lifetime, connection_investment_variable_type, connection_monitored, connection_reactance_base, connection_reactance, connection_resistance, connection_type, connections_invested_big_m_mga, connections_invested_mga_weight, connections_invested_mga, fix_connections_invested_available, fix_connections_invested, forced_availability_factor, graph_view_position, has_binary_gas_flow, initial_connections_invested_available, initial_connections_invested, is_active and number_of_connections

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__investment_group, connection__investment_stochastic_structure, connection__investment_temporal_block, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node and connection__user_constraint

A transfer of commodities between nodes. E.g. electricity line, gas pipeline...

investment_group

A group of investments that need to be done together.

Related Parameters: equal_investments, maximum_capacity_invested_available, maximum_entities_invested_available, minimum_capacity_invested_available and minimum_entities_invested_available

Related Relationship Classes: connection__from_node__investment_group, connection__investment_group, connection__to_node__investment_group, node__investment_group, unit__from_node__investment_group, unit__investment_group and unit__to_node__investment_group

A group of investments that need to be done together.

model

An instance of SpineOpt, that specifies general parameters such as the temporal horizon.

Related Parameters: big_m, db_lp_solver_options, db_lp_solver, db_mip_solver_options, db_mip_solver, duration_unit, is_active, max_gap, max_iterations, max_mga_iterations, max_mga_slack, min_iterations, model_end, model_start, model_type, roll_forward, use_connection_intact_flow, window_duration, window_weight, write_lodf_file, write_mps_file and write_ptdf_file

Related Relationship Classes: model__default_investment_stochastic_structure, model__default_investment_temporal_block, model__default_stochastic_structure, model__default_temporal_block and model__report

An instance of SpineOpt, that specifies general parameters such as the temporal horizon.

node

A universal aggregator of commodify flows over units and connections, with storage capabilities.

Related Parameters: balance_type, benders_starting_storages_invested, candidate_storages, demand, downward_reserve, fix_node_pressure, fix_node_state, fix_node_voltage_angle, fix_storages_invested_available, fix_storages_invested, frac_state_loss, fractional_demand, graph_view_position, has_pressure, has_state, has_voltage_angle, initial_node_pressure, initial_node_state, initial_node_voltage_angle, initial_storages_invested_available, initial_storages_invested, is_active, is_non_spinning, is_reserve_node, max_node_pressure, max_voltage_angle, min_capacity_margin_penalty, min_capacity_margin, min_node_pressure, min_voltage_angle, minimum_reserve_activation_time, nodal_balance_sense, node_opf_type, node_slack_penalty, node_state_cap, node_state_min, number_of_storages, state_coeff, storage_investment_cost, storage_investment_lifetime, storage_investment_variable_type, storages_invested_big_m_mga, storages_invested_mga_weight, storages_invested_mga, tax_in_unit_flow, tax_net_unit_flow, tax_out_unit_flow and upward_reserve

Related Relationship Classes: connection__from_node__investment_group, connection__from_node__user_constraint, connection__from_node, connection__node__node, connection__to_node__investment_group, connection__to_node__user_constraint, connection__to_node, node__commodity, node__investment_group, node__investment_stochastic_structure, node__investment_temporal_block, node__node, node__stochastic_structure, node__temporal_block, node__user_constraint, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint and unit__to_node

A universal aggregator of commodify flows over units and connections, with storage capabilities.

output

A variable name from SpineOpt whose value can be included in a report.

Related Parameters: is_active and output_resolution

Related Relationship Classes: report__output and stage__output

A variable name from SpineOpt whose value can be included in a report.

report

A results report from a particular SpineOpt run, including the value of specific variables.

Related Parameters: is_active and output_db_url

Related Relationship Classes: model__report and report__output

A results report from a particular SpineOpt run, including the value of specific variables.

settings

Internal SpineOpt settings. We kindly advise not to mess with this one.

Related Parameters: version

stage

An additional stage in the optimisation problem (EXPERIMENTAL)

Related Parameters: is_active and stage_scenario

Related Relationship Classes: stage__child_stage and stage__output

stochastic_scenario

A scenario for stochastic optimisation in SpineOpt.

Related Parameters: is_active

Related Relationship Classes: parent_stochastic_scenario__child_stochastic_scenario and stochastic_structure__stochastic_scenario

A scenario for stochastic optimisation in SpineOpt.

stochastic_structure

A group of stochastic scenarios that represent a structure.

Related Parameters: is_active

Related Relationship Classes: connection__investment_stochastic_structure, model__default_investment_stochastic_structure, model__default_stochastic_structure, node__investment_stochastic_structure, node__stochastic_structure, stochastic_structure__stochastic_scenario, unit__investment_stochastic_structure and units_on__stochastic_structure

temporal_block

A length of time with a particular resolution.

Related Parameters: block_end, block_start, is_active, representative_periods_mapping, resolution and weight

Related Relationship Classes: connection__investment_temporal_block, model__default_investment_temporal_block, model__default_temporal_block, node__investment_temporal_block, node__temporal_block, unit__investment_temporal_block and units_on__temporal_block

unit

A conversion of one/many commodities between nodes.

Related Parameters: benders_starting_units_invested, candidate_units, curtailment_cost, fix_units_invested_available, fix_units_invested, fix_units_on, fix_units_out_of_service, fom_cost, forced_availability_factor, graph_view_position, initial_units_invested_available, initial_units_invested, initial_units_on, initial_units_out_of_service, is_active, is_renewable, min_down_time, min_up_time, number_of_units, online_variable_type, outage_variable_type, scheduled_outage_duration, shut_down_cost, start_up_cost, unit_availability_factor, unit_investment_cost, unit_investment_lifetime, unit_investment_variable_type, units_invested_big_m_mga, units_invested_mga_weight, units_invested_mga, units_on_cost, units_on_non_anticipativity_margin, units_on_non_anticipativity_time and units_unavailable

Related Relationship Classes: unit__commodity, unit__from_node__investment_group, unit__from_node__user_constraint, unit__from_node, unit__investment_group, unit__investment_stochastic_structure, unit__investment_temporal_block, unit__node__node, unit__to_node__investment_group, unit__to_node__user_constraint, unit__to_node, unit__user_constraint, units_on__stochastic_structure and units_on__temporal_block

user_constraint

A generic data-driven custom constraint.

Related Parameters: constraint_sense, is_active, right_hand_side and user_constraint_slack_penalty

Related Relationship Classes: connection__from_node__user_constraint, connection__to_node__user_constraint, connection__user_constraint, node__user_constraint, unit__from_node__user_constraint, unit__to_node__user_constraint and unit__user_constraint

Parameter Value Lists

balance_type_list

Possible values: balance_type_group, balance_type_node and balance_type_none

boolean_value_list

Possible values: false and true

commodity_physics_list

Possible values: commodity_physics_lodf, commodity_physics_none and commodity_physics_ptdf

connection_investment_variable_type_list

Possible values: connection_investment_variable_type_continuous and connection_investment_variable_type_integer

connection_type_list

Possible values: connection_type_lossless_bidirectional and connection_type_normal

constraint_sense_list

Possible values: <=, == and >=

db_lp_solver_list

Possible values: CDCS.jl, CDDLib.jl, COSMO.jl, CPLEX.jl, CSDP.jl, Clp.jl, ECOS.jl, GLPK.jl, Gurobi.jl, HiGHS.jl, Hypatia.jl, Ipopt.jl, KNITRO.jl, MadNLP.jl, MosekTools.jl, NLopt.jl, OSQP.jl, ProxSDP.jl, SCIP.jl, SCS.jl, SDPA.jl, SDPNAL.jl, SDPT3.jl, SeDuMi.jl and Xpress.jl

db_mip_solver_list

Possible values: CPLEX.jl, Cbc.jl, GLPK.jl, Gurobi.jl, HiGHS.jl, Juniper.jl, KNITRO.jl, MosekTools.jl, SCIP.jl and Xpress.jl

duration_unit_list

Possible values: hour and minute

model_type_list

Possible values: spineopt_benders, spineopt_mga, spineopt_other and spineopt_standard

node_opf_type_list

Possible values: node_opf_type_normal and node_opf_type_reference

storage_investment_variable_type_list

Possible values: storage_investment_variable_type_continuous and storage_investment_variable_type_integer

unit_investment_variable_type_list

Possible values: unit_investment_variable_type_continuous and unit_investment_variable_type_integer

unit_online_variable_type_list

Possible values: unit_online_variable_type_binary, unit_online_variable_type_integer, unit_online_variable_type_linear and unit_online_variable_type_none

write_mps_file_list

Possible values: write_mps_always, write_mps_never and write_mps_on_no_solve

Parameters

balance_type

A selector for how the nodal_balance constraint should be handled.

Default value: balance_type_node

Uses Parameter Value Lists: balance_type_list

Related Object Classes: node

benders_starting_connections_invested

Fixes the number of connections invested during the first Benders iteration

Default value: nothing

Related Object Classes: connection

benders_starting_storages_invested

Fixes the number of storages invested during the first Benders iteration

Default value: nothing

Related Object Classes: node

benders_starting_units_invested

Fixes the number of units invested during the first Benders iteration

Default value: nothing

Related Object Classes: unit

big_m

Sufficiently large number used for the linearization of bilinear terms, e.g. to enforce bidirectional flow for gas pipelines

Default value: 1000000

Related Object Classes: model
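
To make the role of big_m concrete, the sketch below shows a generic big-M linearization in which a binary direction variable switches one of two flow bounds on at a time. This is an illustration only: the function and variable names are invented here, and SpineOpt's exact bidirectional-flow formulation may differ.

```python
BIG_M = 1_000_000  # matches the default big_m value above

def bidirectional_flow_feasible(flow_fwd, flow_bwd, direction):
    # Generic big-M linearization: `direction` is a 0/1 variable that
    # permits forward flow when 1 and backward flow when 0.
    return flow_fwd <= BIG_M * direction and flow_bwd <= BIG_M * (1 - direction)
```

The constant must exceed any feasible flow, otherwise it would itself become a binding bound.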

block_end

The end time for the temporal_block. Can be given either as a DateTime for a static end point, or as a Duration for an end point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

block_start

The start time for the temporal_block. Can be given either as a DateTime for a static start point, or as a Duration for a start point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

candidate_connections

The number of connections that may be invested in

Default value: nothing

Related Object Classes: connection

candidate_storages

Determines the maximum number of new storages which may be invested in

Default value: nothing

Related Object Classes: node

candidate_units

Number of units which may be additionally constructed

Default value: nothing

Related Object Classes: unit

commodity_lodf_tolerance

The minimum absolute value of the line outage distribution factor (LODF) that is considered meaningful.

Default value: 0.1

Related Object Classes: commodity

commodity_physics

Defines if the commodity follows lodf or ptdf physics.

Default value: commodity_physics_none

Uses Parameter Value Lists: commodity_physics_list

Related Object Classes: commodity

commodity_physics_duration

For how long the commodity_physics should apply relative to the start of the window.

Default value: nothing

Related Object Classes: commodity

commodity_ptdf_threshold

The minimum absolute value of the power transfer distribution factor (PTDF) that is considered meaningful.

Default value: 0.001

Related Object Classes: commodity

compression_factor

The compression factor establishes a compression from an origin node to a receiving node, which are connected through a connection. The first node corresponds to the origin node, the second to the (compressed) destination node. Typically the value is >=1.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_availability_factor

Availability of the connection, acting as a multiplier on its connection_capacity. Typically between 0 and 1.

Default value: 1.0

Related Object Classes: connection
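
Since the parameter acts as a multiplier on connection_capacity, the effective flow limit is simply the product of the two. A minimal sketch with illustrative numbers (not defaults):

```python
# Illustrative values, not SpineOpt defaults
connection_capacity = 500.0            # e.g. MW
connection_availability_factor = 0.95  # dimensionless multiplier

# Effective limit on connection_flow implied by the availability factor
effective_flow_limit = connection_availability_factor * connection_capacity
```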

connection_capacity

  • For connection__from_node: Limits the connection_flow variable from the from_node. from_node can be a group of nodes, in which case the sum of the connection_flow is constrained.
  • For connection__to_node: Limits the connection_flow variable to the to_node. to_node can be a group of nodes, in which case the sum of the connection_flow is constrained.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_contingency

A boolean flag for defining a contingency connection.

Default value: nothing

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_conv_cap_to_flow

  • For connection__from_node: Optional coefficient for connection_capacity unit conversions in the case that the connection_capacity value is incompatible with the desired connection_flow units.
  • For connection__to_node: Optional coefficient for connection_capacity unit conversions in the case the connection_capacity value is incompatible with the desired connection_flow units.

Default value: 1.0

Related Relationship Classes: connection__from_node and connection__to_node

connection_emergency_capacity

  • For connection__from_node: Post contingency flow capacity of a connection. Sometimes referred to as emergency rating
  • For connection__to_node: The maximum post-contingency flow on a monitored connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_coefficient

  • For connection__from_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the from direction
  • For connection__to_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the to direction

Default value: 0.0

Related Relationship Classes: connection__from_node__user_constraint and connection__to_node__user_constraint

connection_flow_cost

Variable costs of a flow through a connection. E.g. EUR/MWh of energy throughput.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_delay

Delays the connection_flows associated with the latter node with respect to the connection_flows associated with the first node.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Relationship Classes: connection__node__node

connection_flow_non_anticipativity_margin

Margin by which connection_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_non_anticipativity_time

Period of time where the value of the connection_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_margin

Margin by which connection_intact_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_time

Period of time where the value of the connection_intact_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_investment_cost

The per unit investment cost for the connection over the connection_investment_lifetime

Default value: nothing

Related Object Classes: connection

connection_investment_lifetime

Determines the minimum investment lifetime of a connection. Once invested, it remains in service for this long

Default value: nothing

Related Object Classes: connection

connection_investment_variable_type

Determines whether the investment variable is an integer (connection_investment_variable_type_integer) or continuous (connection_investment_variable_type_continuous)

Default value: connection_investment_variable_type_integer

Uses Parameter Value Lists: connection_investment_variable_type_list

Related Object Classes: connection

connection_linepack_constant

The linepack constant is a property of gas pipelines and relates the linepack to the pressure of the adjacent nodes.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_monitored

A boolean flag for defining a monitored connection.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_reactance

The per unit reactance of a connection.

Default value: nothing

Related Object Classes: connection

connection_reactance_base

If the reactance is given for a p.u. (e.g. p.u. = 100MW), the connection_reactance_base can be set to perform this conversion (e.g. *100).

Default value: 1

Related Object Classes: connection

connection_resistance

The per unit resistance of a connection.

Default value: nothing

Related Object Classes: connection

connection_type

A selector between a normal and a lossless bidirectional connection.

Default value: connection_type_normal

Uses Parameter Value Lists: connection_type_list

Related Object Classes: connection

connections_invested_available_coefficient

Coefficient of connections_invested_available in the specific user_constraint

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For connections_invested_mga an appropriate big_m_mga would be twice the candidate connections.

Default value: nothing

Related Object Classes: connection

connections_invested_coefficient

Coefficient of connections_invested in the specific user_constraint

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_mga

Defines whether a certain variable (here: connections_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connections_invested_mga_weight

Used to scale mga variables. For the weighted sum mga method, the length of this weight, given as an Array, will determine the number of iterations.

Default value: 1

Related Object Classes: connection

constraint_sense

A selector for the sense of the user_constraint.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: user_constraint

curtailment_cost

Costs for curtailing generation. Essentially, accrues costs whenever a unit_flow is not operating at its maximum available capacity. E.g. EUR/MWh

Default value: nothing

Related Object Classes: unit

cyclic_condition

If the cyclic condition is set to true for a storage node, the node_state at the end of the optimization window has to be larger than or equal to the initial storage state.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: node__temporal_block

db_lp_solver

Solver for LP problems. Solver package must be added and pre-configured in Julia. Overrides lp_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_lp_solver_list

Related Object Classes: model

db_lp_solver_options

Map parameter containing LP solver option name/value pairs. See solver documentation for supported solver options

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Clp.jl", Dict{String, Any}("data" => Any[Any["LogLevel", 0.0]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model
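
The nested Map default above is easier to read when unpacked: it maps each solver name to that solver's own option name/value map. A plain-Python rendering of the same data (the Map type itself is Spine's):

```python
# Readable equivalent of the default db_lp_solver_options map:
# solver name -> {option name: option value}
default_db_lp_solver_options = {
    "HiGHS.jl": {"presolve": "on", "time_limit": 300.01},
    "Clp.jl": {"LogLevel": 0.0},
}
```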

db_mip_solver

Solver for MIP problems. Solver package must be added and pre-configured in Julia. Overrides mip_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_mip_solver_list

Related Object Classes: model

db_mip_solver_options

Map parameter containing MIP solver option name/value pairs. See solver documentation for supported solver options

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["mip_rel_gap", 0.01], Any["threads", 0.0], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Cbc.jl", Dict{String, Any}("data" => Any[Any["ratioGap", 0.01], Any["logLevel", 0.0]], "type" => "map", "index_type" => "str")], Any["CPLEX.jl", Dict{String, Any}("data" => Any[Any["CPX_PARAM_EPGAP", 0.01]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model
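
Unpacked the same way, the default MIP solver options carry per-solver option name/value pairs, e.g. a 1% relative MIP gap for each supported solver:

```python
# Readable equivalent of the default db_mip_solver_options map:
# solver name -> {option name: option value}
default_db_mip_solver_options = {
    "HiGHS.jl": {"presolve": "on", "mip_rel_gap": 0.01, "threads": 0.0, "time_limit": 300.01},
    "Cbc.jl": {"ratioGap": 0.01, "logLevel": 0.0},
    "CPLEX.jl": {"CPX_PARAM_EPGAP": 0.01},
}
```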

demand

Demand for the commodity of a node. Energy gains can be represented using negative demand.

Default value: 0.0

Related Object Classes: node

demand_coefficient

Coefficient of the specified node's demand in the specified user constraint

Default value: 0.0

Related Relationship Classes: node__user_constraint

diff_coeff

Commodity diffusion coefficient between two nodes. Effectively, denotes the diffusion power per unit of state from the first node to the second.

Default value: 0.0

Related Relationship Classes: node__node

downward_reserve

Identifier for nodes providing downward reserves

Default value: false

Related Object Classes: node

duration_unit

Defines the base temporal unit of the model. Currently supported values are either an hour or a minute.

Default value: hour

Uses Parameter Value Lists: duration_unit_list

Related Object Classes: model

equal_investments

Whether all entities in the group must have the same investment decision.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: investment_group

fix_binary_gas_connection_flow

Fix the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_flow

Fix the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_intact_flow

Fix the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connections_invested

Setting a value fixes the connections_invested variable accordingly

Default value: nothing

Related Object Classes: connection

fix_connections_invested_available

Setting a value fixes the connections_invested_available variable accordingly

Default value: nothing

Related Object Classes: connection

fix_node_pressure

Fixes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

fix_node_state

Fixes the corresponding node_state variable to the provided value. Can be used for e.g. fixing boundary conditions.

Default value: nothing

Related Object Classes: node

fix_node_voltage_angle

Fixes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

fix_nonspin_units_shut_down

Fix the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

fix_nonspin_units_started_up

Fix the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_ratio_in_in_unit_flow

Fix the ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_in_out_unit_flow

Fix the ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_in_connection_flow

Fix the ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

fix_ratio_out_in_unit_flow

Fix the ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_out_unit_flow

Fix the ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_storages_invested

Used to fix the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

fix_storages_invested_available

Used to fix the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

fix_unit_flow

Fix the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_unit_flow_op

Fix the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_units_invested

Fix the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

fix_units_invested_available

Fix the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

fix_units_on

Fix the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

fix_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the fix_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the fix_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the fix_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the fix_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_out_of_service

Fix the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

fixed_pressure_constant_0

Fixed pressure points for pipelines for the outer approximation of the Weymouth approximation. The direction of flow is the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fixed_pressure_constant_1

Fixed pressure points for pipelines for the outer approximation of the Weymouth approximation. The direction of flow is the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fom_cost

Fixed operation and maintenance costs of a unit. Essentially, a cost coefficient on the existing units (incl. number_of_units and units_invested_available) and unit_capacity parameters. E.g. EUR/MWh

Default value: nothing

Related Object Classes: unit
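
As a rough illustration (the function and argument names below are hypothetical, not SpineOpt code), the fixed O&M term acts as a cost coefficient on the capacity of existing and invested units:

```python
def fom_cost_term(fom_cost, unit_capacity, number_of_units, units_invested_available):
    """Sketch of a fixed O&M cost contribution: the fom_cost coefficient
    applied to the capacity of existing and invested units.
    Illustrative only; not the exact SpineOpt objective term."""
    return fom_cost * unit_capacity * (number_of_units + units_invested_available)
```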

forced_availability_factor

Availability factor due to outages/deratings.

Default value: nothing

Related Object Classes: connection and unit

frac_state_loss

Self-discharge coefficient for node_state variables. Effectively, represents the loss power per unit of state.

Default value: 0.0

Related Object Classes: node
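
A minimal sketch of what "loss power per unit of state" means (hypothetical names, not the exact SpineOpt constraint):

```python
def state_loss_power(frac_state_loss, node_state):
    """Illustrative self-discharge: the loss power is proportional to
    the current stored state. A sketch, not SpineOpt's node_injection
    constraint itself."""
    return frac_state_loss * node_state
```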

fractional_demand

The fraction of a node group's demand applied for the node in question.

Default value: 0.0

Related Object Classes: node
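
A hedged sketch of the idea (names are hypothetical): each member node receives its fractional share of the group's demand.

```python
def node_demands(group_demand, fractions):
    """Split a node group's demand across member nodes by their
    fractional_demand shares. Illustrative only; shares would
    normally sum to 1 across the group."""
    return {node: share * group_demand for node, share in fractions.items()}
```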

fuel_cost

Variable fuel costs that can be attributed to a unit_flow. E.g. EUR/MWh

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

graph_view_position

An optional setting for tweaking the position of the different elements when drawing them via Spine Toolbox Graph View.

Default value: nothing

Related Object Classes: connection, node and unit

Related Relationship Classes: connection__from_node, connection__to_node, unit__from_node__user_constraint, unit__from_node, unit__to_node__user_constraint and unit__to_node

has_binary_gas_flow

This parameter needs to be set to true in order to represent bidirectional pressure-driven gas transfer.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

has_pressure

A boolean flag for whether a node has a node_pressure variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_state

A boolean flag for whether a node has a node_state variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_voltage_angle

A boolean flag for whether a node has a node_voltage_angle variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

initial_binary_gas_connection_flow

Initialize the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_flow

Initialize the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_intact_flow

Initialize the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connections_invested

Setting a value fixes the connections_invested variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_connections_invested_available

Setting a value fixes the connections_invested_available variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_node_pressure

Initializes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

initial_node_state

Initializes the corresponding node_state variable to the provided value.

Default value: nothing

Related Object Classes: node

initial_node_voltage_angle

Initializes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

initial_nonspin_units_shut_down

Initialize the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

initial_nonspin_units_started_up

Initialize the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_storages_invested

Used to initialize the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

initial_storages_invested_available

Used to initialize the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

initial_unit_flow

Initialize the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_unit_flow_op

Initialize the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_units_invested

Initialize the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

initial_units_invested_available

Initialize the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

initial_units_on

Initialize the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

initial_units_out_of_service

Initialize the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

is_active

If false, the object is excluded from the model when the tool filter object activity control is specified

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: commodity, connection, model, node, output, report, stage, stochastic_scenario, stochastic_structure, temporal_block, unit and user_constraint

Related Relationship Classes: node__stochastic_structure, node__temporal_block, unit__from_node, unit__to_node, units_on__stochastic_structure and units_on__temporal_block

is_non_spinning

A boolean flag for whether a node is acting as a non-spinning reserve

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

is_renewable

Whether the unit is renewable - used in the minimum renewable generation constraint within the Benders master problem

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

is_reserve_node

A boolean flag for whether a node is acting as a reserve_node

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

max_cum_in_unit_flow_bound

Set a maximum cumulative upper bound for a unit_flow

Default value: nothing

Related Relationship Classes: unit__commodity

max_gap

Specifies the maximum optimality gap for the model. Currently only used for the master problem within a decomposed structure

Default value: 0.05

Related Object Classes: model

max_iterations

Specifies the maximum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 10.0

Related Object Classes: model

max_mga_iterations

Define the number of mga iterations, i.e. how many alternative solutions will be generated.

Default value: nothing

Related Object Classes: model

max_mga_slack

Defines the maximum slack by which the alternative solution may differ from the original solution (e.g. 5% more than initial objective function value)

Default value: 0.05

Related Object Classes: model
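
The slack can be read as a cost bound for the alternative solutions, sketched here with hypothetical names (not SpineOpt code):

```python
def mga_objective_bound(original_objective, max_mga_slack=0.05):
    """Upper bound on total cost during MGA iterations: an alternative
    solution may cost at most (1 + slack) times the original optimum.
    Illustrative sketch of the parameter's role."""
    return (1.0 + max_mga_slack) * original_objective
```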

max_node_pressure

Maximum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

max_ratio_in_in_unit_flow

Maximum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_in_out_unit_flow

Maximum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_in_connection_flow

Maximum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

max_ratio_out_in_unit_flow

Maximum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_out_unit_flow

Maximum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_total_cumulated_unit_flow_from_node

Bound on the maximum cumulated flows of a unit group from a node group, e.g. the maximum consumption of a certain commodity.

Default value: nothing

Related Relationship Classes: unit__from_node

max_total_cumulated_unit_flow_to_node

Bound on the maximum cumulated flows of a unit group to a node group, e.g. total GHG emissions.

Default value: nothing

Related Relationship Classes: unit__to_node

max_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the max_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the max_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the max_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the max_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_voltage_angle

Maximum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

maximum_capacity_invested_available

Upper bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

maximum_entities_invested_available

Upper bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

min_capacity_margin

Minimum capacity margin applying to the node or node_group

Default value: nothing

Related Object Classes: node

min_capacity_margin_penalty

Penalty applied to violations of the min_capacity_margin constraint of the node or node_group

Default value: nothing

Related Object Classes: node

min_down_time

Minimum downtime of a unit after it shuts down.

Default value: nothing

Related Object Classes: unit

min_iterations

Specifies the minimum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 1.0

Related Object Classes: model

min_node_pressure

Minimum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

min_ratio_in_in_unit_flow

Minimum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_in_out_unit_flow

Minimum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_in_connection_flow

Minimum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

min_ratio_out_in_unit_flow

Minimum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_out_unit_flow

Minimum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_total_cumulated_unit_flow_from_node

Bound on the minimum cumulated flows of a unit group from a node group.

Default value: nothing

Related Relationship Classes: unit__from_node

min_total_cumulated_unit_flow_to_node

Bound on the minimum cumulated flows of a unit group to a node group, e.g. total renewable production.

Default value: nothing

Related Relationship Classes: unit__to_node

min_unit_flow

Set lower bound of the unit_flow variable.

Default value: 0.0

Related Relationship Classes: unit__from_node and unit__to_node

min_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the min_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the min_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the min_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the min_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_up_time

Minimum uptime of a unit after it starts up.

Default value: nothing

Related Object Classes: unit

min_voltage_angle

Minimum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

minimum_capacity_invested_available

Lower bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_entities_invested_available

Lower bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_operating_point

Minimum level for the unit_flow relative to the units_on online capacity.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

minimum_reserve_activation_time

Duration a certain reserve product needs to be online/available

Default value: nothing

Related Object Classes: node

model_end

Defines the last timestamp to be modelled. Rolling optimization terminates after passing this point.

Default value: Dict{String, Any}("data" => "2000-01-02T00:00:00", "type" => "date_time")

Related Object Classes: model

model_start

Defines the first timestamp to be modelled. Relative temporal_blocks refer to this value for their start and end.

Default value: Dict{String, Any}("data" => "2000-01-01T00:00:00", "type" => "date_time")

Related Object Classes: model

model_type

Used to identify model objects as relating to the master problem or operational sub problems (default)

Default value: spineopt_standard

Uses Parameter Value Lists: model_type_list

Related Object Classes: model

mp_min_res_gen_to_demand_ratio

Minimum ratio of renewable generation to demand for this commodity - used in the minimum renewable generation constraint within the Benders master problem

Default value: nothing

Related Object Classes: commodity

mp_min_res_gen_to_demand_ratio_slack_penalty

Penalty for violating the minimum renewable generation to demand ratio.

Default value: nothing

Related Object Classes: commodity

nodal_balance_sense

A selector for nodal_balance constraint sense.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

node_opf_type

A selector for the reference node (slack bus) when PTDF-based DC load-flow is enabled.

Default value: node_opf_type_normal

Uses Parameter Value Lists: node_opf_type_list

Related Object Classes: node

node_slack_penalty

A penalty cost for node_slack_pos and node_slack_neg variables. The slack variables won't be included in the model unless there's a cost defined for them.

Default value: nothing

Related Object Classes: node

node_state_cap

The maximum permitted value for a node_state variable.

Default value: nothing

Related Object Classes: node

node_state_coefficient

Coefficient of the specified node's state variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

node_state_min

The minimum permitted value for a node_state variable.

Default value: 0.0

Related Object Classes: node

number_of_connections

Denotes the number of 'sub connections' aggregated to form the modelled connection.

Default value: 1.0

Related Object Classes: connection

number_of_storages

Denotes the number of 'sub storages' aggregated to form the modelled node.

Default value: 1.0

Related Object Classes: node

number_of_units

Denotes the number of 'sub units' aggregated to form the modelled unit.

Default value: 1.0

Related Object Classes: unit

online_variable_type

A selector for how the units_on variable is represented within the model.

Default value: unit_online_variable_type_linear

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

operating_points

  • For unit__from_node: Operating points for piecewise-linear unit efficiency approximations.
  • For unit__to_node: Decomposes the flow variable into a number of separate operating segment variables. Used in conjunction with unit_incremental_heat_rate and/or user_constraints

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ordered_unit_flow_op

Defines whether the segments of this unit flow are ordered as per the rank of their operating points.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: unit__from_node and unit__to_node

outage_variable_type

Determines whether the outage variable is integer, continuous, or none (no optimisation of maintenance outages).

Default value: unit_online_variable_type_none

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

output_db_url

Database url for SpineOpt output.

Default value: nothing

Related Object Classes: report

output_resolution

  • For output: Temporal resolution of the output variables associated with this output.
  • For stage__output: A duration or array of durations indicating the points in time where the output of this stage should be fixed in the children. If not specified, then the output is fixed at the end of each child's rolling window (EXPERIMENTAL).

Default value: nothing

Related Object Classes: output

Related Relationship Classes: stage__output

overwrite_results_on_rolling

Whether or not results from further windows should overwrite results from previous ones.

Default value: true

Related Relationship Classes: report__output

ramp_down_limit

Limit the maximum ramp-down rate of an online unit, given as a fraction of the unit_capacity. [ramp_down_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ramp_up_limit

Limit the maximum ramp-up rate of an online unit, given as a fraction of the unit_capacity. [ramp_up_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node
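
The %/t semantics can be sketched as follows (hypothetical names, not the exact SpineOpt ramp constraint):

```python
def max_flow_increase(ramp_up_limit, unit_capacity, units_on, dt_hours):
    """Illustrative upper bound on the increase of unit_flow between two
    consecutive timesteps: e.g. ramp_up_limit = 0.2/h lets each online
    'sub unit' ramp 20% of its capacity per hour. A sketch only."""
    return ramp_up_limit * unit_capacity * units_on * dt_hours
```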

representative_periods_mapping

Map from date time to representative temporal block name

Default value: nothing

Related Object Classes: temporal_block

reserve_procurement_cost

Procurement cost for reserves

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

resolution

Temporal resolution of the temporal_block. Essentially, divides the period between block_start and block_end into TimeSlices with the input resolution.

Default value: Dict{String, Any}("data" => "1h", "type" => "duration")

Related Object Classes: temporal_block

right_hand_side

The right-hand side, constant term in a user_constraint. Can be time-dependent and used e.g. for complicated efficiency approximations.

Default value: 0.0

Related Object Classes: user_constraint

roll_forward

Defines how much the model moves ahead in time between solves in a rolling optimization. If null, everything is solved as a single optimization.

Default value: nothing

Related Object Classes: model
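
A sketch of how model_start, model_end and roll_forward interact (hypothetical names, not SpineOpt's actual rolling logic):

```python
from datetime import datetime, timedelta

def window_starts(model_start, model_end, roll_forward):
    """Illustrative rolling-horizon schedule: each solve's window starts
    roll_forward after the previous one, until model_end is passed."""
    starts, t = [], model_start
    while t < model_end:
        starts.append(t)
        t += roll_forward
    return starts
```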

scheduled_outage_duration

Specifies the amount of time a unit must be out of service for maintenance as a single block over the course of the optimisation window

Default value: nothing

Related Object Classes: unit

shut_down_cost

Costs of shutting down a 'sub unit', e.g. EUR/shutdown.

Default value: nothing

Related Object Classes: unit

shut_down_limit

Maximum ramp-down during shutdowns

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

stage_scenario

The scenario that this stage should run (EXPERIMENTAL).

Default value: nothing

Related Object Classes: stage

start_up_cost

Costs of starting up a 'sub unit', e.g. EUR/startup.

Default value: nothing

Related Object Classes: unit

start_up_limit

Maximum ramp-up during startups

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

state_coeff

Represents the commodity content of a node_state variable in respect to the unit_flow and connection_flow variables. Essentially, acts as a coefficient on the node_state variable in the node_injection constraint.

Default value: 1.0

Related Object Classes: node

stochastic_scenario_end

A Duration for when a stochastic_scenario ends and its child_stochastic_scenarios start. Values are interpreted relative to the start of the current solve, and if no value is given, the stochastic_scenario is assumed to continue indefinitely.

Default value: nothing

Related Relationship Classes: stochastic_structure__stochastic_scenario

storage_investment_cost

Determines the investment cost per unit state_cap over the investment life of a storage

Default value: nothing

Related Object Classes: node

storage_investment_lifetime

Minimum lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_variable_type

Determines whether the storage investment variable is continuous (usually representing capacity) or integer (representing discrete units invested)

Default value: storage_investment_variable_type_integer

Uses Parameter Value Lists: storage_investment_variable_type_list

Related Object Classes: node

storages_invested_available_coefficient

Coefficient of the specified node's storages invested available variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For storages_invested_mga, an appropriate big_m_mga would be twice the candidate_storages.

Default value: nothing

Related Object Classes: node

storages_invested_coefficient

Coefficient of the specified node's storage investment variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_mga

Defines whether a certain variable (here: storages_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

storages_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: node

tax_in_unit_flow

Tax costs for incoming unit_flows on this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

tax_net_unit_flow

Tax costs for net incoming and outgoing unit_flows on this node. Incoming flows accrue positive net taxes, and outgoing flows accrue negative net taxes.

Default value: nothing

Related Object Classes: node

tax_out_unit_flow

Tax costs for outgoing unit_flows from this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

unit_availability_factor

Availability of the unit, acting as a multiplier on its unit_capacity. Typically between 0 and 1.

Default value: 1.0

Related Object Classes: unit

unit_capacity

Maximum unit_flow capacity of a single 'sub_unit' of the unit.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_conv_cap_to_flow

Optional coefficient for unit_capacity unit conversions in the case the unit_capacity value is incompatible with the desired unit_flow units.

Default value: 1.0

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_coefficient

Coefficient of a unit_flow variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__from_node__user_constraint and unit__to_node__user_constraint

unit_flow_non_anticipativity_margin

Margin by which unit_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_non_anticipativity_time

Period of time where the value of the unit_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_idle_heat_rate

Flow from node1 per unit time and per units_on that results in no additional flow to node2

Default value: 0.0

Related Relationship Classes: unit__node__node

unit_incremental_heat_rate

Standard piecewise incremental heat rate where node1 is assumed to be the fuel and node2 is assumed to be electricity. Assumed monotonically increasing. Given as an Array, or as a single coefficient, where the number of coefficients must match the dimensions of unit_operating_points.

Default value: nothing

Related Relationship Classes: unit__node__node
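
As an illustration, a three-segment incremental heat rate could be stored as a Spine array value, with one coefficient per operating point defined in unit_operating_points (the coefficients below are purely hypothetical):

```json
{
  "type": "array",
  "value_type": "float",
  "data": [9.2, 9.8, 10.7]
}
```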

unit_investment_cost

Investment cost per 'sub unit' built.

Default value: nothing

Related Object Classes: unit

unit_investment_lifetime

Minimum lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_variable_type

Determines whether investment variable is integer or continuous.

Default value: unit_investment_variable_type_continuous

Uses Parameter Value Lists: unit_investment_variable_type_list

Related Object Classes: unit

unit_start_flow

Flow from node1 that is incurred when a unit is started up.

Default value: 0.0

Related Relationship Classes: unit__node__node

units_invested_available_coefficient

Coefficient of the units_invested_available variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For units_invested_mga, an appropriate big_m_mga would be twice the number of candidate_units.

Default value: nothing

Related Object Classes: unit

units_invested_coefficient

Coefficient of the units_invested variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_mga

Defines whether a certain variable (here: units_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

units_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: unit

units_on_coefficient

Coefficient of a units_on variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_on_cost

Objective function coefficient on units_on. An idling cost, for example

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_margin

Margin by which units_on variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_time

Period of time where the value of the units_on variable has to be fixed to the result from the previous window.

Default value: nothing

Related Object Classes: unit

units_started_up_coefficient

Coefficient of a units_started_up variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_unavailable

Represents the number of units out of service

Default value: 0

Related Object Classes: unit

upward_reserve

Identifier for nodes providing upward reserves

Default value: false

Related Object Classes: node

use_connection_intact_flow

Whether to use connection_intact_flow variables, to capture the impact of connection investments on network characteristics via line outage distribution factors (LODF).

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

user_constraint_slack_penalty

A penalty for violating a user constraint.

Default value: nothing

Related Object Classes: user_constraint

version

Current version of the SpineOpt data structure. Modify it at your own risk (but please don't).

Default value: 12

Related Object Classes: settings

vom_cost

Variable operating costs of a unit_flow variable. E.g. EUR/MWh.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

weight

Weighting factor of the temporal block associated with the objective function

Default value: 1.0

Related Object Classes: temporal_block

weight_relative_to_parents

The weight of the stochastic_scenario in the objective function relative to its parents.

Default value: 1.0

Related Relationship Classes: stochastic_structure__stochastic_scenario

window_duration

The duration of the window in case it differs from roll_forward

Default value: nothing

Related Object Classes: model

window_weight

The weight of the window in the rolling subproblem

Default value: 1

Related Object Classes: model

write_lodf_file

A boolean flag for whether the LODF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

write_mps_file

A selector for writing an .mps file of the model.

Default value: nothing

Uses Parameter Value Lists: write_mps_file_list

Related Object Classes: model

write_ptdf_file

A boolean flag for whether the PTDF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

Parameters · SpineOpt.jl

Parameters

balance_type

A selector for how the nodal_balance constraint should be handled.

Default value: balance_type_node

Uses Parameter Value Lists: balance_type_list

Related Object Classes: node

benders_starting_connections_invested

Fixes the number of connections invested during the first Benders iteration

Default value: nothing

Related Object Classes: connection

benders_starting_storages_invested

Fixes the number of storages invested during the first Benders iteration

Default value: nothing

Related Object Classes: node

benders_starting_units_invested

Fixes the number of units invested during the first Benders iteration

Default value: nothing

Related Object Classes: unit

big_m

Sufficiently large number used for linearizing bilinear terms, e.g. to enforce bidirectional flow for gas pipelines.

Default value: 1000000

Related Object Classes: model

block_end

The end time for the temporal_block. Can be given either as a DateTime for a static end point, or as a Duration for an end point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block

block_start

The start time for the temporal_block. Can be given either as a DateTime for a static start point, or as a Duration for a start point relative to the start of the current optimization.

Default value: nothing

Related Object Classes: temporal_block
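
For instance, assuming the Spine parameter value format used throughout the defaults above, block_start could be given as an absolute DateTime (dates here are hypothetical):

```json
{"type": "date_time", "data": "2030-01-01T00:00:00"}
```

or as a Duration relative to the optimization start:

```json
{"type": "duration", "data": "1D"}
```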

candidate_connections

The number of connections that may be invested in

Default value: nothing

Related Object Classes: connection

candidate_storages

Determines the maximum number of new storages which may be invested in

Default value: nothing

Related Object Classes: node

candidate_units

Number of units which may be additionally constructed

Default value: nothing

Related Object Classes: unit

commodity_lodf_tolerance

The minimum absolute value of the line outage distribution factor (LODF) that is considered meaningful.

Default value: 0.1

Related Object Classes: commodity

commodity_physics

Defines if the commodity follows lodf or ptdf physics.

Default value: commodity_physics_none

Uses Parameter Value Lists: commodity_physics_list

Related Object Classes: commodity

commodity_physics_duration

For how long the commodity_physics should apply relative to the start of the window.

Default value: nothing

Related Object Classes: commodity

commodity_ptdf_threshold

The minimum absolute value of the power transfer distribution factor (PTDF) that is considered meaningful.

Default value: 0.001

Related Object Classes: commodity

compression_factor

The compression factor establishes a compression from an origin node to a receiving node, which are connected through a connection. The first node corresponds to the origin node, the second to the (compressed) destination node. Typically the value is >=1.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_availability_factor

Availability of the connection, acting as a multiplier on its connection_capacity. Typically between 0 and 1.

Default value: 1.0

Related Object Classes: connection

connection_capacity

  • For connection__from_node: Limits the connection_flow variable from the from_node. from_node can be a group of nodes, in which case the sum of the connection_flow is constrained.
  • For connection__to_node: Limits the connection_flow variable to the to_node. to_node can be a group of nodes, in which case the sum of the connection_flow is constrained.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_contingency

A boolean flag for defining a contingency connection.

Default value: nothing

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_conv_cap_to_flow

  • For connection__from_node: Optional coefficient for connection_capacity unit conversions in the case that the connection_capacity value is incompatible with the desired connection_flow units.
  • For connection__to_node: Optional coefficient for connection_capacity unit conversions in the case the connection_capacity value is incompatible with the desired connection_flow units.

Default value: 1.0

Related Relationship Classes: connection__from_node and connection__to_node

connection_emergency_capacity

  • For connection__from_node: Post-contingency flow capacity of a connection, sometimes referred to as the emergency rating.
  • For connection__to_node: The maximum post-contingency flow on a monitored connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_coefficient

  • For connection__from_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the from direction
  • For connection__to_node__user_constraint: defines the user constraint coefficient on the connection flow variable in the to direction

Default value: 0.0

Related Relationship Classes: connection__from_node__user_constraint and connection__to_node__user_constraint

connection_flow_cost

Variable costs of a flow through a connection. E.g. EUR/MWh of energy throughput.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_delay

Delays the connection_flows associated with the latter node with respect to the connection_flows associated with the first node.

Default value: Dict{String, Any}("data" => "0h", "type" => "duration")

Related Relationship Classes: connection__node__node

connection_flow_non_anticipativity_margin

Margin by which connection_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_flow_non_anticipativity_time

Period of time where the value of the connection_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_margin

Margin by which connection_intact_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_intact_flow_non_anticipativity_time

Period of time where the value of the connection_intact_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

connection_investment_cost

The per unit investment cost for the connection over the connection_investment_lifetime

Default value: nothing

Related Object Classes: connection

connection_investment_lifetime

Determines the minimum investment lifetime of a connection. Once invested, it remains in service for this long

Default value: nothing

Related Object Classes: connection

connection_investment_variable_type

Determines whether the investment variable is integer variable_type_integer or continuous variable_type_continuous

Default value: connection_investment_variable_type_integer

Uses Parameter Value Lists: connection_investment_variable_type_list

Related Object Classes: connection

connection_linepack_constant

The linepack constant is a property of gas pipelines and relates the linepack to the pressure of the adjacent nodes.

Default value: nothing

Related Relationship Classes: connection__node__node

connection_monitored

A boolean flag for defining a monitored connection.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connection_reactance

The per unit reactance of a connection.

Default value: nothing

Related Object Classes: connection

connection_reactance_base

If the reactance is given on a per-unit base (e.g. a p.u. base of 100 MW), connection_reactance_base can be set to perform this conversion (e.g. *100).

Default value: 1

Related Object Classes: connection

connection_resistance

The per unit resistance of a connection.

Default value: nothing

Related Object Classes: connection

connection_type

A selector between a normal and a lossless bidirectional connection.

Default value: connection_type_normal

Uses Parameter Value Lists: connection_type_list

Related Object Classes: connection

connections_invested_available_coefficient

Coefficient of the connections_invested_available variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For connections_invested_mga, an appropriate big_m_mga would be twice the number of candidate_connections.

Default value: nothing

Related Object Classes: connection

connections_invested_coefficient

Coefficient of the connections_invested variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: connection__user_constraint

connections_invested_mga

Defines whether a certain variable (here: connections_invested) will be considered in the maximal-differences of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

connections_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: connection

constraint_sense

A selector for the sense of the user_constraint.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: user_constraint

curtailment_cost

Costs for curtailing generation. Essentially, accrues costs whenever unit_flow is not operating at its maximum available capacity. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: unit

cyclic_condition

If the cyclic condition is set to true for a storage node, the node_state at the end of the optimization window has to be larger than or equal to the initial storage state.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: node__temporal_block

db_lp_solver

Solver for LP problems. Solver package must be added and pre-configured in Julia. Overrides the lp_solver RunSpineOpt kwarg.

Default value: HiGHS.jl

Uses Parameter Value Lists: db_lp_solver_list

Related Object Classes: model

db_lp_solver_options

Map parameter containing LP solver option name-value pairs. See the solver documentation for supported solver options.

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Clp.jl", Dict{String, Any}("data" => Any[Any["LogLevel", 0.0]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model

db_mip_solver

Solver for MIP problems. Solver package must be added and pre-configured in Julia. Overrides mip_solver RunSpineOpt kwarg

Default value: HiGHS.jl

Uses Parameter Value Lists: db_mip_solver_list

Related Object Classes: model

db_mip_solver_options

Map parameter containing MIP solver option name-value pairs. See the solver documentation for supported solver options.

Default value: Dict{String, Any}("data" => Any[Any["HiGHS.jl", Dict{String, Any}("data" => Any[Any["presolve", "on"], Any["mip_rel_gap", 0.01], Any["threads", 0.0], Any["time_limit", 300.01]], "type" => "map", "index_type" => "str")], Any["Cbc.jl", Dict{String, Any}("data" => Any[Any["ratioGap", 0.01], Any["logLevel", 0.0]], "type" => "map", "index_type" => "str")], Any["CPLEX.jl", Dict{String, Any}("data" => Any[Any["CPX_PARAM_EPGAP", 0.01]], "type" => "map", "index_type" => "str")]], "type" => "map", "index_type" => "str")

Related Object Classes: model
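
Mirroring the default above, a minimal override for a single solver could look like the following Spine Map value (only the HiGHS.jl entry is shown; the option names are taken from the default):

```json
{
  "type": "map",
  "index_type": "str",
  "data": [
    ["HiGHS.jl", {
      "type": "map",
      "index_type": "str",
      "data": [["presolve", "on"], ["mip_rel_gap", 0.01], ["time_limit", 300.01]]
    }]
  ]
}
```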

demand

Demand for the commodity of a node. Energy gains can be represented using negative demand.

Default value: 0.0

Related Object Classes: node
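
A time-varying demand is typically given as a Spine time series value, for example (timestamps and figures below are hypothetical):

```json
{
  "type": "time_series",
  "data": {
    "2030-01-01T00:00:00": 100.0,
    "2030-01-01T01:00:00": 120.0
  }
}
```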

demand_coefficient

Coefficient of the specified node's demand in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

diff_coeff

Commodity diffusion coefficient between two nodes. Effectively, denotes the diffusion power per unit of state from the first node to the second.

Default value: 0.0

Related Relationship Classes: node__node

downward_reserve

Identifier for nodes providing downward reserves

Default value: false

Related Object Classes: node

duration_unit

Defines the base temporal unit of the model. Currently supported values are either an hour or a minute.

Default value: hour

Uses Parameter Value Lists: duration_unit_list

Related Object Classes: model

equal_investments

Whether all entities in the group must have the same investment decision.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: investment_group

fix_binary_gas_connection_flow

Fix the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_flow

Fix the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connection_intact_flow

Fix the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

fix_connections_invested

Setting a value fixes the connections_invested variable accordingly

Default value: nothing

Related Object Classes: connection

fix_connections_invested_available

Setting a value fixes the connections_invested_available variable accordingly

Default value: nothing

Related Object Classes: connection

fix_node_pressure

Fixes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

fix_node_state

Fixes the corresponding node_state variable to the provided value. Can be used for e.g. fixing boundary conditions.

Default value: nothing

Related Object Classes: node

fix_node_voltage_angle

Fixes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

fix_nonspin_units_shut_down

Fix the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

fix_nonspin_units_started_up

Fix the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_ratio_in_in_unit_flow

Fix the ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_in_out_unit_flow

Fix the ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_in_connection_flow

Fix the ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

fix_ratio_out_in_unit_flow

Fix the ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_ratio_out_out_unit_flow

Fix the ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

fix_storages_invested

Used to fix the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

fix_storages_invested_available

Used to fix the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

fix_unit_flow

Fix the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_unit_flow_op

Fix the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

fix_units_invested

Fix the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

fix_units_invested_available

Fix the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

fix_units_on

Fix the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

fix_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the fix_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the fix_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the fix_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the fix_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

fix_units_out_of_service

Fix the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

fixed_pressure_constant_0

Fixed pressure points for pipelines for the outer approximation of the Weymouth equation. The direction of flow is from the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fixed_pressure_constant_1

Fixed pressure points for pipelines for the outer approximation of the Weymouth equation. The direction of flow is from the first node in the relationship to the second node in the relationship.

Default value: nothing

Related Relationship Classes: connection__node__node

fom_cost

Fixed operation and maintenance costs of a unit. Essentially, a cost coefficient on the existing units (incl. number_of_units and units_invested_available) and unit_capacity parameters. E.g. EUR/MWh

Default value: nothing

Related Object Classes: unit

forced_availability_factor

Availability factor due to outages/deratings.

Default value: nothing

Related Object Classes: connection and unit

frac_state_loss

Self-discharge coefficient for node_state variables. Effectively, represents the loss power per unit of state.

Default value: 0.0

Related Object Classes: node

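Since frac_state_loss represents a loss power per unit of state, the resulting self-discharge can be sketched with a simple explicit update step. This is a back-of-the-envelope illustration, not SpineOpt's actual node_injection formulation:

```python
def step_state(state, inflow, frac_state_loss, dt=1.0):
    """One explicit time step of a storage state with self-discharge:
    state' = state + (inflow - frac_state_loss * state) * dt."""
    return state + (inflow - frac_state_loss * state) * dt

# With no inflow and frac_state_loss = 0.1 per hour, the state decays
# by roughly 10% each hour:
s = 100.0
for _ in range(3):
    s = step_state(s, inflow=0.0, frac_state_loss=0.1)
```

After three steps the state is about 72.9, i.e. the loss compounds each period.
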
fractional_demand

The fraction of a node group's demand applied for the node in question.

Default value: 0.0

Related Object Classes: node

fuel_cost

Variable fuel costs that can be attributed to a unit_flow. E.g. EUR/MWh

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

graph_view_position

An optional setting for tweaking the position of the different elements when drawing them via Spine Toolbox Graph View.

Default value: nothing

Related Object Classes: connection, node and unit

Related Relationship Classes: connection__from_node, connection__to_node, unit__from_node__user_constraint, unit__from_node, unit__to_node__user_constraint and unit__to_node

has_binary_gas_flow

This parameter needs to be set to true in order to represent bidirectional pressure-driven gas transfer.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: connection

has_pressure

A boolean flag for whether a node has a node_pressure variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_state

A boolean flag for whether a node has a node_state variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

has_voltage_angle

A boolean flag for whether a node has a node_voltage_angle variable.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

initial_binary_gas_connection_flow

Initialize the value of the connection_flow_binary variable, and hence pre-determine the direction of flow in the connection.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_flow

Initialize the value of the connection_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connection_intact_flow

Initialize the value of the connection_intact_flow variable.

Default value: nothing

Related Relationship Classes: connection__from_node and connection__to_node

initial_connections_invested

Setting a value fixes the connections_invested variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_connections_invested_available

Setting a value fixes the connections_invested_available variable at the beginning

Default value: nothing

Related Object Classes: connection

initial_node_pressure

Initializes the corresponding node_pressure variable to the provided value

Default value: nothing

Related Object Classes: node

initial_node_state

Initializes the corresponding node_state variable to the provided value.

Default value: nothing

Related Object Classes: node

initial_node_voltage_angle

Initializes the corresponding node_voltage_angle variable to the provided value

Default value: nothing

Related Object Classes: node

initial_nonspin_units_shut_down

Initialize the nonspin_units_shut_down variable.

Default value: nothing

Related Relationship Classes: unit__to_node

initial_nonspin_units_started_up

Initialize the nonspin_units_started_up variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_storages_invested

Used to initialize the value of the storages_invested variable

Default value: nothing

Related Object Classes: node

initial_storages_invested_available

Used to initialize the value of the storages_invested_available variable

Default value: nothing

Related Object Classes: node

initial_unit_flow

Initialize the unit_flow variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_unit_flow_op

Initialize the unit_flow_op variable.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

initial_units_invested

Initialize the value of the units_invested variable.

Default value: nothing

Related Object Classes: unit

initial_units_invested_available

Initialize the value of the units_invested_available variable

Default value: nothing

Related Object Classes: unit

initial_units_on

Initialize the value of the units_on variable.

Default value: nothing

Related Object Classes: unit

initial_units_out_of_service

Initialize the value of the units_out_of_service variable.

Default value: nothing

Related Object Classes: unit

is_active

If false, the object is excluded from the model when the tool filter object activity control is specified

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: commodity, connection, model, node, output, report, stage, stochastic_scenario, stochastic_structure, temporal_block, unit and user_constraint

Related Relationship Classes: node__stochastic_structure, node__temporal_block, unit__from_node, unit__to_node, units_on__stochastic_structure and units_on__temporal_block

is_non_spinning

A boolean flag for whether a node is acting as a non-spinning reserve

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

is_renewable

Whether the unit is renewable - used in the minimum renewable generation constraint within the Benders master problem

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

is_reserve_node

A boolean flag for whether a node is acting as a reserve_node

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

max_cum_in_unit_flow_bound

Set a maximum cumulative upper bound for a unit_flow

Default value: nothing

Related Relationship Classes: unit__commodity

max_gap

Specifies the maximum optimality gap for the model. Currently only used for the master problem within a decomposed structure

Default value: 0.05

Related Object Classes: model

max_iterations

Specifies the maximum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 10.0

Related Object Classes: model

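To illustrate how max_gap, min_iterations and max_iterations interact as a stopping rule for the decomposed (Benders) master problem, here is a hedged Python sketch; the function names and the exact gap formula are illustrative, not SpineOpt's code:

```python
def benders_gap(lower_bound, upper_bound):
    """Relative optimality gap between the master-problem lower bound
    and the subproblem upper bound."""
    return (upper_bound - lower_bound) / abs(lower_bound)

def should_stop(iteration, lower_bound, upper_bound,
                max_gap=0.05, min_iterations=1, max_iterations=10):
    """Illustrative stopping rule: never stop before min_iterations;
    stop once the gap closes below max_gap or max_iterations is hit."""
    if iteration < min_iterations:
        return False
    return benders_gap(lower_bound, upper_bound) <= max_gap or iteration >= max_iterations
```

With the default max_gap of 0.05, a lower bound of 100 and an upper bound of 104 (a 4% gap) would terminate the iteration.
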
max_mga_iterations

Define the number of mga iterations, i.e. how many alternative solutions will be generated.

Default value: nothing

Related Object Classes: model

max_mga_slack

Defines the maximum slack by which the alternative solution may differ from the original solution (e.g. 5% more than initial objective function value)

Default value: 0.05

Related Object Classes: model

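The slack translates directly into an objective bound for the alternative (MGA) solutions. A minimal sketch, assuming a minimization objective (the function name is illustrative):

```python
def mga_objective_bound(original_objective, max_mga_slack=0.05):
    """Upper bound that MGA iterations must respect, sketched as:
    objective <= (1 + max_mga_slack) * original objective."""
    return (1 + max_mga_slack) * original_objective

# With the default 5% slack, an original objective of 1000 allows
# alternative solutions with objective values up to 1050.
bound = mga_objective_bound(1000.0)
```
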
max_node_pressure

Maximum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

max_ratio_in_in_unit_flow

Maximum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_in_out_unit_flow

Maximum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_in_connection_flow

Maximum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

max_ratio_out_in_unit_flow

Maximum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

max_ratio_out_out_unit_flow

Maximum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

max_total_cumulated_unit_flow_from_node

Bound on the maximum cumulated flows of a unit group from a node group, e.g. max consumption of a certain commodity.

Default value: nothing

Related Relationship Classes: unit__from_node

max_total_cumulated_unit_flow_to_node

Bound on the maximum cumulated flows of a unit group to a node group, e.g. total GHG emissions.

Default value: nothing

Related Relationship Classes: unit__to_node

max_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the max_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the max_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the max_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the max_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

max_voltage_angle

Maximum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

maximum_capacity_invested_available

Upper bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

maximum_entities_invested_available

Upper bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

min_capacity_margin

Minimum capacity margin applying to the node or node_group

Default value: nothing

Related Object Classes: node

min_capacity_margin_penalty

Penalty applied to violations of the min_capacity_margin constraint of the node or node_group

Default value: nothing

Related Object Classes: node

min_down_time

Minimum downtime of a unit after it shuts down.

Default value: nothing

Related Object Classes: unit

min_iterations

Specifies the minimum number of iterations for the model. Currently only used for the master problem within a decomposed structure

Default value: 1.0

Related Object Classes: model

min_node_pressure

Minimum allowed gas pressure at node.

Default value: nothing

Related Object Classes: node

min_ratio_in_in_unit_flow

Minimum ratio between two unit_flows coming into the unit from the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_in_out_unit_flow

Minimum ratio between an incoming unit_flow from the first node and an outgoing unit_flow to the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_in_connection_flow

Minimum ratio between an outgoing connection_flow to the first node and an incoming connection_flow from the second node.

Default value: nothing

Related Relationship Classes: connection__node__node

min_ratio_out_in_unit_flow

Minimum ratio between an outgoing unit_flow to the first node and an incoming unit_flow from the second node.

Default value: nothing

Related Relationship Classes: unit__node__node

min_ratio_out_out_unit_flow

Minimum ratio between two unit_flows going from the unit into the two nodes.

Default value: nothing

Related Relationship Classes: unit__node__node

min_total_cumulated_unit_flow_from_node

Bound on the minimum cumulated flows of a unit group from a node group.

Default value: nothing

Related Relationship Classes: unit__from_node

min_total_cumulated_unit_flow_to_node

Bound on the minimum cumulated flows of a unit group to a node group, e.g. total renewable production.

Default value: nothing

Related Relationship Classes: unit__to_node

min_unit_flow

Set lower bound of the unit_flow variable.

Default value: 0.0

Related Relationship Classes: unit__from_node and unit__to_node

min_units_on_coefficient_in_in

Optional coefficient for the units_on variable impacting the min_ratio_in_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_in_out

Optional coefficient for the units_on variable impacting the min_ratio_in_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_in

Optional coefficient for the units_on variable impacting the min_ratio_out_in_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_units_on_coefficient_out_out

Optional coefficient for the units_on variable impacting the min_ratio_out_out_unit_flow constraint.

Default value: 0.0

Related Relationship Classes: unit__node__node

min_up_time

Minimum uptime of a unit after it starts up.

Default value: nothing

Related Object Classes: unit

min_voltage_angle

Minimum allowed voltage angle at node.

Default value: nothing

Related Object Classes: node

minimum_capacity_invested_available

Lower bound on the capacity invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_entities_invested_available

Lower bound on the number of entities invested available in the group at any point in time.

Default value: nothing

Related Object Classes: investment_group

minimum_operating_point

Minimum level for the unit_flow relative to the units_on online capacity.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

minimum_reserve_activation_time

Duration a certain reserve product needs to be online/available

Default value: nothing

Related Object Classes: node

model_end

Defines the last timestamp to be modelled. Rolling optimization terminates after passing this point.

Default value: Dict{String, Any}("data" => "2000-01-02T00:00:00", "type" => "date_time")

Related Object Classes: model

model_start

Defines the first timestamp to be modelled. Relative temporal_blocks refer to this value for their start and end.

Default value: Dict{String, Any}("data" => "2000-01-01T00:00:00", "type" => "date_time")

Related Object Classes: model

model_type

Used to identify model objects as relating to the master problem or operational sub problems (default)

Default value: spineopt_standard

Uses Parameter Value Lists: model_type_list

Related Object Classes: model

mp_min_res_gen_to_demand_ratio

Minimum ratio of renewable generation to demand for this commodity - used in the minimum renewable generation constraint within the Benders master problem

Default value: nothing

Related Object Classes: commodity

mp_min_res_gen_to_demand_ratio_slack_penalty

Penalty for violating the minimum renewable generation to demand ratio.

Default value: nothing

Related Object Classes: commodity

nodal_balance_sense

A selector for nodal_balance constraint sense.

Default value: ==

Uses Parameter Value Lists: constraint_sense_list

Related Object Classes: node

node_opf_type

A selector for the reference node (slack bus) when PTDF-based DC load-flow is enabled.

Default value: node_opf_type_normal

Uses Parameter Value Lists: node_opf_type_list

Related Object Classes: node

node_slack_penalty

A penalty cost for node_slack_pos and node_slack_neg variables. The slack variables won't be included in the model unless there's a cost defined for them.

Default value: nothing

Related Object Classes: node

node_state_cap

The maximum permitted value for a node_state variable.

Default value: nothing

Related Object Classes: node

node_state_coefficient

Coefficient of the specified node's state variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

node_state_min

The minimum permitted value for a node_state variable.

Default value: 0.0

Related Object Classes: node

number_of_connections

Denotes the number of 'sub connections' aggregated to form the modelled connection.

Default value: 1.0

Related Object Classes: connection

number_of_storages

Denotes the number of 'sub storages' aggregated to form the modelled node.

Default value: 1.0

Related Object Classes: node

number_of_units

Denotes the number of 'sub units' aggregated to form the modelled unit.

Default value: 1.0

Related Object Classes: unit

online_variable_type

A selector for how the units_on variable is represented within the model.

Default value: unit_online_variable_type_linear

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

operating_points

  • For unit__from_node: Operating points for piecewise-linear unit efficiency approximations.
  • For unit__to_node: Decomposes the flow variable into a number of separate operating segment variables. Used in conjunction with unit_incremental_heat_rate and/or user_constraints

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

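The segment decomposition can be sketched in a few lines of Python. This is an illustrative reading of operating_points as cumulative fractions of capacity, with hypothetical function names, not SpineOpt's implementation:

```python
def segment_bounds(operating_points, capacity):
    """Per-segment flow bounds implied by operating points given as
    cumulative fractions of capacity: segment i may carry at most
    (p_i - p_{i-1}) * capacity."""
    bounds, prev = [], 0.0
    for p in operating_points:
        bounds.append((p - prev) * capacity)
        prev = p
    return bounds

def total_flow(unit_flow_op):
    """The total unit flow is the sum of its operating-segment variables."""
    return sum(unit_flow_op)

# Two segments splitting a 200 MW unit at 50% of capacity:
bounds = segment_bounds([0.5, 1.0], 200.0)
flow = total_flow([100.0, 30.0])
```

Each segment can then carry its own incremental heat rate or user-constraint coefficient.
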
ordered_unit_flow_op

Defines whether the segments of this unit flow are ordered as per the rank of their operating points.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Relationship Classes: unit__from_node and unit__to_node

outage_variable_type

Determines whether the outage variable is integer, continuous, or none (no optimisation of maintenance outages).

Default value: unit_online_variable_type_none

Uses Parameter Value Lists: unit_online_variable_type_list

Related Object Classes: unit

output_db_url

Database url for SpineOpt output.

Default value: nothing

Related Object Classes: report

output_resolution

  • For output: Temporal resolution of the output variables associated with this output.
  • For stage__output: A duration or array of durations indicating the points in time where the output of this stage should be fixed in the children. If not specified, then the output is fixed at the end of each child's rolling window (EXPERIMENTAL).

Default value: nothing

Related Object Classes: output

Related Relationship Classes: stage__output

overwrite_results_on_rolling

Whether or not results from further windows should overwrite results from previous ones.

Default value: true

Related Relationship Classes: report__output

ramp_down_limit

Limit the maximum ramp-down rate of an online unit, given as a fraction of the unit_capacity. [ramp_down_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

ramp_up_limit

Limit the maximum ramp-up rate of an online unit, given as a fraction of the unit_capacity. [ramp_up_limit] = %/t, e.g. 0.2/h

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

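Because both limits are fractions of unit_capacity per unit time, a ramp check reduces to simple arithmetic. A hedged sketch of the constraint form (the function name is hypothetical, and the real constraints also involve commitment and start-up/shut-down terms):

```python
def ramp_feasible(flow_prev, flow_next, ramp_up_limit, ramp_down_limit,
                  unit_capacity, dt=1.0):
    """Check a flow change against ramp limits expressed as fractions
    of unit_capacity per unit time: |delta| within the up/down bounds."""
    delta = flow_next - flow_prev
    return (delta <= ramp_up_limit * unit_capacity * dt
            and -delta <= ramp_down_limit * unit_capacity * dt)

# A 100 MW unit with 0.2/h limits may move at most 20 MW per hour:
ok = ramp_feasible(50.0, 70.0, 0.2, 0.2, 100.0)
too_fast = ramp_feasible(50.0, 75.0, 0.2, 0.2, 100.0)
```
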
representative_periods_mapping

Map from date-time to the name of a representative temporal_block

Default value: nothing

Related Object Classes: temporal_block

reserve_procurement_cost

Procurement cost for reserves

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

resolution

Temporal resolution of the temporal_block. Essentially, divides the period between block_start and block_end into TimeSlices with the input resolution.

Default value: Dict{String, Any}("data" => "1h", "type" => "duration")

Related Object Classes: temporal_block

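The division of a block into TimeSlices can be sketched as follows; a minimal stand-in for the real TimeSlice machinery, assuming only that the block is split into consecutive intervals of the given resolution:

```python
from datetime import datetime, timedelta

def time_slices(block_start, block_end, resolution):
    """Split [block_start, block_end) into consecutive slices of the
    given resolution; the last slice is clipped at block_end."""
    slices, t = [], block_start
    while t < block_end:
        t_next = min(t + resolution, block_end)
        slices.append((t, t_next))
        t = t_next
    return slices

# One day at the default 1h resolution yields 24 slices:
day = time_slices(datetime(2000, 1, 1), datetime(2000, 1, 2), timedelta(hours=1))
```
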
right_hand_side

The right-hand side, constant term in a user_constraint. Can be time-dependent and used e.g. for complicated efficiency approximations.

Default value: 0.0

Related Object Classes: user_constraint

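A user_constraint is, in essence, a linear expression compared against this right-hand side. A toy evaluator, with hypothetical names, illustrates the shape (the real constraint is built from the various *_coefficient parameters on the related classes):

```python
def user_constraint_satisfied(terms, sense, right_hand_side=0.0):
    """Evaluate sum(coefficient * value) against right_hand_side with
    the given sense ('==', '<=' or '>='); a toy stand-in for a
    user_constraint with a small numerical tolerance."""
    lhs = sum(coeff * value for coeff, value in terms)
    if sense == "==":
        return abs(lhs - right_hand_side) < 1e-9
    if sense == "<=":
        return lhs <= right_hand_side + 1e-9
    return lhs >= right_hand_side - 1e-9

# Two flows weighted 1.0 and 2.0 must stay below a right_hand_side of 60:
within = user_constraint_satisfied([(1.0, 30.0), (2.0, 10.0)], "<=", 60.0)
exceeded = user_constraint_satisfied([(1.0, 30.0), (2.0, 20.0)], "<=", 60.0)
```
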
roll_forward

Defines how much the model moves ahead in time between solves in a rolling optimization. If null, everything is solved as a single optimization.

Default value: nothing

Related Object Classes: model

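Together with model_start and model_end, roll_forward defines a sequence of solve windows. A minimal sketch of that rolling horizon (the function name and the assumption of a fixed window duration are illustrative):

```python
from datetime import datetime, timedelta

def rolling_windows(model_start, model_end, window_duration, roll_forward):
    """Window (start, end) pairs for a rolling optimization: each solve
    covers window_duration and the window advances by roll_forward
    until model_end is passed."""
    windows, start = [], model_start
    while start < model_end:
        windows.append((start, min(start + window_duration, model_end)))
        start += roll_forward
    return windows

# Two daily solves covering a two-day horizon:
w = rolling_windows(datetime(2000, 1, 1), datetime(2000, 1, 3),
                    timedelta(days=1), timedelta(days=1))
```
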
scheduled_outage_duration

Specifies the amount of time a unit must be out of service for maintenance as a single block over the course of the optimisation window

Default value: nothing

Related Object Classes: unit

shut_down_cost

Costs of shutting down a 'sub unit', e.g. EUR/shutdown.

Default value: nothing

Related Object Classes: unit

shut_down_limit

Maximum ramp-down during shutdowns

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

stage_scenario

The scenario that this stage should run (EXPERIMENTAL).

Default value: nothing

Related Object Classes: stage

start_up_cost

Costs of starting up a 'sub unit', e.g. EUR/startup.

Default value: nothing

Related Object Classes: unit

start_up_limit

Maximum ramp-up during startups

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

state_coeff

Represents the commodity content of a node_state variable in respect to the unit_flow and connection_flow variables. Essentially, acts as a coefficient on the node_state variable in the node_injection constraint.

Default value: 1.0

Related Object Classes: node

stochastic_scenario_end

A Duration for when a stochastic_scenario ends and its child_stochastic_scenarios start. Values are interpreted relative to the start of the current solve, and if no value is given, the stochastic_scenario is assumed to continue indefinitely.

Default value: nothing

Related Relationship Classes: stochastic_structure__stochastic_scenario

storage_investment_cost

Determines the investment cost per unit state_cap over the investment life of a storage

Default value: nothing

Related Object Classes: node

storage_investment_lifetime

Minimum lifetime for storage investment decisions.

Default value: nothing

Related Object Classes: node

storage_investment_variable_type

Determines whether the storage investment variable is continuous (usually representing capacity) or integer (representing discrete units invested)

Default value: storage_investment_variable_type_integer

Uses Parameter Value Lists: storage_investment_variable_type_list

Related Object Classes: node

storages_invested_available_coefficient

Coefficient of the specified node's storages invested available variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For storages_invested_mga, an appropriate big_m_mga would be twice the candidate storages.

Default value: nothing

Related Object Classes: node

storages_invested_coefficient

Coefficient of the specified node's storage investment variable in the specified user constraint.

Default value: 0.0

Related Relationship Classes: node__user_constraint

storages_invested_mga

Defines whether a certain variable (here: storages_invested) is considered in the maximal-differences part of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: node

storages_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: node

tax_in_unit_flow

Tax costs for incoming unit_flows on this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

tax_net_unit_flow

Tax costs for net incoming and outgoing unit_flows on this node. Incoming flows accrue positive net taxes, and outgoing flows accrue negative net taxes.

Default value: nothing

Related Object Classes: node

tax_out_unit_flow

Tax costs for outgoing unit_flows from this node. E.g. EUR/MWh.

Default value: nothing

Related Object Classes: node

unit_availability_factor

Availability of the unit, acting as a multiplier on its unit_capacity. Typically between 0 and 1.

Default value: 1.0

Related Object Classes: unit

unit_capacity

Maximum unit_flow capacity of a single 'sub_unit' of the unit.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_conv_cap_to_flow

Optional coefficient for unit_capacity unit conversions in the case the unit_capacity value is incompatible with the desired unit_flow units.

Default value: 1.0

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_coefficient

Coefficient of a unit_flow variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__from_node__user_constraint and unit__to_node__user_constraint

unit_flow_non_anticipativity_margin

Margin by which unit_flow variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_flow_non_anticipativity_time

Period of time where the value of the unit_flow variable has to be fixed to the result from the previous window.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

unit_idle_heat_rate

Flow from node1 per unit time and per units_on that results in no additional flow to node2

Default value: 0.0

Related Relationship Classes: unit__node__node

unit_incremental_heat_rate

Standard piecewise incremental heat rate where node1 is assumed to be the fuel and node2 is assumed to be electricity. Assumed monotonically increasing. Array type or single coefficient, where the number of coefficients must match the dimensions of unit_operating_points

Default value: nothing

Related Relationship Classes: unit__node__node

unit_investment_cost

Investment cost per 'sub unit' built.

Default value: nothing

Related Object Classes: unit

unit_investment_lifetime

Minimum lifetime for unit investment decisions.

Default value: nothing

Related Object Classes: unit

unit_investment_variable_type

Determines whether investment variable is integer or continuous.

Default value: unit_investment_variable_type_continuous

Uses Parameter Value Lists: unit_investment_variable_type_list

Related Object Classes: unit

unit_start_flow

Flow from node1 that is incurred when a unit is started up.

Default value: 0.0

Related Relationship Classes: unit__node__node

units_invested_available_coefficient

Coefficient of the units_invested_available variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_big_m_mga

big_m_mga should be chosen as small as possible but sufficiently large. For units_invested_mga, an appropriate big_m_mga would be twice the candidate units.

Default value: nothing

Related Object Classes: unit

units_invested_coefficient

Coefficient of the units_invested variable in the specified user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_invested_mga

Defines whether a certain variable (here: units_invested) is considered in the maximal-differences part of the mga objective

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: unit

units_invested_mga_weight

Used to scale mga variables. For the weighted-sum mga method, the length of this weight, given as an Array, determines the number of iterations.

Default value: 1

Related Object Classes: unit

units_on_coefficient

Coefficient of a units_on variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_on_cost

Objective function coefficient on units_on. An idling cost, for example

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_margin

Margin by which units_on variable can differ from the value in the previous window during non_anticipativity_time.

Default value: nothing

Related Object Classes: unit

units_on_non_anticipativity_time

Period of time where the value of the units_on variable has to be fixed to the result from the previous window.

Default value: nothing

Related Object Classes: unit

units_started_up_coefficient

Coefficient of a units_started_up variable for a custom user_constraint.

Default value: 0.0

Related Relationship Classes: unit__user_constraint

units_unavailable

Represents the number of units out of service

Default value: 0

Related Object Classes: unit

upward_reserve

Identifier for nodes providing upward reserves

Default value: false

Related Object Classes: node

use_connection_intact_flow

Whether to use connection_intact_flow variables, to capture the impact of connection investments on network characteristics via line outage distribution factors (LODF).

Default value: true

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

user_constraint_slack_penalty

A penalty for violating a user constraint.

Default value: nothing

Related Object Classes: user_constraint

version

Current version of the SpineOpt data structure. Modify it at your own risk (but please don't).

Default value: 12

Related Object Classes: settings

vom_cost

Variable operating costs of a unit_flow variable. E.g. EUR/MWh.

Default value: nothing

Related Relationship Classes: unit__from_node and unit__to_node

weight

Weighting factor of the temporal block associated with the objective function

Default value: 1.0

Related Object Classes: temporal_block

weight_relative_to_parents

The weight of the stochastic_scenario in the objective function relative to its parents.

Default value: 1.0

Related Relationship Classes: stochastic_structure__stochastic_scenario

window_duration

The duration of the window in case it differs from roll_forward

Default value: nothing

Related Object Classes: model

window_weight

The weight of the window in the rolling subproblem

Default value: 1

Related Object Classes: model

write_lodf_file

A boolean flag for whether the LODF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

write_mps_file

A selector for writing an .mps file of the model.

Default value: nothing

Uses Parameter Value Lists: write_mps_file_list

Related Object Classes: model

write_ptdf_file

A boolean flag for whether the PTDF values should be written to a results file.

Default value: false

Uses Parameter Value Lists: boolean_value_list

Related Object Classes: model

Relationship Classes

connection__from_node

A flow on a connection from a node.

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

connection__from_node__investment_group

A flow on a connection from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__from_node__user_constraint

A flow on a connection from a node constrained by a user_constraint.

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__investment_group

A connection that belongs in an investment_group.

Related Object Classes: connection and investment_group

connection__investment_stochastic_structure

The stochastic_structure of a connection investment.

Related Object Classes: connection and stochastic_structure

connection__investment_temporal_block

The temporal_block of a connection investment.

Related Object Classes: connection and temporal_block

connection__node__node

A connection acting over two nodes.

Related Object Classes: connection and node

Related Parameters: compression_factor, connection_flow_delay, connection_linepack_constant, fix_ratio_out_in_connection_flow, fixed_pressure_constant_0, fixed_pressure_constant_1, max_ratio_out_in_connection_flow and min_ratio_out_in_connection_flow

connection__to_node

A flow on a connection to a node.

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

connection__to_node__investment_group

A flow on a connection to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__to_node__user_constraint

A flow on a connection to a node constrained by a user_constraint.

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__user_constraint

A connection investment constrained by a user_constraint.

Related Object Classes: connection and user_constraint

Related Parameters: connections_invested_available_coefficient and connections_invested_coefficient

model__default_investment_stochastic_structure

The default stochastic_structure of all investments in the model.

Related Object Classes: model and stochastic_structure

model__default_investment_temporal_block

The default temporal_block of all investments in the model.

Related Object Classes: model and temporal_block

model__default_stochastic_structure

The default stochastic_structure of the model.

Related Object Classes: model and stochastic_structure

model__default_temporal_block

The default temporal_block of the model.

Related Object Classes: model and temporal_block

model__report

A report that should be written for the model.

Related Object Classes: model and report

node__commodity

A commodity for a node. Only a single commodity is permitted per node.

Related Object Classes: commodity and node

node__investment_group

A node that belongs in an investment_group.

Related Object Classes: investment_group and node

node__investment_stochastic_structure

The stochastic_structure of a node storage investment.

Related Object Classes: node and stochastic_structure

node__investment_temporal_block

The temporal_block of a node storage investment.

Related Object Classes: node and temporal_block

node__node

An interaction between two nodes.

Related Object Classes: node

Related Parameters: diff_coeff

node__stochastic_structure

The stochastic_structure of a node. Only one stochastic_structure is permitted per node.

Related Object Classes: node and stochastic_structure

Related Parameters: is_active

node__temporal_block

The temporal_block of a node and the corresponding flow variables.

Related Object Classes: node and temporal_block

Related Parameters: cyclic_condition and is_active

node__user_constraint

A node state constrained by a user_constraint, or a node demand included in a user_constraint.

Related Object Classes: node and user_constraint

Related Parameters: demand_coefficient, node_state_coefficient, storages_invested_available_coefficient and storages_invested_coefficient

parent_stochastic_scenario__child_stochastic_scenario

A parent-child relationship between two stochastic_scenarios defining the master stochastic directed acyclic graph.

Related Object Classes: stochastic_scenario

report__output

An output that should be included in a report.

Related Object Classes: output and report

Related Parameters: overwrite_results_on_rolling

stage__child_stage

A parent-child relationship between two stages (EXPERIMENTAL).

Related Object Classes: stage

stage__output

An output that should be fixed by a stage in all its children (EXPERIMENTAL).

Related Object Classes: output and stage

Related Parameters: output_resolution

stochastic_structure__stochastic_scenario

A stochastic_scenario that belongs in a stochastic_structure.

Related Object Classes: stochastic_scenario and stochastic_structure

Related Parameters: stochastic_scenario_end and weight_relative_to_parents

unit__commodity

Holds parameters for commodities used by the unit.

Related Object Classes: commodity and unit

Related Parameters: max_cum_in_unit_flow_bound

unit__from_node

A flow on a unit from a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_from_node, min_total_cumulated_unit_flow_from_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

unit__from_node__investment_group

A flow on a unit from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__from_node__user_constraint

A flow on a unit from a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__investment_group

A unit that belongs in an investment_group.

Related Object Classes: investment_group and unit

unit__investment_stochastic_structure

The stochastic_structure of a unit investment.

Related Object Classes: stochastic_structure and unit

The stochastic_structure of a unit investment.

unit__investment_temporal_block

The temporal_block of a unit investment.

Related Object Classes: temporal_block and unit

The temporal_block of a unit investment.

unit__node__node

A unit acting over two nodes.

Related Object Classes: node and unit

Related Parameters: fix_ratio_in_in_unit_flow, fix_ratio_in_out_unit_flow, fix_ratio_out_in_unit_flow, fix_ratio_out_out_unit_flow, fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, fix_units_on_coefficient_out_out, max_ratio_in_in_unit_flow, max_ratio_in_out_unit_flow, max_ratio_out_in_unit_flow, max_ratio_out_out_unit_flow, max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, max_units_on_coefficient_out_out, min_ratio_in_in_unit_flow, min_ratio_in_out_unit_flow, min_ratio_out_in_unit_flow, min_ratio_out_out_unit_flow, min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, min_units_on_coefficient_out_out, unit_idle_heat_rate, unit_incremental_heat_rate and unit_start_flow

unit__to_node

A flow on a unit to a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_shut_down, fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_shut_down, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_to_node, min_total_cumulated_unit_flow_to_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

unit__to_node__investment_group

A flow on a unit to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__to_node__user_constraint

A flow on a unit to a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__user_constraint

A unit commitment constrained by a user_constraint.

Related Object Classes: unit and user_constraint

Related Parameters: units_invested_available_coefficient, units_invested_coefficient, units_on_coefficient and units_started_up_coefficient

units_on__stochastic_structure

The stochastic_structure of a unit commitment. Only one stochastic_structure is permitted per unit.

Related Object Classes: stochastic_structure and unit

Related Parameters: is_active

units_on__temporal_block

The temporal_block of a unit commitment.

Related Object Classes: temporal_block and unit

Related Parameters: is_active

+Relationship Classes · SpineOpt.jl

Relationship Classes

connection__from_node

A flow on a connection from a node.

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

A flow on a connection from a node.

connection__from_node__investment_group

A flow on a connection from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__from_node__user_constraint

A flow on a connection from a node constrained by a user_constraint.

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__investment_group

A connection that belongs in an investment_group.

Related Object Classes: connection and investment_group

connection__investment_stochastic_structure

The stochastic_structure of a connection investment.

Related Object Classes: connection and stochastic_structure

The stochastic_structure of a connection investment.

connection__investment_temporal_block

The temporal_block of a connection investment.

Related Object Classes: connection and temporal_block

The temporal_block of a connection investment.

connection__node__node

A connection acting over two nodes.

Related Object Classes: connection and node

Related Parameters: compression_factor, connection_flow_delay, connection_linepack_constant, fix_ratio_out_in_connection_flow, fixed_pressure_constant_0, fixed_pressure_constant_1, max_ratio_out_in_connection_flow and min_ratio_out_in_connection_flow

A connection acting over two nodes.

connection__to_node

A flow on a connection to a node .

Related Object Classes: connection and node

Related Parameters: connection_capacity, connection_conv_cap_to_flow, connection_emergency_capacity, connection_flow_cost, connection_flow_non_anticipativity_margin, connection_flow_non_anticipativity_time, connection_intact_flow_non_anticipativity_margin, connection_intact_flow_non_anticipativity_time, fix_binary_gas_connection_flow, fix_connection_flow, fix_connection_intact_flow, graph_view_position, initial_binary_gas_connection_flow, initial_connection_flow and initial_connection_intact_flow

A flow on a connection to a node .

connection__to_node__investment_group

A flow on a connection to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: connection, investment_group and node

connection__to_node__user_constraint

A flow on a connection to a node constrained by a `user_constraint

Related Object Classes: connection, node and user_constraint

Related Parameters: connection_flow_coefficient

connection__user_constraint

A connection investment constrained by a user_constraint.

Related Object Classes: connection and user_constraint

Related Parameters: connections_invested_available_coefficient and connections_invested_coefficient

model__default_investment_stochastic_structure

The default stochastic_structure of all investments in the model.

Related Object Classes: model and stochastic_structure

The default stochastic_structure of all investments in the model.

model__default_investment_temporal_block

The default temporal_block of all investments in the model.

Related Object Classes: model and temporal_block

The default temporal_block of all investments in the model.

model__default_stochastic_structure

The default stochastic_structure of the `model.

Related Object Classes: model and stochastic_structure

The default stochastic_structure of the `model.

model__default_temporal_block

The default temporal_block of the model.

Related Object Classes: model and temporal_block

The default temporal_block of the model.

model__report

A report that should be written for the model.

Related Object Classes: model and report

A report that should be written for the model.

node__commodity

A commodity for a node. Only a single commodity is permitted per node.

Related Object Classes: commodity and node

A commodity for a node. Only a single commodity is permitted per node.

node__investment_group

A node that belongs in an investment_group.

Related Object Classes: investment_group and node

node__investment_stochastic_structure

The stochastic_structure of a node storage investment.

Related Object Classes: node and stochastic_structure

The stochastic_structure of a node storage investment.

node__investment_temporal_block

The temporal_block of a node storage investment.

Related Object Classes: node and temporal_block

The temporal_block of a node storage investment.

node__node

An interaction between two nodes.

Related Object Classes: node

Related Parameters: diff_coeff

An interaction between two nodes.

node__stochastic_structure

The stochastic_structure of a node. Only one stochastic_structure is permitted per node.

Related Object Classes: node and stochastic_structure

Related Parameters: is_active

The stochastic_structure of a node. Only one stochastic_structure is permitted per node.

node__temporal_block

The temporal_block of a node and the corresponding flow variables.

Related Object Classes: node and temporal_block

Related Parameters: cyclic_condition and is_active

The temporal_block of a node and the corresponding flow variables.

node__user_constraint

A node state constrained by a user_constraint, or a node demand included in a user_constraint.

Related Object Classes: node and user_constraint

Related Parameters: demand_coefficient, node_state_coefficient, storages_invested_available_coefficient and storages_invested_coefficient

parent_stochastic_scenario__child_stochastic_scenario

A parent-child relationship between two stochastic_scenarios defining the master stochastic directed acyclic graph.

Related Object Classes: stochastic_scenario

A parent-child relationship between two stochastic_scenarios defining the master stochastic directed acyclic graph.

report__output

An output that should be included in a report.

Related Object Classes: output and report

Related Parameters: overwrite_results_on_rolling

An output that should be included in a report.

stage__child_stage

A parent-child relationship between two stages (EXPERIMENTAL).

Related Object Classes: stage

stage__output

An output that should be fixed by a stage in all its children (EXPERIMENTAL).

Related Object Classes: output and stage

Related Parameters: output_resolution

stochastic_structure__stochastic_scenario

A stochastic_scenario that belongs in a stochastic_structure.

Related Object Classes: stochastic_scenario and stochastic_structure

Related Parameters: stochastic_scenario_end and weight_relative_to_parents

A stochastic_scenario that belongs in a stochastic_structure.

unit__commodity

Holds parameters for commodities used by the unit.

Related Object Classes: commodity and unit

Related Parameters: max_cum_in_unit_flow_bound

Holds parameters for commodities used by the unit.

unit__from_node

A flow on a unit from a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_from_node, min_total_cumulated_unit_flow_from_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

A flow on a unit from a node.

unit__from_node__investment_group

A flow on a unit from a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__from_node__user_constraint

A flow on a unit from a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__investment_group

A unit that belongs in an investment_group.

Related Object Classes: investment_group and unit

unit__investment_stochastic_structure

The stochastic_structure of a unit investment.

Related Object Classes: stochastic_structure and unit

The stochastic_structure of a unit investment.

unit__investment_temporal_block

The temporal_block of a unit investment.

Related Object Classes: temporal_block and unit

The temporal_block of a unit investment.

unit__node__node

A unit acting over two nodes.

Related Object Classes: node and unit

Related Parameters: fix_ratio_in_in_unit_flow, fix_ratio_in_out_unit_flow, fix_ratio_out_in_unit_flow, fix_ratio_out_out_unit_flow, fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, fix_units_on_coefficient_out_out, max_ratio_in_in_unit_flow, max_ratio_in_out_unit_flow, max_ratio_out_in_unit_flow, max_ratio_out_out_unit_flow, max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, max_units_on_coefficient_out_out, min_ratio_in_in_unit_flow, min_ratio_in_out_unit_flow, min_ratio_out_in_unit_flow, min_ratio_out_out_unit_flow, min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, min_units_on_coefficient_out_out, unit_idle_heat_rate, unit_incremental_heat_rate and unit_start_flow

A unit acting over two nodes.

unit__to_node

A flow on a unit to a node.

Related Object Classes: node and unit

Related Parameters: fix_nonspin_units_shut_down, fix_nonspin_units_started_up, fix_unit_flow_op, fix_unit_flow, fuel_cost, graph_view_position, initial_nonspin_units_shut_down, initial_nonspin_units_started_up, initial_unit_flow_op, initial_unit_flow, is_active, max_total_cumulated_unit_flow_to_node, min_total_cumulated_unit_flow_to_node, min_unit_flow, minimum_operating_point, operating_points, ordered_unit_flow_op, ramp_down_limit, ramp_up_limit, reserve_procurement_cost, shut_down_limit, start_up_limit, unit_capacity, unit_conv_cap_to_flow, unit_flow_non_anticipativity_margin, unit_flow_non_anticipativity_time and vom_cost

A flow on a unit to a node.

unit__to_node__investment_group

A flow on a unit to a node whose capacity should be counted in the capacity invested available of an investment_group.

Related Object Classes: investment_group, node and unit

unit__to_node__user_constraint

A flow on a unit to a node constrained by a user_constraint.

Related Object Classes: node, unit and user_constraint

Related Parameters: graph_view_position and unit_flow_coefficient

unit__user_constraint

A unit commitment constrained by a user_constraint.

Related Object Classes: unit and user_constraint

Related Parameters: units_invested_available_coefficient, units_invested_coefficient, units_on_coefficient and units_started_up_coefficient

units_on__stochastic_structure

The stochastic_structure of a unit commitment. Only one stochastic_structure is permitted per unit.

Related Object Classes: stochastic_structure and unit

Related Parameters: is_active

The stochastic_structure of a unit commitment. Only one stochastic_structure is permitted per unit.

units_on__temporal_block

The temporal_block of a unit commitment.

Related Object Classes: temporal_block and unit

Related Parameters: is_active

The temporal_block of a unit commitment.

diff --git a/dev/concept_reference/_example/index.html b/dev/concept_reference/_example/index.html index c32de6a69d..bf934f603d 100644 --- a/dev/concept_reference/_example/index.html +++ b/dev/concept_reference/_example/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

AN EXAMPLE DESCRIPTION FOR HOW THE AUTOGENERATION OF CONCEPT REFERENCE BASED ON SPINEOPT TEMPLATE WORKS

References to other sections, e.g. node, are handled like this. Don't use grave accents around the reference name, as they break the reference! Grave accents in Documenter.jl refer to docstrings in the code instead of sections in the documentation.

+- · SpineOpt.jl

AN EXAMPLE DESCRIPTION FOR HOW THE AUTOGENERATION OF CONCEPT REFERENCE BASED ON SPINEOPT TEMPLATE WORKS

References to other sections, e.g. node, are handled like this. Don't use grave accents around the reference name, as they break the reference! Grave accents in Documenter.jl refer to docstrings in the code instead of sections in the documentation.

diff --git a/dev/concept_reference/balance_type/index.html b/dev/concept_reference/balance_type/index.html index 251beb1768..ed6b799878 100644 --- a/dev/concept_reference/balance_type/index.html +++ b/dev/concept_reference/balance_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The balance_type parameter determines whether or not a node needs to be balanced, in the classical sense that the sum of flows entering the node is equal to the sum of flows leaving it.

The values balance_type_node (the default) and balance_type_group mean that the node is always balanced. The only exception is if the node belongs to a group that itself has balance_type equal to balance_type_group. The value balance_type_none means that the node doesn't need to be balanced.

+- · SpineOpt.jl

The balance_type parameter determines whether or not a node needs to be balanced, in the classical sense that the sum of flows entering the node is equal to the sum of flows leaving it.

The values balance_type_node (the default) and balance_type_group mean that the node is always balanced. The only exception is if the node belongs to a group that itself has balance_type equal to balance_type_group. The value balance_type_none means that the node doesn't need to be balanced.
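The classical balance described above can be sketched as follows. This is an illustrative check, not SpineOpt code; the names `flows_in` and `flows_out` are assumptions for the example.

```python
# Hedged sketch (not SpineOpt internals): the classical nodal balance that
# balance_type_node enforces -- the sum of flows entering the node equals
# the sum of flows leaving it.

def is_balanced(flows_in, flows_out, tol=1e-9):
    """Return True if sum of entering flows equals sum of leaving flows."""
    return abs(sum(flows_in) - sum(flows_out)) <= tol

# A balanced node: 50 + 50 entering, 30 + 70 leaving.
print(is_balanced([50.0, 50.0], [30.0, 70.0]))  # True
```

With balance_type_none, no such equation would be generated for the node at all.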

diff --git a/dev/concept_reference/balance_type_list/index.html b/dev/concept_reference/balance_type_list/index.html index bcfc7f5380..0c1d0d0645 100644 --- a/dev/concept_reference/balance_type_list/index.html +++ b/dev/concept_reference/balance_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/big_m/index.html b/dev/concept_reference/big_m/index.html index f2e01a27ac..71e6de16d1 100644 --- a/dev/concept_reference/big_m/index.html +++ b/dev/concept_reference/big_m/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The big_m parameter is a property of the model object. The bigM method is commonly used to recast non-linear constraints into a mixed-integer reformulation. In SpineOpt, the bigM formulation is used to describe the sign of gas flow through a connection (if a pressure driven gas transfer model is used). The big_m parameter in combination with the binary variable binary_gas_connection_flow is used in the constraints on the gas flow capacity and the fixed node pressure points, and ensures that the average flow through a pipeline is only in one direction and is constrained by the fixed pressure points from the outer approximation of the Weymouth equation. See Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling for reference.

+- · SpineOpt.jl

The big_m parameter is a property of the model object. The bigM method is commonly used to recast non-linear constraints into a mixed-integer reformulation. In SpineOpt, the bigM formulation is used to describe the sign of gas flow through a connection (if a pressure driven gas transfer model is used). The big_m parameter in combination with the binary variable binary_gas_connection_flow is used in the constraints on the gas flow capacity and the fixed node pressure points, and ensures that the average flow through a pipeline is only in one direction and is constrained by the fixed pressure points from the outer approximation of the Weymouth equation. See Schwele - Coordination of Power and Natural Gas Systems: Convexification Approaches for Linepack Modeling for reference.
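The general big-M pattern behind this can be sketched as a pair of inequalities in which a binary variable switches one bound off at a time. This is a minimal illustration of the technique, not the actual SpineOpt constraint; the function and variable names are hypothetical.

```python
# Illustrative big-M sketch: a binary y selects the flow direction, and
# big_m renders the opposite bound non-binding.
#   flow <= big_m * y        (upper bound is slack when y = 1)
#   flow >= -big_m * (1 - y) (lower bound is slack when y = 0)
# With y = 1 the flow must be non-negative; with y = 0 non-positive.

def direction_constraints_hold(flow, y, big_m):
    return flow <= big_m * y and flow >= -big_m * (1 - y)

print(direction_constraints_hold(120.0, 1, 1e4))  # True: forward flow
print(direction_constraints_hold(120.0, 0, 1e4))  # False: wrong direction
```

Choosing big_m large enough to never cut off feasible flows, yet as small as possible for numerical stability, is the usual trade-off with this formulation.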

diff --git a/dev/concept_reference/block_end/index.html b/dev/concept_reference/block_end/index.html index 2a8f1d9786..52139bfe13 100644 --- a/dev/concept_reference/block_end/index.html +++ b/dev/concept_reference/block_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Indicates the end of this temporal block. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a DateTime value is chosen, this is directly the end of the optimization for this temporal block. In a single solve optimization, a combination of block_start and block_end can easily be used to run optimizations that cover only part of the model horizon. Multiple temporal_block objects can then be used to create optimizations for disconnected time periods, which is commonly used in the method of representative days. The default value coincides with the model_end.

rolling window optimization To create a temporal block that is rolling along with the optimization window, a rolling temporal block, a duration value should be chosen. The block_end parameter will in this case determine the size of the optimization window, with respect to the start of each optimization window. If multiple temporal blocks with different block_end parameters exist, the maximum value will determine the size of the optimization window. Note, this is different from the roll_forward parameter, which determines how much the window moves forward after each optimization. For more info, see One single temporal_block. The default value is equal to the roll_forward parameter.

+- · SpineOpt.jl

Indicates the end of this temporal block. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a DateTime value is chosen, this is directly the end of the optimization for this temporal block. In a single solve optimization, a combination of block_start and block_end can easily be used to run optimizations that cover only part of the model horizon. Multiple temporal_block objects can then be used to create optimizations for disconnected time periods, which is commonly used in the method of representative days. The default value coincides with the model_end.

rolling window optimization To create a temporal block that is rolling along with the optimization window, a rolling temporal block, a duration value should be chosen. The block_end parameter will in this case determine the size of the optimization window, with respect to the start of each optimization window. If multiple temporal blocks with different block_end parameters exist, the maximum value will determine the size of the optimization window. Note, this is different from the roll_forward parameter, which determines how much the window moves forward after each optimization. For more info, see One single temporal_block. The default value is equal to the roll_forward parameter.
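The rolling-window arithmetic above can be sketched as follows. This is an illustrative reading of the rules, not SpineOpt's scheduler: the window size is taken as the maximum block_end duration, and roll_forward advances the window start after each solve.

```python
from datetime import datetime, timedelta

# Hedged sketch: enumerate optimization windows for a rolling horizon.
def optimization_windows(model_start, model_end, block_ends, roll_forward):
    window_size = max(block_ends)  # the largest block_end sets the window
    windows, start = [], model_start
    while start < model_end:
        windows.append((start, start + window_size))
        start += roll_forward  # roll_forward moves the window, not its size
    return windows

wins = optimization_windows(
    datetime(2030, 1, 1), datetime(2030, 1, 3),
    [timedelta(hours=24), timedelta(hours=36)], timedelta(hours=24),
)
print(len(wins))  # 2
```

Here each window spans 36 hours (the larger block_end) while consecutive windows start 24 hours apart, so they overlap by 12 hours of look-ahead.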

diff --git a/dev/concept_reference/block_start/index.html b/dev/concept_reference/block_start/index.html index c09990df39..2dc8f4a008 100644 --- a/dev/concept_reference/block_start/index.html +++ b/dev/concept_reference/block_start/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Indicates the start of this temporal block. The main use of this parameter is to create an offset from the model start. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a Date time value is chosen, this is directly the start of the optimization for this temporal block. When a duration is chosen, it is added to the model_start to obtain the start of this temporal_block. In the case of a duration, the chosen value directly marks the offset of the optimization with respect to the model_start. The default value for this parameter is the model_start.

rolling window optimization To create a temporal block that is rolling along with the optimization window, a rolling temporal block, a duration value should be chosen. The temporal block_start will again mark the offset of the optimization start but now with respect to the start of each optimization window.

+- · SpineOpt.jl

Indicates the start of this temporal block. The main use of this parameter is to create an offset from the model start. The default value is equal to a duration of 0. It is useful to distinguish here between two cases: a single solve, or a rolling window optimization.

single solve When a Date time value is chosen, this is directly the start of the optimization for this temporal block. When a duration is chosen, it is added to the model_start to obtain the start of this temporal_block. In the case of a duration, the chosen value directly marks the offset of the optimization with respect to the model_start. The default value for this parameter is the model_start.

rolling window optimization To create a temporal block that is rolling along with the optimization window, a rolling temporal block, a duration value should be chosen. The temporal block_start will again mark the offset of the optimization start but now with respect to the start of each optimization window.

diff --git a/dev/concept_reference/boolean_value_list/index.html b/dev/concept_reference/boolean_value_list/index.html index cfd2d97769..0315a605a1 100644 --- a/dev/concept_reference/boolean_value_list/index.html +++ b/dev/concept_reference/boolean_value_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A list of boolean values (True or False).

+- · SpineOpt.jl

A list of boolean values (True or False).

diff --git a/dev/concept_reference/candidate_connections/index.html b/dev/concept_reference/candidate_connections/index.html index 0fb3a7f673..7f3161a66f 100644 --- a/dev/concept_reference/candidate_connections/index.html +++ b/dev/concept_reference/candidate_connections/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/candidate_storages/index.html b/dev/concept_reference/candidate_storages/index.html index 67fdfea0b3..08b7ec93e6 100644 --- a/dev/concept_reference/candidate_storages/index.html +++ b/dev/concept_reference/candidate_storages/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investments problem candidate_storages determines the upper bound on the storages investment decision variable in constraint storages_invested_available. In constraint node_state_cap the maximum node state will be the product of the storages investment variable and node_state_cap. Thus, the interpretation of candidate_storages depends on storage_investment_variable_type which determines the investment decision variable type. If storage_investment_variable_type is integer or binary, then candidate_storages represents the maximum number of discrete storages of size node_state_cap that may be invested in at the corresponding node. If storage_investment_variable_type is continuous, candidate_storages is more analogous to a maximum storage capacity with node_state_cap being analogous to a scaling parameter.

Note that candidate_storages is the main investment switch and setting a value other than none/nothing triggers the creation of the investment variable for storages at the corresponding node. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful when inspecting the related dual variables, which yield the marginal value of this resource.

See also Investment Optimization and storage_investment_variable_type

+- · SpineOpt.jl

Within an investments problem candidate_storages determines the upper bound on the storages investment decision variable in constraint storages_invested_available. In constraint node_state_cap the maximum node state will be the product of the storages investment variable and node_state_cap. Thus, the interpretation of candidate_storages depends on storage_investment_variable_type which determines the investment decision variable type. If storage_investment_variable_type is integer or binary, then candidate_storages represents the maximum number of discrete storages of size node_state_cap that may be invested in at the corresponding node. If storage_investment_variable_type is continuous, candidate_storages is more analogous to a maximum storage capacity with node_state_cap being analogous to a scaling parameter.

Note that candidate_storages is the main investment switch and setting a value other than none/nothing triggers the creation of the investment variable for storages at the corresponding node. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful when inspecting the related dual variables, which yield the marginal value of this resource.

See also Investment Optimization and storage_investment_variable_type
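The bound structure described above can be sketched numerically. This is an illustration of the relation between the parameters, not SpineOpt code; the function name is hypothetical.

```python
# Hedged sketch: the storages investment variable is bounded by
# candidate_storages, and the node state is bounded by the invested
# storages times node_state_cap.

def node_state_upper_bound(storages_invested_available, candidate_storages,
                           node_state_cap):
    assert 0 <= storages_invested_available <= candidate_storages
    return storages_invested_available * node_state_cap

# Two discrete 10 MWh storages invested out of 3 candidates:
print(node_state_upper_bound(2, 3, 10.0))  # 20.0
```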

diff --git a/dev/concept_reference/candidate_units/index.html b/dev/concept_reference/candidate_units/index.html index 7babc358bc..c5a0baa516 100644 --- a/dev/concept_reference/candidate_units/index.html +++ b/dev/concept_reference/candidate_units/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investments problem candidate_units determines the upper bound on the unit investment decision variable in constraint units_invested_available. In constraint unit_flow_capacity the maximum unit_flow will be the product of the units_invested_available and the corresponding unit_capacity. Thus, the interpretation of candidate_units depends on unit_investment_variable_type which determines the unit investment decision variable type. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a maximum capacity.

Note that candidate_units is the main investment switch and setting a value other than none/nothing triggers the creation of the investment variable for the unit. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful when inspecting the related dual variables, which yield the marginal value of this resource.

See also Investment Optimization and unit_investment_variable_type

+- · SpineOpt.jl

Within an investments problem candidate_units determines the upper bound on the unit investment decision variable in constraint units_invested_available. In constraint unit_flow_capacity the maximum unit_flow will be the product of the units_invested_available and the corresponding unit_capacity. Thus, the interpretation of candidate_units depends on unit_investment_variable_type which determines the unit investment decision variable type. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a maximum capacity.

Note that candidate_units is the main investment switch and setting a value other than none/nothing triggers the creation of the investment variable for the unit. A value of zero will still trigger the variable creation, but its value will be fixed to zero. This can be useful when inspecting the related dual variables, which yield the marginal value of this resource.

See also Investment Optimization and unit_investment_variable_type
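The analogous flow bound for units can be sketched in the same way. Again an illustration only, with a hypothetical function name, not the SpineOpt constraint itself.

```python
# Hedged sketch: the maximum unit_flow is the number of invested units
# available times unit_capacity, and the investment variable itself is
# bounded above by candidate_units.

def max_unit_flow(units_invested_available, candidate_units, unit_capacity):
    assert 0 <= units_invested_available <= candidate_units
    return units_invested_available * unit_capacity

print(max_unit_flow(1, 2, 400.0))  # 400.0
```

With a continuous unit_investment_variable_type, a fractional value such as 0.5 invested units would bound the flow at half of unit_capacity.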

diff --git a/dev/concept_reference/commodity/index.html b/dev/concept_reference/commodity/index.html index 68d607bbe0..d094ab261e 100644 --- a/dev/concept_reference/commodity/index.html +++ b/dev/concept_reference/commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Commodities correspond to the type of energy traded. When associated with a node through the node__commodity relationship, a specific form of energy, i.e. commodity, can be associated with a specific location. Furthermore, by linking commodities with units, it is possible to track the flows of a certain commodity and impose limitations on the use of a certain commodity (See also max_cum_in_unit_flow_bound). For the representation of specific commodity physics, related to e.g. the representation of the electric network, designated parameters can be defined to enforce commodity specific behaviour. (See also commodity_physics)

+- · SpineOpt.jl

Commodities correspond to the type of energy traded. When associated with a node through the node__commodity relationship, a specific form of energy, i.e. commodity, can be associated with a specific location. Furthermore, by linking commodities with units, it is possible to track the flows of a certain commodity and impose limitations on the use of a certain commodity (See also max_cum_in_unit_flow_bound). For the representation of specific commodity physics, related to e.g. the representation of the electric network, designated parameters can be defined to enforce commodity specific behaviour. (See also commodity_physics)

diff --git a/dev/concept_reference/commodity_lodf_tolerance/index.html b/dev/concept_reference/commodity_lodf_tolerance/index.html index 84c1910954..9ff69a596f 100644 --- a/dev/concept_reference/commodity_lodf_tolerance/index.html +++ b/dev/concept_reference/commodity_lodf_tolerance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Given two connections, the line outage distribution factor (LODF) is the fraction of the pre-contingency flow on the first one, that will flow on the second after the contingency. commodity_lodf_tolerance is the minimum absolute value of the LODF that is considered meaningful. Any value below this tolerance (in absolute value) will be treated as zero.

The LODFs are used to model contingencies on some connections and their impact on some other connections. To model contingencies on a connection, set connection_contingency to true; to study the impact of such contingencies on another connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).

+- · SpineOpt.jl

Given two connections, the line outage distribution factor (LODF) is the fraction of the pre-contingency flow on the first one, that will flow on the second after the contingency. commodity_lodf_tolerance is the minimum absolute value of the LODF that is considered meaningful. Any value below this tolerance (in absolute value) will be treated as zero.

The LODFs are used to model contingencies on some connections and their impact on some other connections. To model contingencies on a connection, set connection_contingency to true; to study the impact of such contingencies on another connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).
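The tolerance rule above can be sketched as a simple filtering step. This is illustrative only, not SpineOpt internals; the function and the connection names are assumptions for the example.

```python
# Hedged sketch: any LODF whose absolute value falls below
# commodity_lodf_tolerance is treated as zero (the contingency's impact
# on that monitored connection is ignored).

def apply_lodf_tolerance(lodf_values, tolerance):
    return {pair: (v if abs(v) >= tolerance else 0.0)
            for pair, v in lodf_values.items()}

lodfs = {("line_a", "line_b"): 0.42, ("line_a", "line_c"): 1e-6}
print(apply_lodf_tolerance(lodfs, 0.05))
# {('line_a', 'line_b'): 0.42, ('line_a', 'line_c'): 0.0}
```

Raising the tolerance thus prunes contingency constraints for connection pairs whose interaction is negligible, at the cost of some accuracy.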

diff --git a/dev/concept_reference/commodity_physics/index.html b/dev/concept_reference/commodity_physics/index.html index 9351e3cb5a..3ee4bb1ae8 100644 --- a/dev/concept_reference/commodity_physics/index.html +++ b/dev/concept_reference/commodity_physics/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter determines the specific formulation used to carry out dc load flow within a model. To enable power transfer distribution factor (ptdf) based load flow for a network of nodes and connections, all nodes must be related to a commodity with commodity_physics set to commodity_physics_ptdf. To enable security constrained unit commitment based on ptdfs and line outage distribution factors (lodf), all nodes must be related to a commodity with commodity_physics set to commodity_physics_lodf.

See also powerflow

+- · SpineOpt.jl

This parameter determines the specific formulation used to carry out dc load flow within a model. To enable power transfer distribution factor (ptdf) based load flow for a network of nodes and connections, all nodes must be related to a commodity with commodity_physics set to commodity_physics_ptdf. To enable security constrained unit commitment based on ptdfs and line outage distribution factors (lodf), all nodes must be related to a commodity with commodity_physics set to commodity_physics_lodf.

See also powerflow

diff --git a/dev/concept_reference/commodity_physics_duration/index.html b/dev/concept_reference/commodity_physics_duration/index.html index 1ac133aadf..66269a0f53 100644 --- a/dev/concept_reference/commodity_physics_duration/index.html +++ b/dev/concept_reference/commodity_physics_duration/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter determines the duration, relative to the start of the optimisation window, over which the physics determined by commodity_physics should be applied. This is useful when the optimisation window includes a long look-ahead where the detailed physics are not necessary. In this case one can set commodity_physics_duration to a shorter value to reduce problem size and improve performance.

See also powerflow

+- · SpineOpt.jl

This parameter determines the duration, relative to the start of the optimisation window, over which the physics determined by commodity_physics should be applied. This is useful when the optimisation window includes a long look-ahead where the detailed physics are not necessary. In this case one can set commodity_physics_duration to a shorter value to reduce problem size and improve performance.

See also powerflow

diff --git a/dev/concept_reference/commodity_physics_list/index.html b/dev/concept_reference/commodity_physics_list/index.html index 6a733da60f..d4741a3522 100644 --- a/dev/concept_reference/commodity_physics_list/index.html +++ b/dev/concept_reference/commodity_physics_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/commodity_ptdf_threshold/index.html b/dev/concept_reference/commodity_ptdf_threshold/index.html index e37cc98834..63a93d0cff 100644 --- a/dev/concept_reference/commodity_ptdf_threshold/index.html +++ b/dev/concept_reference/commodity_ptdf_threshold/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Given a connection and a node, the power transfer distribution factor (PTDF) is the fraction of the flow injected into the node that will flow on the connection. commodity_ptdf_threshold is the minimum absolute value of the PTDF that is considered meaningful. Any value below this threshold (in absolute value) will be treated as zero.

The PTDFs are used to model DC power flow on certain connections. To model DC power flow on a connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to either commodity_physics_ptdf or commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).

+- · SpineOpt.jl

Given a connection and a node, the power transfer distribution factor (PTDF) is the fraction of the flow injected into the node that will flow on the connection. commodity_ptdf_threshold is the minimum absolute value of the PTDF that is considered meaningful. Any value below this threshold (in absolute value) will be treated as zero.

The PTDFs are used to model DC power flow on certain connections. To model DC power flow on a connection, set connection_monitored to true.

In addition, define a commodity with commodity_physics set to either commodity_physics_ptdf or commodity_physics_lodf, and associate that commodity (via node__commodity) to both connections' nodes (given by connection__to_node and connection__from_node).
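The PTDF relation and its thresholding can be sketched numerically. This is an illustration under assumed names, not SpineOpt code.

```python
# Hedged sketch: the flow induced on a connection is the PTDF times the
# nodal injection, with sub-threshold PTDFs rounded to zero as described
# for commodity_ptdf_threshold.

def ptdf_flow(ptdf, injection, threshold):
    effective_ptdf = ptdf if abs(ptdf) >= threshold else 0.0
    return effective_ptdf * injection

print(ptdf_flow(0.25, 100.0, 0.001))    # 25.0
print(ptdf_flow(0.0005, 100.0, 0.001))  # 0.0
```

In the first case a quarter of the 100 MW injection flows on the connection; in the second the PTDF is below the threshold and contributes nothing.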

diff --git a/dev/concept_reference/compression_factor/index.html b/dev/concept_reference/compression_factor/index.html index 86d04446fd..b19f19850c 100644 --- a/dev/concept_reference/compression_factor/index.html +++ b/dev/concept_reference/compression_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is specific to the use of pressure driven gas transfer. To represent a compression between two nodes in the gas network, the compression_factor can be defined. This factor ensures that the pressure of a node is equal to (or lower than) the pressure at the sending node times the compression_factor. The relationship connection__node__node that hosts this parameter should be defined in a way that the first node represents the origin node and the second node represents the compressed node.

+- · SpineOpt.jl

This parameter is specific to the use of pressure driven gas transfer. To represent a compression between two nodes in the gas network, the compression_factor can be defined. This factor ensures that the pressure of a node is equal to (or lower than) the pressure at the sending node times the compression_factor. The relationship connection__node__node that hosts this parameter should be defined in a way that the first node represents the origin node and the second node represents the compressed node.
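The pressure relation above can be sketched as a feasibility check. This is illustrative only, with hypothetical names, not the SpineOpt constraint.

```python
# Hedged sketch: with the connection__node__node relationship ordered
# (origin, compressed), the compressed node's pressure may not exceed
# compression_factor times the origin node's pressure.

def compression_ok(pressure_origin, pressure_compressed, compression_factor):
    return pressure_compressed <= compression_factor * pressure_origin

print(compression_ok(60.0, 66.0, 1.1))  # True: 66 <= 1.1 * 60
print(compression_ok(60.0, 70.0, 1.1))  # False
```

A compression_factor of 1 would thus reduce the relation to the plain pressure inequality between two connected nodes.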

diff --git a/dev/concept_reference/connection/index.html b/dev/concept_reference/connection/index.html index 990cb66bd9..615bf1fe44 100644 --- a/dev/concept_reference/connection/index.html +++ b/dev/concept_reference/connection/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A connection represents a transfer of one commodity over space. For example, an electricity transmission line, a gas pipe, a river branch, can be modelled using a connection.

A connection always takes commodities from one or more nodes, and releases them to one or more (possibly the same) nodes. The former are specified through the connection__from_node relationship, and the latter through connection__to_node. Every connection inherits the temporal and stochastic structures from the associated nodes. The model will generate connection_flow variables for every combination of connection, node, direction (from node or to node), time slice, and stochastic scenario, according to the above relationships.

The operation of the connection is specified through a number of parameter values. For example, the capacity of the connection, as the maximum amount of energy that can enter or leave it, is given by connection_capacity. The conversion ratio of input to output can be specified using any of fix_ratio_out_in_connection_flow, max_ratio_out_in_connection_flow, and min_ratio_out_in_connection_flow parameters in the connection__node__node relationship. The delay on a connection, as the time it takes for the energy to go from one end to the other, is given by connection_flow_delay.
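As a numerical illustration of the ratio and delay parameters above (a hypothetical sketch, not SpineOpt code; the function name and values are invented):

```python
# With fix_ratio_out_in_connection_flow and a connection_flow_delay of
# `delay` time steps, the output flow at step t is tied to the input flow
# `delay` steps earlier:  flow_out[t] = ratio * flow_in[t - delay].
def output_flow(input_flow, fix_ratio_out_in, delay, t):
    """Output of the connection at step t (zero before the first delayed input)."""
    return fix_ratio_out_in * input_flow[t - delay] if t >= delay else 0.0

flow_in = [100.0, 100.0, 80.0]
# 2% losses (ratio 0.98) and a one-step delay: the output at t=2
# follows the input at t=1.
out = output_flow(flow_in, fix_ratio_out_in=0.98, delay=1, t=2)  # ~98.0
```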

diff --git a/dev/concept_reference/connection__from_node/index.html b/dev/concept_reference/connection__from_node/index.html index 08abdb4c8a..f35328c052 100644 --- a/dev/concept_reference/connection__from_node/index.html +++ b/dev/concept_reference/connection__from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__from_node is a two-dimensional relationship between a connection and a node, and implies a connection_flow to the connection from the node. Specifying such a relationship will give rise to a connection_flow variable with indices connection=connection, node=node, direction=:from_node. Parameters defined on this relationship will generally apply to this specific flow variable. For example, connection_capacity will apply only to this specific flow variable, unless the connection parameter connection_type is specified.

diff --git a/dev/concept_reference/connection__from_node__unit_constraint/index.html b/dev/concept_reference/connection__from_node__unit_constraint/index.html index bbcd6c2b70..d39a592c6b 100644 --- a/dev/concept_reference/connection__from_node__unit_constraint/index.html +++ b/dev/concept_reference/connection__from_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__from_node__user_constraint is a three-dimensional relationship between a connection, a node and a user_constraint. The relationship specifies that the connection_flow variable to the specified connection from the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific connection_flow variable. For example, the parameter connection_flow_coefficient defined on connection__from_node__user_constraint represents the coefficient of the specific connection_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/connection__investment_stochastic_structure/index.html b/dev/concept_reference/connection__investment_stochastic_structure/index.html index 50ded0df10..c7c5c99512 100644 --- a/dev/concept_reference/connection__investment_stochastic_structure/index.html +++ b/dev/concept_reference/connection__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection__investment_temporal_block/index.html b/dev/concept_reference/connection__investment_temporal_block/index.html index 91e6de7021..2392986afd 100644 --- a/dev/concept_reference/connection__investment_temporal_block/index.html +++ b/dev/concept_reference/connection__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__investment_temporal_block is a two-dimensional relationship between a connection and a temporal_block. This relationship defines the temporal resolution and scope of a connection's investment decision. Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no connection__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if connection__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified connection.

See also Investment Optimization

diff --git a/dev/concept_reference/connection__node__node/index.html b/dev/concept_reference/connection__node__node/index.html index 884da3783b..8107fb1189 100644 --- a/dev/concept_reference/connection__node__node/index.html +++ b/dev/concept_reference/connection__node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__node__node is a three-dimensional relationship between a connection, a node (node 1) and another node (node 2). connection__node__node implies a conversion and a direction with respect to that conversion: node 1 is assumed to be the input node and node 2 the output node. For example, the fix_ratio_out_in_connection_flow parameter defined on connection__node__node relates the output connection_flow to node 2 to the input connection_flow from node 1.

diff --git a/dev/concept_reference/connection__to_node/index.html b/dev/concept_reference/connection__to_node/index.html index 05af798ea3..e8c2dc5bf3 100644 --- a/dev/concept_reference/connection__to_node/index.html +++ b/dev/concept_reference/connection__to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__to_node is a two-dimensional relationship between a connection and a node, and implies a connection_flow from the connection to the node. Specifying such a relationship will give rise to a connection_flow variable with indices connection=connection, node=node, direction=:to_node. Parameters defined on this relationship will generally apply to this specific flow variable. For example, connection_capacity will apply only to this specific flow variable, unless the connection parameter connection_type is specified.

diff --git a/dev/concept_reference/connection__to_node__unit_constraint/index.html b/dev/concept_reference/connection__to_node__unit_constraint/index.html index 5d023b767e..5e893bb2d4 100644 --- a/dev/concept_reference/connection__to_node__unit_constraint/index.html +++ b/dev/concept_reference/connection__to_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection__to_node__user_constraint is a three-dimensional relationship between a connection, a node and a user_constraint. The relationship specifies that the connection_flow variable from the specified connection to the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific connection_flow variable. For example, the parameter connection_flow_coefficient defined on connection__to_node__user_constraint represents the coefficient of the specific connection_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/connection_availability_factor/index.html b/dev/concept_reference/connection_availability_factor/index.html index fd2c76aa86..a59a219d43 100644 --- a/dev/concept_reference/connection_availability_factor/index.html +++ b/dev/concept_reference/connection_availability_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To indicate that a connection is only available to a certain extent or at certain times of the optimization, the connection_availability_factor can be used. A typical use case could be an availability time series for a connection with expected outage times. By default, the availability factor is set to 1. The availability is used, among others, in the constraint_connection_flow_capacity.

diff --git a/dev/concept_reference/connection_capacity/index.html b/dev/concept_reference/connection_capacity/index.html index ff81779674..9ca853971f 100644 --- a/dev/concept_reference/connection_capacity/index.html +++ b/dev/concept_reference/connection_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Defines the upper bound on the corresponding connection_flow variable. If the connection is a candidate connection, the effective connection_flow upper bound is the product of the investment variable connections_invested_available and connection_capacity. If ptdf-based DC load flow is enabled, connection_capacity represents the normal rating of a connection (line), while connection_emergency_capacity represents the maximum post-contingency flow.
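The effective bound can be sketched as follows (an illustration, not SpineOpt code; treating connection_availability_factor as a multiplicative factor here is an assumption based on its use in constraint_connection_flow_capacity):

```python
# Effective upper bound on a connection_flow variable:
#   connections_invested_available * availability_factor * connection_capacity
def connection_flow_upper_bound(connection_capacity,
                                connections_invested_available=1,
                                connection_availability_factor=1.0):
    return (connections_invested_available
            * connection_availability_factor
            * connection_capacity)

assert connection_flow_upper_bound(400.0) == 400.0          # existing connection
assert connection_flow_upper_bound(400.0, 0) == 0.0         # candidate, not invested in
assert connection_flow_upper_bound(400.0, 1, 0.5) == 200.0  # 50% available
```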

diff --git a/dev/concept_reference/connection_contingency/index.html b/dev/concept_reference/connection_contingency/index.html index 6a27b27e50..5bd557a067 100644 --- a/dev/concept_reference/connection_contingency/index.html +++ b/dev/concept_reference/connection_contingency/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies that the connection in question is to be included as a contingency when security constrained unit commitment is enabled. When using security constrained unit commitment by setting commodity_physics to commodity_physics_lodf, an N-1 security constraint is created for each monitored line (connection_monitored = true) for each specified contingency (connection_contingency = true).

See also powerflow

diff --git a/dev/concept_reference/connection_conv_cap_to_flow/index.html b/dev/concept_reference/connection_conv_cap_to_flow/index.html index 65890bc0dc..24acc3727e 100644 --- a/dev/concept_reference/connection_conv_cap_to_flow/index.html +++ b/dev/concept_reference/connection_conv_cap_to_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_emergency_capacity/index.html b/dev/concept_reference/connection_emergency_capacity/index.html index 0ebcd273e9..4cee5755a0 100644 --- a/dev/concept_reference/connection_emergency_capacity/index.html +++ b/dev/concept_reference/connection_emergency_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_flow_coefficient/index.html b/dev/concept_reference/connection_flow_coefficient/index.html index 44a6ccd375..9028505666 100644 --- a/dev/concept_reference/connection_flow_coefficient/index.html +++ b/dev/concept_reference/connection_flow_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_flow_cost/index.html b/dev/concept_reference/connection_flow_cost/index.html index c5009d9c16..a5429bb3d3 100644 --- a/dev/concept_reference/connection_flow_cost/index.html +++ b/dev/concept_reference/connection_flow_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the connection_flow_cost parameter for a specific connection, a cost term will be added to the objective function that values all connection_flow variables associated with that connection during the current optimization window.

diff --git a/dev/concept_reference/connection_flow_delay/index.html b/dev/concept_reference/connection_flow_delay/index.html index c1210f1098..cb7aab734c 100644 --- a/dev/concept_reference/connection_flow_delay/index.html +++ b/dev/concept_reference/connection_flow_delay/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_investment_cost/index.html b/dev/concept_reference/connection_investment_cost/index.html index f106ef405c..80d13f35ca 100644 --- a/dev/concept_reference/connection_investment_cost/index.html +++ b/dev/concept_reference/connection_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the connection_investment_cost parameter for a specific connection, a cost term will be added to the objective function whenever a connection investment is made during the current optimization window.

diff --git a/dev/concept_reference/connection_investment_lifetime/index.html b/dev/concept_reference/connection_investment_lifetime/index.html index 4157acd7ae..870eb892e7 100644 --- a/dev/concept_reference/connection_investment_lifetime/index.html +++ b/dev/concept_reference/connection_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

connection_investment_lifetime is the minimum amount of time that a connection has to stay in operation once it is invested in. Only after that time can the connection be decommissioned. Note that connection_investment_lifetime is a dynamic parameter that impacts the amount of solution history that must remain available to the optimisation in each step - this may impact performance.

diff --git a/dev/concept_reference/connection_investment_variable_type/index.html b/dev/concept_reference/connection_investment_variable_type/index.html index 65add93388..58d3796fa1 100644 --- a/dev/concept_reference/connection_investment_variable_type/index.html +++ b/dev/concept_reference/connection_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The connection_investment_variable_type parameter represents the type of the connections_invested_available decision variable.

The default value, variable_type_integer, means that only integer factors of the connection_capacity can be invested in. The value variable_type_continuous means that any fractional factor can also be invested in. The value variable_type_binary means that only a factor of 1 or 0 is possible.

diff --git a/dev/concept_reference/connection_investment_variable_type_list/index.html b/dev/concept_reference/connection_investment_variable_type_list/index.html index e89530f61b..b51c0f2aaf 100644 --- a/dev/concept_reference/connection_investment_variable_type_list/index.html +++ b/dev/concept_reference/connection_investment_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_linepack_constant/index.html b/dev/concept_reference/connection_linepack_constant/index.html index 7246781c2e..942745e02e 100644 --- a/dev/concept_reference/connection_linepack_constant/index.html +++ b/dev/concept_reference/connection_linepack_constant/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The linepack constant is a physical property of a connection representing a pipeline and holds information on how the linepack flexibility relates to pressures of the adjacent nodes. If, and only if, this parameter is defined, the linepack flexibility of a pipeline can be modelled. The existence of the parameter triggers the generation of the constraint on line pack storage. The connection_linepack_constant should always be defined on the tuple (connection pipeline, linepack storage node, node group (containing both pressure nodes, i.e. start and end of the pipeline)). See also.

diff --git a/dev/concept_reference/connection_monitored/index.html b/dev/concept_reference/connection_monitored/index.html index d29338e965..181dcd2013 100644 --- a/dev/concept_reference/connection_monitored/index.html +++ b/dev/concept_reference/connection_monitored/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_reactance/index.html b/dev/concept_reference/connection_reactance/index.html index 04b2644748..0a5815c03c 100644 --- a/dev/concept_reference/connection_reactance/index.html +++ b/dev/concept_reference/connection_reactance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The per unit reactance of a transmission line. Used in ptdf-based DC load flow, where the relative reactances of lines determine the ptdfs of the network, and in lossless DC power flow, where the flow on a line is given by flow = (theta_to - theta_from) / x, where x is the reactance of the line, theta_to is the voltage angle of the receiving node and theta_from is the voltage angle of the sending node.
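The lossless DC flow expression above can be sketched numerically (values are illustrative only; the function name is invented):

```python
# Lossless DC power flow on a line: flow = (theta_to - theta_from) / x,
# with angles in radians and reactance in per unit.
def dc_flow(theta_to, theta_from, reactance):
    return (theta_to - theta_from) / reactance

# A 0.1 rad angle difference across a line with x = 0.05 p.u.
# gives a flow of 2.0 p.u.; lower reactance means higher flow for
# the same angle difference.
flow = dc_flow(0.1, 0.0, 0.05)  # 2.0
```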

diff --git a/dev/concept_reference/connection_reactance_base/index.html b/dev/concept_reference/connection_reactance_base/index.html index b822d0105f..6b1bab850c 100644 --- a/dev/concept_reference/connection_reactance_base/index.html +++ b/dev/concept_reference/connection_reactance_base/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connection_resistance/index.html b/dev/concept_reference/connection_resistance/index.html index 3ca89ef888..da5524ae93 100644 --- a/dev/concept_reference/connection_resistance/index.html +++ b/dev/concept_reference/connection_resistance/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The per unit resistance of a transmission line. Currently unimplemented!

diff --git a/dev/concept_reference/connection_type/index.html b/dev/concept_reference/connection_type/index.html index d58c42b670..89ee49ff4e 100644 --- a/dev/concept_reference/connection_type/index.html +++ b/dev/concept_reference/connection_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to control specific pre-processing actions on connections. Currently, the primary purpose of connection_type is to simplify the data that is required to define a simple bi-directional, lossless line. If connection_type=:connection_type_lossless_bidirectional, it is only necessary to specify the following minimum data:

If connection_type=:connection_type_lossless_bidirectional the following pre-processing actions are taken:

diff --git a/dev/concept_reference/connection_type_list/index.html b/dev/concept_reference/connection_type_list/index.html index 75a5baa641..989426a07d 100644 --- a/dev/concept_reference/connection_type_list/index.html +++ b/dev/concept_reference/connection_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_avaiable_coefficient/index.html b/dev/concept_reference/connections_invested_avaiable_coefficient/index.html index 0db32166b9..e3b012d8fe 100644 --- a/dev/concept_reference/connections_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/connections_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_big_m_mga/index.html b/dev/concept_reference/connections_invested_big_m_mga/index.html index 533758b318..12443c7f7e 100644 --- a/dev/concept_reference/connections_invested_big_m_mga/index.html +++ b/dev/concept_reference/connections_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The connections_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M value should always be chosen sufficiently large. (Typically, a value equivalent to candidate_connections could suffice.)

diff --git a/dev/concept_reference/connections_invested_coefficient/index.html b/dev/concept_reference/connections_invested_coefficient/index.html index 4b4072ccfd..47b2c789c6 100644 --- a/dev/concept_reference/connections_invested_coefficient/index.html +++ b/dev/concept_reference/connections_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/connections_invested_mga/index.html b/dev/concept_reference/connections_invested_mga/index.html index 81c0a38eb4..f8f64643e7 100644 --- a/dev/concept_reference/connections_invested_mga/index.html +++ b/dev/concept_reference/connections_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/constraint_sense/index.html b/dev/concept_reference/constraint_sense/index.html index 4884ce5071..5e0afb7393 100644 --- a/dev/concept_reference/constraint_sense/index.html +++ b/dev/concept_reference/constraint_sense/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/constraint_sense_list/index.html b/dev/concept_reference/constraint_sense_list/index.html index 33d58c9591..a5f8fc00c9 100644 --- a/dev/concept_reference/constraint_sense_list/index.html +++ b/dev/concept_reference/constraint_sense_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/curtailment_cost/index.html b/dev/concept_reference/curtailment_cost/index.html index 47aeddf2b5..846adcbbce 100644 --- a/dev/concept_reference/curtailment_cost/index.html +++ b/dev/concept_reference/curtailment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the curtailment_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit's available capacity exceeds its activity (i.e., the unit_flow variable) over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/cyclic_condition/index.html b/dev/concept_reference/cyclic_condition/index.html index 9d535e3b50..33a8a1126a 100644 --- a/dev/concept_reference/cyclic_condition/index.html +++ b/dev/concept_reference/cyclic_condition/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/db_lp_solver/index.html b/dev/concept_reference/db_lp_solver/index.html index 3b3b2f713f..150a30d827 100644 --- a/dev/concept_reference/db_lp_solver/index.html +++ b/dev/concept_reference/db_lp_solver/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the Julia solver package to be used to solve Linear Programming Problems (LPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Clp.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support LP problems. Solver options are specified using the db_lp_solver_options parameter for the model. Note also that if run_spineopt() is called with the lp_solver keyword argument specified, this will override this parameter.

diff --git a/dev/concept_reference/db_lp_solver_list/index.html b/dev/concept_reference/db_lp_solver_list/index.html index 360be6403f..0652f2f2b5 100644 --- a/dev/concept_reference/db_lp_solver_list/index.html +++ b/dev/concept_reference/db_lp_solver_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

List of supported LP solvers which may be specified for the db_lp_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Clp.jl) and is case sensitive.

diff --git a/dev/concept_reference/db_lp_solver_options/index.html b/dev/concept_reference/db_lp_solver_options/index.html index 5e539f9bb4..1e3d6c85e4 100644 --- a/dev/concept_reference/db_lp_solver_options/index.html +++ b/dev/concept_reference/db_lp_solver_options/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

LP solver options are specified for a model using the db_lp_solver_options parameter. This parameter value must take the form of a nested map, where the outer key corresponds to the solver package name (case sensitive), e.g. Clp.jl. The inner map consists of option name and value pairs. By default, the SpineOpt template contains some common options for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Image: example db_lp_solver_options map parameter]
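
A sketch of what such a nested map could look like in the Spine database JSON value format (the option names and values below are illustrative, not a recommendation):

```json
{
  "type": "map",
  "index_type": "str",
  "data": {
    "Clp.jl": {
      "type": "map",
      "index_type": "str",
      "data": {
        "LogLevel": 0,
        "PrimalTolerance": 1e-7
      }
    }
  }
}
```

The outer key selects the solver package; the inner map is handed to that solver as option/value pairs.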

diff --git a/dev/concept_reference/db_mip_solver/index.html b/dev/concept_reference/db_mip_solver/index.html index c64553f080..bcfc6480e9 100644 --- a/dev/concept_reference/db_mip_solver/index.html +++ b/dev/concept_reference/db_mip_solver/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the Julia solver package to be used to solve Mixed Integer Programming Problems (MIPs) for the specific model. The value must correspond exactly (case sensitive) to the name of the Julia solver package (e.g. Cbc.jl). Installation and configuration of solvers is the responsibility of the user. A full list of solvers supported by JuMP can be found here. Note that the specified solver must support MIP problems. Solver options are specified using the db_mip_solver_options parameter for the model. Note also that if run_spineopt() is called with the mip_solver keyword argument specified, this will override this parameter.

diff --git a/dev/concept_reference/db_mip_solver_list/index.html b/dev/concept_reference/db_mip_solver_list/index.html index 9fbe8861e5..a03541e30a 100644 --- a/dev/concept_reference/db_mip_solver_list/index.html +++ b/dev/concept_reference/db_mip_solver_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

List of supported MIP solvers which may be specified for the db_mip_solver parameter. The value must correspond exactly to the name of the Julia solver package (e.g. Cbc.jl) and is case sensitive.

diff --git a/dev/concept_reference/db_mip_solver_options/index.html b/dev/concept_reference/db_mip_solver_options/index.html index 6f54e87048..ccb3398490 100644 --- a/dev/concept_reference/db_mip_solver_options/index.html +++ b/dev/concept_reference/db_mip_solver_options/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

MIP solver options are specified for a model using the db_mip_solver_options parameter. This parameter value must take the form of a nested map, where the outer key corresponds to the solver package name (case sensitive), e.g. Cbc.jl. The inner map consists of option name and value pairs. By default, the SpineOpt template contains some common options for some common solvers. For a list of supported solver options, one should consult the documentation for the solver and/or the Julia solver wrapper package. [Image: example db_mip_solver_options map parameter]

diff --git a/dev/concept_reference/demand/index.html b/dev/concept_reference/demand/index.html index 12b0ff50d2..a5462b4bc6 100644 --- a/dev/concept_reference/demand/index.html +++ b/dev/concept_reference/demand/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The demand parameter represents a "demand" or a "load" of a commodity on a node. It appears in the node injection constraint, with positive values interpreted as "demand" or "load" for the modelled system, while negative values provide the system with "influx" or "gain". When the node is part of a group, the fractional_demand parameter can be used to split demand into fractions, when desired. See also: Introduction to groups of objects

The demand parameter can also be included in custom user_constraints using the demand_coefficient parameter for the node__user_constraint relationship.
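
A sketch of a demand given as a time-varying value, using the Spine database time series JSON format (the timestamps and values are made up for illustration):

```json
{
  "type": "time_series",
  "data": {
    "2030-01-01T00:00:00": 100.0,
    "2030-01-01T01:00:00": 120.0,
    "2030-01-01T02:00:00": -20.0
  }
}
```

Here the first two entries act as loads on the node, while the negative value at the third timestamp would be interpreted as an influx into the system.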

diff --git a/dev/concept_reference/demand_coefficient/index.html b/dev/concept_reference/demand_coefficient/index.html index 9b92d58844..9123fd779a 100644 --- a/dev/concept_reference/demand_coefficient/index.html +++ b/dev/concept_reference/demand_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/diff_coeff/index.html b/dev/concept_reference/diff_coeff/index.html index ec35c6f5f0..416f1cf863 100644 --- a/dev/concept_reference/diff_coeff/index.html +++ b/dev/concept_reference/diff_coeff/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The diff_coeff parameter represents diffusion of a commodity between the two nodes in the node__node relationship. It appears as a coefficient on the node_state variable in the node injection constraint, essentially representing diffusion power per unit of state. Note that the diff_coeff is interpreted as one-directional, meaning that if one defines

diff_coeff(node1=n1, node2=n2),

there will only be diffusion from n1 to n2, but not vice versa. Symmetric diffusion is likely desired in most cases, which requires defining the diff_coeff in both directions

diff_coeff(node1=n1, node2=n2) == diff_coeff(node1=n2, node2=n1).
diff --git a/dev/concept_reference/downward_reserve/index.html b/dev/concept_reference/downward_reserve/index.html index cde5e4c8a3..373a07b81b 100644 --- a/dev/concept_reference/downward_reserve/index.html +++ b/dev/concept_reference/downward_reserve/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

diff --git a/dev/concept_reference/duration_unit/index.html b/dev/concept_reference/duration_unit/index.html index 9126f6c934..7d09ff8f9b 100644 --- a/dev/concept_reference/duration_unit/index.html +++ b/dev/concept_reference/duration_unit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The duration_unit parameter specifies the base unit of time in a model. Two values are currently supported, hour and the default minute. E.g. if the duration_unit is set to hour, a Duration of one minute gets converted into 1/60 hours for the calculations.
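
The conversion logic can be sketched as follows; this is a standalone illustration of the arithmetic, not SpineOpt's internal implementation:

```julia
using Dates

# Model-time value of a Duration, assuming duration_unit = hour:
# every Duration is expressed in (fractional) hours before entering the equations.
to_hours(d::Period) = Dates.value(convert(Minute, d)) / 60

to_hours(Minute(1))   # 1/60 of an hour
to_hours(Hour(2))     # 2.0 hours
```

With duration_unit = minute (the default), the analogous conversion would instead express all Durations in minutes.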

diff --git a/dev/concept_reference/duration_unit_list/index.html b/dev/concept_reference/duration_unit_list/index.html index 7c9189c7f3..014144963b 100644 --- a/dev/concept_reference/duration_unit_list/index.html +++ b/dev/concept_reference/duration_unit_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_binary_gas_connection_flow/index.html b/dev/concept_reference/fix_binary_gas_connection_flow/index.html index 1249bb8def..ea5375a311 100644 --- a/dev/concept_reference/fix_binary_gas_connection_flow/index.html +++ b/dev/concept_reference/fix_binary_gas_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connection_flow/index.html b/dev/concept_reference/fix_connection_flow/index.html index 8d617d057b..34d091120c 100644 --- a/dev/concept_reference/fix_connection_flow/index.html +++ b/dev/concept_reference/fix_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connection_intact_flow/index.html b/dev/concept_reference/fix_connection_intact_flow/index.html index 7ecdffb906..9e0b12e801 100644 --- a/dev/concept_reference/fix_connection_intact_flow/index.html +++ b/dev/concept_reference/fix_connection_intact_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connections_invested/index.html b/dev/concept_reference/fix_connections_invested/index.html index 508a6015bd..dfd5a7605f 100644 --- a/dev/concept_reference/fix_connections_invested/index.html +++ b/dev/concept_reference/fix_connections_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_connections_invested_available/index.html b/dev/concept_reference/fix_connections_invested_available/index.html index f8d37d8bc5..4fb307443f 100644 --- a/dev/concept_reference/fix_connections_invested_available/index.html +++ b/dev/concept_reference/fix_connections_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_node_pressure/index.html b/dev/concept_reference/fix_node_pressure/index.html index e0c530f538..ae3b5dacd2 100644 --- a/dev/concept_reference/fix_node_pressure/index.html +++ b/dev/concept_reference/fix_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In a pressure driven gas model, gas network nodes are associated with the node_pressure variable. In order to fix the pressure at a certain node or to give initial conditions, the fix_node_pressure parameter can be used.

diff --git a/dev/concept_reference/fix_node_state/index.html b/dev/concept_reference/fix_node_state/index.html index 10be048bab..757414b5bc 100644 --- a/dev/concept_reference/fix_node_state/index.html +++ b/dev/concept_reference/fix_node_state/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_node_state parameter simply fixes the value of the node_state variable to the provided value, if one is found. Common uses for the parameter include e.g. providing initial values for node_state variables, by fixing the value on the first modelled time step (or the value before the first modelled time step) using a TimeSeries type parameter value with an appropriate timestamp. Due to the way SpineOpt handles TimeSeries data, the node_state variables are only fixed for time steps with defined fix_node_state parameter values.
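
A sketch of such an initial-condition fix in the Spine database time series JSON format (the timestamp and value are made up for illustration):

```json
{
  "type": "time_series",
  "data": {
    "2030-01-01T00:00:00": 50.0
  }
}
```

Because only one timestamp carries a value, the node_state variable is fixed to 50.0 only on that first time step and remains free on all later time steps.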

diff --git a/dev/concept_reference/fix_node_voltage_angle/index.html b/dev/concept_reference/fix_node_voltage_angle/index.html index cd82aab22f..044f2b9987 100644 --- a/dev/concept_reference/fix_node_voltage_angle/index.html +++ b/dev/concept_reference/fix_node_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For a lossless nodal DC power flow network, each node is associated with a node_voltage_angle variable. In order to fix the voltage angle at a certain node or to give initial conditions, the fix_node_voltage_angle parameter can be used.

diff --git a/dev/concept_reference/fix_nonspin_units_shut_down/index.html b/dev/concept_reference/fix_nonspin_units_shut_down/index.html index 2c89cf3991..006dc99324 100644 --- a/dev/concept_reference/fix_nonspin_units_shut_down/index.html +++ b/dev/concept_reference/fix_nonspin_units_shut_down/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_nonspin_units_shut_down parameter simply fixes the value of the nonspin_units_shut_down variable to the provided value. As such, it determines directly how many member units are involved in providing downward reserve commodity flows to the node to which it is linked by the unit__to_node relationship.

When a single value is selected, this value is kept constant throughout the model. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_nonspin_units_started_up/index.html b/dev/concept_reference/fix_nonspin_units_started_up/index.html index 6987871eaf..46385c5e42 100644 --- a/dev/concept_reference/fix_nonspin_units_started_up/index.html +++ b/dev/concept_reference/fix_nonspin_units_started_up/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_nonspin_units_started_up parameter simply fixes the value of the nonspin_units_started_up variable to the provided value. As such, it determines directly how many member units are involved in providing upward reserve commodity flows to the node to which it is linked by the unit__to_node relationship.

When a single value is selected, this value is kept constant throughout the model. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html b/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html index 0c6face140..2a07ea5317 100644 --- a/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_in_in_unit_flow parameter triggers the generation of the constraint_fix_ratio_in_in_unit_flow and fixes the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a fixed share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the fix_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.

diff --git a/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html b/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html index da23cf70da..3b4e617652 100644 --- a/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_in_out_unit_flow parameter triggers the generation of the constraint_fix_ratio_in_out_unit_flow and fixes the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flows to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order.

To enforce e.g. a fixed ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flows to the node group el_heat (consisting of the two nodes el and heat), the fix_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.
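
Schematically, the example above would generate a constraint of the following form for each time step t (pseudocode, not the exact SpineOpt expression):

```
# incoming flow == ratio * sum of outgoing flows over the node group
unit_flow(u, ng, from_node, t) == 1.4 * ( unit_flow(u, el, to_node, t)
                                        + unit_flow(u, heat, to_node, t) )
```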

diff --git a/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html b/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html index 91ea0f5d7b..14cdb8a81c 100644 --- a/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_in_connection_flow parameter triggers the generation of the constraint_fix_ratio_out_in_connection_flow and fixes the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. In most cases the fix_ratio_out_in_connection_flow parameter is set equal to or lower than 1, linking the flows entering the connection to the flows leaving it. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right order. The parameter can be used e.g. to account for losses over a connection in a certain direction.

To enforce e.g. a fixed ratio of 0.8 for a connection conn between its outgoing electricity flow to node el1 and its incoming flows from the node el2, the fix_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__el1__el2.

diff --git a/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html b/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html index 0b1eef2d2b..644fd02797 100644 --- a/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_in_unit_flow parameter triggers the generation of the constraint_fix_ratio_out_in_unit_flow and fixes the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right order.

To enforce e.g. a fixed ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the fix_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

diff --git a/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html b/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html index f0fe5b6a0f..27e72a5d41 100644 --- a/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/fix_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the fix_ratio_out_out_unit_flow parameter triggers the generation of the constraint_fix_ratio_out_out_unit_flow and fixes the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent the to_nodes, i.e. the outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a fixed ratio between two products of a unit u, e.g. fixing the share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the fix_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

diff --git a/dev/concept_reference/fix_storages_invested/index.html b/dev/concept_reference/fix_storages_invested/index.html index d6ec5d4c2b..29afe79588 100644 --- a/dev/concept_reference/fix_storages_invested/index.html +++ b/dev/concept_reference/fix_storages_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_storages_invested_available/index.html b/dev/concept_reference/fix_storages_invested_available/index.html index a74857343c..a0735a2ce2 100644 --- a/dev/concept_reference/fix_storages_invested_available/index.html +++ b/dev/concept_reference/fix_storages_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used primarily to fix the value of the storages_invested_available variable which represents the storages investment decision variable and how many candidate storages are available at the corresponding node, time step and stochastic scenario. Used also in the decomposition framework to communicate the value of the master problem solution variables to the operational sub-problem.

See also candidate_storages and Investment Optimization

diff --git a/dev/concept_reference/fix_unit_flow/index.html b/dev/concept_reference/fix_unit_flow/index.html index b608c88d56..ef6edbbe71 100644 --- a/dev/concept_reference/fix_unit_flow/index.html +++ b/dev/concept_reference/fix_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_unit_flow parameter fixes the value of the unit_flow variable to the provided value, if the parameter is defined.

Common uses for the parameter include providing initial values for the unit_flow variable, by fixing the value on the first modelled time step (or the value before the first modelled time step) using a TimeSeries type parameter value with an appropriate timestamp. Due to the way SpineOpt handles TimeSeries data, the unit_flow variable is only fixed for time steps with defined fix_unit_flow parameter values.

Other uses include a constant or time-varying exogenous commodity flow from or to a unit.
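The time-step-selective fixing described above can be sketched as follows (plain Python, not SpineOpt's internals; the timestamps, values, and the bounds helper are made up for illustration):

```python
# Sketch: fix_unit_flow only fixes time steps that have a defined value.
fix_unit_flow = {"2030-01-01T00:00": 50.0}  # value only for the first step

def bounds(timestamp, capacity):
    """Hypothetical variable bounds for unit_flow at one time step."""
    if timestamp in fix_unit_flow:
        v = fix_unit_flow[timestamp]
        return (v, v)           # fixed: lower bound == upper bound
    return (0.0, capacity)      # otherwise free within capacity

assert bounds("2030-01-01T00:00", 100.0) == (50.0, 50.0)  # fixed step
assert bounds("2030-01-01T01:00", 100.0) == (0.0, 100.0)  # free step
```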

diff --git a/dev/concept_reference/fix_unit_flow_op/index.html b/dev/concept_reference/fix_unit_flow_op/index.html index c4c9244be6..9dd81c121f 100644 --- a/dev/concept_reference/fix_unit_flow_op/index.html +++ b/dev/concept_reference/fix_unit_flow_op/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If operating_points is defined on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of sub-variables, unit_flow_op, one for each operating point, with an additional index i to reference the specific operating point. fix_unit_flow_op can thus be used to fix the value of one or more of these variables as desired.

diff --git a/dev/concept_reference/fix_units_invested/index.html b/dev/concept_reference/fix_units_invested/index.html index c1656db7a5..9fd110a481 100644 --- a/dev/concept_reference/fix_units_invested/index.html +++ b/dev/concept_reference/fix_units_invested/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fix_units_invested_available/index.html b/dev/concept_reference/fix_units_invested_available/index.html index c3c8c66cc4..c4db4637b3 100644 --- a/dev/concept_reference/fix_units_invested_available/index.html +++ b/dev/concept_reference/fix_units_invested_available/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used primarily to fix the value of the units_invested_available variable which represents the unit investment decision variable and how many candidate units are invested-in and available at the corresponding node, time step and stochastic scenario. Used also in the decomposition framework to communicate the value of the master problem solution variables to the operational sub-problem.

See also Investment Optimization, candidate_units and unit_investment_variable_type

diff --git a/dev/concept_reference/fix_units_on/index.html b/dev/concept_reference/fix_units_on/index.html index dff56f6811..a5cecfe72a 100644 --- a/dev/concept_reference/fix_units_on/index.html +++ b/dev/concept_reference/fix_units_on/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on parameter simply fixes the value of the units_on variable to the provided value. As such, it determines directly how many members of the specific unit will be online throughout the model when a single value is selected. It is also possible to provide a timeseries of values, which can be used for example to impose initial conditions by providing a value only for the first timestep included in the model.

diff --git a/dev/concept_reference/fix_units_on_coefficient_in_in/index.html b/dev/concept_reference/fix_units_on_coefficient_in_in/index.html index aaa1b0014d..9401190973 100644 --- a/dev/concept_reference/fix_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the fix_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_out, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_in and max_units_on_coefficient_in_in.
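A plausible reading of how the coefficient enters the constraint can be sketched as follows (plain Python, not SpineOpt's formulation; the additive form and all values are assumptions for illustration, with the exact constraint given in the mathematical formulation):

```python
# Assumed sketch of the input-input ratio constraint with an
# online-capacity term:
#   flow_in1 == fix_ratio_in_in_unit_flow * flow_in2
#               + fix_units_on_coefficient_in_in * units_on
fix_ratio_in_in_unit_flow = 2.0        # hypothetical ratio value
fix_units_on_coefficient_in_in = 5.0   # hypothetical coefficient value

def in_in_residual(flow_in1, flow_in2, units_on):
    """Constraint residual; zero means the constraint is satisfied."""
    return flow_in1 - (fix_ratio_in_in_unit_flow * flow_in2
                       + fix_units_on_coefficient_in_in * units_on)

# With one unit online, 25 units of input 1 balance 10 units of input 2:
assert in_in_residual(25.0, 10.0, 1) == 0.0
# With the unit offline, the online-capacity term vanishes:
assert in_in_residual(20.0, 10.0, 0) == 0.0
```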

diff --git a/dev/concept_reference/fix_units_on_coefficient_in_out/index.html b/dev/concept_reference/fix_units_on_coefficient_in_out/index.html index c88539f666..63d84c8a22 100644 --- a/dev/concept_reference/fix_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the fix_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_out_in, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_in_out and max_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/fix_units_on_coefficient_out_in/index.html b/dev/concept_reference/fix_units_on_coefficient_out_in/index.html index 30fa1f64d0..4e053bedd1 100644 --- a/dev/concept_reference/fix_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the fix_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_in and max_units_on_coefficient_out_in.

diff --git a/dev/concept_reference/fix_units_on_coefficient_out_out/index.html b/dev/concept_reference/fix_units_on_coefficient_out_out/index.html index ac6bacbfc0..35bb360edc 100644 --- a/dev/concept_reference/fix_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/fix_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The fix_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the fix_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for fixing the conversion ratio depending on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: fix_units_on_coefficient_in_in, fix_units_on_coefficient_in_out, and fix_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or maximum conversion rates, e.g. min_units_on_coefficient_out_out and max_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/fixed_pressure_constant_0/index.html b/dev/concept_reference/fixed_pressure_constant_0/index.html index 3dc243a922..6dfb2fffb3 100644 --- a/dev/concept_reference/fixed_pressure_constant_0/index.html +++ b/dev/concept_reference/fixed_pressure_constant_0/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction depending on the adjacent node pressures. The second fixed pressure constant, which is multiplied with the pressure of the destination node, is represented by an Array value of the fixed_pressure_constant_0. The first pressure constant corresponds to the related parameter fixed_pressure_constant_1. Note that the fixed_pressure_constant_0 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_0 for both directions of flow.

diff --git a/dev/concept_reference/fixed_pressure_constant_1/index.html b/dev/concept_reference/fixed_pressure_constant_1/index.html index 38319a9666..1861aa9920 100644 --- a/dev/concept_reference/fixed_pressure_constant_1/index.html +++ b/dev/concept_reference/fixed_pressure_constant_1/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the MILP representation of pressure driven gas transfer, we use an outer approximation approach as described by Schwele et al. The Weymouth equation is approximated around fixed pressure points, as described by the constraint on fixed node pressure points, constraining the average flow in each direction depending on the adjacent node pressures. The first fixed pressure constant, which is multiplied with the pressure of the origin node, is represented by an Array value of the fixed_pressure_constant_1. The second pressure constant corresponds to the related parameter fixed_pressure_constant_0. Note that the fixed_pressure_constant_1 parameter should be defined on a connection__node__node relationship, for which the first node corresponds to the origin node, while the second node corresponds to the destination node. For a typical gas pipeline, there will be a fixed_pressure_constant_1 for both directions of flow.
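A minimal sketch of how the two arrays of fixed pressure constants jointly bound the flow (plain Python, not SpineOpt's constraint; the constant values, pressures, and the linear upper-bound form are illustrative assumptions):

```python
# Assumed sketch of the outer approximation: for each fixed pressure
# point i, the average flow is bounded by a linear cut
#   flow <= fixed_pressure_constant_1[i] * p_origin
#           - fixed_pressure_constant_0[i] * p_destination
fixed_pressure_constant_1 = [60.0, 55.0]  # multiplies the origin pressure
fixed_pressure_constant_0 = [50.0, 45.0]  # multiplies the destination pressure

def flow_upper_bound(p_origin, p_destination):
    """Tightest linear upper bound over all fixed pressure points."""
    return min(
        k1 * p_origin - k0 * p_destination
        for k1, k0 in zip(fixed_pressure_constant_1, fixed_pressure_constant_0)
    )

# The binding cut is the smallest of the linear bounds:
assert flow_upper_bound(60.0, 55.0) == 825.0
```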

diff --git a/dev/concept_reference/fom_cost/index.html b/dev/concept_reference/fom_cost/index.html index 3919c8bc64..6b7fe6e3ce 100644 --- a/dev/concept_reference/fom_cost/index.html +++ b/dev/concept_reference/fom_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the fom_cost parameter for a specific unit, a cost term will be added to the objective function to account for the fixed operation and maintenance costs associated with that unit during the current optimization window. fom_cost differs from units_on_cost in that the fixed operation and maintenance costs apply whether the unit is online or offline.

diff --git a/dev/concept_reference/frac_state_loss/index.html b/dev/concept_reference/frac_state_loss/index.html index e22428ba71..5f482929b5 100644 --- a/dev/concept_reference/frac_state_loss/index.html +++ b/dev/concept_reference/frac_state_loss/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The frac_state_loss parameter allows setting self-discharge losses for nodes with the node_state variable enabled via the has_state parameter. Effectively, the frac_state_loss parameter acts as a coefficient on the node_state variable in the node injection constraint, imposing losses for the node. In simple cases, storage losses are typically fractional, e.g. a frac_state_loss parameter value of 0.01 would represent 1% of node_state lost per unit of time. However, a more general definition of what the frac_state_loss parameter represents in SpineOpt would be loss power per unit of node_state.
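The "loss power per unit of node_state" interpretation can be sketched as follows (plain Python, not SpineOpt's node injection constraint; the values and helper function are hypothetical):

```python
# Sketch of frac_state_loss as loss power per unit of node_state.
frac_state_loss = 0.01  # hypothetical: 1% of the stored state lost per hour

def self_discharge_loss(node_state, duration_h):
    """Loss term contributed to the node injection over one time step."""
    return frac_state_loss * node_state * duration_h

# A 100 MWh storage loses 1 MWh over one hour at 1%/h:
assert self_discharge_loss(100.0, 1.0) == 1.0
# Over a half-hour time step, only half as much is lost:
assert self_discharge_loss(100.0, 0.5) == 0.5
```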

diff --git a/dev/concept_reference/fractional_demand/index.html b/dev/concept_reference/fractional_demand/index.html index 7f979a7780..ff9bd25c5b 100644 --- a/dev/concept_reference/fractional_demand/index.html +++ b/dev/concept_reference/fractional_demand/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/fuel_cost/index.html b/dev/concept_reference/fuel_cost/index.html index b15f374d52..8dfc62a8ea 100644 --- a/dev/concept_reference/fuel_cost/index.html +++ b/dev/concept_reference/fuel_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the fuel_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for costs associated with the unit's fuel usage over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/graph_view_position/index.html b/dev/concept_reference/graph_view_position/index.html index 3903a7833c..31bcbef749 100644 --- a/dev/concept_reference/graph_view_position/index.html +++ b/dev/concept_reference/graph_view_position/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The graph_view_position parameter can be used to fix the positions of various objects and relationships when plotted using the Spine Toolbox Graph View. If not defined, Spine Toolbox simply plots the element in question wherever it sees fit in the graph.

diff --git a/dev/concept_reference/has_binary_gas_flow/index.html b/dev/concept_reference/has_binary_gas_flow/index.html index c78b42290b..a61b504584 100644 --- a/dev/concept_reference/has_binary_gas_flow/index.html +++ b/dev/concept_reference/has_binary_gas_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is necessary for the use of pressure driven gas transfer, for which the direction of flow is not known a priori. The parameter has_binary_gas_flow is a boolean method parameter, which, when set to true, triggers the generation of the binary variables binary_gas_connection_flow, which (together with the big_m parameter) force the average flow through a pipeline to be unidirectional.

diff --git a/dev/concept_reference/has_pressure/index.html b/dev/concept_reference/has_pressure/index.html index 190dda89f2..701afcf46a 100644 --- a/dev/concept_reference/has_pressure/index.html +++ b/dev/concept_reference/has_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node is to represent a node in a pressure driven gas network, the boolean parameter has_pressure should be set to true in order to trigger the generation of the node_pressure variable. The pressure at a certain node can also be constrained through the parameters max_node_pressure and min_node_pressure. More details on the use of pressure driven gas transfer are described here.

diff --git a/dev/concept_reference/has_state/index.html b/dev/concept_reference/has_state/index.html index 969507a52f..65e2f3a631 100644 --- a/dev/concept_reference/has_state/index.html +++ b/dev/concept_reference/has_state/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The has_state parameter is simply a Bool flag for whether a node has a node_state variable. By default, it is set to false, so the nodes enforce instantaneous commodity balance according to the nodal balance and node injection constraints. If set to true, the node will have a node_state variable generated for it, allowing for commodity storage at the node. Note that you'll also have to specify a value for the state_coeff parameter, as otherwise the node_state variable has zero commodity capacity.

diff --git a/dev/concept_reference/has_voltage_angle/index.html b/dev/concept_reference/has_voltage_angle/index.html index 2d0f8c12aa..1dbcb21df2 100644 --- a/dev/concept_reference/has_voltage_angle/index.html +++ b/dev/concept_reference/has_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For the use of node-based lossless DC powerflow, each node will be associated with a node_voltage_angle variable. To enable the generation of the variable in the optimization model, the boolean parameter has_voltage_angle should be set to true. The voltage angle at a certain node can also be constrained through the parameters max_voltage_angle and min_voltage_angle. More details on the use of lossless nodal DC power flows are described here.

diff --git a/dev/concept_reference/investment_group/index.html b/dev/concept_reference/investment_group/index.html index a8b267deca..04e4dcd8f4 100644 --- a/dev/concept_reference/investment_group/index.html +++ b/dev/concept_reference/investment_group/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The investment_group class represents a group of investments that need to be done together. For example, a storage investment on a node might only make sense if done together with a unit or a connection investment.

To use this functionality, you must first create an investment_group and then specify any number of unit__investment_group, node__investment_group, and/or connection__investment_group relationships between your investment_group and the unit, node, and/or connection investments that you want to be done together. This will ensure that the investment variables of all the entities in the investment_group have the same value.

diff --git a/dev/concept_reference/is_active/index.html b/dev/concept_reference/is_active/index.html index a8a56f880e..dc39157b1f 100644 --- a/dev/concept_reference/is_active/index.html +++ b/dev/concept_reference/is_active/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

is_active is a universal utility parameter that is defined for every object class. When used in conjunction with the activity_control feature, the is_active parameter allows one to control whether a specific object is active within a model.

diff --git a/dev/concept_reference/is_non_spinning/index.html b/dev/concept_reference/is_non_spinning/index.html index 0f9e759ee3..5c60e4fb43 100644 --- a/dev/concept_reference/is_non_spinning/index.html +++ b/dev/concept_reference/is_non_spinning/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By setting the parameter is_non_spinning to true, a node is treated as a non-spinning reserve node. Note that this is only to differentiate spinning from non-spinning reserves. It is still necessary to set is_reserve_node to true. The mathematical formulation holds a chapter on Reserve constraints and the general concept of setting up a model with reserves is described in Reserves.

diff --git a/dev/concept_reference/is_renewable/index.html b/dev/concept_reference/is_renewable/index.html index 631f6f83e4..3a0bebcc73 100644 --- a/dev/concept_reference/is_renewable/index.html +++ b/dev/concept_reference/is_renewable/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A boolean value indicating whether a unit is a renewable energy source (RES). If true, then the unit contributes to the share of the demand that is supplied by RES in the context of mp_min_res_gen_to_demand_ratio.

+- · SpineOpt.jl

A boolean value indicating whether a unit is a renewable energy source (RES). If true, then the unit contributes to the share of the demand that is supplied by RES in the context of mp_min_res_gen_to_demand_ratio.

diff --git a/dev/concept_reference/is_reserve_node/index.html b/dev/concept_reference/is_reserve_node/index.html index 89966c4979..8a3f95fae3 100644 --- a/dev/concept_reference/is_reserve_node/index.html +++ b/dev/concept_reference/is_reserve_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By setting the parameter is_reserve_node to true, a node is treated as a reserve node in the model. Units that are linked through a unit__to_node relationship will be able to provide balancing services to the reserve node, within their technical feasibility. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

+- · SpineOpt.jl

By setting the parameter is_reserve_node to true, a node is treated as a reserve node in the model. Units that are linked through a unit__to_node relationship will be able to provide balancing services to the reserve node, within their technical feasibility. The mathematical formulation contains a chapter on Reserve constraints, and the general concept of setting up a model with reserves is described in Reserves.

diff --git a/dev/concept_reference/max_cum_in_unit_flow_bound/index.html b/dev/concept_reference/max_cum_in_unit_flow_bound/index.html index 67c6f4fc2c..f27b6a2d92 100644 --- a/dev/concept_reference/max_cum_in_unit_flow_bound/index.html +++ b/dev/concept_reference/max_cum_in_unit_flow_bound/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To impose a limit on the cumulative incoming flows to a unit over the entire modelling horizon, e.g. to enforce limits on emissions, the max_cum_in_unit_flow_bound parameter can be used. Defining this parameter triggers the generation of the constraint_max_cum_in_unit_flow_bound.

Assuming for instance that the total intake of a unit u_A should not exceed 10 MWh for the entire modelling horizon, the max_cum_in_unit_flow_bound would need to take the value 10 (assuming that the unit_flow variable is in MW and the model duration_unit is hours).

+- · SpineOpt.jl

To impose a limit on the cumulative incoming flows to a unit over the entire modelling horizon, e.g. to enforce limits on emissions, the max_cum_in_unit_flow_bound parameter can be used. Defining this parameter triggers the generation of the constraint_max_cum_in_unit_flow_bound.

Assuming for instance that the total intake of a unit u_A should not exceed 10 MWh for the entire modelling horizon, the max_cum_in_unit_flow_bound would need to take the value 10 (assuming that the unit_flow variable is in MW and the model duration_unit is hours).
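As an illustration of the arithmetic above (plain Python, not SpineOpt code; the flow values are hypothetical), the cumulative intake of u_A over the horizon can be checked against the bound like this:

```python
# Illustrative check of a cumulative intake bound (not SpineOpt API).
# unit_flow values are in MW; with duration_unit = hours, each 1-hour
# timestep contributes flow * 1 MWh to the cumulative intake.
unit_flow = [2.0, 3.0, 1.5, 2.5]        # MW, hypothetical hourly intake of u_A
timestep_hours = 1.0
max_cum_in_unit_flow_bound = 10.0       # MWh

cumulated = sum(f * timestep_hours for f in unit_flow)
print(cumulated)                               # 9.0
print(cumulated <= max_cum_in_unit_flow_bound) # True: within the 10 MWh bound
```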

diff --git a/dev/concept_reference/max_gap/index.html b/dev/concept_reference/max_gap/index.html index 2dd191a514..d79a2c9e41 100644 --- a/dev/concept_reference/max_gap/index.html +++ b/dev/concept_reference/max_gap/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This determines the optimality convergence criterion and is the Benders gap tolerance for the master problem in a decomposed investments model. The Benders gap is the relative difference between the current objective function upper bound (zupper) and lower bound (zlower), and is defined as 2*(zupper-zlower)/(zupper+zlower). When this value is lower than max_gap, the Benders algorithm will terminate, having achieved satisfactory optimality.

+- · SpineOpt.jl

This determines the optimality convergence criterion and is the Benders gap tolerance for the master problem in a decomposed investments model. The Benders gap is the relative difference between the current objective function upper bound (zupper) and lower bound (zlower), and is defined as 2*(zupper-zlower)/(zupper+zlower). When this value is lower than max_gap, the Benders algorithm will terminate, having achieved satisfactory optimality.
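The gap formula can be sketched as follows (plain Python, not SpineOpt code; the bound values are hypothetical):

```python
# Illustrative computation of the Benders gap as defined above (not SpineOpt API).
def benders_gap(z_upper, z_lower):
    """Relative gap: 2 * (z_upper - z_lower) / (z_upper + z_lower)."""
    return 2 * (z_upper - z_lower) / (z_upper + z_lower)

max_gap = 0.01
gap = benders_gap(105.0, 104.0)  # hypothetical bounds from one Benders iteration
print(gap < max_gap)             # True: the algorithm would terminate here
```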

diff --git a/dev/concept_reference/max_iterations/index.html b/dev/concept_reference/max_iterations/index.html index 83d8db39b4..1d46cefd49 100644 --- a/dev/concept_reference/max_iterations/index.html +++ b/dev/concept_reference/max_iterations/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

When the model in question is of type :spineopt_benders_master, this determines the maximum number of Benders iterations.

+- · SpineOpt.jl

When the model in question is of type :spineopt_benders_master, this determines the maximum number of Benders iterations.

diff --git a/dev/concept_reference/max_mga_iterations/index.html b/dev/concept_reference/max_mga_iterations/index.html index 4a2bb18e42..ca90357fd2 100644 --- a/dev/concept_reference/max_mga_iterations/index.html +++ b/dev/concept_reference/max_mga_iterations/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_iterations defines how many MGA iterations will be performed, i.e. how many near-optimal solutions will be generated.

+- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_iterations defines how many MGA iterations will be performed, i.e. how many near-optimal solutions will be generated.

diff --git a/dev/concept_reference/max_mga_slack/index.html b/dev/concept_reference/max_mga_slack/index.html index 188abaecda..446969d7cf 100644 --- a/dev/concept_reference/max_mga_slack/index.html +++ b/dev/concept_reference/max_mga_slack/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_slack defines how far from the optimum the new solutions can maximally be (e.g. a value of 0.05 would allow for a 5% increase of the original objective value).

+- · SpineOpt.jl

In the MGA algorithm the original problem is reoptimized (see also mga-advanced) to find near-optimal solutions. The parameter max_mga_slack defines how far from the optimum the new solutions can maximally be (e.g. a value of 0.05 would allow for a 5% increase of the original objective value).

diff --git a/dev/concept_reference/max_node_pressure/index.html b/dev/concept_reference/max_node_pressure/index.html index 8ad126962e..54becfdc29 100644 --- a/dev/concept_reference/max_node_pressure/index.html +++ b/dev/concept_reference/max_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/max_ratio_in_in_unit_flow/index.html b/dev/concept_reference/max_ratio_in_in_unit_flow/index.html index 3b4fb1bebb..87ea320d17 100644 --- a/dev/concept_reference/max_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_in_in_unit_flow parameter triggers the generation of the constraint_max_ratio_in_in_unit_flow and enforces an upper bound on the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

For example, to enforce for a unit u a maximum share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b), the max_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.

+- · SpineOpt.jl

The definition of the max_ratio_in_in_unit_flow parameter triggers the generation of the constraint_max_ratio_in_in_unit_flow and enforces an upper bound on the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

For example, to enforce for a unit u a maximum share of 0.8 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b), the max_ratio_in_in_unit_flow parameter would be set to 0.8 for the relationship u__supply_fuel_1__supply_fuel_2.

diff --git a/dev/concept_reference/max_ratio_in_out_unit_flow/index.html b/dev/concept_reference/max_ratio_in_out_unit_flow/index.html index 18c64607ce..1f9561d576 100644 --- a/dev/concept_reference/max_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_in_out_unit_flow parameter triggers the generation of the constraint_max_ratio_in_out_unit_flow and sets an upper bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flows to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the max_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

+- · SpineOpt.jl

The definition of the max_ratio_in_out_unit_flow parameter triggers the generation of the constraint_max_ratio_in_out_unit_flow and sets an upper bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flows to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the max_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.
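The example above can be sketched numerically (plain Python, not SpineOpt code; the flow values are hypothetical). The constraint is written in its linear form, in <= ratio * out:

```python
# Illustrative check of an upper bound on the in/out flow ratio (not SpineOpt API).
max_ratio_in_out_unit_flow = 1.4
flow_in_ng = 14.0        # hypothetical incoming gas flow from node ng, MW
flow_out_el_heat = 10.0  # hypothetical outgoing flow to node group el_heat, MW

# Linear form of the ratio bound: in <= ratio * out.
print(flow_in_ng <= max_ratio_in_out_unit_flow * flow_out_el_heat)  # True
```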

diff --git a/dev/concept_reference/max_ratio_out_in_connection_flow/index.html b/dev/concept_reference/max_ratio_out_in_connection_flow/index.html index 69a883f235..379cd36446 100644 --- a/dev/concept_reference/max_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/max_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_in_connection_flow parameter triggers the generation of the constraint_max_ratio_out_in_connection_flow and sets an upper bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flows from the node commodity2, the max_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

Note that the ratio can also be defined for connection__node__node relationships where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

+- · SpineOpt.jl

The definition of the max_ratio_out_in_connection_flow parameter triggers the generation of the constraint_max_ratio_out_in_connection_flow and sets an upper bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flows from the node commodity2, the max_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

Note that the ratio can also be defined for connection__node__node relationships where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

diff --git a/dev/concept_reference/max_ratio_out_in_unit_flow/index.html b/dev/concept_reference/max_ratio_out_in_unit_flow/index.html index eaa5fb20a3..9345f88fcd 100644 --- a/dev/concept_reference/max_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_in_unit_flow parameter triggers the generation of the constraint_max_ratio_out_in_unit_flow and enforces an upper bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the max_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

+- · SpineOpt.jl

The definition of the max_ratio_out_in_unit_flow parameter triggers the generation of the constraint_max_ratio_out_in_unit_flow and enforces an upper bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

For example, to enforce a maximum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the max_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

diff --git a/dev/concept_reference/max_ratio_out_out_unit_flow/index.html b/dev/concept_reference/max_ratio_out_out_unit_flow/index.html index 035cf54202..08d2f1dccf 100644 --- a/dev/concept_reference/max_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/max_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_ratio_out_out_unit_flow parameter triggers the generation of the constraint_max_ratio_out_out_unit_flow and sets an upper bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a maximum ratio between two products of a unit u, e.g. setting the maximum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the max_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

+- · SpineOpt.jl

The definition of the max_ratio_out_out_unit_flow parameter triggers the generation of the constraint_max_ratio_out_out_unit_flow and sets an upper bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a maximum ratio between two products of a unit u, e.g. setting the maximum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the max_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

diff --git a/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html b/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html index 04a5e7f011..567b279a4e 100644 --- a/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html +++ b/dev/concept_reference/max_total_cumulated_unit_flow_from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is limiting the consumption of commodities such as oil or gas. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is limiting the consumption of commodities such as oil or gas. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html b/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html index 9d2bb1f55a..c85dba93a6 100644 --- a/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html +++ b/dev/concept_reference/max_total_cumulated_unit_flow_to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is the capping of CO2 emissions. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the max_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets an upper bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be below the given value. A possible use case is the capping of CO2 emissions. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/max_units_on_coefficient_in_in/index.html b/dev/concept_reference/max_units_on_coefficient_in_in/index.html index dd7701001c..c9353175c4 100644 --- a/dev/concept_reference/max_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/max_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the max_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

+- · SpineOpt.jl

The max_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the max_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_out, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

diff --git a/dev/concept_reference/max_units_on_coefficient_in_out/index.html b/dev/concept_reference/max_units_on_coefficient_in_out/index.html index eea0d3791b..d07133278c 100644 --- a/dev/concept_reference/max_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/max_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the max_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

+- · SpineOpt.jl

The max_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the max_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/max_units_on_coefficient_out_in/index.html b/dev/concept_reference/max_units_on_coefficient_out_in/index.html index f379442048..63b82c5578 100644 --- a/dev/concept_reference/max_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/max_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the max_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

+- · SpineOpt.jl

The max_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the max_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_in_out, and max_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

diff --git a/dev/concept_reference/max_units_on_coefficient_out_out/index.html b/dev/concept_reference/max_units_on_coefficient_out_out/index.html index 6b47798bf3..1f839bd59a 100644 --- a/dev/concept_reference/max_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/max_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The max_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the max_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_in_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

+- · SpineOpt.jl

The max_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the max_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing the maximum conversion ratio to depend on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: max_units_on_coefficient_in_in, max_units_on_coefficient_out_in, and max_units_on_coefficient_in_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting minimum or fixed conversion rates, e.g. min_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/max_voltage_angle/index.html b/dev/concept_reference/max_voltage_angle/index.html index 17ef19cf7d..ff75f6e71e 100644 --- a/dev/concept_reference/max_voltage_angle/index.html +++ b/dev/concept_reference/max_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/mga_diff_relative/index.html b/dev/concept_reference/mga_diff_relative/index.html index 6e1f02dc8f..686bc5dd13 100644 --- a/dev/concept_reference/mga_diff_relative/index.html +++ b/dev/concept_reference/mga_diff_relative/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Currently, the MGA algorithm (see mga-advanced) only supports absolute differences between MGA variables (e.g. absolute differences between units_invested_available etc.). Hence, the default for this parameter is false and should not be changed for now.

+- · SpineOpt.jl

Currently, the MGA algorithm (see mga-advanced) only supports absolute differences between MGA variables (e.g. absolute differences between units_invested_available etc.). Hence, the default for this parameter is false and should not be changed for now.

diff --git a/dev/concept_reference/min_capacity_margin/index.html b/dev/concept_reference/min_capacity_margin/index.html index fd1338c2d7..44c1a0ab50 100644 --- a/dev/concept_reference/min_capacity_margin/index.html +++ b/dev/concept_reference/min_capacity_margin/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parameter min_capacity_margin triggers the creation of a constraint of the same name, which ensures that the difference between available unit capacity and demand at the corresponding node is at least min_capacity_margin. In the calculation of capacity_margin, storage units' actual flows are used in place of the capacity. Defining a min_capacity_margin can be useful for scheduling unit maintenance outages (see scheduled_outage_duration for how to define a unit outage requirement) and for triggering unit investments due to capacity shortage. The min_capacity_margin constraint can be softened by defining min_capacity_margin_penalty, which allows violations of the constraint that are penalised in the objective function.

+- · SpineOpt.jl

The parameter min_capacity_margin triggers the creation of a constraint of the same name which ensures that the difference between available unit capacity and demand at the corresponding node is at least min_capacity_margin. In the calculation of capacity_margin, storage units' actual flows are used in place of the capacity. Defining a min_capacity_margin can be useful for scheduling unit maintenance outages (see scheduled_outage_duration for how to define a unit outage requirement) and for triggering unit investments due to capacity shortage. The min_capacity_margin constraint can be softened by defining min_capacity_margin_penalty, which allows violations of the constraint that are penalised in the objective function.
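As a toy numerical illustration of the condition the constraint enforces (the function name and data below are hypothetical, not the SpineOpt API):

```python
# Illustrative check of the capacity-margin condition: for every time step,
# available unit capacity minus demand must be at least min_capacity_margin.

def capacity_margin_ok(available_capacity, demand, min_margin):
    """Return True if every time step satisfies the margin requirement."""
    return all(cap - dem >= min_margin for cap, dem in zip(available_capacity, demand))

cap = [100.0, 100.0, 80.0]   # available unit capacity per time step
dem = [70.0, 85.0, 60.0]     # demand at the node per time step
print(capacity_margin_ok(cap, dem, 15.0))  # margins are 30, 15, 20 -> True
print(capacity_margin_ok(cap, dem, 20.0))  # the step with margin 15 violates -> False
```

With min_capacity_margin_penalty defined, such a violation would instead be absorbed by a slack variable and penalised in the objective.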

diff --git a/dev/concept_reference/min_capacity_margin_penalty/index.html b/dev/concept_reference/min_capacity_margin_penalty/index.html index 66d65e768a..7ad37cb4f4 100644 --- a/dev/concept_reference/min_capacity_margin_penalty/index.html +++ b/dev/concept_reference/min_capacity_margin_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_capacity_margin_penalty parameter triggers the addition of the min_capacity_margin_slack slack variable in the min_capacity_margin constraint. This allows violations of the constraint, which are penalised in the objective function. This can be used to capture the capacity_value of investments. It can also be used to disincentivise scheduling of maintenance outages during times of low capacity. See scheduled_outage_duration for how to define a unit scheduled outage requirement.

+- · SpineOpt.jl

The min_capacity_margin_penalty parameter triggers the addition of the min_capacity_margin_slack slack variable in the min_capacity_margin constraint. This allows violations of the constraint, which are penalised in the objective function. This can be used to capture the capacity_value of investments. It can also be used to disincentivise scheduling of maintenance outages during times of low capacity. See scheduled_outage_duration for how to define a unit scheduled outage requirement.

diff --git a/dev/concept_reference/min_down_time/index.html b/dev/concept_reference/min_down_time/index.html index f07165b214..12492887ed 100644 --- a/dev/concept_reference/min_down_time/index.html +++ b/dev/concept_reference/min_down_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_down_time parameter will trigger the creation of the Constraint on minimum down time. It sets a lower bound on the period that a unit has to stay offline after a shutdown.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commitment restrictions, see Unit commitment.

+- · SpineOpt.jl

The definition of the min_down_time parameter will trigger the creation of the Constraint on minimum down time. It sets a lower bound on the period that a unit has to stay offline after a shutdown.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commitment restrictions, see Unit commitment.
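In simplified form (notation illustrative, omitting stochastic and temporal indices; not SpineOpt's exact formulation), the minimum down time constraint requires that units shut down within the last min_down_time are still offline:

```latex
\text{units\_available}_{u,t} - \text{units\_on}_{u,t}
\;\geq\;
\sum_{t' \,\in\, (t - \text{min\_down\_time},\; t]} \text{units\_shut\_down}_{u,t'}
```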

diff --git a/dev/concept_reference/min_node_pressure/index.html b/dev/concept_reference/min_node_pressure/index.html index 3a08eb9581..8675b22960 100644 --- a/dev/concept_reference/min_node_pressure/index.html +++ b/dev/concept_reference/min_node_pressure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/min_ratio_in_in_unit_flow/index.html b/dev/concept_reference/min_ratio_in_in_unit_flow/index.html index 3ea0a192ae..07aed00bb2 100644 --- a/dev/concept_reference/min_ratio_in_in_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_in_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_in_in_unit_flow parameter triggers the generation of the constraint_min_ratio_in_in_unit_flow and sets a lower bound for the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a minimum share of 0.2 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the min_ratio_in_in_unit_flow parameter would be set to 0.2 for the relationship u__supply_fuel_1__supply_fuel_2.

+- · SpineOpt.jl

The definition of the min_ratio_in_in_unit_flow parameter triggers the generation of the constraint_min_ratio_in_in_unit_flow and sets a lower bound for the ratio between incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where both nodes (or group of nodes) in this relationship represent from_nodes, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of in1 over in2, where in1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order. This parameter can be useful, for instance if a unit requires a specific commodity mix as a fuel supply.

To enforce e.g. for a unit u a minimum share of 0.2 of its incoming flow from the node supply_fuel_1 compared to its incoming flow from the node group supply_fuel_2 (consisting of the two nodes supply_fuel_2_component_a and supply_fuel_2_component_b) the min_ratio_in_in_unit_flow parameter would be set to 0.2 for the relationship u__supply_fuel_1__supply_fuel_2.
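In simplified form (notation illustrative, omitting temporal and stochastic details; not SpineOpt's exact formulation), the constraint on the two incoming flows reads roughly:

```latex
\text{unit\_flow}_{u,\,n_{\mathrm{in},1},\,t}
\;\geq\;
\text{min\_ratio\_in\_in\_unit\_flow} \cdot \text{unit\_flow}_{u,\,n_{\mathrm{in},2},\,t}
```

where $n_{\mathrm{in},1}$ and $n_{\mathrm{in},2}$ are the first and second node of the unit__node__node relationship in left-to-right reading order.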

diff --git a/dev/concept_reference/min_ratio_in_out_unit_flow/index.html b/dev/concept_reference/min_ratio_in_out_unit_flow/index.html index 87e356caa5..c50b59ef3a 100644 --- a/dev/concept_reference/min_ratio_in_out_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_in_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_in_out_unit_flow parameter triggers the generation of the constraint_min_ratio_in_out_unit_flow and enforces a lower bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flow to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the min_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

+- · SpineOpt.jl

The definition of the min_ratio_in_out_unit_flow parameter triggers the generation of the constraint_min_ratio_in_out_unit_flow and enforces a lower bound on the ratio between incoming and outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the from_node, i.e. the incoming flow to the unit, and the second node (or group of nodes) represents the to_node, i.e. the outgoing flow from the unit. The ratio parameter is interpreted such that it constrains the ratio of in over out, where in is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 1.4 for a unit u between its incoming gas flow from the node ng and its outgoing flow to the node group el_heat (consisting of the two nodes el and heat), the min_ratio_in_out_unit_flow parameter would be set to 1.4 for the relationship u__ng__el_heat.

diff --git a/dev/concept_reference/min_ratio_out_in_connection_flow/index.html b/dev/concept_reference/min_ratio_out_in_connection_flow/index.html index cdd13f631d..146e3eb34b 100644 --- a/dev/concept_reference/min_ratio_out_in_connection_flow/index.html +++ b/dev/concept_reference/min_ratio_out_in_connection_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_in_connection_flow parameter triggers the generation of the constraint_min_ratio_out_in_connection_flow and sets a lower bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

Note that the ratio can also be defined for connection__node__node relationships, where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

To enforce e.g. a minimum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flow from the node commodity2, the min_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

+- · SpineOpt.jl

The definition of the min_ratio_out_in_connection_flow parameter triggers the generation of the constraint_min_ratio_out_in_connection_flow and sets a lower bound on the ratio between outgoing and incoming flows of a connection. The parameter is defined on the relationship class connection__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the connection, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the connection. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the connection_flow variable from the first node in the connection__node__node relationship in a left-to-right reading order.

Note that the ratio can also be defined for connection__node__node relationships, where one or both of the nodes correspond to node groups in order to impose a ratio on aggregated connection flows.

To enforce e.g. a minimum ratio of 0.8 for a connection conn between its outgoing electricity flow to the node commodity1 and its incoming flow from the node commodity2, the min_ratio_out_in_connection_flow parameter would be set to 0.8 for the relationship conn__commodity1__commodity2.

diff --git a/dev/concept_reference/min_ratio_out_in_unit_flow/index.html b/dev/concept_reference/min_ratio_out_in_unit_flow/index.html index 1d28d58265..3965d604ac 100644 --- a/dev/concept_reference/min_ratio_out_in_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_out_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_in_unit_flow parameter triggers the generation of the constraint_min_ratio_out_in_unit_flow and sets a lower bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the min_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.

+- · SpineOpt.jl

The definition of the min_ratio_out_in_unit_flow parameter triggers the generation of the constraint_min_ratio_out_in_unit_flow and sets a lower bound on the ratio between outgoing and incoming flows of a unit. The parameter is defined on the relationship class unit__node__node, where the first node (or group of nodes) in this relationship represents the to_node, i.e. the outgoing flow from the unit, and the second node (or group of nodes) represents the from_node, i.e. the incoming flows to the unit. The ratio parameter is interpreted such that it constrains the ratio of out over in, where out is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce e.g. a minimum ratio of 0.8 for a unit u between its outgoing flows to the node group el_heat (consisting of the two nodes el and heat) and its incoming gas flow from ng, the min_ratio_out_in_unit_flow parameter would be set to 0.8 for the relationship u__el_heat__ng.
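In simplified form (notation illustrative, omitting temporal and stochastic details; not SpineOpt's exact formulation), the constraint reads roughly:

```latex
\text{unit\_flow}_{u,\,n_{\mathrm{out}},\,t}
\;\geq\;
\text{min\_ratio\_out\_in\_unit\_flow} \cdot \text{unit\_flow}_{u,\,n_{\mathrm{in}},\,t}
```

where $n_{\mathrm{out}}$ and $n_{\mathrm{in}}$ are the first and second node of the unit__node__node relationship in left-to-right reading order.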

diff --git a/dev/concept_reference/min_ratio_out_out_unit_flow/index.html b/dev/concept_reference/min_ratio_out_out_unit_flow/index.html index 343ab46151..1d41819716 100644 --- a/dev/concept_reference/min_ratio_out_out_unit_flow/index.html +++ b/dev/concept_reference/min_ratio_out_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_ratio_out_out_unit_flow parameter triggers the generation of the constraint_min_ratio_out_out_unit_flow and enforces a lower bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent the to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a minimum ratio between two products of a unit u, e.g. setting the minimum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the min_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

+- · SpineOpt.jl

The definition of the min_ratio_out_out_unit_flow parameter triggers the generation of the constraint_min_ratio_out_out_unit_flow and enforces a lower bound on the ratio between outgoing flows of a unit. The parameter is defined on the relationship class unit__node__node, where the nodes (or groups of nodes) in this relationship represent the to_nodes, i.e. outgoing flows from the unit. The ratio parameter is interpreted such that it constrains the ratio of out1 over out2, where out1 is the unit_flow variable from the first node in the unit__node__node relationship in a left-to-right reading order.

To enforce a minimum ratio between two products of a unit u, e.g. setting the minimum share of produced electricity flowing to node el to 0.4 of the production of heat flowing to node heat, the min_ratio_out_out_unit_flow parameter would be set to 0.4 for the relationship u__el__heat.

diff --git a/dev/concept_reference/min_scheduled_outage_duration/index.html b/dev/concept_reference/min_scheduled_outage_duration/index.html index 8db249ec1a..a18194bbab 100644 --- a/dev/concept_reference/min_scheduled_outage_duration/index.html +++ b/dev/concept_reference/min_scheduled_outage_duration/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_scheduled_outage_duration parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the sum of the units_out_of_service variable over the optimisation window. The primary function of this parameter is thus to schedule maintenance outages for units: it enforces that the unit must be taken out of service for at least an amount of time equal to min_scheduled_outage_duration.

It can be defined for a unit and will then impose restrictions on the units_out_of_service variables that represent whether a unit is on a maintenance outage at that particular time. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

To schedule maintenance outages using this functionality, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none deactivates the units_out_of_service variable; this is the default value.

+- · SpineOpt.jl

The definition of the min_scheduled_outage_duration parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the sum of the units_out_of_service variable over the optimisation window. The primary function of this parameter is thus to schedule maintenance outages for units: it enforces that the unit must be taken out of service for at least an amount of time equal to min_scheduled_outage_duration.

It can be defined for a unit and will then impose restrictions on the units_out_of_service variables that represent whether a unit is on a maintenance outage at that particular time. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

To schedule maintenance outages using this functionality, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none deactivates the units_out_of_service variable; this is the default value.

diff --git a/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html b/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html index 53dfd84ba4..2c0533e33d 100644 --- a/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html +++ b/dev/concept_reference/min_total_cumulated_unit_flow_from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_from_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__from_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

diff --git a/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html b/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html index 7723c332e9..625cce63c5 100644 --- a/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html +++ b/dev/concept_reference/min_total_cumulated_unit_flow_to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. A possible use case is a minimum value for electricity generated from renewable sources. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.

+- · SpineOpt.jl

The definition of the min_total_cumulated_unit_flow_to_node parameter will trigger the creation of the constraint_total_cumulated_unit_flow. It sets a lower bound on the sum of the unit_flow variable for all timesteps.

It can be defined for the unit__to_node relationships, as well as their counterparts for node and unit groups. It will then restrict the total accumulation of unit_flow variables to be above the given value. A possible use case is a minimum value for electricity generated from renewable sources. The parameter is given as an absolute value and thus has to be consistent with the units used for the unit flows.
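In simplified form (notation illustrative, omitting time-step durations and stochastic indices; not SpineOpt's exact formulation), the constraint sums the flow over all time steps of the model horizon:

```latex
\sum_{t} \text{unit\_flow}_{u,\,n,\,t}
\;\geq\;
\text{min\_total\_cumulated\_unit\_flow\_to\_node}
```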

diff --git a/dev/concept_reference/min_units_on_coefficient_in_in/index.html b/dev/concept_reference/min_units_on_coefficient_in_in/index.html index 75a1962375..5393849408 100644 --- a/dev/concept_reference/min_units_on_coefficient_in_in/index.html +++ b/dev/concept_reference/min_units_on_coefficient_in_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the min_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

+- · SpineOpt.jl

The min_units_on_coefficient_in_in parameter is an optional coefficient in the unit input-input ratio constraint controlled by the min_ratio_in_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_out, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_in and fix_units_on_coefficient_in_in.

diff --git a/dev/concept_reference/min_units_on_coefficient_in_out/index.html b/dev/concept_reference/min_units_on_coefficient_in_out/index.html index b377e1ac34..f5ae39714b 100644 --- a/dev/concept_reference/min_units_on_coefficient_in_out/index.html +++ b/dev/concept_reference/min_units_on_coefficient_in_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the min_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

+- · SpineOpt.jl

The min_units_on_coefficient_in_out parameter is an optional coefficient in the unit input-output ratio constraint controlled by the min_ratio_in_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_out_in, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_in_out and fix_units_on_coefficient_in_out.

diff --git a/dev/concept_reference/min_units_on_coefficient_out_in/index.html b/dev/concept_reference/min_units_on_coefficient_out_in/index.html index 31595030ce..460708832e 100644 --- a/dev/concept_reference/min_units_on_coefficient_out_in/index.html +++ b/dev/concept_reference/min_units_on_coefficient_out_in/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the min_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.

+- · SpineOpt.jl

The min_units_on_coefficient_out_in parameter is an optional coefficient in the unit output-input ratio constraint controlled by the min_ratio_out_in_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_out, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_in and fix_units_on_coefficient_out_in.
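As an illustrative sketch of how the coefficient enters the constraint (simplified notation, not SpineOpt's exact formulation), the lower-bound ratio becomes dependent on the number of online units:

```latex
\text{unit\_flow}_{u,\,n_{\mathrm{out}},\,t}
\;\geq\;
\text{min\_ratio\_out\_in\_unit\_flow} \cdot \text{unit\_flow}_{u,\,n_{\mathrm{in}},\,t}
+ \text{min\_units\_on\_coefficient\_out\_in} \cdot \text{units\_on}_{u,\,t}
```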

diff --git a/dev/concept_reference/min_units_on_coefficient_out_out/index.html b/dev/concept_reference/min_units_on_coefficient_out_out/index.html index a8bfea4fa1..7878af1ae3 100644 --- a/dev/concept_reference/min_units_on_coefficient_out_out/index.html +++ b/dev/concept_reference/min_units_on_coefficient_out_out/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The min_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the min_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

+- · SpineOpt.jl

The min_units_on_coefficient_out_out parameter is an optional coefficient in the unit output-output ratio constraint controlled by the min_ratio_out_out_unit_flow parameter. Essentially, it acts as a coefficient for the units_on variable in the constraint, allowing for making the minimum conversion ratio dependent on the amount of online capacity.

Note that there are different parameters depending on the directions of the unit_flow variables being constrained: min_units_on_coefficient_in_in, min_units_on_coefficient_in_out, and min_units_on_coefficient_out_in, all of which apply to their respective constraints. Similarly, there are different parameters for setting maximum or fixed conversion rates, e.g. max_units_on_coefficient_out_out and fix_units_on_coefficient_out_out.

diff --git a/dev/concept_reference/min_up_time/index.html b/dev/concept_reference/min_up_time/index.html index 2ebb7d1a44..f85b2662b8 100644 --- a/dev/concept_reference/min_up_time/index.html +++ b/dev/concept_reference/min_up_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the min_up_time parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the period that a unit has to stay online after a startup.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commmitment restrictions, see Unit commitment.

+- · SpineOpt.jl

The definition of the min_up_time parameter will trigger the creation of the Constraint on minimum up time. It sets a lower bound on the period that a unit has to stay online after a startup.

It can be defined for a unit and will then impose restrictions on the units_on variables that represent the on- or offline status of the unit. The parameter is given as a duration value. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

For a more complete description of unit commmitment restrictions, see Unit commitment.

diff --git a/dev/concept_reference/min_voltage_angle/index.html b/dev/concept_reference/min_voltage_angle/index.html index aff5bfdb50..aed0a3d8e2 100644 --- a/dev/concept_reference/min_voltage_angle/index.html +++ b/dev/concept_reference/min_voltage_angle/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/minimum_operating_point/index.html b/dev/concept_reference/minimum_operating_point/index.html index 0ec005f2cd..969270b630 100644 --- a/dev/concept_reference/minimum_operating_point/index.html +++ b/dev/concept_reference/minimum_operating_point/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the minimum_operating_point parameter will trigger the creation of the Constraint on minimum operating point. It sets a lower bound on the value of the unit_flow variable for a unit that is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.

+- · SpineOpt.jl

The definition of the minimum_operating_point parameter will trigger the creation of the Constraint on minimum operating point. It sets a lower bound on the value of the unit_flow variable for a unit that is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not included, the aforementioned constraint will not be created, which is equivalent to choosing a value of 0.
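The restriction described above can be sketched as an inequality check. This is an illustrative Python sketch, not SpineOpt code; the function name is hypothetical.

```python
def flow_respects_minimum(unit_flow, unit_capacity,
                          minimum_operating_point, units_on):
    """Sketch of the minimum operating point restriction:
    unit_flow >= minimum_operating_point * unit_capacity * units_on,
    where minimum_operating_point is a fraction of unit_capacity
    and the bound vanishes when the unit is offline (units_on = 0)."""
    return unit_flow >= minimum_operating_point * unit_capacity * units_on
```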

diff --git a/dev/concept_reference/minimum_reserve_activation_time/index.html b/dev/concept_reference/minimum_reserve_activation_time/index.html index 443ace67e2..e120bf9605 100644 --- a/dev/concept_reference/minimum_reserve_activation_time/index.html +++ b/dev/concept_reference/minimum_reserve_activation_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parameter minimum_reserve_activation_time is the duration a reserve product needs to be online, before it can be replaced by another (slower) reserve product.

In SpineOpt, the parameter is used to model reserve provision through storages. If a storage provides reserves to a reserve node (see also is_reserve_node), one needs to ensure that the node state is sufficiently high to provide these scheduled reserves at least for the duration of the minimum_reserve_activation_time. The constraint on the minimum node state with reserve provision is triggered by the existence of the minimum_reserve_activation_time. See also Reserves.

+- · SpineOpt.jl

The parameter minimum_reserve_activation_time is the duration a reserve product needs to be online, before it can be replaced by another (slower) reserve product.

In SpineOpt, the parameter is used to model reserve provision through storages. If a storage provides reserves to a reserve node (see also is_reserve_node), one needs to ensure that the node state is sufficiently high to provide these scheduled reserves at least for the duration of the minimum_reserve_activation_time. The constraint on the minimum node state with reserve provision is triggered by the existence of the minimum_reserve_activation_time. See also Reserves.

diff --git a/dev/concept_reference/model/index.html b/dev/concept_reference/model/index.html index 99a7458b91..688a71766f 100644 --- a/dev/concept_reference/model/index.html +++ b/dev/concept_reference/model/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model object holds general information about the optimization problem at hand. Firstly, the modelling horizon, i.e. the scope of the optimization model, is specified on the model object, along with the duration of the rolling window if applicable (see also model_start, model_end and roll_forward). Secondly, the model object works as an overarching assembler: temporal_blocks and stochastic_structures become part of the optimization problem only by being linked to a model object via relationships, and the same holds for the linked nodes, connections and units. If desired, the user can also specify default temporal and stochastic structures via the designated default relationships (see e.g. model__default_temporal_block); in this case, the default temporal_block is used for nodes missing a node__temporal_block relationship. Lastly, the model object contains information about the algorithm used for solving the problem (see model_type).

+- · SpineOpt.jl

The model object holds general information about the optimization problem at hand. Firstly, the modelling horizon, i.e. the scope of the optimization model, is specified on the model object, along with the duration of the rolling window if applicable (see also model_start, model_end and roll_forward). Secondly, the model object works as an overarching assembler: temporal_blocks and stochastic_structures become part of the optimization problem only by being linked to a model object via relationships, and the same holds for the linked nodes, connections and units. If desired, the user can also specify default temporal and stochastic structures via the designated default relationships (see e.g. model__default_temporal_block); in this case, the default temporal_block is used for nodes missing a node__temporal_block relationship. Lastly, the model object contains information about the algorithm used for solving the problem (see model_type).

diff --git a/dev/concept_reference/model__default_investment_stochastic_structure/index.html b/dev/concept_reference/model__default_investment_stochastic_structure/index.html index 3f5b315908..6bd7f04582 100644 --- a/dev/concept_reference/model__default_investment_stochastic_structure/index.html +++ b/dev/concept_reference/model__default_investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__default_investment_stochastic_structure relationship can be used to set model-wide default unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships. Its main purpose is to allow users to avoid defining each relationship individually, and instead allow them to focus on defining only the exceptions. As such, any specific unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships take priority over the model__default_investment_stochastic_structure relationship.

+- · SpineOpt.jl

The model__default_investment_stochastic_structure relationship can be used to set model-wide default unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships. Its main purpose is to allow users to avoid defining each relationship individually, and instead allow them to focus on defining only the exceptions. As such, any specific unit__investment_stochastic_structure, connection__investment_stochastic_structure, and node__investment_stochastic_structure relationships take priority over the model__default_investment_stochastic_structure relationship.

diff --git a/dev/concept_reference/model__default_investment_temporal_block/index.html b/dev/concept_reference/model__default_investment_temporal_block/index.html index 046832a074..e93e870769 100644 --- a/dev/concept_reference/model__default_investment_temporal_block/index.html +++ b/dev/concept_reference/model__default_investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

model__default_investment_temporal_block is a two-dimensional relationship between a model and a temporal_block. This relationship defines the default temporal resolution and scope for all investment decisions in the model (units, connections and storages). Specifying model__default_investment_temporal_block for a model avoids the need to specify individual node__investment_temporal_block, unit__investment_temporal_block and connection__investment_temporal_block relationships. Conversely, if any of these individual relationships are defined (e.g. connection__investment_temporal_block) along with model__temporal_block, these will override model__default_investment_temporal_block.

See also Investment Optimization

+- · SpineOpt.jl

model__default_investment_temporal_block is a two-dimensional relationship between a model and a temporal_block. This relationship defines the default temporal resolution and scope for all investment decisions in the model (units, connections and storages). Specifying model__default_investment_temporal_block for a model avoids the need to specify individual node__investment_temporal_block, unit__investment_temporal_block and connection__investment_temporal_block relationships. Conversely, if any of these individual relationships are defined (e.g. connection__investment_temporal_block) along with model__temporal_block, these will override model__default_investment_temporal_block.

See also Investment Optimization

diff --git a/dev/concept_reference/model__default_stochastic_structure/index.html b/dev/concept_reference/model__default_stochastic_structure/index.html index 5a3f030e60..6723d31123 100644 --- a/dev/concept_reference/model__default_stochastic_structure/index.html +++ b/dev/concept_reference/model__default_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__default_temporal_block/index.html b/dev/concept_reference/model__default_temporal_block/index.html index 0358681c58..4740e1c6c4 100644 --- a/dev/concept_reference/model__default_temporal_block/index.html +++ b/dev/concept_reference/model__default_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__report/index.html b/dev/concept_reference/model__report/index.html index dd64e0e7b9..db833a9730 100644 --- a/dev/concept_reference/model__report/index.html +++ b/dev/concept_reference/model__report/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/model__stochastic_structure/index.html b/dev/concept_reference/model__stochastic_structure/index.html index bd16407e75..e8b7fcb8b0 100644 --- a/dev/concept_reference/model__stochastic_structure/index.html +++ b/dev/concept_reference/model__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__stochastic_structure relationship defines which stochastic_structures are active in which models. Essentially, this relationship allows for e.g. attributing multiple node__stochastic_structure relationships to a single node, and switching between them in different models. Any stochastic_structure in the model__default_stochastic_structure relationship is automatically assumed to be active in the connected model, so there's no need to include it in model__stochastic_structure separately.

+- · SpineOpt.jl

The model__stochastic_structure relationship defines which stochastic_structures are active in which models. Essentially, this relationship allows for e.g. attributing multiple node__stochastic_structure relationships to a single node, and switching between them in different models. Any stochastic_structure in the model__default_stochastic_structure relationship is automatically assumed to be active in the connected model, so there's no need to include it in model__stochastic_structure separately.

diff --git a/dev/concept_reference/model__temporal_block/index.html b/dev/concept_reference/model__temporal_block/index.html index 946cb8445f..2c4bbdc1d6 100644 --- a/dev/concept_reference/model__temporal_block/index.html +++ b/dev/concept_reference/model__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The model__temporal_block relationship is used to determine which temporal_blocks are included in a specific model. Note that defining this relationship does not yet imply that any element of the model will be governed by the specified temporal_block, for this to happen additional relationships have to be defined such as the model__default_temporal_block relationship.

+- · SpineOpt.jl

The model__temporal_block relationship is used to determine which temporal_blocks are included in a specific model. Note that defining this relationship does not yet imply that any element of the model will be governed by the specified temporal_block, for this to happen additional relationships have to be defined such as the model__default_temporal_block relationship.

diff --git a/dev/concept_reference/model_end/index.html b/dev/concept_reference/model_end/index.html index cd36093de4..8cd2b82a9f 100644 --- a/dev/concept_reference/model_end/index.html +++ b/dev/concept_reference/model_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Together with the model_start parameter, it is used to define the temporal horizon of the model. In case of a single solve optimization, the parameter marks the end of the last timestep that is possibly part of the optimization. Note that it poses an upper bound, and that the optimization does not necessarily include this timestamp when the block_end parameters are more stringent.

In case of a rolling horizon optimization, it tells the model to stop rolling forward once an optimization has been performed whose results at the indicated timestamp are kept in the final results. For example, assume a model_end value of 2030-01-01T05:00:00, a block_end of 3h, and a roll_forward of 2h. The roll_forward parameter indicates that the results of the first two hours of each optimization window are kept as final, therefore the last optimization window will span the timeframe [2030-01-01T04:00:00 - 2030-01-01T06:00:00].

A DateTime value should be chosen for this parameter.

+- · SpineOpt.jl

Together with the model_start parameter, it is used to define the temporal horizon of the model. In case of a single solve optimization, the parameter marks the end of the last timestep that is possibly part of the optimization. Note that it poses an upper bound, and that the optimization does not necessarily include this timestamp when the block_end parameters are more stringent.

In case of a rolling horizon optimization, it tells the model to stop rolling forward once an optimization has been performed whose results at the indicated timestamp are kept in the final results. For example, assume a model_end value of 2030-01-01T05:00:00, a block_end of 3h, and a roll_forward of 2h. The roll_forward parameter indicates that the results of the first two hours of each optimization window are kept as final, therefore the last optimization window will span the timeframe [2030-01-01T04:00:00 - 2030-01-01T06:00:00].

A DateTime value should be chosen for this parameter.
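The rolling in the example above can be sketched as simple date arithmetic. This is an illustrative Python sketch, not SpineOpt code; the function name is hypothetical.

```python
from datetime import datetime, timedelta

def window_starts(model_start, model_end, roll_forward):
    """Sketch: enumerate the start of each rolling optimization window.
    Rolling stops once the kept results reach model_end."""
    starts, t = [], model_start
    while t < model_end:
        starts.append(t)
        t += roll_forward
    return starts
```

With model_start 2030-01-01T00:00:00, the model_end and roll_forward values from the example yield window starts at 00:00, 02:00 and 04:00, so the last window indeed begins at 2030-01-01T04:00:00.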

diff --git a/dev/concept_reference/model_start/index.html b/dev/concept_reference/model_start/index.html index a1b5e1a90a..012957a48a 100644 --- a/dev/concept_reference/model_start/index.html +++ b/dev/concept_reference/model_start/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Together with the model_end parameter, it is used to define the temporal horizon of the model. For a single solve optimization, it marks the timestamp from which the relative offset in a temporal_block is defined by the block_start parameter. In the rolling optimization framework, it does this for the first optimization window.

A DateTime value should be chosen for this parameter.

+- · SpineOpt.jl

Together with the model_end parameter, it is used to define the temporal horizon of the model. For a single solve optimization, it marks the timestamp from which the relative offset in a temporal_block is defined by the block_start parameter. In the rolling optimization framework, it does this for the first optimization window.

A DateTime value should be chosen for this parameter.

diff --git a/dev/concept_reference/model_type/index.html b/dev/concept_reference/model_type/index.html index 8287239c29..c011762f82 100644 --- a/dev/concept_reference/model_type/index.html +++ b/dev/concept_reference/model_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter controls the low-level algorithm that SpineOpt uses to solve the underlying optimization problem. Currently three values are possible:

spineopt_standard uses the standard algorithm.

spineopt_benders uses the Benders decomposition algorithm (see Decomposition).

spineopt_mga uses the Modelling to Generate Alternatives (MGA) algorithm.

+- · SpineOpt.jl

This parameter controls the low-level algorithm that SpineOpt uses to solve the underlying optimization problem. Currently three values are possible:

spineopt_standard uses the standard algorithm.

spineopt_benders uses the Benders decomposition algorithm (see Decomposition).

spineopt_mga uses the Modelling to Generate Alternatives (MGA) algorithm.

diff --git a/dev/concept_reference/model_type_list/index.html b/dev/concept_reference/model_type_list/index.html index ad291fddb6..7e41d143f4 100644 --- a/dev/concept_reference/model_type_list/index.html +++ b/dev/concept_reference/model_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

model_type_list holds the possible values of the model_type parameter of the model object. See model_type for more details.

+- · SpineOpt.jl

model_type_list holds the possible values of the model_type parameter of the model object. See model_type for more details.

diff --git a/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html b/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html index bb411377aa..e9ff3d7d1b 100644 --- a/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html +++ b/dev/concept_reference/mp_min_res_gen_to_demand_ratio/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For investment models that are solved using the Benders algorithm (i.e., with model_type set to spineopt_benders), mp_min_res_gen_to_demand_ratio represents a lower bound on the fraction of the total system demand that must be supplied by renewable generation sources (RES).

A unit can be marked as a renewable generation source by setting is_renewable to true.

+- · SpineOpt.jl

For investment models that are solved using the Benders algorithm (i.e., with model_type set to spineopt_benders), mp_min_res_gen_to_demand_ratio represents a lower bound on the fraction of the total system demand that must be supplied by renewable generation sources (RES).

A unit can be marked as a renewable generation source by setting is_renewable to true.

diff --git a/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html b/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html index 1ce03466b5..60c77ddde3 100644 --- a/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html +++ b/dev/concept_reference/mp_min_res_gen_to_demand_ratio_slack_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A penalty for violating the mp_min_res_gen_to_demand_ratio. If set, then the lower bound on the fraction of the total system demand that must be supplied by RES becomes a 'soft' constraint. A new cost term is added to the objective, multiplying the penalty by the slack.

+- · SpineOpt.jl

A penalty for violating the mp_min_res_gen_to_demand_ratio. If set, then the lower bound on the fraction of the total system demand that must be supplied by RES becomes a 'soft' constraint. A new cost term is added to the objective, multiplying the penalty by the slack.

diff --git a/dev/concept_reference/nodal_balance_sense/index.html b/dev/concept_reference/nodal_balance_sense/index.html index df1019921d..a2e41641c7 100644 --- a/dev/concept_reference/nodal_balance_sense/index.html +++ b/dev/concept_reference/nodal_balance_sense/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

nodal_balance_sense determines whether or not a node is able to naturally consume or produce energy. The default value, ==, means that the node cannot do either, and thus it needs to be perfectly balanced. The value >= means that the node is a sink, that is, it can consume any amount of energy. The value <= means that the node is a source, that is, it can produce any amount of energy.

+- · SpineOpt.jl

nodal_balance_sense determines whether or not a node is able to naturally consume or produce energy. The default value, ==, means that the node cannot do either, and thus it needs to be perfectly balanced. The value >= means that the node is a sink, that is, it can consume any amount of energy. The value <= means that the node is a source, that is, it can produce any amount of energy.
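The three senses can be sketched as a check on a node's net injection. This is an illustrative Python sketch only; the function name and the sign convention (positive net injection = surplus) are assumptions, not SpineOpt's internal formulation.

```python
def balance_ok(net_injection, sense):
    """Sketch of nodal_balance_sense semantics (illustrative only;
    sign convention is an assumption: positive = surplus):
    '==' : node must be perfectly balanced,
    '>=' : sink   - it may consume any amount of energy,
    '<=' : source - it may produce any amount of energy."""
    if sense == "==":
        return net_injection == 0
    if sense == ">=":
        return net_injection >= 0
    return net_injection <= 0
```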

diff --git a/dev/concept_reference/node/index.html b/dev/concept_reference/node/index.html index 0d05dd0cb4..146cef8639 100644 --- a/dev/concept_reference/node/index.html +++ b/dev/concept_reference/node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node is perhaps the most important object class out of the Systemic object classes, as it is what connects the rest together via the Systemic relationship classes. Essentially, nodes act as points in the modelled commodity network where commodity balance is enforced via the node balance and node injection constraints, tying together the inputs and outputs from units and connections, as well as any external demand. Furthermore, nodes play a crucial role for defining the temporal and stochastic structures of the model via the node__temporal_block and node__stochastic_structure relationships. For more details about the Temporal Framework and the Stochastic Framework, please refer to the dedicated sections.

Since nodes act as the points where commodity balance is enforced, this also makes them a natural fit for implementing storage. The has_state parameter controls whether a node has a node_state variable, which essentially represents the commodity content of the node. The state_coeff parameter tells how the node_state variable relates to all the commodity flows. Storage losses are handled via the frac_state_loss parameter, and potential diffusion of commodity content to other nodes via the diff_coeff parameter for the node__node relationship.

+- · SpineOpt.jl

The node is perhaps the most important object class out of the Systemic object classes, as it is what connects the rest together via the Systemic relationship classes. Essentially, nodes act as points in the modelled commodity network where commodity balance is enforced via the node balance and node injection constraints, tying together the inputs and outputs from units and connections, as well as any external demand. Furthermore, nodes play a crucial role for defining the temporal and stochastic structures of the model via the node__temporal_block and node__stochastic_structure relationships. For more details about the Temporal Framework and the Stochastic Framework, please refer to the dedicated sections.

Since nodes act as the points where commodity balance is enforced, this also makes them a natural fit for implementing storage. The has_state parameter controls whether a node has a node_state variable, which essentially represents the commodity content of the node. The state_coeff parameter tells how the node_state variable relates to all the commodity flows. Storage losses are handled via the frac_state_loss parameter, and potential diffusion of commodity content to other nodes via the diff_coeff parameter for the node__node relationship.

diff --git a/dev/concept_reference/node__commodity/index.html b/dev/concept_reference/node__commodity/index.html index dc953fed49..32ef260d87 100644 --- a/dev/concept_reference/node__commodity/index.html +++ b/dev/concept_reference/node__commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__commodity is a two-dimensional relationship between a node and a commodity and specifies the commodity that flows to or from the node. Generally, since flows are not dimensioned by commodity, this has no meaning in terms of the variables and constraint equations. However, there are two specific uses for this relationship:

  1. To specify that specific network physics should apply to the network formed by the member nodes for that commodity. See powerflow.
  2. Only connection flows that are between nodes of the same or no commodity are included in the node_balance constraint.
+- · SpineOpt.jl

node__commodity is a two-dimensional relationship between a node and a commodity and specifies the commodity that flows to or from the node. Generally, since flows are not dimensioned by commodity, this has no meaning in terms of the variables and constraint equations. However, there are two specific uses for this relationship:

  1. To specify that specific network physics should apply to the network formed by the member nodes for that commodity. See powerflow.
  2. Only connection flows that are between nodes of the same or no commodity are included in the node_balance constraint.
diff --git a/dev/concept_reference/node__investment_stochastic_structure/index.html b/dev/concept_reference/node__investment_stochastic_structure/index.html index 0ef72a8b6b..6a8ca0e8e7 100644 --- a/dev/concept_reference/node__investment_stochastic_structure/index.html +++ b/dev/concept_reference/node__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node__investment_temporal_block/index.html b/dev/concept_reference/node__investment_temporal_block/index.html index abfe36f114..08d2ebb5f0 100644 --- a/dev/concept_reference/node__investment_temporal_block/index.html +++ b/dev/concept_reference/node__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__investment_temporal_block is a two-dimensional relationship between a node and a temporal_block. This relationship defines the temporal resolution and scope of a node's investment decisions (currently only storage investments). Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no node__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely if node__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified node.

See also Investment Optimization

+- · SpineOpt.jl

node__investment_temporal_block is a two-dimensional relationship between a node and a temporal_block. This relationship defines the temporal resolution and scope of a node's investment decisions (currently only storage investments). Note that in a decomposed investments problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no node__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely if node__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified node.

See also Investment Optimization

diff --git a/dev/concept_reference/node__node/index.html b/dev/concept_reference/node__node/index.html index f7daf01d60..cd747a2c93 100644 --- a/dev/concept_reference/node__node/index.html +++ b/dev/concept_reference/node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node__node relationship is used for defining direct interactions between two nodes, like diffusion of commodity content. Note that the node__node relationship is assumed to be one-directional, meaning that

node__node(node1=n1, node2=n2) != node__node(node1=n2, node2=n1).

Thus, when one wants to define symmetric relationships between two nodes, one needs to define both directions as separate relationships.

+- · SpineOpt.jl

The node__node relationship is used for defining direct interactions between two nodes, like diffusion of commodity content. Note that the node__node relationship is assumed to be one-directional, meaning that

node__node(node1=n1, node2=n2) != node__node(node1=n2, node2=n1).

Thus, when one wants to define symmetric relationships between two nodes, one needs to define both directions as separate relationships.
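The one-directional nature of the relationship can be sketched with ordered pairs. This is an illustrative Python sketch; the variable names are hypothetical.

```python
# node__node relationships are ordered pairs, so
# ("n1", "n2") and ("n2", "n1") are distinct relationships.
node__node = {("n1", "n2")}                  # only the n1 -> n2 direction
symmetric = node__node | {("n2", "n1")}      # symmetric interaction needs
                                             # both directions defined
```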

diff --git a/dev/concept_reference/node__stochastic_structure/index.html b/dev/concept_reference/node__stochastic_structure/index.html index 2cf1c47617..0b9b217966 100644 --- a/dev/concept_reference/node__stochastic_structure/index.html +++ b/dev/concept_reference/node__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node__stochastic_structure relationship defines which stochastic_structure the node uses. Essentially, it sets the stochastic_structure of all the flow variables connected to the node, as well as the potential node_state variable. Note that only one stochastic_structure can be defined per node per model, as interpreted based on the node__stochastic_structure and model__stochastic_structure relationships. Investment variables use dedicated relationships, as detailed in the Investment Optimization section.

The node__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

+- · SpineOpt.jl

The node__stochastic_structure relationship defines which stochastic_structure the node uses. Essentially, it sets the stochastic_structure of all the flow variables connected to the node, as well as the potential node_state variable. Note that only one stochastic_structure can be defined per node per model, as interpreted based on the node__stochastic_structure and model__stochastic_structure relationships. Investment variables use dedicated relationships, as detailed in the Investment Optimization section.

The node__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

diff --git a/dev/concept_reference/node__temporal_block/index.html b/dev/concept_reference/node__temporal_block/index.html index c4dbd47050..ca7956763e 100644 --- a/dev/concept_reference/node__temporal_block/index.html +++ b/dev/concept_reference/node__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This relationship links a node to a temporal_block and as such it will determine which temporal block governs the temporal horizon and resolution of the variables associated with this node. Specifically, the resolution of the temporal block will directly imply the duration of the time slices for which both the flow variables and their associated constraints are created.

For a more detailed description of how the temporal structure in SpineOpt can be created, see Temporal Framework.

+- · SpineOpt.jl

This relationship links a node to a temporal_block and as such it will determine which temporal block governs the temporal horizon and resolution of the variables associated with this node. Specifically, the resolution of the temporal block will directly imply the duration of the time slices for which both the flow variables and their associated constraints are created.

For a more detailed description of how the temporal structure in SpineOpt can be created, see Temporal Framework.

diff --git a/dev/concept_reference/node__unit_constraint/index.html b/dev/concept_reference/node__unit_constraint/index.html index e6dd34289d..037eeb2524 100644 --- a/dev/concept_reference/node__unit_constraint/index.html +++ b/dev/concept_reference/node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node__user_constraint is a two-dimensional relationship between a node and a user_constraint. The relationship specifies that a variable associated only with the node (currently only the node_state) is involved in the constraint. For example, the node_state_coefficient defined on node__user_constraint specifies the coefficient of the node's node_state variable in the specified user_constraint.

See also user_constraint

+- · SpineOpt.jl

node__user_constraint is a two-dimensional relationship between a node and a user_constraint. The relationship specifies that a variable associated only with the node (currently only the node_state) is involved in the constraint. For example, the node_state_coefficient defined on node__user_constraint specifies the coefficient of the node's node_state variable in the specified user_constraint.

See also user_constraint

diff --git a/dev/concept_reference/node_opf_type/index.html b/dev/concept_reference/node_opf_type/index.html index 308a2ec03a..51229820f9 100644 --- a/dev/concept_reference/node_opf_type/index.html +++ b/dev/concept_reference/node_opf_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node_opf_type_list/index.html b/dev/concept_reference/node_opf_type_list/index.html index a898e0358b..da59e3c8e5 100644 --- a/dev/concept_reference/node_opf_type_list/index.html +++ b/dev/concept_reference/node_opf_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Houses the different possible values for the node_opf_type parameter. To identify the reference node, set node_opf_type = :node_opf_type_reference, while node_opf_type = node_opf_type_normal is the default value for non-reference nodes.

See also powerflow.

+- · SpineOpt.jl

Houses the different possible values for the node_opf_type parameter. To identify the reference node, set node_opf_type = :node_opf_type_reference, while node_opf_type = node_opf_type_normal is the default value for non-reference nodes.

See also powerflow.

diff --git a/dev/concept_reference/node_slack_penalty/index.html b/dev/concept_reference/node_slack_penalty/index.html index e0c765c5a6..b3d78d1937 100644 --- a/dev/concept_reference/node_slack_penalty/index.html +++ b/dev/concept_reference/node_slack_penalty/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

node_slack_penalty triggers the creation of node slack variables, node_slack_pos and node_slack_neg. This allows the model to violate the node_balance constraint with these violations penalised in the objective function with a coefficient equal to node_slack_penalty. If node_slack_penalty = 0 the slack variables are created and violations are unpenalised. If set to none or undefined, the variables are not created and violation of the node_balance constraint is not possible.

+- · SpineOpt.jl

node_slack_penalty triggers the creation of node slack variables, node_slack_pos and node_slack_neg. This allows the model to violate the node_balance constraint with these violations penalised in the objective function with a coefficient equal to node_slack_penalty. If node_slack_penalty = 0 the slack variables are created and violations are unpenalised. If set to none or undefined, the variables are not created and violation of the node_balance constraint is not possible.

diff --git a/dev/concept_reference/node_state_cap/index.html b/dev/concept_reference/node_state_cap/index.html index beebeb47c5..bdf7186f7d 100644 --- a/dev/concept_reference/node_state_cap/index.html +++ b/dev/concept_reference/node_state_cap/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The node_state_cap parameter represents the maximum allowed value for the node_state variable. Note that in order for a node to have a node_state variable in the first place, the has_state parameter must be set to true. However, if the node has storage investments enabled using the candidate_storages parameter, the node_state_cap parameter acts as a coefficient for the storages_invested_available variable. Essentially, with investments, the node_state_cap parameter represents storage capacity per storage investment.

+- · SpineOpt.jl

The node_state_cap parameter represents the maximum allowed value for the node_state variable. Note that in order for a node to have a node_state variable in the first place, the has_state parameter must be set to true. However, if the node has storage investments enabled using the candidate_storages parameter, the node_state_cap parameter acts as a coefficient for the storages_invested_available variable. Essentially, with investments, the node_state_cap parameter represents storage capacity per storage investment.

diff --git a/dev/concept_reference/node_state_coefficient/index.html b/dev/concept_reference/node_state_coefficient/index.html index 0f04631b3b..84848b5c60 100644 --- a/dev/concept_reference/node_state_coefficient/index.html +++ b/dev/concept_reference/node_state_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/node_state_min/index.html b/dev/concept_reference/node_state_min/index.html index 48e3e2e562..8b158e808c 100644 --- a/dev/concept_reference/node_state_min/index.html +++ b/dev/concept_reference/node_state_min/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/number_of_units/index.html b/dev/concept_reference/number_of_units/index.html index cacc7a42ec..03c0d9467f 100644 --- a/dev/concept_reference/number_of_units/index.html +++ b/dev/concept_reference/number_of_units/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Defines how many members a certain unit object represents. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and units_unavailable, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable). It is possible to allow the model to increase the number_of_units itself through Investment Optimization, and to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 1.

+- · SpineOpt.jl

Defines how many members a certain unit object represents. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and units_unavailable, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable). It is possible to allow the model to increase the number_of_units itself through Investment Optimization, and to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 1.

diff --git a/dev/concept_reference/online_variable_type/index.html b/dev/concept_reference/online_variable_type/index.html index b3cde0a55c..44e75325cc 100644 --- a/dev/concept_reference/online_variable_type/index.html +++ b/dev/concept_reference/online_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

online_variable_type is a method parameter to model the 'commitment' or 'activation' of a unit, that is the situation where the unit becomes online and active in the system. It can take the values "unit_online_variable_type_binary", "unit_online_variable_type_integer", "unit_online_variable_type_linear" and "unit_online_variable_type_none".

If unit_online_variable_type_binary, then the commitment is modelled as an online/offline decision (classic unit commitment).

If unit_online_variable_type_integer, then the commitment is modelled as the number of units that are online (clustered unit commitment).

If unit_online_variable_type_linear, then the commitment is modelled as the number of units that are online, but here it is also possible to activate 'fractions' of a unit. This should reduce the computational burden compared to unit_online_variable_type_integer.

If unit_online_variable_type_none, then the commitment is not modelled at all and the unit is assumed to be always online. This reduces the computational burden the most.

+- · SpineOpt.jl

online_variable_type is a method parameter to model the 'commitment' or 'activation' of a unit, that is the situation where the unit becomes online and active in the system. It can take the values "unit_online_variable_type_binary", "unit_online_variable_type_integer", "unit_online_variable_type_linear" and "unit_online_variable_type_none".

If unit_online_variable_type_binary, then the commitment is modelled as an online/offline decision (classic unit commitment).

If unit_online_variable_type_integer, then the commitment is modelled as the number of units that are online (clustered unit commitment).

If unit_online_variable_type_linear, then the commitment is modelled as the number of units that are online, but here it is also possible to activate 'fractions' of a unit. This should reduce the computational burden compared to unit_online_variable_type_integer.

If unit_online_variable_type_none, then the commitment is not modelled at all and the unit is assumed to be always online. This reduces the computational burden the most.

diff --git a/dev/concept_reference/operating_cost/index.html b/dev/concept_reference/operating_cost/index.html index a8f0bd20e7..2ca1d0c3c4 100644 --- a/dev/concept_reference/operating_cost/index.html +++ b/dev/concept_reference/operating_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the operating_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for operating costs associated with that unit over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the operating_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for operating costs associated with that unit over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/operating_points/index.html b/dev/concept_reference/operating_points/index.html index 0dbd0cc7de..4aaa0c9e80 100644 --- a/dev/concept_reference/operating_points/index.html +++ b/dev/concept_reference/operating_points/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If operating_points is defined as an array type on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of operating segment variables, unit_flow_op, one for each operating segment, with an additional index, i, to reference the specific operating segment. Each value in the array represents the upper bound of the operating segment, normalized on the unit_capacity of the corresponding unit__to_node or unit__from_node flow. operating_points is used in conjunction with unit_incremental_heat_rate, where the array dimensions must match, to define the normalized operating point bounds for the corresponding incremental heat rate. operating_points is also used in conjunction with user_constraint, where the array dimension must match that of any corresponding piecewise linear unit_flow_coefficient; here operating_points likewise defines the normalized operating point bounds for the corresponding unit_flow_coefficients.

Note that operating_points is defined on a capacity-normalized basis and the values represent the upper bound of the corresponding operating segment variable. So if operating_points is specified as [0.5, 1], this creates two operating segments, one from zero to 50% of the corresponding unit_capacity and a second from 50% to 100% of the corresponding unit_capacity.

+- · SpineOpt.jl

If operating_points is defined as an array type on a certain unit__to_node or unit__from_node flow, the corresponding unit_flow variable is decomposed into a number of operating segment variables, unit_flow_op, one for each operating segment, with an additional index, i, to reference the specific operating segment. Each value in the array represents the upper bound of the operating segment, normalized on the unit_capacity of the corresponding unit__to_node or unit__from_node flow. operating_points is used in conjunction with unit_incremental_heat_rate, where the array dimensions must match, to define the normalized operating point bounds for the corresponding incremental heat rate. operating_points is also used in conjunction with user_constraint, where the array dimension must match that of any corresponding piecewise linear unit_flow_coefficient; here operating_points likewise defines the normalized operating point bounds for the corresponding unit_flow_coefficients.

Note that operating_points is defined on a capacity-normalized basis and the values represent the upper bound of the corresponding operating segment variable. So if operating_points is specified as [0.5, 1], this creates two operating segments, one from zero to 50% of the corresponding unit_capacity and a second from 50% to 100% of the corresponding unit_capacity.
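The normalization described above can be made concrete with a short sketch. This is illustrative only, not SpineOpt code; the function and variable names are hypothetical:

```python
def segment_bounds(operating_points, unit_capacity):
    """Turn normalized operating points into absolute (lower, upper)
    segment bounds, as the text describes: each array value is the
    normalized upper bound of one operating segment."""
    bounds = []
    lower = 0.0
    for p in operating_points:
        upper = p * unit_capacity
        bounds.append((lower, upper))
        lower = upper
    return bounds

# [0.5, 1] with a 100 MW unit_capacity yields two segments:
print(segment_bounds([0.5, 1], 100))  # [(0.0, 50.0), (50.0, 100)]
```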

diff --git a/dev/concept_reference/ordered_unit_flow_op/index.html b/dev/concept_reference/ordered_unit_flow_op/index.html index 725427cd5d..34347eb9b6 100644 --- a/dev/concept_reference/ordered_unit_flow_op/index.html +++ b/dev/concept_reference/ordered_unit_flow_op/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If one defines the parameter ordered_unit_flow_op on a unit__from_node or unit__to_node relationship, SpineOpt will create the variable unit_flow_op_active to order each unit_flow_op of the unit_flow according to the rank of the defined operating_points. This setting is only necessary when the segmental unit_flow_ops have increasing conversion efficiency. The numerical type of unit_flow_op_active (float, binary, or integer) follows that of the variable units_on, which can be set via the parameter online_variable_type.

Note that this functionality is based on SOS2 constraints so only a MILP configuration, i.e. make variable unit_flow_op_active a binary or integer, guarantees correct performance.

+- · SpineOpt.jl

If one defines the parameter ordered_unit_flow_op on a unit__from_node or unit__to_node relationship, SpineOpt will create the variable unit_flow_op_active to order each unit_flow_op of the unit_flow according to the rank of the defined operating_points. This setting is only necessary when the segmental unit_flow_ops have increasing conversion efficiency. The numerical type of unit_flow_op_active (float, binary, or integer) follows that of the variable units_on, which can be set via the parameter online_variable_type.

Note that this functionality is based on SOS2 constraints so only a MILP configuration, i.e. make variable unit_flow_op_active a binary or integer, guarantees correct performance.

diff --git a/dev/concept_reference/outage_variable_type/index.html b/dev/concept_reference/outage_variable_type/index.html index 1018104ad4..82f8c26dd1 100644 --- a/dev/concept_reference/outage_variable_type/index.html +++ b/dev/concept_reference/outage_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

outage_variable_type is a method parameter to model the 'commitment' or 'activation' of unit maintenance outages.

To schedule maintenance outages, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none deactivates the units_out_of_service variable; this is the default value.

+- · SpineOpt.jl

outage_variable_type is a method parameter to model the 'commitment' or 'activation' of unit maintenance outages.

To schedule maintenance outages, one must activate the units_out_of_service variable. This is done by changing the value of outage_variable_type to online_variable_type_integer (for clustered units), online_variable_type_binary (for binary units), or unit_online_variable_type_linear (for continuous units). Setting outage_variable_type to online_variable_type_none deactivates the units_out_of_service variable; this is the default value.

diff --git a/dev/concept_reference/output/index.html b/dev/concept_reference/output/index.html index 3344687854..b0bff10826 100644 --- a/dev/concept_reference/output/index.html +++ b/dev/concept_reference/output/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

An output is essentially a handle for a SpineOpt variable or the Objective function, to be included in a report and written into an output database. Typically, the unit_flow variables, for example, are desired as output from most models, so creating an output object called unit_flow allows one to designate it as something to be written in the desired report. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

+- · SpineOpt.jl

An output is essentially a handle for a SpineOpt variable or the Objective function, to be included in a report and written into an output database. Typically, the unit_flow variables, for example, are desired as output from most models, so creating an output object called unit_flow allows one to designate it as something to be written in the desired report. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

diff --git a/dev/concept_reference/output_db_url/index.html b/dev/concept_reference/output_db_url/index.html index 87e430a5c3..51a2a0d071 100644 --- a/dev/concept_reference/output_db_url/index.html +++ b/dev/concept_reference/output_db_url/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The output_db_url parameter is the URL of the database where the results of the model run are written. It overrides the value of the second argument passed to run_spineopt.

+- · SpineOpt.jl

The output_db_url parameter is the URL of the database where the results of the model run are written. It overrides the value of the second argument passed to run_spineopt.

diff --git a/dev/concept_reference/output_resolution/index.html b/dev/concept_reference/output_resolution/index.html index d2b42db2e5..969786966b 100644 --- a/dev/concept_reference/output_resolution/index.html +++ b/dev/concept_reference/output_resolution/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The output_resolution parameter indicates the resolution at which output values should be reported.

If null (the default), then results are reported at the highest available resolution from the model. If output_resolution is a duration value, then results are aggregated at that resolution before being reported. At the moment, the aggregation is simply performed by taking the average value.

+- · SpineOpt.jl

The output_resolution parameter indicates the resolution at which output values should be reported.

If null (the default), then results are reported at the highest available resolution from the model. If output_resolution is a duration value, then results are aggregated at that resolution before being reported. At the moment, the aggregation is simply performed by taking the average value.
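The averaging aggregation described above can be sketched as follows. This is an illustration of the concept, not SpineOpt code; the function name is hypothetical:

```python
def aggregate_by_average(values, factor):
    """Average consecutive groups of `factor` values, e.g. collapsing
    four quarter-hourly values into one hourly value."""
    return [sum(values[i:i + factor]) / factor
            for i in range(0, len(values), factor)]

quarter_hourly = [10, 20, 30, 40, 50, 50, 50, 50]
# Reported at hourly output_resolution: the mean of each group of four.
print(aggregate_by_average(quarter_hourly, 4))  # [25.0, 50.0]
```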

diff --git a/dev/concept_reference/overwrite_results_on_rolling/index.html b/dev/concept_reference/overwrite_results_on_rolling/index.html index 6cd60f13ed..39edee41da 100644 --- a/dev/concept_reference/overwrite_results_on_rolling/index.html +++ b/dev/concept_reference/overwrite_results_on_rolling/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The overwrite_results_on_rolling parameter allows one to define whether or not results from further optimisation windows should overwrite those from previous ones. This, of course, is relevant only if optimisation windows overlap, which in turn happens whenever a temporal_block goes beyond the end of the window.

If true (the default) then results are written as a time-series. If false, then results are written as a map from analysis time (i.e., the window start) to time-series.

+- · SpineOpt.jl

The overwrite_results_on_rolling parameter allows one to define whether or not results from further optimisation windows should overwrite those from previous ones. This, of course, is relevant only if optimisation windows overlap, which in turn happens whenever a temporal_block goes beyond the end of the window.

If true (the default) then results are written as a time-series. If false, then results are written as a map from analysis time (i.e., the window start) to time-series.
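A minimal sketch of the overwriting behaviour for overlapping windows, with hypothetical names (timestamps as integers for brevity; this is not how SpineOpt stores results internally):

```python
def merge_windows(window_results, overwrite=True):
    """Combine per-window result series {timestamp: value}. With
    overwrite=True, later windows replace values at overlapping
    timestamps; with overwrite=False, earlier values are kept."""
    merged = {}
    for results in window_results:
        for t, v in results.items():
            if overwrite or t not in merged:
                merged[t] = v
    return merged

w1 = {1: 10.0, 2: 11.0, 3: 12.0}  # window 1 covers t = 1..3
w2 = {3: 12.5, 4: 13.0}           # window 2 overlaps window 1 at t = 3
print(merge_windows([w1, w2]))    # value at t = 3 comes from window 2
```

With overwrite_results_on_rolling set to false, SpineOpt instead keeps both values, distinguished by the analysis time (window start) of each window.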

diff --git a/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html b/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html index 611b345415..92b7c215fb 100644 --- a/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html +++ b/dev/concept_reference/parent_stochastic_scenario__child_stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The parent_stochastic_scenario__child_stochastic_scenario relationship defines how the individual stochastic_scenarios are related to each other, forming what is referred to as the stochastic directed acyclic graph (DAG) in the Stochastic Framework section. It acts as a sort of basis for the stochastic_structures, but doesn't contain any Parameters necessary for describing how it relates to the Temporal Framework or the Objective function.

The parent_stochastic_scenario__child_stochastic_scenario relationship and the stochastic DAG it forms are crucial for Constraint generation with stochastic path indexing. Every finite stochastic DAG has a limited number of unique ways of traversing it, called full stochastic paths, which are used when determining how many different constraints need to be generated over time periods where stochastic_structures branch or converge, or when generating constraints involving different stochastic_structures. See the Stochastic Framework section for more information.

+- · SpineOpt.jl

The parent_stochastic_scenario__child_stochastic_scenario relationship defines how the individual stochastic_scenarios are related to each other, forming what is referred to as the stochastic directed acyclic graph (DAG) in the Stochastic Framework section. It acts as a sort of basis for the stochastic_structures, but doesn't contain any Parameters necessary for describing how it relates to the Temporal Framework or the Objective function.

The parent_stochastic_scenario__child_stochastic_scenario relationship and the stochastic DAG it forms are crucial for Constraint generation with stochastic path indexing. Every finite stochastic DAG has a limited number of unique ways of traversing it, called full stochastic paths, which are used when determining how many different constraints need to be generated over time periods where stochastic_structures branch or converge, or when generating constraints involving different stochastic_structures. See the Stochastic Framework section for more information.
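The notion of full stochastic paths can be illustrated with a tiny sketch. The DAG and scenario names below are invented for illustration; only the traversal idea comes from the text:

```python
# Hypothetical stochastic DAG: parent scenario -> list of child scenarios.
dag = {
    "realization": ["forecast_a", "forecast_b"],
    "forecast_a": ["converged"],
    "forecast_b": ["converged"],
    "converged": [],
}

def full_stochastic_paths(dag, root):
    """Enumerate every root-to-leaf traversal of the DAG, i.e. the
    'full stochastic paths' used for stochastic path indexing."""
    children = dag[root]
    if not children:
        return [[root]]
    return [[root] + path
            for child in children
            for path in full_stochastic_paths(dag, child)]

# Two full paths: one through each branching forecast scenario.
print(full_stochastic_paths(dag, "realization"))
```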

diff --git a/dev/concept_reference/ramp_down_limit/index.html b/dev/concept_reference/ramp_down_limit/index.html index 5e8e9a871d..a99117a78d 100644 --- a/dev/concept_reference/ramp_down_limit/index.html +++ b/dev/concept_reference/ramp_down_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the ramp_down_limit parameter limits the maximum decrease in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

+- · SpineOpt.jl

The definition of the ramp_down_limit parameter limits the maximum decrease in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

diff --git a/dev/concept_reference/ramp_up_limit/index.html b/dev/concept_reference/ramp_up_limit/index.html index 28f81c0b01..602cd7fe9d 100644 --- a/dev/concept_reference/ramp_up_limit/index.html +++ b/dev/concept_reference/ramp_up_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the ramp_up_limit parameter limits the maximum increase in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.

+- · SpineOpt.jl

The definition of the ramp_up_limit parameter limits the maximum increase in the unit_flow over a period of time of one duration_unit whenever the unit is online.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

For a more complete description of how ramping restrictions can be implemented, see Ramping.
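The arithmetic of the limit can be sketched as a simple check over a flow profile. This is a conceptual illustration with hypothetical names, not the SpineOpt constraint itself:

```python
def ramp_up_violations(flows, ramp_up_limit, unit_capacity):
    """Return the time steps t at which flow[t] - flow[t-1] exceeds the
    allowed ramp: ramp_up_limit (a fraction of unit_capacity) per
    duration_unit, as described in the text."""
    max_step = ramp_up_limit * unit_capacity
    return [t for t in range(1, len(flows))
            if flows[t] - flows[t - 1] > max_step + 1e-9]

flows = [0, 30, 80, 90]  # MW, one value per duration_unit
# With ramp_up_limit = 0.4 and unit_capacity = 100, the allowed step is
# 40 MW, so the 30 -> 80 increase at t = 2 is a violation.
print(ramp_up_violations(flows, 0.4, 100))  # [2]
```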

diff --git a/dev/concept_reference/report/index.html b/dev/concept_reference/report/index.html index 938fdbfc4a..74ece3091c 100644 --- a/dev/concept_reference/report/index.html +++ b/dev/concept_reference/report/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A report is essentially a group of outputs from a model, that gets written into the output database as a result of running SpineOpt. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

+- · SpineOpt.jl

A report is essentially a group of outputs from a model, that gets written into the output database as a result of running SpineOpt. Note that unless appropriate model__report and report__output relationships are defined, SpineOpt doesn't write any output!

diff --git a/dev/concept_reference/report__output/index.html b/dev/concept_reference/report__output/index.html index 2b90ea8571..ba81457355 100644 --- a/dev/concept_reference/report__output/index.html +++ b/dev/concept_reference/report__output/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/representative_periods_mapping/index.html b/dev/concept_reference/representative_periods_mapping/index.html index 467dbe54da..3f0e3bbc6a 100644 --- a/dev/concept_reference/representative_periods_mapping/index.html +++ b/dev/concept_reference/representative_periods_mapping/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Specifies the names of temporal_block objects to use as representative periods for certain time ranges. This instructs the model to define operational variables only for those representative periods, and to map variables from normal periods to representative ones. The idea behind this is to reduce the size of the problem by using a reduced set of variables, when one knows that a reduced set of time periods can be representative of a larger one.

Note that only operational variables other than node_state are sensitive to this parameter. In other words, the model always creates node_state variables and investment variables for all time periods, regardless of whether representative_periods_mapping is specified for any temporal_block.

To use representative periods in your model, do the following:

  1. Define one temporal_block for the 'normal' periods as you would do if you weren't using representative periods.
  2. Define a set of temporal_block objects, each corresponding to one representative period.
  3. Specify representative_periods_mapping for the 'normal' temporal_block as a map, from consecutive date-time values to the name of a representative temporal_block.
  4. Associate all the above temporal_block objects to elements in your model (e.g., via node__temporal_block and/or units_on__temporal_block relationships), to map their operational variables from normal periods, to the variable from the representative period.

See also Representative days with seasonal storages.

+- · SpineOpt.jl

Specifies the names of temporal_block objects to use as representative periods for certain time ranges. This instructs the model to define operational variables only for those representative periods, and to map variables from normal periods to representative ones. The idea behind this is to reduce the size of the problem by using a reduced set of variables, when one knows that a reduced set of time periods can be representative of a larger one.

Note that only operational variables other than node_state are sensitive to this parameter. In other words, the model always creates node_state variables and investment variables for all time periods, regardless of whether representative_periods_mapping is specified for any temporal_block.

To use representative periods in your model, do the following:

  1. Define one temporal_block for the 'normal' periods as you would do if you weren't using representative periods.
  2. Define a set of temporal_block objects, each corresponding to one representative period.
  3. Specify representative_periods_mapping for the 'normal' temporal_block as a map, from consecutive date-time values to the name of a representative temporal_block.
  4. Associate all the above temporal_block objects to elements in your model (e.g., via node__temporal_block and/or units_on__temporal_block relationships), to map their operational variables from normal periods, to the variable from the representative period.

See also Representative days with seasonal storages.
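The map-from-date-times lookup in step 3 can be sketched as follows. The mapping semantics assumed here (each key marks where a representative block's coverage begins) are an illustration, and all names are hypothetical:

```python
from datetime import datetime

# Hypothetical map: consecutive date-times -> representative temporal_block name.
representative_periods_mapping = {
    datetime(2024, 1, 1): "winter_day",
    datetime(2024, 4, 1): "spring_day",
    datetime(2024, 7, 1): "summer_day",
}

def representative_block(t):
    """Find the representative block governing time t: the entry with
    the latest start time not after t (assumed semantics)."""
    chosen = None
    for start in sorted(representative_periods_mapping):
        if start <= t:
            chosen = start
        else:
            break
    return representative_periods_mapping[chosen] if chosen else None

print(representative_block(datetime(2024, 5, 15)))  # spring_day
```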

diff --git a/dev/concept_reference/reserve_procurement_cost/index.html b/dev/concept_reference/reserve_procurement_cost/index.html index 08690d2e44..fcd83c813d 100644 --- a/dev/concept_reference/reserve_procurement_cost/index.html +++ b/dev/concept_reference/reserve_procurement_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the reserve_procurement_cost parameter for a specific unit__to_node or unit__from_node relationship, a cost term will be added to the objective function whenever that unit is used over the course of the operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the reserve_procurement_cost parameter for a specific unit__to_node or unit__from_node relationship, a cost term will be added to the objective function whenever that unit is used over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/resolution/index.html b/dev/concept_reference/resolution/index.html index 9ddaad4141..b2f0237098 100644 --- a/dev/concept_reference/resolution/index.html +++ b/dev/concept_reference/resolution/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter specifies the resolution of the temporal block, or in other words: the length of the timesteps used in the optimization run. Generally speaking, variables and constraints are generated for each timestep of an optimization. For example, the nodal balance constraint must hold for each timestep.

An array of duration values can be used to have a resolution that varies with time itself. It can for example be used when uncertainty in one of the inputs rises as the optimization moves away from the model start. Think of a forecast of for instance wind power generation, which might be available in quarter hourly detail for one day in the future, and in hourly detail for the next two days. It is possible to take a quarter hourly resolution for the full horizon of three days. However, by lowering the temporal resolution after the first day, the computational burden is lowered substantially.
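The forecast example can be made concrete with a small sketch (plain Python, not the SpineOpt API; it assumes, for illustration only, that the array lists one duration per generated timestep):

```python
# Illustrative sketch: a three-day horizon covered quarter-hourly for the
# first day and hourly for the remaining two days.
from datetime import timedelta

resolution = [timedelta(minutes=15)] * 96 + [timedelta(hours=1)] * 48

horizon = sum(resolution, timedelta())
assert horizon == timedelta(days=3)

# 144 timesteps instead of the 288 that a uniform quarter-hourly
# resolution would need over the same horizon.
print(len(resolution))
```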

diff --git a/dev/concept_reference/right_hand_side/index.html b/dev/concept_reference/right_hand_side/index.html index bf11e2f18b..d3ebeef78f 100644 --- a/dev/concept_reference/right_hand_side/index.html +++ b/dev/concept_reference/right_hand_side/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/roll_forward/index.html b/dev/concept_reference/roll_forward/index.html index cced9938f0..41d33c9c37 100644 --- a/dev/concept_reference/roll_forward/index.html +++ b/dev/concept_reference/roll_forward/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter defines how much the optimization window rolls forward in a rolling horizon optimization and should be expressed as a duration. In a rolling horizon optimization, the model is split in windows that are optimized iteratively; roll_forward indicates how much the window should roll forward after each iteration. Overlap between consecutive optimization windows is possible. In the practical approaches presented in Temporal Framework, the rolling window optimization will be explained in more detail. The default value of this parameter is the entire model time horizon, which leads to a single optimization for the entire time horizon.

In case you want your model to roll a different amount of time after each iteration, you can specify an array of durations for roll_forward. The i-th position in this array indicates how much the model should roll after iteration i. This allows you to perform a rolling horizon optimization over a selection of disjoint representative periods as if they were contiguous.
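A minimal sketch of how such an array moves the window (plain Python, not the SpineOpt API; the window length and roll amounts are made-up numbers):

```python
# Illustrative sketch: an array of roll_forward durations applied after
# each iteration of a rolling horizon optimization, so that disjoint
# periods are visited as if they were contiguous.
from datetime import datetime, timedelta

model_start = datetime(2030, 1, 1)
window_length = timedelta(days=1)
# Roll one week after the first iteration, thirty days after the second.
roll_forward = [timedelta(days=7), timedelta(days=30)]

windows = []
start = model_start
for roll in [timedelta()] + roll_forward:
    start += roll
    windows.append((start, start + window_length))

print(windows)  # three one-day windows: 1 Jan, 8 Jan, 7 Feb
```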

diff --git a/dev/concept_reference/shut_down_cost/index.html b/dev/concept_reference/shut_down_cost/index.html index 2d6316e360..a7c70f7a15 100644 --- a/dev/concept_reference/shut_down_cost/index.html +++ b/dev/concept_reference/shut_down_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the shut_down_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit shuts down over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/shut_down_limit/index.html b/dev/concept_reference/shut_down_limit/index.html index 3ed2f2fc79..1de320523e 100644 --- a/dev/concept_reference/shut_down_limit/index.html +++ b/dev/concept_reference/shut_down_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the shut_down_limit parameter sets an upper bound on the unit_flow variable for the timestep right before a shutdown.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.
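As a tiny numeric illustration (made-up numbers, not the SpineOpt API), the bound imposed on unit_flow in the timestep right before a shutdown is simply the fraction times the capacity:

```python
# Illustrative arithmetic: shut_down_limit is a fraction of unit_capacity.
unit_capacity = 400.0   # MW
shut_down_limit = 0.5   # fraction of capacity

max_flow_before_shutdown = shut_down_limit * unit_capacity
print(max_flow_before_shutdown)  # 200.0 MW
```

The analogous start_up_limit parameter bounds the flow in the timestep right after a startup in the same way.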

diff --git a/dev/concept_reference/start_up_cost/index.html b/dev/concept_reference/start_up_cost/index.html index f7ba1c46ee..57066dde77 100644 --- a/dev/concept_reference/start_up_cost/index.html +++ b/dev/concept_reference/start_up_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the start_up_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit starts up over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/start_up_limit/index.html b/dev/concept_reference/start_up_limit/index.html index a82d6c6453..909da3e334 100644 --- a/dev/concept_reference/start_up_limit/index.html +++ b/dev/concept_reference/start_up_limit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The definition of the start_up_limit parameter sets an upper bound on the unit_flow variable for the timestep right after a startup.

It can be defined for unit__to_node or unit__from_node relationships, as well as their counterparts for node groups. It will then impose restrictions on the unit_flow variables that indicate flows between the two members of the relationship for which the parameter is defined. The parameter is given as a fraction of the unit_capacity parameter. When the parameter is not specified, the limit will not be imposed, which is equivalent to choosing a value of 1.

diff --git a/dev/concept_reference/state_coeff/index.html b/dev/concept_reference/state_coeff/index.html index 6fa44fefa5..a31956956f 100644 --- a/dev/concept_reference/state_coeff/index.html +++ b/dev/concept_reference/state_coeff/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The state_coeff parameter acts as a coefficient for the node_state variable in the node injection constraint. Essentially, it tells how the node_state variable should be treated in relation to the commodity flows and demand, and can be used for e.g. scaling or unit conversions. For most use-cases a state_coeff parameter value of 1.0 should suffice, e.g. having a MWh storage connected to MW flows in a model with hour as the basic unit of time.

Note that in order for the state_coeff parameter to have an impact, the node must first have a node_state variable to begin with, defined using the has_state parameter. By default, the state_coeff is set to zero as a precaution, so that the user always has to set its value explicitly for it to have an impact on the model.
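A schematic illustration of the scaling role of state_coeff (a simplified balance in plain Python, not SpineOpt's exact node injection constraint):

```python
# Illustrative sketch: with a MWh storage, MW flows and hourly timesteps,
# state_coeff = 1.0 keeps the units consistent between node_state and the
# commodity flows.
state_coeff = 1.0    # scaling between node_state and flow units
dt_hours = 1.0       # basic unit of time: one hour
net_inflow = 50.0    # MW flowing into the node over this timestep

state_prev = 200.0   # MWh stored at the previous timestep
state_new = state_prev + net_inflow * dt_hours / state_coeff
print(state_new)     # 250.0 MWh
```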

diff --git a/dev/concept_reference/stochastic_scenario/index.html b/dev/concept_reference/stochastic_scenario/index.html index a295740dfa..cc47cd360a 100644 --- a/dev/concept_reference/stochastic_scenario/index.html +++ b/dev/concept_reference/stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Essentially, a stochastic_scenario is a label for an alternative period of time, describing one possibility of what might come to pass. They are the basic building blocks of the scenario-based Stochastic Framework in SpineOpt.jl, but aren't really meaningful on their own. Only when combined into a stochastic_structure using the stochastic_structure__stochastic_scenario and parent_stochastic_scenario__child_stochastic_scenario relationships, along with Parameters like the weight_relative_to_parents and stochastic_scenario_end, do they become meaningful.

diff --git a/dev/concept_reference/stochastic_scenario_end/index.html b/dev/concept_reference/stochastic_scenario_end/index.html index 275a198da3..0289012d33 100644 --- a/dev/concept_reference/stochastic_scenario_end/index.html +++ b/dev/concept_reference/stochastic_scenario_end/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_scenario_end is a Duration-type parameter, defining when a stochastic_scenario ends relative to the start of the current optimization. As it is a parameter for the stochastic_structure__stochastic_scenario relationship, different stochastic_structures can have different values for the same stochastic_scenario, making it possible to define slightly different stochastic_structures using the same stochastic_scenarios. See the Stochastic Framework section for more information about how different stochastic_structures interact in SpineOpt.jl.

When a stochastic_scenario ends at the point in time defined by the stochastic_scenario_end parameter, it spawns its children according to the parent_stochastic_scenario__child_stochastic_scenario relationship. Note that the children will be inherently assumed to belong to the same stochastic_structure their parent belonged to, even without explicit stochastic_structure__stochastic_scenario relationships! Thus, you might need to define the weight_relative_to_parents parameter for the children.

If no stochastic_scenario_end is defined, the stochastic_scenario is assumed to go on indefinitely.
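The spawning behaviour can be sketched as follows (plain Python, not the SpineOpt API; the scenario names and durations are made up):

```python
# Illustrative sketch: a two-stage tree in which a 'realization' scenario
# ends one day into the optimization and spawns two children; the children
# have no stochastic_scenario_end and so go on indefinitely.
from datetime import timedelta

stochastic_scenario_end = {"realization": timedelta(days=1)}
children = {"realization": ["forecast_a", "forecast_b"]}

def active_scenarios(offset, root="realization"):
    """Scenarios active at a given offset from the optimization start."""
    end = stochastic_scenario_end.get(root)
    if end is None or offset < end:
        return [root]
    # Past its end, the scenario is replaced by its children.
    return [s for c in children.get(root, []) for s in active_scenarios(offset, c)]

print(active_scenarios(timedelta(hours=12)))   # ['realization']
print(active_scenarios(timedelta(hours=36)))   # ['forecast_a', 'forecast_b']
```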

diff --git a/dev/concept_reference/stochastic_structure/index.html b/dev/concept_reference/stochastic_structure/index.html index f74ca18507..e0235116ab 100644 --- a/dev/concept_reference/stochastic_structure/index.html +++ b/dev/concept_reference/stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_structure is the key component of the scenario-based Stochastic Framework in SpineOpt.jl, and essentially represents a group of stochastic_scenarios with set Parameters. The stochastic_structure__stochastic_scenario relationship defines which stochastic_scenarios are included in which stochastic_structures, and the weight_relative_to_parents and stochastic_scenario_end Parameters define the exact shape and impact of the stochastic_structure, along with the parent_stochastic_scenario__child_stochastic_scenario relationship.

The main reason stochastic_structures are so important is that they act as handles connecting the Stochastic Framework to the modelled system. This is handled using the Structural relationship classes, e.g. node__stochastic_structure, which define the stochastic_structure applied to each object describing the modelled system. Connecting each system object to the appropriate stochastic_structure individually can be a bit bothersome at times, so there are also a number of convenience Meta relationship classes, like the model__default_stochastic_structure, which allow setting model-wide defaults to be used whenever specific definitions are missing.

diff --git a/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html b/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html index 0e1c398857..4d818b5251 100644 --- a/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html +++ b/dev/concept_reference/stochastic_structure__stochastic_scenario/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The stochastic_structure__stochastic_scenario relationship defines which stochastic_scenarios are included in which stochastic_structure, as well as holds the stochastic_scenario_end and weight_relative_to_parents Parameters defining how the stochastic_structure interacts with the Temporal Framework and the Objective function. Along with parent_stochastic_scenario__child_stochastic_scenario, this relationship is used to define the exact properties of each stochastic_structure, which are then applied to the objects describing the modelled system according to the Structural relationship classes, like the node__stochastic_structure relationship.

diff --git a/dev/concept_reference/storage_investment_cost/index.html b/dev/concept_reference/storage_investment_cost/index.html index e757f86eed..e75c7759a1 100644 --- a/dev/concept_reference/storage_investment_cost/index.html +++ b/dev/concept_reference/storage_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the storage_investment_cost parameter for a specific node, a cost term will be added to the objective function whenever a storage investment is made during the current optimization window.

diff --git a/dev/concept_reference/storage_investment_lifetime/index.html b/dev/concept_reference/storage_investment_lifetime/index.html index 726c08b20c..93a8182cb7 100644 --- a/dev/concept_reference/storage_investment_lifetime/index.html +++ b/dev/concept_reference/storage_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A Duration-type parameter that determines the minimum duration of storage investment decisions. Once a storage has been invested-in, it must remain invested-in for storage_investment_lifetime. Note that storage_investment_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimization in each step, which may impact performance.

See also Investment Optimization and candidate_storages

diff --git a/dev/concept_reference/storage_investment_variable_type/index.html b/dev/concept_reference/storage_investment_variable_type/index.html index cb693926b8..11dbac99d4 100644 --- a/dev/concept_reference/storage_investment_variable_type/index.html +++ b/dev/concept_reference/storage_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investment problem, storage_investment_variable_type determines the storage investment decision variable type. Since a node's node_state will be limited to the product of the investment variable and the corresponding node_state_cap, and since candidate_storages represents the upper bound of the storage investment decision variable, storage_investment_variable_type thus determines what the investment decision represents. If storage_investment_variable_type is integer or binary, then candidate_storages represents the maximum number of discrete storages that may be invested-in. If storage_investment_variable_type is continuous, candidate_storages is more analogous to a capacity, with node_state_cap being analogous to a scaling parameter. For example, if storage_investment_variable_type = integer, candidate_storages = 4 and node_state_cap = 1000 MWh, then the investment decision is how many 1000 MWh storages to build. If storage_investment_variable_type = continuous, candidate_storages = 1000 and node_state_cap = 1 MWh, then the investment decision is how much storage capacity to build. Finally, if storage_investment_variable_type = integer, candidate_storages = 10 and node_state_cap = 100 MWh, then the investment decision is how many 100 MWh storage blocks to build.

See also Investment Optimization and candidate_storages.
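The three worked examples above boil down to the same arithmetic (a hypothetical helper in plain Python, not the SpineOpt API):

```python
# Illustrative sketch: installed storage capacity is the investment
# variable times node_state_cap, with candidate_storages bounding the
# investment variable.
def installed_capacity(storages_invested, node_state_cap, candidate_storages):
    assert 0 <= storages_invested <= candidate_storages
    return storages_invested * node_state_cap

# integer variable: build 4 discrete 1000-MWh storages
print(installed_capacity(4, 1000.0, 4))       # 4000.0 MWh
# continuous variable: node_state_cap acts as a scaling parameter
print(installed_capacity(350.5, 1.0, 1000))   # 350.5 MWh
# integer variable: how many 100-MWh blocks to build
print(installed_capacity(7, 100.0, 10))       # 700.0 MWh
```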

diff --git a/dev/concept_reference/storages_invested_avaiable_coefficient/index.html b/dev/concept_reference/storages_invested_avaiable_coefficient/index.html index 590af24f8d..3ec2cfbd2d 100644 --- a/dev/concept_reference/storages_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/storages_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/storages_invested_big_m_mga/index.html b/dev/concept_reference/storages_invested_big_m_mga/index.html index c5bb8e04ac..33837e4b36 100644 --- a/dev/concept_reference/storages_invested_big_m_mga/index.html +++ b/dev/concept_reference/storages_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The storages_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M should always be chosen sufficiently large. (Typically, a value equivalent to candidate_storages could suffice.)

diff --git a/dev/concept_reference/storages_invested_coefficient/index.html b/dev/concept_reference/storages_invested_coefficient/index.html index a20a38c2c7..ba6c5b4c1d 100644 --- a/dev/concept_reference/storages_invested_coefficient/index.html +++ b/dev/concept_reference/storages_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/storages_invested_mga/index.html b/dev/concept_reference/storages_invested_mga/index.html index 2c148564de..a15d9e6b3d 100644 --- a/dev/concept_reference/storages_invested_mga/index.html +++ b/dev/concept_reference/storages_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The storages_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). As soon as the value of storages_invested_mga is set to true, investment decisions in this storage, or group of storages, will be included in the MGA algorithm.

diff --git a/dev/concept_reference/tax_in_unit_flow/index.html b/dev/concept_reference/tax_in_unit_flow/index.html index 4057f02eef..a179b7c99f 100644 --- a/dev/concept_reference/tax_in_unit_flow/index.html +++ b/dev/concept_reference/tax_in_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_in_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with all unit_flow variables with direction to_node over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/tax_net_unit_flow/index.html b/dev/concept_reference/tax_net_unit_flow/index.html index 091cd24424..92e1b071e2 100644 --- a/dev/concept_reference/tax_net_unit_flow/index.html +++ b/dev/concept_reference/tax_net_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_net_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with the net total of all unit_flow variables with direction to_node for this specific node, minus all unit_flow variables with direction from_node.

diff --git a/dev/concept_reference/tax_out_unit_flow/index.html b/dev/concept_reference/tax_out_unit_flow/index.html index ab1fdb1da4..02de126544 100644 --- a/dev/concept_reference/tax_out_unit_flow/index.html +++ b/dev/concept_reference/tax_out_unit_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the tax_out_unit_flow parameter for a specific node, a cost term will be added to the objective function to account for the taxes associated with all unit_flow variables with direction from_node over the course of the operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/temporal_block/index.html b/dev/concept_reference/temporal_block/index.html index 827dcb9097..aab2dfc4b1 100644 --- a/dev/concept_reference/temporal_block/index.html +++ b/dev/concept_reference/temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A temporal block defines the temporal properties of the optimization that is to be solved in the current window. It is the key building block of the Temporal Framework. Most importantly, it holds the necessary information about the resolution and horizon of the optimization. A single model can have multiple temporal blocks, which is one of the main sources of temporal flexibility in Spine: by linking different parts of the model to different temporal blocks, a single model can contain aspects that are solved with different temporal resolutions or time horizons.

diff --git a/dev/concept_reference/the_basics/index.html b/dev/concept_reference/the_basics/index.html index 38b90f4fb3..0cbfacd5bc 100644 --- a/dev/concept_reference/the_basics/index.html +++ b/dev/concept_reference/the_basics/index.html @@ -1,2 +1,2 @@ -Basics of the model structure · SpineOpt.jl

Basics of the model structure

In SpineOpt.jl, the model structure is generated based on the input data, allowing it to be used for a multitude of different problems. Here, we aim to provide you with a basic understanding of the SpineOpt.jl model and data structure, while the Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections provide more in-depth explanations of each concept.

Introduction to object classes

Essentially, Object Classes represent different types of objects or entities that make up the model. For example, every power plant in the model is represented as an object of the object class unit, every power line as an object of the object class connection, and so forth. In order to add any new entity to a model, a new object has to be added to the desired object class in the input data.

Each object class has a very specific purpose in SpineOpt.jl, so understanding their differences is key. The Object Classes can be roughly divided into three distinctive groups, namely Systemic object classes, Structural object classes, and Meta object classes.

Systemic object classes

As the name implies, systemic Object Classes are used to describe the system to be modelled. Essentially, they define what you want to model. These include:

  • commodity represents different goods to be generated, consumed, transported, etc.
  • connection handles the transfer of commodities between nodes.
  • node ensures the balance of the commodity flows, and can be used to store commodities as well.
  • unit handles the generation and consumption of commodities.

Structural object classes

Structural Object Classes are used to define the temporal and stochastic structure of the modelled problem, as well as custom User Constraints. Unlike the above system Object Classes, the structural Object Classes are more about how you want to model, instead of strictly what you want to model. These include:

Meta object classes

Meta Object Classes are used for defining things on the level of models or above, like model output and even multiple models for problem decompositions. These include:

  • model represents an individual model, grouping together all the things relevant for itself.
  • output defines which Variables are output from the model.
  • report groups together multiple output objects.

Introduction to relationship classes

While Object Classes define all the objects or entities that make up a model, Relationship Classes define how those entities are related to each other. Thus, Relationship Classes hold no meaning on their own, and always include at least one object class.

Similar to Object Classes, each relationship class has a very specific purpose in SpineOpt.jl, and understanding the purpose of each relationship class is paramount. The Relationship Classes can be roughly divided into Systemic relationship classes, Structural relationship classes, and Meta relationship classes, again similar to Object Classes.

Systemic relationship classes

Systemic Relationship Classes define how Systemic object classes are related to each other, thus helping define the system to be modelled. Most of these relationships deal with which units and connections interact with which nodes, and how those interactions work. This essentially defines the possible commodity flows to be modelled. Systemic Relationship Classes include:

Structural relationship classes

Structural Relationship Classes primarily relate Structural object classes to Systemic object classes, defining what structures the individual parts of the system use. These are mostly used to determine the temporal and stochastic structures to be used in different parts of the modelled system, or custom User Constraints.

SpineOpt.jl has a very flexible temporal and stochastic structure, explained in detail in the Temporal Framework and Stochastic Framework sections of the documentation. Unfortunately, this flexibility requires quite a few different structural Relationship Classes, the most important of which are the following basic structural Relationship Classes:

Furthermore, there are also a number of advanced structural Relationship Classes, which are only necessary when using some of the optional features of SpineOpt.jl. For Investment Optimization, the following relationships control the stochastic and temporal structures of the investment variables:

For User Constraints, which are essentially generic data-driven custom constraints, the following relationships are used to control which variables are included and with what coefficients:

Meta relationship classes

Meta Relationship Classes are used for defining model-level settings, like which temporal blocks or stochastic structures are active, and what the model output is. These include:

Introduction to parameters

While the primary function of Object Classes and Relationship Classes is to define the system to be modelled and its structure, Parameters exist to constrain them. Every parameter is attributed to at least one object class or relationship class, but some appear in many classes whenever they serve a similar purpose.

Parameters accept different types of values depending on their purpose, e.g. whether they act as a flag for some specific functionality or appear as a coefficient in Constraints, so understanding each parameter is key. Most coefficient-type Parameters accept input in constant, time series, and even stochastic time series form, but there are some exceptions. Most flag-type Parameters, on the other hand, have a restricted list of acceptable values defined by their Parameter Value Lists.

The existence of some Constraints is controlled by whether the relevant Parameters are defined. As a rule of thumb, a constraint only gets generated if at least one of the Parameters appearing in it is defined, but one should refer to the appropriate Constraints and Parameters sections when in doubt.
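This rule of thumb can be sketched in a few lines of Python. This is purely illustrative and not SpineOpt's actual internals; the function and parameter names are made up for the example:

```python
# Illustrative sketch (not SpineOpt code): a constraint is only generated
# when at least one of the parameters appearing in it is defined.

def should_generate_constraint(parameter_values, relevant_parameters):
    """Return True if any parameter relevant to the constraint is defined."""
    return any(p in parameter_values for p in relevant_parameters)

# Hypothetical input data: only a capacity parameter is defined for this unit.
params = {"unit_capacity": 100.0}

# A capacity constraint referencing unit_capacity would be generated...
print(should_generate_constraint(params, ["unit_capacity", "unit_conv_cap_to_flow"]))  # True
# ...while a ramping constraint referencing only undefined parameters is skipped.
print(should_generate_constraint(params, ["ramp_up_limit", "ramp_down_limit"]))  # False
```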

Introduction to groups of objects

Groups of objects are used within SpineOpt for different purposes. To create a group of objects, simply right-click the corresponding Object Class in the Spine Toolbox database editor and select Add object group. Groups are essentially special objects that act as a single handle for all of their members.

On the one hand, groups can be used to impose constraints on the aggregation of a variable, e.g. on the sum of multiple unit_flow variables. Constraints based on parameters associated with the unit__node__node, unit__to_node, unit__from_node, connection__node__node, connection__to_node, and connection__from_node relationships can generally be used for this kind of flow aggregation by defining the parameters on groups of objects, typically node groups (with the exception of variable-fixing parameters, e.g. fix_unit_flow, fix_connection_flow, etc.). See for instance constraint_unit_flow_capacity.
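As a minimal sketch of this aggregation behaviour (all names and values are hypothetical, not SpineOpt data): a parameter defined on a node group bounds the sum of the member flows, not each flow separately.

```python
# Illustrative only: unit_capacity defined on a node group applies to the
# SUM of the unit_flow variables of its member nodes.

node_group = {"members": ["node_a", "node_b"], "unit_capacity": 150.0}

# Hypothetical unit_flow values to each member node in one time step.
unit_flow = {"node_a": 80.0, "node_b": 60.0}

aggregated_flow = sum(unit_flow[n] for n in node_group["members"])
print(aggregated_flow)                                 # 140.0
print(aggregated_flow <= node_group["unit_capacity"])  # True: the group bound holds
```

Note that each individual flow could be far below the group capacity while their sum violates it; that is exactly the situation this kind of grouped parameter is meant to constrain.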

On the other hand, a node group can be used for PTDF-based power flows. Here, a node group is used to enforce a nodal balance at the system level, while suppressing the node balances at individual nodes. See also balance_type and the node balance constraint.

diff --git a/dev/concept_reference/unit/index.html b/dev/concept_reference/unit/index.html index 72280bab59..435bfbff98 100644 --- a/dev/concept_reference/unit/index.html +++ b/dev/concept_reference/unit/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

A unit represents an energy conversion process, where energy of one commodity can be converted into energy of another commodity. For example, a gas turbine, a power plant, or even a load, can be modelled using a unit.

A unit always takes energy from one or more nodes, and releases energy to one or more (possibly the same) nodes. The former are specified through the unit__from_node relationship, and the latter through unit__to_node. Every unit has temporal and stochastic structures given by the units_on__temporal_block and units_on__stochastic_structure relationships. The model will generate unit_flow variables for every combination of unit, node, direction (from node or to node), time slice, and stochastic scenario, according to the above relationships.

The operation of the unit is specified through a number of parameter values. For example, the capacity of the unit, as the maximum amount of energy that can enter or leave it, is given by unit_capacity. The conversion ratio of input to output can be specified using any of fix_ratio_out_in_unit_flow, max_ratio_out_in_unit_flow, and min_ratio_out_in_unit_flow. The variable operating cost is given by vom_cost.
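The fixed conversion ratio can be illustrated with a short arithmetic sketch. The efficiency and flow values here are made up for illustration:

```python
# Hedged sketch of fix_ratio_out_in_unit_flow: with a fixed conversion
# ratio, the output flow equals ratio * input flow. Values are illustrative.

fix_ratio_out_in_unit_flow = 0.4   # e.g. a 40%-efficient thermal unit
flow_in = 250.0                    # MWh of fuel taken from the input node

flow_out = fix_ratio_out_in_unit_flow * flow_in
print(flow_out)  # 100.0 MWh released to the output node
```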

diff --git a/dev/concept_reference/unit__commodity/index.html b/dev/concept_reference/unit__commodity/index.html index 38e80a8ad2..906dc1ba4a 100644 --- a/dev/concept_reference/unit__commodity/index.html +++ b/dev/concept_reference/unit__commodity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To impose a limit on the cumulative amount of commodity flows, the max_cum_in_unit_flow_bound can be imposed on a unit__commodity relationship. This can be very helpful, e.g. if a certain amount of emissions should not be surpassed throughout the optimization.

Note that, in addition to the unit__commodity relationship, the nodes connected to the units also need to be associated with their corresponding commodities; see node__commodity.

diff --git a/dev/concept_reference/unit__from_node/index.html b/dev/concept_reference/unit__from_node/index.html index 4e6b9c0152..e71ead84b1 100644 --- a/dev/concept_reference/unit__from_node/index.html +++ b/dev/concept_reference/unit__from_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit__to_node and unit__from_node unit relationships are core elements of SpineOpt. For each unit__to_node or unit__from_node, a unit_flow variable is automatically added to the model, i.e. a commodity flow of a unit to or from a specific node, respectively.

Various parameters can be defined on the unit__from_node relationship, in order to constrain the associated unit flows. In most cases a unit_capacity will be defined for an upper bound on the commodity flows. Apart from that, ramping abilities of a unit can be defined. For further details on ramps see Ramping.

To associate costs with certain commodity flows, cost terms, such as fuel_costs and vom_costs, can be included for the unit__from_node relationship.

It is important to note that the parameters associated with the unit__from_node can be defined either for a specific node, or for a group of nodes. Grouping nodes for the described parameters will result in an aggregation of the unit flows in the triggered constraint, e.g. defining the unit_capacity on a group of nodes will result in an upper bound on the sum of all individual unit_flows.

diff --git a/dev/concept_reference/unit__from_node__unit_constraint/index.html b/dev/concept_reference/unit__from_node__unit_constraint/index.html index 02e4fe2608..8e66f4c257 100644 --- a/dev/concept_reference/unit__from_node__unit_constraint/index.html +++ b/dev/concept_reference/unit__from_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__from_node__user_constraint is a three-dimensional relationship between a unit, a node and a user_constraint. The relationship specifies that the unit_flow variable to the specified unit from the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific unit_flow variable. For example, the parameter unit_flow_coefficient defined on unit__from_node__user_constraint represents the coefficient on the specific unit_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/unit__investment_stochastic_structure/index.html b/dev/concept_reference/unit__investment_stochastic_structure/index.html index 8535d1f567..c93d1b00b3 100644 --- a/dev/concept_reference/unit__investment_stochastic_structure/index.html +++ b/dev/concept_reference/unit__investment_stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
diff --git a/dev/concept_reference/unit__investment_temporal_block/index.html b/dev/concept_reference/unit__investment_temporal_block/index.html index ec7ee6ee10..cdb95520f2 100644 --- a/dev/concept_reference/unit__investment_temporal_block/index.html +++ b/dev/concept_reference/unit__investment_temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__investment_temporal_block is a two-dimensional relationship between a unit and a temporal_block. This relationship defines the temporal resolution and scope of a unit's investment decision. Note that in a decomposed investment problem with two model objects, one for the master problem model and another for the operations problem model, the link to the specific model is made indirectly through the model__temporal_block relationship. If a model__default_investment_temporal_block is specified and no unit__investment_temporal_block relationship is specified, the model__default_investment_temporal_block relationship will be used. Conversely, if unit__investment_temporal_block is specified along with model__temporal_block, this will override model__default_investment_temporal_block for the specified unit.

See also Investment Optimization

diff --git a/dev/concept_reference/unit__node__node/index.html b/dev/concept_reference/unit__node__node/index.html index 2040fff87e..b66885750c 100644 --- a/dev/concept_reference/unit__node__node/index.html +++ b/dev/concept_reference/unit__node__node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

While the relationships unit__to_node and unit__from_node take care of the automatic generation of the unit_flow variables, the unit__node__node relationships hold the information on how the different commodity flows of a unit interact. Only through this relationship and the associated parameters does the topology of a unit, i.e. which intakes lead to which products etc., become unambiguous.

In almost all cases, at least one of the ..._ratio_... parameters will be defined, e.g. to set a fixed ratio between outgoing and incoming commodity flows of a unit (see also e.g. fix_ratio_out_in_unit_flow). Note that the parameters can also be defined on a relationship between groups of objects, e.g. to force a fixed ratio for a group of nodes. In the triggered constraints, this will lead to an aggregation of the individual unit flows.

diff --git a/dev/concept_reference/unit__to_node/index.html b/dev/concept_reference/unit__to_node/index.html index 6cec961270..81d3cbe924 100644 --- a/dev/concept_reference/unit__to_node/index.html +++ b/dev/concept_reference/unit__to_node/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit__to_node and unit__from_node unit relationships are core elements of SpineOpt. For each unit__to_node or unit__from_node, a unit_flow variable is automatically added to the model, i.e. a commodity flow of a unit to or from a specific node, respectively.

Various parameters can be defined on the unit__to_node relationship, in order to constrain the associated unit flows. In most cases a unit_capacity will be defined for an upper bound on the commodity flows. Apart from that, ramping abilities of a unit can be defined. For further details on ramps see Ramping.

To associate costs with a certain commodity flow, cost terms, such as fuel_costs and vom_costs, can be included for the unit__to_node relationship.

It is important to note that the parameters associated with the unit__to_node can be defined either for a specific node, or for a group of nodes. Grouping nodes for the described parameters will result in an aggregation of the unit flows in the triggered constraint, e.g. defining the unit_capacity on a group of nodes will result in an upper bound on the sum of all individual unit_flows.

diff --git a/dev/concept_reference/unit__to_node__unit_constraint/index.html b/dev/concept_reference/unit__to_node__unit_constraint/index.html index c80b78f428..a852dcb53d 100644 --- a/dev/concept_reference/unit__to_node__unit_constraint/index.html +++ b/dev/concept_reference/unit__to_node__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__to_node__user_constraint is a three-dimensional relationship between a unit, a node and a user_constraint. The relationship specifies that the unit_flow variable from the specified unit to the specified node is involved in the specified user_constraint. Parameters on this relationship generally apply to this specific unit_flow variable. For example, the parameter unit_flow_coefficient defined on unit__to_node__user_constraint represents the coefficient on the specific unit_flow variable in the specified user_constraint.

diff --git a/dev/concept_reference/unit__unit_constraint/index.html b/dev/concept_reference/unit__unit_constraint/index.html index 8046d474d1..df4a0062df 100644 --- a/dev/concept_reference/unit__unit_constraint/index.html +++ b/dev/concept_reference/unit__unit_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit__user_constraint is a two-dimensional relationship between a unit and a user_constraint. The relationship specifies that a variable or variables associated only with the unit (not a unit_flow, for example) are involved in the constraint. For example, the units_on_coefficient defined on unit__user_constraint specifies the coefficient of the unit's units_on variable in the specified user_constraint.

See also user_constraint

diff --git a/dev/concept_reference/unit_availability_factor/index.html b/dev/concept_reference/unit_availability_factor/index.html index e741053e26..092a379fdd 100644 --- a/dev/concept_reference/unit_availability_factor/index.html +++ b/dev/concept_reference/unit_availability_factor/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

To indicate that a unit is only available to a certain extent or at certain times of the optimization, the unit_availability_factor can be used. A typical use case could be an availability time series for a variable renewable energy source. By default, the availability factor is set to 1. The availability is, among others, used in the constraint_units_available.
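A minimal sketch of the idea, with made-up capacity and availability values (not SpineOpt data): the availability factor scales the usable capacity in each time step.

```python
# Illustrative only: an hourly availability time series for a variable
# renewable unit scales its nominal capacity.

unit_capacity = 200.0                 # MW, hypothetical value
availability = [1.0, 0.5, 0.25, 0.0]  # hypothetical hourly availability factors

available_capacity = [unit_capacity * a for a in availability]
print(available_capacity)  # [200.0, 100.0, 50.0, 0.0]
```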

diff --git a/dev/concept_reference/unit_capacity/index.html b/dev/concept_reference/unit_capacity/index.html index 7663b3dd22..9d070b9205 100644 --- a/dev/concept_reference/unit_capacity/index.html +++ b/dev/concept_reference/unit_capacity/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/unit_conv_cap_to_flow/index.html b/dev/concept_reference/unit_conv_cap_to_flow/index.html index 14ca80f894..98089ca044 100644 --- a/dev/concept_reference/unit_conv_cap_to_flow/index.html +++ b/dev/concept_reference/unit_conv_cap_to_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit_conv_cap_to_flow, as defined for a unit__to_node or unit__from_node, allows the user to align the unit_flow variables with the unit_capacity parameter when the two are expressed in different units. An example would be when the unit_capacity is expressed in GWh, while the demand on the node is expressed in MWh. In that case, a unit_conv_cap_to_flow parameter of 1000 would be applicable.

+- · SpineOpt.jl

The unit_conv_cap_to_flow, as defined for a unit__to_node or unit__from_node, allows the user to align the unit_flow variables with the unit_capacity parameter when the two are expressed in different units. An example would be when the unit_capacity is expressed in GWh, while the demand on the node is expressed in MWh. In that case, a unit_conv_cap_to_flow parameter of 1000 would be applicable.
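The GWh/MWh example above can be sketched numerically (the parameter names are SpineOpt's; the values and the simple multiplication are an illustrative assumption, not the exact model constraint):

```python
# Illustration of how unit_conv_cap_to_flow aligns a capacity parameter
# with flow variables expressed in different units.
unit_capacity = 0.4           # capacity parameter, here in GWh
unit_conv_cap_to_flow = 1000  # conversion factor: GWh -> MWh

# Effective upper bound on the unit_flow variable, now in MWh:
flow_upper_bound = unit_capacity * unit_conv_cap_to_flow
print(flow_upper_bound)  # 400.0
```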

diff --git a/dev/concept_reference/unit_flow_coefficient/index.html b/dev/concept_reference/unit_flow_coefficient/index.html index 1d892c192b..b2f211ae20 100644 --- a/dev/concept_reference/unit_flow_coefficient/index.html +++ b/dev/concept_reference/unit_flow_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The unit_flow_coefficient is an optional parameter that can be used to include the unit_flow or unit_flow_op variables from or to a node in a user_constraint via the unit__from_node__user_constraint and unit__to_node__user_constraint relationships. Essentially, unit_flow_coefficient appears as a coefficient for the unit_flow and unit_flow_op variables from or to the node in the user constraint.

Note that the unit_flow_op variables are a bit of a special case, defined using the operating_points parameter.

+- · SpineOpt.jl

The unit_flow_coefficient is an optional parameter that can be used to include the unit_flow or unit_flow_op variables from or to a node in a user_constraint via the unit__from_node__user_constraint and unit__to_node__user_constraint relationships. Essentially, unit_flow_coefficient appears as a coefficient for the unit_flow and unit_flow_op variables from or to the node in the user constraint.

Note that the unit_flow_op variables are a bit of a special case, defined using the operating_points parameter.

diff --git a/dev/concept_reference/unit_idle_heat_rate/index.html b/dev/concept_reference/unit_idle_heat_rate/index.html index 10ada6de1d..faf3eda549 100644 --- a/dev/concept_reference/unit_idle_heat_rate/index.html +++ b/dev/concept_reference/unit_idle_heat_rate/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to implement the no-load or idle heat rate of a unit. This is the y-axis intercept of the heat rate function: the fuel consumed per unit of time when a unit is online but producing no additional output. This is defined on the unit__node__node relationship, and it is assumed that the input flow from node 1 represents fuel consumption and the output flow to node 2 is the electrical output. While the units depend on the data, unit_idle_heat_rate is generally expressed in GJ/hr. Used in conjunction with unit_incremental_heat_rate; unit_idle_heat_rate is currently only considered if unit_incremental_heat_rate is specified. A trivial unit_incremental_heat_rate of zero can be defined if there is no incremental heat rate.

+- · SpineOpt.jl

Used to implement the no-load or idle heat rate of a unit. This is the y-axis intercept of the heat rate function: the fuel consumed per unit of time when a unit is online but producing no additional output. This is defined on the unit__node__node relationship, and it is assumed that the input flow from node 1 represents fuel consumption and the output flow to node 2 is the electrical output. While the units depend on the data, unit_idle_heat_rate is generally expressed in GJ/hr. Used in conjunction with unit_incremental_heat_rate; unit_idle_heat_rate is currently only considered if unit_incremental_heat_rate is specified. A trivial unit_incremental_heat_rate of zero can be defined if there is no incremental heat rate.

diff --git a/dev/concept_reference/unit_incremental_heat_rate/index.html b/dev/concept_reference/unit_incremental_heat_rate/index.html index 6ab007c621..7389498438 100644 --- a/dev/concept_reference/unit_incremental_heat_rate/index.html +++ b/dev/concept_reference/unit_incremental_heat_rate/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to implement simple or piecewise linear incremental heat rate functions. Used in the constraint unit_pw_heat_rate: the input fuel flow at node 1 is the sum, over all heat rate segments, of the electrical MW output at node 2 times the incremental heat rate, plus the unit_idle_heat_rate. The units are determined by the data, but incremental heat rates are generally given in GJ/MWh. Note that the formulation assumes a convex, monotonically increasing heat rate function. The formulation relies on optimality to load the heat rate segments in the correct order, and no additional integer variables are created to enforce the correct loading order. The heat rate segment MW operating points are defined by operating_points.

To implement a simple incremental heat rate function, unit_incremental_heat_rate should be given as a scalar representing the incremental heat rate over the entire operating range of the unit. To implement a piecewise linear heat rate function, unit_incremental_heat_rate should be specified as an array type. It is then used in conjunction with the unit parameter operating_points, which should also be defined as an array type of equal dimension. When defined as an array type, unit_incremental_heat_rate[i] is the effective incremental heat rate between operating_points[i-1] (or zero if i=1) and operating_points[i]. Note that operating_points is defined on a capacity-normalized basis, so if operating_points is specified as [0.5, 1], this creates two operating segments: one from zero to 50% of the corresponding unit_capacity, and a second from 50% to 100% of the corresponding unit_capacity.

+- · SpineOpt.jl

Used to implement simple or piecewise linear incremental heat rate functions. Used in the constraint unit_pw_heat_rate: the input fuel flow at node 1 is the sum, over all heat rate segments, of the electrical MW output at node 2 times the incremental heat rate, plus the unit_idle_heat_rate. The units are determined by the data, but incremental heat rates are generally given in GJ/MWh. Note that the formulation assumes a convex, monotonically increasing heat rate function. The formulation relies on optimality to load the heat rate segments in the correct order, and no additional integer variables are created to enforce the correct loading order. The heat rate segment MW operating points are defined by operating_points.

To implement a simple incremental heat rate function, unit_incremental_heat_rate should be given as a scalar representing the incremental heat rate over the entire operating range of the unit. To implement a piecewise linear heat rate function, unit_incremental_heat_rate should be specified as an array type. It is then used in conjunction with the unit parameter operating_points, which should also be defined as an array type of equal dimension. When defined as an array type, unit_incremental_heat_rate[i] is the effective incremental heat rate between operating_points[i-1] (or zero if i=1) and operating_points[i]. Note that operating_points is defined on a capacity-normalized basis, so if operating_points is specified as [0.5, 1], this creates two operating segments: one from zero to 50% of the corresponding unit_capacity, and a second from 50% to 100% of the corresponding unit_capacity.
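The segment arithmetic above can be sketched as a small function (a simplified illustration, not SpineOpt's constraint code; it assumes segments fill in order, which the text notes holds at optimality for a convex heat rate function, and the example numbers are made up):

```python
def fuel_flow(output_mw, capacity_mw, operating_points, incremental_heat_rates,
              idle_heat_rate=0.0, online=True):
    """Fuel input (e.g. GJ/hr) for a given electrical output, filling the
    capacity-normalized operating segments in order."""
    if not online:
        return 0.0
    fuel = idle_heat_rate          # y-axis intercept, incurred while online
    prev_point = 0.0
    for point, rate in zip(operating_points, incremental_heat_rates):
        seg_mw = (point - prev_point) * capacity_mw           # segment width in MW
        load = min(max(output_mw - prev_point * capacity_mw, 0.0), seg_mw)
        fuel += load * rate                                   # fuel from this segment
        prev_point = point
    return fuel

# Two segments on a 100 MW unit: 0-50% at 9 GJ/MWh, 50-100% at 11 GJ/MWh,
# plus a 20 GJ/hr idle heat rate. At 75 MW output:
print(fuel_flow(75.0, 100.0, [0.5, 1.0], [9.0, 11.0], idle_heat_rate=20.0))  # 745.0
```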

diff --git a/dev/concept_reference/unit_investment_cost/index.html b/dev/concept_reference/unit_investment_cost/index.html index 922b2d347f..236bd02cf0 100644 --- a/dev/concept_reference/unit_investment_cost/index.html +++ b/dev/concept_reference/unit_investment_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the unit_investment_cost parameter for a specific unit, a cost term will be added to the objective function whenever a unit investment is made during the current optimization window.

+- · SpineOpt.jl

By defining the unit_investment_cost parameter for a specific unit, a cost term will be added to the objective function whenever a unit investment is made during the current optimization window.

diff --git a/dev/concept_reference/unit_investment_lifetime/index.html b/dev/concept_reference/unit_investment_lifetime/index.html index 3b1db2482e..9fc93c85b3 100644 --- a/dev/concept_reference/unit_investment_lifetime/index.html +++ b/dev/concept_reference/unit_investment_lifetime/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Duration parameter that determines the minimum duration of unit investment decisions. Once a unit has been invested-in, it must remain invested-in for unit_investment_lifetime. Note that unit_investment_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimisation in each step - this may impact performance.

See also Investment Optimization and candidate_units

+- · SpineOpt.jl

Duration parameter that determines the minimum duration of unit investment decisions. Once a unit has been invested-in, it must remain invested-in for unit_investment_lifetime. Note that unit_investment_lifetime is a dynamic parameter that will impact the amount of solution history that must remain available to the optimisation in each step - this may impact performance.

See also Investment Optimization and candidate_units

diff --git a/dev/concept_reference/unit_investment_variable_type/index.html b/dev/concept_reference/unit_investment_variable_type/index.html index 7dc383423f..c166f65f02 100644 --- a/dev/concept_reference/unit_investment_variable_type/index.html +++ b/dev/concept_reference/unit_investment_variable_type/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Within an investments problem, unit_investment_variable_type determines the type of the unit investment decision variable. Since the unit_flows will be limited to the product of the investment variable and the corresponding unit_capacity for each unit_flow, and since candidate_units represents the upper bound of the investment decision variable, unit_investment_variable_type thus determines what the investment decision represents. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a capacity, with unit_capacity being analogous to a scaling parameter. For example, if unit_investment_variable_type = integer, candidate_units = 4 and unit_capacity for a particular unit_flow = 400 MW, then the investment decision is how many 400 MW units to build. If unit_investment_variable_type = continuous, candidate_units = 400 and unit_capacity for a particular unit_flow = 1 MW, then the investment decision is how much capacity of this particular unit to build. Finally, if unit_investment_variable_type = integer, candidate_units = 10 and unit_capacity for a particular unit_flow = 50 MW, then the investment decision is how many 50 MW blocks of capacity of this particular unit to build.

See also Investment Optimization and candidate_units

+- · SpineOpt.jl

Within an investments problem, unit_investment_variable_type determines the type of the unit investment decision variable. Since the unit_flows will be limited to the product of the investment variable and the corresponding unit_capacity for each unit_flow, and since candidate_units represents the upper bound of the investment decision variable, unit_investment_variable_type thus determines what the investment decision represents. If unit_investment_variable_type is integer or binary, then candidate_units represents the maximum number of discrete units that may be invested in. If unit_investment_variable_type is continuous, candidate_units is more analogous to a capacity, with unit_capacity being analogous to a scaling parameter. For example, if unit_investment_variable_type = integer, candidate_units = 4 and unit_capacity for a particular unit_flow = 400 MW, then the investment decision is how many 400 MW units to build. If unit_investment_variable_type = continuous, candidate_units = 400 and unit_capacity for a particular unit_flow = 1 MW, then the investment decision is how much capacity of this particular unit to build. Finally, if unit_investment_variable_type = integer, candidate_units = 10 and unit_capacity for a particular unit_flow = 50 MW, then the investment decision is how many 50 MW blocks of capacity of this particular unit to build.

See also Investment Optimization and candidate_units
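The three interpretations above can be compared side by side (values taken from the examples in the text; the flow-bound product is a simplified assumption about the form of the constraint):

```python
# In every case the flow bound has the same form:
#   unit_flow <= investment_variable * unit_capacity,
# so the variable type only changes what the decision "means".
cases = [
    ("integer",    4,     400.0),  # how many 400 MW units to build
    ("continuous", 400.0, 1.0),    # how much capacity (MW) to build
    ("integer",    10,    50.0),   # how many 50 MW blocks to build
]
for var_type, candidate_units, unit_capacity in cases:
    # Maximum installable capacity implied by the upper bound candidate_units:
    max_capacity = candidate_units * unit_capacity
    print(var_type, max_capacity)
```

All three cases bound total capacity the same way; only the granularity of the decision differs.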

diff --git a/dev/concept_reference/unit_investment_variable_type_list/index.html b/dev/concept_reference/unit_investment_variable_type_list/index.html index 851df3cba8..378e05bf90 100644 --- a/dev/concept_reference/unit_investment_variable_type_list/index.html +++ b/dev/concept_reference/unit_investment_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit_investment_variable_type_list holds the possible values for the type of a unit's investment variable which may be chosen from integer, binary or continuous.

+- · SpineOpt.jl

unit_investment_variable_type_list holds the possible values for the type of a unit's investment variable which may be chosen from integer, binary or continuous.

diff --git a/dev/concept_reference/unit_online_variable_type_list/index.html b/dev/concept_reference/unit_online_variable_type_list/index.html index 0f1e5bb6ef..84aa13eb90 100644 --- a/dev/concept_reference/unit_online_variable_type_list/index.html +++ b/dev/concept_reference/unit_online_variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

unit_online_variable_type_list holds the possible values for the type of a unit's commitment status variable which may be chosen from binary, integer, or linear.

+- · SpineOpt.jl

unit_online_variable_type_list holds the possible values for the type of a unit's commitment status variable which may be chosen from binary, integer, or linear.

diff --git a/dev/concept_reference/unit_start_flow/index.html b/dev/concept_reference/unit_start_flow/index.html index 593cc2332e..6564beebf1 100644 --- a/dev/concept_reference/unit_start_flow/index.html +++ b/dev/concept_reference/unit_start_flow/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

Used to implement unit startup fuel consumption, where node 1 is assumed to be the input fuel and node 2 the output electrical energy. This is a flow from node 1 that is incurred when the value of the variable units_started_up is 1 in the corresponding time period. This flow does not result in additional output flow at node 2. Used in conjunction with unit_incremental_heat_rate; unit_start_flow is currently only considered if unit_incremental_heat_rate is specified. A trivial unit_incremental_heat_rate of zero can be defined if there is no incremental heat rate.

+- · SpineOpt.jl

Used to implement unit startup fuel consumption, where node 1 is assumed to be the input fuel and node 2 the output electrical energy. This is a flow from node 1 that is incurred when the value of the variable units_started_up is 1 in the corresponding time period. This flow does not result in additional output flow at node 2. Used in conjunction with unit_incremental_heat_rate; unit_start_flow is currently only considered if unit_incremental_heat_rate is specified. A trivial unit_incremental_heat_rate of zero can be defined if there is no incremental heat rate.

diff --git a/dev/concept_reference/units_invested_avaiable_coefficient/index.html b/dev/concept_reference/units_invested_avaiable_coefficient/index.html index 5433357082..21b2d24a99 100644 --- a/dev/concept_reference/units_invested_avaiable_coefficient/index.html +++ b/dev/concept_reference/units_invested_avaiable_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_invested_big_m_mga/index.html b/dev/concept_reference/units_invested_big_m_mga/index.html index 5eb086f214..022b67bae7 100644 --- a/dev/concept_reference/units_invested_big_m_mga/index.html +++ b/dev/concept_reference/units_invested_big_m_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M value should always be chosen sufficiently large. (Typically, a value equivalent to candidate_units suffices.)

+- · SpineOpt.jl

The units_invested_big_m_mga parameter is used in combination with the MGA algorithm (see mga-advanced). It defines an upper bound on the maximum difference between any two MGA iterations. The big M value should always be chosen sufficiently large. (Typically, a value equivalent to candidate_units suffices.)

diff --git a/dev/concept_reference/units_invested_coefficient/index.html b/dev/concept_reference/units_invested_coefficient/index.html index ee96e55890..fd28118ecb 100644 --- a/dev/concept_reference/units_invested_coefficient/index.html +++ b/dev/concept_reference/units_invested_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_invested_mga/index.html b/dev/concept_reference/units_invested_mga/index.html index 34472f09f1..070501192f 100644 --- a/dev/concept_reference/units_invested_mga/index.html +++ b/dev/concept_reference/units_invested_mga/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). When the value of units_invested_mga is set to true, investment decisions in this unit, or group of units, will be included in the MGA algorithm.

+- · SpineOpt.jl

The units_invested_mga is a boolean parameter that can be used in combination with the MGA algorithm (see mga-advanced). When the value of units_invested_mga is set to true, investment decisions in this unit, or group of units, will be included in the MGA algorithm.

diff --git a/dev/concept_reference/units_on__stochastic_structure/index.html b/dev/concept_reference/units_on__stochastic_structure/index.html index 0a47c970fc..6153cda83f 100644 --- a/dev/concept_reference/units_on__stochastic_structure/index.html +++ b/dev/concept_reference/units_on__stochastic_structure/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_on__stochastic_structure relationship defines the stochastic_structure used by the units_on variable. Essentially, this relationship permits defining a different stochastic_structure for the online decisions regarding the units_on variable than what is used for the production unit_flow variables. A common use case is using only one units_on variable across multiple stochastic_scenarios for the unit_flow variables. Note that only one units_on__stochastic_structure relationship can be defined per unit per model, as interpreted by the units_on__stochastic_structure and model__stochastic_structure relationships.

The units_on__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

+- · SpineOpt.jl

The units_on__stochastic_structure relationship defines the stochastic_structure used by the units_on variable. Essentially, this relationship permits defining a different stochastic_structure for the online decisions regarding the units_on variable than what is used for the production unit_flow variables. A common use case is using only one units_on variable across multiple stochastic_scenarios for the unit_flow variables. Note that only one units_on__stochastic_structure relationship can be defined per unit per model, as interpreted by the units_on__stochastic_structure and model__stochastic_structure relationships.

The units_on__stochastic_structure relationship uses the model__default_stochastic_structure relationship if not specified.

diff --git a/dev/concept_reference/units_on__temporal_block/index.html b/dev/concept_reference/units_on__temporal_block/index.html index 86414374bb..d0b914e912 100644 --- a/dev/concept_reference/units_on__temporal_block/index.html +++ b/dev/concept_reference/units_on__temporal_block/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

units_on__temporal_block is a relationship linking the units_on variable of a unit to a specific temporal_block object. As such, this relationship will determine which temporal block governs the on- and offline status of the unit. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

+- · SpineOpt.jl

units_on__temporal_block is a relationship linking the units_on variable of a unit to a specific temporal_block object. As such, this relationship will determine which temporal block governs the on- and offline status of the unit. The temporal block holds information on the temporal scope and resolution for which the variable should be optimized.

diff --git a/dev/concept_reference/units_on_coefficient/index.html b/dev/concept_reference/units_on_coefficient/index.html index 8f92c3a079..cbb84608d4 100644 --- a/dev/concept_reference/units_on_coefficient/index.html +++ b/dev/concept_reference/units_on_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_on_cost/index.html b/dev/concept_reference/units_on_cost/index.html index a3112ab6f8..104e002683 100644 --- a/dev/concept_reference/units_on_cost/index.html +++ b/dev/concept_reference/units_on_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the units_on_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit is online over the current optimization window. It can be used to represent an idling cost or any fixed cost incurred when a unit is online.

+- · SpineOpt.jl

By defining the units_on_cost parameter for a specific unit, a cost term will be added to the objective function whenever this unit is online over the current optimization window. It can be used to represent an idling cost or any fixed cost incurred when a unit is online.

diff --git a/dev/concept_reference/units_on_non_anticipativity_time/index.html b/dev/concept_reference/units_on_non_anticipativity_time/index.html index 909c2afa3a..b607b6b7c7 100644 --- a/dev/concept_reference/units_on_non_anticipativity_time/index.html +++ b/dev/concept_reference/units_on_non_anticipativity_time/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The units_on_non_anticipativity_time parameter defines the duration, starting from the beginning of the optimisation window, during which units_on variables need to be fixed to the result of the previous window.

This is intended to model "slow" units whose commitment decision needs to be taken in advance, e.g., in "day-ahead" mode, and cannot be changed afterwards.

+- · SpineOpt.jl

The units_on_non_anticipativity_time parameter defines the duration, starting from the beginning of the optimisation window, during which units_on variables need to be fixed to the result of the previous window.

This is intended to model "slow" units whose commitment decision needs to be taken in advance, e.g., in "day-ahead" mode, and cannot be changed afterwards.

diff --git a/dev/concept_reference/units_started_up_coefficient/index.html b/dev/concept_reference/units_started_up_coefficient/index.html index d24a010b2c..307ba31f83 100644 --- a/dev/concept_reference/units_started_up_coefficient/index.html +++ b/dev/concept_reference/units_started_up_coefficient/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/units_unavailable/index.html b/dev/concept_reference/units_unavailable/index.html index 0dc82d87d4..d23c8371cf 100644 --- a/dev/concept_reference/units_unavailable/index.html +++ b/dev/concept_reference/units_unavailable/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

For clustered units, defines how many members of that unit are out of service, generally or at a particular time. This can be used, for example, to model maintenance outages. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and number_of_units, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable).

It is possible to allow the model to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 0.

+- · SpineOpt.jl

For clustered units, defines how many members of that unit are out of service, generally or at a particular time. This can be used, for example, to model maintenance outages. Typically this parameter takes a binary (UC) or integer (clustered UC) value. Together with the unit_availability_factor and number_of_units, this will determine the maximum number of members that can be online at any given time (thus restricting the units_on variable).

It is possible to allow the model to schedule maintenance outages using outage_variable_type and scheduled_outage_duration.

The default value for this parameter is 0.
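As a hedged sketch of how the parameters above interact (the formula here is a simplified assumption for illustration, not SpineOpt's exact constraint_units_available):

```python
# Assumed simplification: the members that can be online are the installed
# members minus those out of service, scaled by the availability factor.
number_of_units = 10          # installed members of the clustered unit
units_unavailable = 2         # members out on maintenance outage
unit_availability_factor = 1.0

max_units_on = unit_availability_factor * (number_of_units - units_unavailable)
print(max_units_on)  # at most 8.0 members may be online
```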

diff --git a/dev/concept_reference/upward_reserve/index.html b/dev/concept_reference/upward_reserve/index.html index fde11780c0..85d919fcc5 100644 --- a/dev/concept_reference/upward_reserve/index.html +++ b/dev/concept_reference/upward_reserve/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

+- · SpineOpt.jl

If a node has a true is_reserve_node parameter, it will be treated as a reserve node in the model. To define whether the node corresponds to an upward or downward reserve commodity, the upward_reserve or the downward_reserve parameter needs to be set to true, respectively.

diff --git a/dev/concept_reference/user_constraint/index.html b/dev/concept_reference/user_constraint/index.html index 103aea9a56..2669f286e8 100644 --- a/dev/concept_reference/user_constraint/index.html +++ b/dev/concept_reference/user_constraint/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The user_constraint is a generic data-driven custom constraint, which allows for defining constraints involving multiple units, nodes, or connections. The constraint_sense parameter changes the sense of the user_constraint, while the right_hand_side parameter allows for defining the constant terms of the constraint.

Coefficients for the different variables appearing in the user_constraint are defined using relationships, such as unit__from_node__user_constraint and connection__to_node__user_constraint for unit_flow and connection_flow variables, or unit__user_constraint and node__user_constraint for units_on, units_started_up, and node_state variables.

For more information, see the dedicated article on User Constraints

+- · SpineOpt.jl

The user_constraint is a generic data-driven custom constraint, which allows for defining constraints involving multiple units, nodes, or connections. The constraint_sense parameter changes the sense of the user_constraint, while the right_hand_side parameter allows for defining the constant terms of the constraint.

Coefficients for the different variables appearing in the user_constraint are defined using relationships, such as unit__from_node__user_constraint and connection__to_node__user_constraint for unit_flow and connection_flow variables, or unit__user_constraint and node__user_constraint for units_on, units_started_up, and node_state variables.

For more information, see the dedicated article on User Constraints

diff --git a/dev/concept_reference/variable_type_list/index.html b/dev/concept_reference/variable_type_list/index.html index 1c803bcea8..05a2069724 100644 --- a/dev/concept_reference/variable_type_list/index.html +++ b/dev/concept_reference/variable_type_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl
+- · SpineOpt.jl
diff --git a/dev/concept_reference/vom_cost/index.html b/dev/concept_reference/vom_cost/index.html index f35db3036c..3a97a48e99 100644 --- a/dev/concept_reference/vom_cost/index.html +++ b/dev/concept_reference/vom_cost/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

By defining the vom_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for the variable operation and maintenance costs associated with that unit over the course of its operational dispatch during the current optimization window.

+- · SpineOpt.jl

By defining the vom_cost parameter for a specific unit, node, and direction, a cost term will be added to the objective function to account for the variable operation and maintenance costs associated with that unit over the course of its operational dispatch during the current optimization window.

diff --git a/dev/concept_reference/weight/index.html b/dev/concept_reference/weight/index.html index f11b0ba5e6..3ccd44b332 100644 --- a/dev/concept_reference/weight/index.html +++ b/dev/concept_reference/weight/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The weight parameter, defined for a temporal_block object, can be used to assign different weights to the different temporal periods that are modeled. It essentially determines how important a certain temporal period is in the total cost, as it enters the Objective function. The main use of this parameter is for representative periods, where each representative period represents a specific fraction of the full model horizon.

+- · SpineOpt.jl

The weight parameter, defined for a temporal_block object, can be used to assign different weights to the different temporal periods that are modeled. It essentially determines how important a certain temporal period is in the total cost, as it enters the Objective function. The main use of this parameter is for representative periods, where each representative period represents a specific fraction of the full model horizon.

diff --git a/dev/concept_reference/weight_relative_to_parents/index.html b/dev/concept_reference/weight_relative_to_parents/index.html index 426bd3f849..9af5c3510f 100644 --- a/dev/concept_reference/weight_relative_to_parents/index.html +++ b/dev/concept_reference/weight_relative_to_parents/index.html @@ -5,4 +5,4 @@ # If not a root `stochastic_scenario` -weight(scenario) = sum([weight(parent) * weight_relative_to_parents(scenario)] for parent in parents)

The above calculation is performed starting from the roots, generation by generation, until the leaves of the stochastic DAG. Thus, the final weight of each stochastic_scenario is dependent on the weight_relative_to_parents Parameters of all its ancestors.

+weight(scenario) = sum(weight(parent) * weight_relative_to_parents(scenario) for parent in parents)

The above calculation is performed starting from the roots, generation by generation, until the leaves of the stochastic DAG. Thus, the final weight of each stochastic_scenario is dependent on the weight_relative_to_parents Parameters of all its ancestors.
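The generation-by-generation propagation above can be sketched for a small stochastic DAG (the scenario names and numbers are made-up illustrations; only the recurrence itself comes from the text):

```python
# Each stochastic_scenario maps to its parent scenarios (empty list = root).
parents = {"realisation": [], "forecast1": ["realisation"], "forecast2": ["realisation"]}
weight_relative_to_parents = {"realisation": 1.0, "forecast1": 0.4, "forecast2": 0.6}

weights = {}  # memoized final weights

def weight(scenario):
    if scenario in weights:
        return weights[scenario]
    if not parents[scenario]:
        # Root scenario: weight is its weight_relative_to_parents directly.
        w = weight_relative_to_parents[scenario]
    else:
        # Non-root: sum over parents, as in the formula above.
        w = sum(weight(p) * weight_relative_to_parents[scenario]
                for p in parents[scenario])
    weights[scenario] = w
    return w

print({s: weight(s) for s in parents})  # {'realisation': 1.0, 'forecast1': 0.4, 'forecast2': 0.6}
```

The memoization mirrors the "starting from the roots" order: every ancestor's weight is computed before it is reused.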

diff --git a/dev/concept_reference/window_weight/index.html b/dev/concept_reference/window_weight/index.html index 6360bb9c6b..0a5dc03e72 100644 --- a/dev/concept_reference/window_weight/index.html +++ b/dev/concept_reference/window_weight/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

The window_weight parameter, defined for a model object, is used in the Benders decomposition algorithm with representative periods. In this setup, the subproblem rolls over a series of possibly disconnected windows, corresponding to the representative periods. Each of these windows can have a different weight, for example, equal to the fraction of the full model horizon that it represents. Choosing a good weight can help make the solution more accurate.

To use weighted rolling representative periods Benders, do the following.

  • Specify roll_forward as an array of n duration values, so the subproblem rolls over representative periods.
  • Specify window_weight as an array of n + 1 floating point values, representing the weight of each window.

Note that if the problem rolls n times, you end up with n + 1 windows.

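As a numeric sketch of such weights (the coverage figures below are hypothetical, not from the docs):

```python
# Three representative weeks standing in for a full year: each window's
# weight is the fraction of the model horizon it represents.
days_in_horizon = 365
days_represented = [120, 150, 95]   # hypothetical coverage of each window
window_weight = [d / days_in_horizon for d in days_represented]

# Together the windows cover the whole horizon.
assert abs(sum(window_weight) - 1.0) < 1e-9

# With 3 windows the problem rolls twice: roll_forward would then hold
# 2 duration values while window_weight holds 3 entries.
```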
diff --git a/dev/concept_reference/write_lodf_file/index.html b/dev/concept_reference/write_lodf_file/index.html index 150af1eb4f..66058d726b 100644 --- a/dev/concept_reference/write_lodf_file/index.html +++ b/dev/concept_reference/write_lodf_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If this parameter value is set to true, a diagnostic file containing all the network line outage distribution factors in CSV format will be written to the current directory.

diff --git a/dev/concept_reference/write_mps_file/index.html b/dev/concept_reference/write_mps_file/index.html index 70a44e3ab0..2e07074fed 100644 --- a/dev/concept_reference/write_mps_file/index.html +++ b/dev/concept_reference/write_mps_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter is deprecated and will be removed in a future version.

This parameter controls when to write a diagnostic model file in MPS format. If set to write_mps_always, the model will always be written in MPS format to the current directory. If set to write_mps_on_no_solve, the MPS file will be written when the model solve terminates with a status of false. If set to write_mps_never, no file will be written.

diff --git a/dev/concept_reference/write_mps_file_list/index.html b/dev/concept_reference/write_mps_file_list/index.html index 2179393782..aae7354208 100644 --- a/dev/concept_reference/write_mps_file_list/index.html +++ b/dev/concept_reference/write_mps_file_list/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

This parameter value list is deprecated and will be removed in a future version.

Houses the different values for the write_mps_file parameter. Possible values include write_mps_always, write_mps_on_no_solve, and write_mps_never.

diff --git a/dev/concept_reference/write_ptdf_file/index.html b/dev/concept_reference/write_ptdf_file/index.html index 526e5fdd3a..7ab4df517b 100644 --- a/dev/concept_reference/write_ptdf_file/index.html +++ b/dev/concept_reference/write_ptdf_file/index.html @@ -1,2 +1,2 @@ -- · SpineOpt.jl

If this parameter value is set to true, a diagnostic file containing all the network power transfer distribution factors in CSV format will be written to the current directory.

diff --git a/dev/getting_started/archetypes/index.html b/dev/getting_started/archetypes/index.html index 7c89bc1257..99b796e243 100644 --- a/dev/getting_started/archetypes/index.html +++ b/dev/getting_started/archetypes/index.html @@ -1,2 +1,2 @@ -Archetypes · SpineOpt.jl

Archetypes

Archetypes are essentially ready-made templates for different aspects of SpineOpt.jl. They are intended to serve both as examples for how the data structure in SpineOpt.jl works, as well as pre-made modular parts that can be imported on top of existing model input data.

The templates/models/basic_model_template.json contains a ready-made template for simple energy system models, with uniform time resolution and deterministic stochastic structure. Essentially, it serves as a basis for testing how the modelled system is set up, without having to worry about setting up the temporal and stochastic structures.

The rest of the different archetypes are included under templates/archetypes in the SpineOpt.jl repository. Each archetype is stored as a .json file containing the necessary objects, relationships, and parameters to form a functioning pre-made part for a SpineOpt.jl model. The archetypes aren't completely plug-and-play, as there are always some relationships required to connect the archetype to the other input data correctly. Regardless, the following sections explain the different archetypes included in the SpineOpt.jl repository, as well as what steps the user needs to take to connect said archetype to their input data correctly.

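For a rough feel of what such a file contains, a minimal archetype-style fragment might look like the following (a hypothetical sketch; the authoritative schema is whatever the actual files under templates/archetypes contain, so consult those directly):

```json
{
  "object_classes": [["stochastic_structure", null], ["stochastic_scenario", null]],
  "objects": [
    ["stochastic_structure", "deterministic"],
    ["stochastic_scenario", "realization"]
  ],
  "relationship_classes": [
    ["stochastic_structure__stochastic_scenario",
     ["stochastic_structure", "stochastic_scenario"]]
  ],
  "relationships": [
    ["stochastic_structure__stochastic_scenario",
     ["deterministic", "realization"]]
  ]
}
```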
Loading the SpineOpt Template and Archetypes into Your Model

To load the latest version of the SpineOpt template, in the Spine DB Editor, from the menu (three horizontal bars in the top right), click on import as follows:

importing the SpineOpt Template

Change the file type to JSON and click on spineopt_template.json as follows:

importing the SpineOpt Template

Click on spineopt_template.json and press Open. If you don't see spineopt_template.json, make sure you have navigated to Spine\SpineOpt.jl\templates.

Loading the latest version of the SpineOpt template in this way will update your datastore with the latest version of the data structure.

Branching Stochastic Tree

templates/archetypes/branching_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called branching, representing a branching scenario tree. The stochastic_structure starts out as a single stochastic_scenario called realistic, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. This archetype is the final product of following the steps in the Example of branching stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

Converging Stochastic Tree

templates/archetypes/converging_stochastic_tree.json

This archetype contains the definitions required for an example stochastic_structure called converging, representing a converging scenario tree (technically a directed acyclic graph DAG). The stochastic_structure starts out as a single stochastic_scenario called realization, which then branches out into three roughly equiprobable stochastic_scenarios called forecast1, forecast2, and forecast3 after 6 hours. Then, after 24 hours (1 day), these three forecasts converge into a single stochastic_scenario called converged_forecast. This archetype is the final product of following the steps in the Example of converging stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

Deterministic Stochastic Structure

templates/archetypes/deterministic_stochastic_structure.json

This archetype contains the definitions required for an example stochastic_structure called deterministic, representing a simple deterministic modelling case. The stochastic_structure contains only a single stochastic_scenario called realization, which continues indefinitely. This archetype is the final product of following the steps in the Example of deterministic stochastics part of the Stochastic Framework section.

Importing this archetype into an input datastore only creates the stochastic_structure, which needs to be connected to the rest of your model using either the model__default_stochastic_structure relationship for a model-wide default, or the other relevant Structural relationship classes. Note that the model-wide default gets superseded by any conflicting definitions via e.g. the node__stochastic_structure.

diff --git a/dev/getting_started/creating_your_own_model/index.html b/dev/getting_started/creating_your_own_model/index.html index d223d27c67..e233b1dd6d 100644 --- a/dev/getting_started/creating_your_own_model/index.html +++ b/dev/getting_started/creating_your_own_model/index.html @@ -1,2 +1,2 @@ -Creating Your Own Model · SpineOpt.jl

Creating Your Own Model

This part of the guide first shows an example of how to insert objects and their parameter data. It then shows what other objects, relationships, and parameter data need to be added for a very basic model. Lastly, the model instance is run.

This section explains the process of creating a SpineOpt.jl model from scratch in order to give you an understanding of the underlying principles of the data structure, etc. If you simply want to try something out quickly to see results, check out the Example Models section. Furthermore, if you're in a hurry, the Archetypes section provides you with some pre-made templates for the different parts of a SpineOpt.jl model to get you started quickly.

Creating a SpineOpt model instance

  • First, open the database editor by double-clicking the Input DB.
  • Right click on model in the Object tree.
  • Choose Add objects.
  • Then, add a model object by writing a name to the object name field. You can use e.g. instance.
  • Click ok.
  • The model object in SpineOpt is an abstraction that represents the model itself. Every SpineOpt database needs to have at least one model object.
  • The model object holds general information about the optimization. The whole range of functionalities is explained in Advanced Concepts chapter - in here a minimal set of parameters is used.

image

image

Add parameter values to the model instance

  • Select the model object instance from the object tree.
  • Go to the Object parameter value tab.
  • Every parameter value belongs to a specific alternative. This makes it possible to hold multiple values for the same parameter of a particular object; the alternative values are used to create scenarios. Choose Base for all parameter values (Base is required in Spine Toolbox - all other alternatives can be chosen freely).
  • Then define a model_start time and a model_end time.
    • Double-click on the empty row under parameter_name and select model_start.
    • A None should appear in the value column.
    • To assign a start date value, right-click on None and open the editor (the value cannot be entered directly, since the datatype needs to be changed).
    • The model_start parameter is of type DateTime.
    • Set the value to e.g. 2019-01-01T00:00:00.
    • Proceed accordingly for the model_end.

image

Further reading on adding parameter values can be found here.

Add other necessary objects and parameter data for the objects.

  • Add all objects and their parameter data by replicating what has been done in the picture below. Do it the same way as explained above with the following caveats.
  • Whilst most object names can be freely defined by the user, there is one object name in the example below that needs to be written exactly since it is used internally by SpineOpt: unit_flow.
  • The parameter_name can be selected from a drop down menu.
  • The date time and time series parameter data can be added by using right-click to access the Edit... dialog. When creating the time series, use the fixed resolution with Start time of the model run and with 1h resolution. Then only values need to be entered (or copy pasted) and time stamps come automatically.
  • Parameter balance_type needs to have value balance_type_none in the gas node, since it allows the node to create energy (natural gas) against a price and therefore the energy balance is not maintained.

image

Define temporal and stochastic structures

  • To specify the temporal structure for SpineOpt, you need to define temporal_block objects. Think of a temporal_block as a distinctive way of 'slicing' time across the model horizon.
  • To link the temporal structure to the spatial structure, you need to specify node__temporal_block relationships, establishing which temporal_block applies to each node. This relationship is added by right-clicking the node__temporal_block in the relationship tree and then using the add relationships... dialog. Double-clicking on an empty cell gives you the list of valid objects. The relationship name is formed automatically, but you can change it if desired.
  • To keep things simple at this point, let's just define one temporal_block for our model and apply it to all nodes. We add the object hourly_temporal_block of type temporal_block following the same procedure as before and establish node__temporal_block relationships between node_gas and hourly_temporal_block, and between node_elec and hourly_temporal_block.
  • In practical terms, the above means that there are energy flows over node_gas and node_elec for each 'time-slice' comprised in hourly_temporal_block.
  • Similarly with the stochastic structure, each node is assigned a deterministic stochastic_structure.

Define the spatial structure

  • To specify the spatial structure for SpineOpt, you will need to use the node, unit, and connection objects added before.
  • Nodes can be understood as spatial aggregators. In combination with units and connections, they form the energy network.
  • Units in SpineOpt represent any kind of conversion process. As one example, a unit can represent a power plant that converts the flow of a commodity fuel into an electricity and/or heat flow.
  • Connections on the other hand describe the transport of goods from one location to another. Electricity lines and gas pipelines are examples of such connections. This example does not use connections.
  • The database should have an object gas_turbine for the unit object class and objects node_gas and node_elec for the node object class.
  • Next, define how the unit and the nodes interact with each other: create a unit__from_node relationship between gas_turbine and node_gas, and unit__to_node relationships between gas_turbine and node_elec.
  • In practical terms, the above means that there is an energy flow going from node_gas into node_elec, through the gas_turbine.

Add remaining relationships and parameter data for the relationships.

  • Similar to adding the objects and their parameter data, add the relationships and their parameter data based on the picture below.
  • The capacity of the gas_turbine has to be sufficient to meet the highest demand for electricity, otherwise the model will be infeasible (it is possible to set penalty values, but they are not included in this example).
  • The parameter fix_ratio_in_out_unit_flow forces the ratio between an input and output flow to be a constant. This is one way to establish an efficiency for a conversion process.

image

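What fix_ratio_in_out_unit_flow enforces can be illustrated with a small Python sketch (hypothetical numbers, not SpineOpt code):

```python
# fix_ratio_in_out_unit_flow fixes input flow = ratio * output flow,
# which amounts to a constant conversion efficiency of 1 / ratio.
fix_ratio_in_out_unit_flow = 2.5   # units of gas in per unit of electricity out

elec_out = 100.0                   # electricity flow out of the gas_turbine
gas_in = fix_ratio_in_out_unit_flow * elec_out

efficiency = elec_out / gas_in
print(gas_in, efficiency)          # 250.0 0.4
```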
Run the model

  • Select SpineOpt
  • Press Execute selection.

image

If it fails

  • Double-check that the data is correct
  • Try to see what the problem might be
  • Ask for help on the discussion forum

Explore the results

  • Double-click the Results database.

image

Create and run scenarios and build the model further

  • Create a new alternative
  • Add parameter data for the new alternative
  • Connect alternatives under a scenario. Toolbox modifies Base data with the data from the alternatives in the same scenario.
  • Execute multiple scenarios in parallel. The first run in a new Julia instance needs to compile SpineOpt, which takes some time.

image

image

diff --git a/dev/getting_started/installation/index.html b/dev/getting_started/installation/index.html index 97f708d22a..df452ef7d1 100644 --- a/dev/getting_started/installation/index.html +++ b/dev/getting_started/installation/index.html @@ -2,4 +2,4 @@ Installation · SpineOpt.jl

Installation

Compatibility

This package requires Julia 1.2 or later.

Some of the development of SpineOpt depends on the development of SpineInterface and vice versa. At some points in time that can create an incompatibility between the two. It might just be a matter of time before the projects are updated. In the meanwhile, you can check the issues on GitHub to see whether someone has already reported the out-of-sync issue, or otherwise create the issue yourself. You can also try another version: one option is to update the packages directly from the GitHub repository instead of the Julia registry; another option is to use the developer version. The installation procedure for both of these options is described in the readme of the GitHub repository.

Installation

If you haven't installed the tools yet, please follow the installation guides:

If you are not sure whether you have the latest version, please upgrade to ensure compatibility with this guide.

  • For Spine Toolbox:
    • If installed with pipx, use `python -m pipx upgrade spinetoolbox`
    • If installed from sources using git, run `git pull` and then `python -m pip install -U -r requirements.txt`
  • For SpineOpt: https://github.com/spine-tools/SpineOpt.jl#upgrading

diff --git a/dev/getting_started/output_data/index.html b/dev/getting_started/output_data/index.html index 75fe2c2b34..aa2a37e428 100644 --- a/dev/getting_started/output_data/index.html +++ b/dev/getting_started/output_data/index.html @@ -1,2 +1,2 @@ -Managing Outputs · SpineOpt.jl

Managing Output Data

Once a model is created and successfully run, it will hopefully produce results and output data. This section covers how the writing of output data is controlled and managed.

Specifying Your Output Data Store

In your workflow (for more details see Setting up a workflow for SpineOpt in Spine Toolbox) you will normally have an output datastore connected to your RunSpineOpt workflow tool. This is where your output data will be written. If no output datastore is specified, the results will be written by default to the input datastore. However, it is generally preferable to define a separate output datastore for results. See Setting up a workflow for SpineOpt in Spine Toolbox for the steps to add an output datastore to your workflow.

Specifying Outputs to Write

Outputting of results to the output datastore is controlled using the output and report object classes. To output a specific variable to the output datastore, we need to create an output object of the same name. For example, to output the unit_flow variable, we must create an output object named unit_flow. The SpineOpt template contains output objects for most problem variables, and importing or re-importing the SpineOpt template will add these to your input datastore, so these output objects will probably already exist there. Once the output objects exist in your model, they must then be added to a report object by creating a report__output relationship.

Creating Reports

Reports are essentially a collection of outputs that can be written to an output datastore. Any number of report objects can be created. We add output items to a report by creating report__output relationships between the output objects we want included and the desired report object. Finally, to write a specific report to the output database, we must create a model__report relationship for each report object we want included in the output datastore.

Reporting of Input Parameters

In addition to writing results as outputs to a datastore, SpineOpt can also report input parameter data. To allow specific input parameters to be included in a report, they must be first added as output objects with a name corresponding exactly to the parameter name. For example, to allow the demand parameter to be included in a report, there must be a correspondingly named output object called demand. Similarly to outputs, to include an input parameter in a report, we must create a report__output relationship between the output object representing the input parameter (e.g. demand) and the desired report object.

Reporting of Dual Values

To report the dual of a constraint, one can add an output item with the corresponding constraint name (e.g. constraint_nodal_balance) and add that to a report. This will cause the corresponding constraint's marginal value to be reported in the output DB. When adding a constraint name as an output we need to preface the actual constraint name with constraint_ to avoid ambiguity with variable names (e.g. units_available). So to report the marginal value of units_available we add an output object called constraint_units_available.

To report the reduced_cost() for a variable, which is the marginal value of the associated active bound or fix constraints on that variable, one can add an output object with the variable name prepended by bound_. So, to report the units_on reduced cost value, one would create an output item called bound_units_on. If added to a report, this will cause the reduced cost of units_on in the final fixed LP to be written to the output DB. Finally, if any constraint duals or reduced cost values are requested via a report, calculate_duals is set to true and the final fixed LP solve is triggered.

Output Data Temporal Resolution

To control the resolution of report data (both output data and input data appearing in reports), we use the output_resolution output parameter. For the specific output (or input), this indicates the resolution at which the values should be reported. If output_resolution is null (the default), results are reported at the highest resolution available from the temporal structure of the model. If output_resolution is a duration value, then the average value over that duration is reported.

Output Data Structure

The structure of the output data will follow the structure of the input data with the inclusion of additional dimensions as described below:

  • The report object to which the output data items belong will be added as a dimension
  • The relevant stochastic scenario will be added as a dimension to all output data items. This allows for stochastic data to be written to the output datastore. However, in deterministic models, the single deterministic scenario will still appear as an additional dimension
  • For unit flows, the flow direction is added as a dimension to the output.

Example: unit_flow

For example, consider the unit_flow optimisation variable. This variable is dimensioned on the unit__to_node and unit__from_node relationships. In the output datastore, the report, stochastic_scenario and flow direction are added as additional dimensions. Therefore, unit__to_node values will appear in the output datastore as timeseries parameters associated with the report__unit__node__direction__stochastic_scenario relationship as shown below.

image

To view the data, simply double-click on the timeseries value.

Example: units_on

Consider the units_on optimisation variable. This variable is dimensioned on the unit object class. In the output datastore, the report and stochastic_scenario are added as additional dimensions. Therefore, units_on values will appear in the output datastore as timeseries parameters associated with the report__unit__stochastic_scenario relationship as shown below.

image

To view the data, simply double-click on the timeseries value.

Alternatives and Multiple Model Runs

  • All outputs from a single run of a model will be tagged with a unique "alternative". Alternatives allow multiple values to be specified for the same parameter. If a model is run multiple times, the results will be appended to the output datastore with a new alternative which uniquely identifies the scenario and model run. This is convenient as it allows results from multiple runs and for multiple scenarios to be viewed and compared simultaneously. If a specific alternative is not selected (the default condition), the results for all alternatives will be visible. If a single alternative is selected, or multiple alternatives are selected in the alternative tree, then only the results for the selected alternatives will be shown.

In the example below, the relationship class report__unit__stochastic_scenario is selected in the relationship tree, therefore results for that relationship class are shown in the relationship parameter pane. Furthermore, in the alternative tree, the alternative 10h TP Load _Reun SpineOpt... is selected, meaning only results for that alternative are being displayed.

image

Output Writing Summary

  • We need an output object in our input datastore for each variable or marginal value we want included in a report
  • Input data can also be reported. As above, we need to create an output object named after the input parameter we want reported
  • We need to create a report object to contain our desired outputs (or input parameters) which are added to our report via report__output relationships
  • We need to create a model__report relationship to write a specific report to the output datastore.
  • The temporal resolution of outputs (which may also be input parameters) is controlled by the output_resolution output duration parameter. If null, the highest available resolution is reported, otherwise the average is reported over the desired duration.
  • Additional dimensions are added to the output data such as the report object, stochastic_scenario and, in the case of unit_flow, the flow direction.
  • Model outputs are tagged with alternatives that are unique to the model run and scenario that generated them
diff --git a/dev/getting_started/setup_workflow/index.html b/dev/getting_started/setup_workflow/index.html index 5c05631b50..d020fd3694 100644 --- a/dev/getting_started/setup_workflow/index.html +++ b/dev/getting_started/setup_workflow/index.html @@ -1,4 +1,4 @@ Setting up a workflow · SpineOpt.jl

Setting up a workflow for SpineOpt in Spine Toolbox

The next steps will set up a SpineOpt specific input database by creating a new Spine database, loading a blank SpineOpt template, connecting it to a SpineOpt instance and setting up a database for model results.

  • Create a new Spine Toolbox project in an empty folder of your choice: File –> New project...
  • Create the input database
    • Drag an empty Data store from the toolbar to the Design View.
    • Give it a name like "Input DB".
    • Select SQL database dialect (sqlite is a local file and works without a server).
    • Click New Spine DB in the Data Store Properties window and create a new database (and save it, if it's sqlite).
    • For more information about creating and managing Spine Toolbox database, see the documentation

image

image

  • Fill the Input DB with SpineOpt data format either by:
    • Drag a tool Load template from the SpineOpt ribbon to the Design View.
    • Connect an arrow from the Load template to the new Input DB.
    • Make sure the Load template item from the Design view is selected (then you can edit the properties of that workflow item in the Tool properties window).
    • Add the url link in Available resources to the Tool arguments - you are passing the database address as a command line argument to the load_template.jl script so that it knows where to store the output.
    • Then execute the Load template tool. Please note that this process uses SpineOpt to generate the data structure. It takes time, since everything is compiled when running a tool in Julia for the first time in each Julia session. You may also see a lot of messages and warnings concerning the compilation, but they should be benign.

image

image

image

image

  • ...or by:
    • Start Julia (you can start a separate Julia console in Spine Toolbox: go to Consoles –> Start Julia Console).
    • Copy the URL address of the Data Store from the 'Data Store Properties' –> a copy icon at the bottom.
    • Then run the following script with the right URL address pasted. The process uses SpineOpt itself to build the database structure. Please note that 'using SpineOpt' for the first time for each Julia session takes time - everything is being compiled.
      • Known issue: On Windows, the backslashes between directories need to be changed to double forward slashes.
julia> using SpineOpt
 
julia> SpineOpt.import_data("copied URL address, inside these quotes", SpineOpt.template(), "Load SpineOpt template")
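To illustrate the Windows caveat above, the call with a pasted sqlite URL might look like the following sketch (the path is hypothetical):

```julia
julia> using SpineOpt

julia> # Original copied URL (hypothetical): sqlite:///C:\data\spineopt_input.sqlite
       # With the backslashes changed as described above:
       SpineOpt.import_data("sqlite:///C://data//spineopt_input.sqlite", SpineOpt.template(), "Load SpineOpt template")
```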
  • Drag SpineOpt tool icon to the Design view.
  • Connect an arrow from the Input DB to SpineOpt.

image

  • Create a database for results
    • Drag a new Data store from the toolbar to the Design View.
    • You can rename it to e.g. Results. Select SQL database dialect (sqlite is a local file and works without a server).
    • Click New Spine DB in the Data Store Properties window and create a new database (and save it, if it's sqlite).
    • Connect an arrow from the SpineOpt to Results.

image

  • Select SpineOpt tool in the Design view.
  • Add the url link for the input data store and the output data store from Available resources to the Tool arguments (in that order).

image

SpineOpt would be ready to run, except that the Input DB is empty of content (it's just a template that contains the SpineOpt specific data structure). The next step goes through setting up and running a simple toy model.


diff --git a/dev/how_to/change_the_solver/index.html b/dev/how_to/change_the_solver/index.html index faebe2ac55..3a8b292501 100644 --- a/dev/how_to/change_the_solver/index.html +++ b/dev/how_to/change_the_solver/index.html @@ -1,2 +1,2 @@ -Change the solver · SpineOpt.jl

If you want to change the solver for your optimization problem in SpineOpt, here is some guidance:

  • You can change the solvers in your input datastore using the db_lp_solver and db_mip_solver parameter values of the model object.
  • You can specify solver options via the db_lp_solver_options and db_mip_solver_options parameters respectively. These are map parameters where the first key is the solver name exactly as the db_mip_solver or db_lp_solver name, the second key is the solver option name and the value is the option value.
  • You can get a head start by copying the default map values for db_lp_solver_options and db_mip_solver_options. You can access the default values by clicking on the 'Object parameter definition' tab.
  • If you were trying to change the solver using the arguments to run_spineopt(), this is not the recommended way and will soon be deprecated.
  • The solver name corresponds to the name of the Julia package that you will need to install. Some, like HiGHS.jl, are self-contained and include the binaries. For others, like CPLEX.jl and Gurobi.jl, you will need to point the package to your locally installed binaries; the Julia packages include instructions for doing this.

The first option is the easiest. The more advanced way of using the solver options is illustrated below.

Set the model parameter values to choose the solvers and set the solver options:

image

This is what the solver options map parameter value looks like:

image
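Rendered as text, such a nested map follows the structure described above: solver name, then option name, then option value. In this sketch the option names and values are illustrative assumptions, not SpineOpt defaults:

```json
{
  "HiGHS.jl": {
    "presolve": "on",
    "time_limit": 300.0
  }
}
```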

To get a head start with solver options, you can copy their default map values from the parameter definition tab like this:

image


diff --git a/dev/how_to/define_an_efficiency/index.html b/dev/how_to/define_an_efficiency/index.html index 7538db975b..a86d7fe437 100644 --- a/dev/how_to/define_an_efficiency/index.html +++ b/dev/how_to/define_an_efficiency/index.html @@ -1,2 +1,2 @@ -Define an efficiency · SpineOpt.jl

How to define an efficiency

relationships between the inputs and outputs of a unit

The image below shows an overview of the possible relationships between the inputs and outputs of a unit.

image

image

The key capability requirements are:

  • Easily define arbitrary numbers of input and output flows
  • Easily create piecewise affine linear relationships between any two flows
  • Anything more complicated can be done via user_constraints

unit__node__node relationship

image

The unit__node__node relationship allows you to constrain two nodes to each other via a number of different parameters:

  • unit_incremental_heat_rate: input_flow = unit_incremental_heat_rate * output_flow + unit_idle_heat_rate * units_on. It can be piecewise linear when used in conjunction with operating_points, with monotonically increasing coefficients (not enforced). Used in conjunction with it, unit_idle_heat_rate triggers a fixed flow when the unit is online and unit_start_flow triggers a flow on a unit start (start fuel consumption)
  • fix_ratio_out_in_unit_flow: equivalent to an efficiency. output_flow = fix_ratio_out_in_unit_flow * input_flow + fix_units_on_coefficient_out_in * units_on. Ordering of the nodes in the unit__node__node relationship matters: the first node will be the output flow and the second node will be treated as the input flow (consistently with the out_in in the parameter name). A units_on coefficient is added with fix_units_on_coefficient_out_in.
  • In addition to fix_ratio_out_in_unit_flow you have [constraint]_ratio_[direction1]_[direction2]_unit_flow where constraint can be min, max or fix and determines the sense of the constraint (max: <, min: >, fix: =) while direction1 and direction2 are used to interpret the direction of the flows involved. In signifies an input flow to the unit while out signifies an output flow from the unit. For each of these parameters, there is a corresponding [constraint]_[direction1]_[direction2]_units_on_coefficient. For example: max_ratio_in_out_unit_flow creates the following constraint:

input_flow < max_ratio_in_out_unit_flow * output_flow + max_units_on_coefficient_in_out * units_on

real world example: Compressed Air Energy Storage

To give a feeling for why these functionalities are useful, consider the following real world example for Compressed Air Energy Storage:

image

known issues

That does not mean that this implementation is perfect; there are some known issues:

  • Multiple ways to do the same thing (kind of)
  • The ordering of nodes in unit__node__node relationship matters and this can be confusing
  • When specifying a unit__node__node relationship, currently toolbox doesn’t constrain a user to choosing nodes that are connected to the unit. It’s possible to create a unit__node__node relationship between a unit and nodes where there are no flows. We actually need to define a relationship between two flows, which is really a relationship between a unit__[to/from]_node relationship and a unit__[to/from]_node relationship.
  • There is a long list of parameters (24 in total) [fix/max/min]_ratio_[in/out]_[in/out]_[unit_flow/units_on_coefficient]
  • Incremental_heat_rate supports piecewise linear but the ratio constraints don’t
diff --git a/dev/how_to/print_the_model/index.html b/dev/how_to/print_the_model/index.html index 6cac2e9754..21dcb4eb1b 100644 --- a/dev/how_to/print_the_model/index.html +++ b/dev/how_to/print_the_model/index.html @@ -5,4 +5,4 @@ raw"sqlite:///C:\path\to\your\outputdb.sqlite"; optimize=false ) -write_model_file(m; file_name="<path-with-file-name>")

The resulting file has the extension *.so_model in the specified path.

Note

If running the previous code gives you an error, please try replacing the last line with SpineOpt.write_model_file(m; file_name="<path-with-file-name>"). This error might appear in previous versions of SpineOpt where the write_model_file was not exported as part of the SpineOpt package.

In either case, here are some tips if you are using this file for debugging. The file can be very large so often it is helpful to create a minimum example of your model with only one or two timesteps. In addition, in the call to run_spineopt() you can add the keyword argument optimize=false, as in the example above, so it will just build the model and not attempt to solve it.


diff --git a/dev/implementation_details/documentation/index.html b/dev/implementation_details/documentation/index.html index b97a4eec8d..c4dd25ec7a 100644 --- a/dev/implementation_details/documentation/index.html +++ b/dev/implementation_details/documentation/index.html @@ -24,4 +24,4 @@ expand_tags!(objective_function_lines, docstrings) open(joinpath(mathpath, "objective_function_automatically_generated.md"), "w") do file write(file, join(objective_function_lines, "\n")) -end

To deactivate the functionality, just remove the code and replace the tags in your .md file.

It is also possible to introduce this feature over time. Anytime you want to add the documentation of a constraint to the docstring you need to follow a few steps:

  1. For the docstring
    1. add @doc raw before the docstring (which allows writing LaTeX in the docstring)
  2. For the .md file
    1. cut the description and mathematical formulation and paste them in the corresponding function's docstring
    2. add the tag to pull the above from the docstring

Examples of both the docstring and the instruction file have already been shown above.
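As an additional minimal sketch of step 1, a docstring prefixed with @doc raw can carry LaTeX directly (the function name and formula here are made up for illustration):

```julia
# Hypothetical constraint function; the raw docstring lets us write LaTeX
# (e.g. \leq) without escaping backslashes:
@doc raw"""
Limit the flow of a unit by its capacity:
``v_{unit\_flow}(u, n, t) \leq p_{unit\_capacity}(u, n)``
"""
function add_constraint_my_flow_capacity!(m)
    # the description and formulation above were moved here from the .md file
end
```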

Drag and drop

There is also a drag-and-drop feature for select chapters (e.g. the how to section). For those chapters you can simply add your markdown file to the folder of the chapter and it will be automatically added to the documentation. To allow both manually composed and automatically generated chapters, the functionality is only activated for empty chapters (of the structure "chapter name" => []).
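In terms of the pages structure, an empty chapter that activates the feature might look like this (a sketch of the convention described above; the chapter names are examples):

```julia
# An empty chapter ("How to" => []) is filled automatically from the
# files in that chapter's folder; non-empty chapters stay manual:
pages = [
    "Introduction" => "index.md",  # manually composed chapter
    "How to" => [],                # drag-and-drop: picked up from the folder
]
```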

The drag-and-drop function assumes a specific structure for the documentation files.

+end


diff --git a/dev/implementation_details/how_does_the_model_update_itself/index.html b/dev/implementation_details/how_does_the_model_update_itself/index.html index 666efec0df..629cdeed3c 100644 --- a/dev/implementation_details/how_does_the_model_update_itself/index.html +++ b/dev/implementation_details/how_does_the_model_update_itself/index.html @@ -1,3 +1,3 @@ How does the model update itself · SpineOpt.jl

How does the model update itself after rolling?

In SpineOpt, constraints, objective and bounds update themselves automatically whenever the model rolls. To picture this, imagine you have a rolling model with two windows, corresponding to the first and second days of 2023, and daily resolution. (In other words, each window consists of a single time-slice that covers the entire day.) Also, imagine you have a node where the demand is a time-series defined as follows:

timestamp     value
2023-01-01    5
2023-01-02    10

To simplify things, let's say the nodal balance constraint in SpineOpt has the following form:

sum of flows entering the node - sum of flows leaving the node == node's demand
-(for each t in the current window)

You would expect the rhs of this constraint to be 5 for the first window, and 10 for the second window. That is indeed the case, but the way this works under the hood is quite 'magical', so to speak.

In SpineOpt, the rhs of the above constraint would be written (roughly) using the following Julia expression:

demand[(node=n, t=t, more arguments...)]

Notice the brackets ([]) around the named-tuple with the arguments. Without these (i.e., demand(node=n, t=t, more arguments...)) the expression would evaluate to a number, and the constraint would be static (non-self-updating). But with the brackets, instead of a number, the expression evaluates to a special object of type Call. The important thing about the Call is that it remembers the arguments, including the t.

Right before the constraint is passed to the solver, SpineOpt 'realizes' the Call with the current value of t, and computes the actual rhs. So for the first window, where t is the first day in 2023, it will be 5.

Now, whenever SpineOpt rolls forward to solve the next window, it updates the value of t by adding the roll_forward value. (This allows SpineOpt to reuse the same time-slices in all the windows.) But when this happens, the Call is also checked to see if it would return something different now that t has been rolled. And if that's the case, the constraint is automatically updated to reflect the change. In our example, the rhs would become 10 because t is now the second day.

In sum, without the brackets, the constraint would be lhs == 5 (and it would never change), whereas with the brackets, the constraint becomes lhs == the demand at the current value of t.

And the above is valid not only for rhs, but also for any coefficient in any constraint or objective, and for any variable bound.

To see how all this is actually implemented, we suggest you look at the code of SpineInterface. The starting point is the implementation of Base.getindex for the Parameter type, so that writing, e.g., demand[...arguments...] returns a Call that remembers the arguments. From there, we proceed to extend JuMP.jl to handle our Call objects within constraints and the objective. The last bit is perhaps the most complex, and consists of storing callbacks inside TimeSlice objects whenever they are used to retrieve the value of a Parameter to build a model. The callbacks are carefully crafted to update a specific part of that model (e.g., a variable coefficient, a variable bound, a constraint rhs). Whenever the TimeSlice rolls, depending on how much it rolls, the appropriate callbacks are called, resulting in the model being properly updated. That's roughly it! Hopefully this brief introduction helps (but please contact us if you need more guidance).
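The mechanism can be pictured with a small toy version. This is only an illustration of the idea (the Call and Parameter types here are stand-ins, not the real SpineInterface code, and a Ref plays the role of a rolling TimeSlice):

```julia
# Toy illustration of a self-updating parameter value:
struct Call
    f::Function
    args::NamedTuple
end
realize(c::Call) = c.f(; c.args...)
realize(x) = x                      # plain values realize to themselves

struct Parameter
    values::Dict{String,Float64}
end
(p::Parameter)(; t) = p.values[t[]]  # plain call: evaluates to a static number
Base.getindex(p::Parameter, args::NamedTuple) = Call((; t) -> p.values[t[]], args)

demand = Parameter(Dict("2023-01-01" => 5.0, "2023-01-02" => 10.0))
t = Ref("2023-01-01")     # mutable stand-in for a TimeSlice that can roll

static_rhs = demand(t=t)  # 5.0 forever, even after t rolls
lazy_rhs = demand[(t=t,)] # a Call that remembers t

realize(lazy_rhs)         # 5.0 in the first window
t[] = "2023-01-02"        # roll forward
realize(lazy_rhs)         # now 10.0, while static_rhs is still 5.0
```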

+(for each t in the current window)


diff --git a/dev/implementation_details/how_to_write_a_constraint/index.html b/dev/implementation_details/how_to_write_a_constraint/index.html index 38c5cdad9e..c26363eac3 100644 --- a/dev/implementation_details/how_to_write_a_constraint/index.html +++ b/dev/implementation_details/how_to_write_a_constraint/index.html @@ -249,4 +249,4 @@ my_unit_flow_capacity(unit = pwrplant, node = elec, direction = to_node, t = 2023-01-01T07:00~>2023-01-01T08:00, t_next = 2023-01-01T08:00~>2023-01-01T09:00, s_path = Object[realisation, forecast2]) : 0 = 0 my_unit_flow_capacity(unit = pwrplant, node = fuel, direction = from_node, t = 2023-01-01T00:00~>2023-01-01T02:00, t_next = 2023-01-01T02:00~>2023-01-01T04:00, s_path = Object[realisation]) : 0 = 0 my_unit_flow_capacity(unit = pwrplant, node = fuel, direction = from_node, t = 2023-01-01T02:00~>2023-01-01T04:00, t_next = 2023-01-01T04:00~>2023-01-01T06:00, s_path = Object[realisation]) : 0 = 0 -

Which looks like we're on to something. Indeed, on the fuel side, s_path is always just [realisation], because both the fuel node and the pwrplant unit have the one_stage stochastic_structure. But on the elec side, at the beginning we have [realisation] and then we start getting [realisation, forecast1] and [realisation, forecast2]. The turning point is exactly at 2023-01-01T06:00, where realisation ends according to the stochastic_scenario_end parameter.

So it's all good!

The function that generates the constraint

Congratulations, you have made it this far. Now we will finally start writing our constraint expression.

Note

I will grab a coffee and be right back.

+


diff --git a/dev/implementation_details/time_slices/index.html b/dev/implementation_details/time_slices/index.html index 4c0f1d6567..71f4250eae 100644 --- a/dev/implementation_details/time_slices/index.html +++ b/dev/implementation_details/time_slices/index.html @@ -1,2 +1,2 @@ -Time slices · SpineOpt.jl

How does SpineOpt perceive time?

This section answers the following questions:

  1. What are time slices?
  2. What are time slice convenience functions?
  3. How can they be used?

What are time slices?

A TimeSlice is simply a slice of time with a start and an end. We use them in SpineOpt to represent the temporal dimension.

More specifically, we build the model using TimeSlices for the temporal indices. This happens in the run_spineopt function and it's done in two steps:

  1. Generate the temporal structure for the model:
    1. Translate the temporal_blocks in the input DB to a set of TimeSlice objects.
    2. Create relationships between these TimeSlice objects:
      • Relationships between two consecutive time slices (t_before ending right when t_after starts).
      • Relationship between overlapping time slices (t_short contained in t_long).
    3. Store all the above within m.ext[:spineopt].temporal_structure.
  2. Build the model:
    1. Query m.ext[:spineopt].temporal_structure to collect generated TimeSlice objects and relationships.
    2. Use them for indexing variables and generating constraints and objective.

To translate the temporal_blocks into TimeSlice objects, we basically look at the value of model_start and model_end for the model object, as well as the value of the resolution for the different temporal_block objects. Then we build as many TimeSlices as needed to cover the period between model_start and model_end at each resolution.
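That translation can be sketched roughly as follows (the MyTimeSlice struct is a stand-in for the real TimeSlice, and the function name is made up):

```julia
using Dates

struct MyTimeSlice       # stand-in for the real TimeSlice: a start and an end
    start::DateTime
    stop::DateTime
end

# Cover [model_start, model_end) with slices at the given resolution:
function make_time_slices(model_start, model_end, resolution)
    [MyTimeSlice(s, s + resolution)
     for s in model_start:resolution:(model_end - resolution)]
end

slices = make_time_slices(DateTime(2023, 1, 1), DateTime(2023, 1, 2), Hour(6))
length(slices)  # 4 six-hour slices covering the day
```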

Note

m is the JuMP.Model object that SpineOpt builds and solves using JuMP. It has a field called ext which is a Dict where one can store custom data. m.ext[:spineopt].temporal_structure is just another Dict where we store data related to the temporal structure.

What are the time slice convenience functions?

To facilitate querying the temporal structure, we have developed the following convenience functions:

Note

To further figure out what the time slice convenience functions do, you can play around with them. To do so, you first need to make a database (e.g. in Spine Toolbox). Then you can call run_spineopt with that database and collect the model m. If you are impatient, you do not even need to solve the model: you can pass optimize=false as a keyword argument to run_spineopt. Then you can start calling the time slice convenience functions with m (e.g. t_in_t).

How can the time slice convenience functions be used?

When building constraints you typically want to know which TimeSlices come after/before another, overlap another, or contain/are contained in another. You can obtain this type of info by calling the above convenience functions.

For example, say you're generating a constraint at a 3-hour resolution. This means you have a TimeSlice in your constraint index, and that TimeSlice covers 3 hours. Now, say you want to sum a certain variable over those 3 hours in your constraint expression. You need to know all the TimeSlices contained in the one from your constraint index. You can find this out by calling t_in_t with it.
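The containment lookup can be pictured like this (a sketch using plain (start, end) tuples, not the real t_in_t):

```julia
using Dates

# in_t(long, short): is `short` contained in `long`?
in_t(long, short) = first(long) <= first(short) && last(short) <= last(long)

hourly = [(DateTime(2023, 1, 1, h), DateTime(2023, 1, 1, h + 1)) for h in 0:5]
t_3h = (DateTime(2023, 1, 1, 0), DateTime(2023, 1, 1, 3))
[t for t in hourly if in_t(t_3h, t)]  # the three hourly slices inside t_3h
```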

More information can be found in the Write a constraint for SpineOpt section.

Note

A foolproof way of writing a constraint (though not necessarily the most efficient) is to always take the highest resolution among the overlapping TimeSlices to generate the constraint indices. The other TimeSlices can then be obtained from t_overlaps_t.

+Time slices · SpineOpt.jl


diff --git a/dev/index.html b/dev/index.html index b76d5047bb..0071797a64 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Introduction · SpineOpt.jl

Introduction

SpineOpt.jl is an integrated energy systems optimization model, striving towards adaptability for a multitude of modelling purposes. The data-driven model structure allows for highly customizable energy system descriptions, as well as flexible temporal and stochastic structures, without the need to alter the model source code directly. The methodology is based on mixed-integer linear programming (MILP), and SpineOpt relies on JuMP.jl for interfacing with the different solvers.

While, in principle, it is possible to run SpineOpt by itself, it has been designed to be used through Spine Toolbox, and take maximum advantage of the data and modelling workflow management tools therein. Thus, we highly recommend installing Spine Toolbox as well, as outlined in the Installation guide.

How the documentation is structured

Having a high-level overview of how this documentation is structured will help you know where to look for certain things.

  • Getting Started contains guides for starting to use SpineOpt.jl. The Installation section links to the guides for how to install SpineOpt.jl and Spine Toolbox on your computer. The Setting up a workflow for SpineOpt in Spine Toolbox section explains how to set up and run SpineOpt.jl from Spine Toolbox. The Creating Your Own Model section explains how to create a new model from scratch. This includes a list of the necessary Object Classes and Relationship Classes, but for more information, you will probably need to consult the Concept Reference chapter.

  • Tutorials provides guided examples for a set of basic use-cases, either as videos, written text and/or example files. The SpineOpt.jl repository includes a folder examples for ready-made example models. Each example is its own sub-folder, where the input data is provided as .json or .sqlite files. This way, you can easily get a feel for how SpineOpt works with pre-made datasets, either through Spine Toolbox, or directly from the Julia REPL.

  • How to provides explanations on how to do specific high-level things that might involve multiple elements (e.g. how to print the model).

  • Concept Reference lists and explains all the important data and model structure related concepts to understand in SpineOpt.jl. From a mathematical modelling point of view, see the Mathematical Formulation chapter instead. The Basics of the model structure section briefly explains the general purpose of the most important concepts, like Object Classes and Relationship Classes. Meanwhile, the Object Classes, Relationship Classes, Parameters, and Parameter Value Lists sections contain detailed explanations of each and every aspect of SpineOpt.jl, organized into the respective sections for clarity.

  • Mathematical Formulation provides the mathematical view of SpineOpt.jl, as some of the methodology-related aspects of the model are more easily understood as math than Julia code. The Variables section explains the purpose of each variable in the model, as well as how the variables are related to the different Object Classes and Relationship Classes. The Constraints section contains the mathematical formulation of each constraint, as well as explanations to their purpose and how they are controlled via different Parameters. Finally, the Objective section explains the default objective function used in SpineOpt.jl.

  • Advanced Concepts explains some of the more complicated aspects of SpineOpt.jl in more detail, hopefully making it easier for you to better understand and apply them in your own modelling. The first few sections focus on aspects of SpineOpt.jl that most users are likely to use, or which are more or less required to understand for advanced use. The Temporal Framework section explains how defining time works in SpineOpt.jl, and how it can be used for different purposes. The Stochastic Framework section details how different stochastic structures can be defined, how they interact with each other, and how this impacts writing Constraints in SpineOpt.jl. The Unit commitment section explains how clustered unit-commitment is defined, while the Ramping and Reserves sections explain how to enable these operational details in your model. The Investment Optimization section explains how to include investment variables in your models, while the User Constraints section details how to include generic data-driven custom constraints. The last few sections focus on highly specialized use-cases for SpineOpt.jl, which are unlikely to be relevant for simple modelling tasks. The Decomposition section explains the Benders decomposition implementation included in SpineOpt.jl, as well as how to use it. The remaining sections, namely PTDF-Based Powerflow, Pressure driven gas transfer, Lossless nodal DC power flows, and Representative days with seasonal storages, explain various use-case specific modelling approaches supported by SpineOpt.jl.

  • Implementation details explains some parts of the code (for those who are interested in how things work under the hood). Note that this chapter is particularly sensitive to changes in the code and as such might get out of sync. If you do notice a discrepancy, please create an issue on GitHub. That is also the place to go if you don't find what you are looking for in this documentation.

+Introduction · SpineOpt.jl


diff --git a/dev/library/index.html b/dev/library/index.html index ab6b026217..c38c5ace10 100644 --- a/dev/library/index.html +++ b/dev/library/index.html @@ -5,12 +5,12 @@ raw"sqlite:///C:\path\to\your\output_db.sqlite"; filters=Dict("tool" => "object_activity_control", "scenario" => "scenario_to_run"), alternative="alternative_to_write_results" -)source
run_spineopt(f, url_in, url_out; <keyword arguments>)

Same as run_spineopt(url_in, url_out; kwargs...) but calls function f with the SpineOpt model as an argument right after its creation (but before building and solving it).

This is intended to be called using do block syntax.

run_spineopt(url_in, url_out) do m
+)
source
     # Do something with m after its creation
-end  # Building and solving begins after quitting this block
source
SpineOpt.create_model (Function)
create_model(mip_solver, lp_solver, use_direct_model)

A JuMP.Model extended to be used with SpineOpt. mip_solver and lp_solver are 'optimizer factories' to be passed to JuMP.Model or JuMP.direct_model; use_direct_model is a Bool indicating whether JuMP.Model or JuMP.direct_model should be used.

source
SpineOpt.build_model! (Function)
build_model!(m; log_level)

Build given SpineOpt model:

  • create temporal and stochastic structures
  • add variables
  • add expressions
  • add constraints
  • set objective
  • initialize outputs

Arguments

  • log_level::Int: an integer to control the log level.
source
SpineOpt.solve_model! (Function)
solve_model!(m; <keyword arguments>)

Solve given SpineOpt model and save outputs.

Arguments

  • log_level::Int=3: an integer to control the log level.
  • update_names::Bool=false: whether or not to update variable and constraint names after the model rolls (expensive).
  • write_as_roll::Int=0: if greater than 0 and the run has a rolling horizon, write results every time that many windows have been solved.
  • resume_file_path::String=nothing: only relevant in rolling horizon optimisations with write_as_roll greater than or equal to one. If the file at the given path contains resume data from a previous run, the run starts from that point. Resume data is also saved to that same file as the model rolls and results are written to the output database.
  • calculate_duals::Bool=false: whether or not to calculate duals after the model solve.
  • output_suffix::NamedTuple=(;): to add to the outputs.
  • log_prefix::String="": to prepend to log messages.
source
SpineOpt.add_event_handler!Function
add_event_handler!(fn, m, event)

Add an event handler for given model. event must be a Symbol corresponding to an event. fn must be a function callable with the arguments corresponding to that event. Below is a table of events, arguments, and when do they fire.

eventargumentswhen does it fire
:model_builtmRight after model m is built.
:model_about_to_solvemRight before model m is solved.
:model_solvedmRight after model m is solved.
:window_about_to_solve(m, k)Right before window k for model m is solved.
:window_solved(m, k)Right after window k for model m is solved.
:master_model_builtmRight after the Benders master model for model m is built.
:master_model_about_to_solve(m, j)Right before the Benders master model for model m and iteration j is solved.
:master_model_solved(m, j)Right after the Benders master model for model m and iteration j is solved.

Example

run_spineopt("sqlite:///path-to-input-db", "sqlite:///path-to-output-db") do m
+end  # Building and solving begins after quiting this block
source
SpineOpt.create_modelFunction
create_model(mip_solver, lp_solver, use_direct_model)

A JuMP.Model extended to be used with SpineOpt. mip_solver and lp_solver are 'optimizer factories' to be passed to JuMP.Model or JuMP.direct_model; use_direct_model is a Bool indicating whether JuMP.Model or JuMP.direct_model should be used.

source
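For instance, assuming the HiGHS solver package is installed, a model might be created as follows (a sketch, not part of the docstring; any JuMP optimizer factory works):

using JuMP, HiGHS
using SpineOpt

# Use HiGHS for both the MIP and LP solves;
# `false` selects JuMP.Model over JuMP.direct_model.
m = SpineOpt.create_model(HiGHS.Optimizer, HiGHS.Optimizer, false)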
SpineOpt.build_model!Function
build_model!(m; log_level)

Build given SpineOpt model:

  • create temporal and stochastic structures
  • add variables
  • add expressions
  • add constraints
  • set objective
  • initialize outputs

Arguments

  • log_level::Int: an integer to control the log level.
source
SpineOpt.solve_model!Function
solve_model!(m; <keyword arguments>)

Solve given SpineOpt model and save outputs.

Arguments

  • log_level::Int=3: an integer to control the log level.
  • update_names::Bool=false: whether or not to update variable and constraint names after the model rolls (expensive).
  • write_as_roll::Int=0: if greater than 0 and the run has a rolling horizon, write results every that many windows.
  • resume_file_path::String=nothing: only relevant in rolling horizon optimisations with write_as_roll greater than or equal to one. If the file at the given path contains resume data from a previous run, the run resumes from that point. Resume data is also saved to that same file as the model rolls and results are written to the output database.
  • calculate_duals::Bool=false: whether or not to calculate duals after the model solve.
  • output_suffix::NamedTuple=(;): to add to the outputs.
  • log_prefix::String="": to prepend to log messages.
source
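A minimal sketch of building and then solving a model with some of the keyword arguments above (assuming m was created as described for create_model):

# Build the model, then solve it, computing duals after the solve
# and prefixing log messages for easier filtering.
build_model!(m; log_level=3)
solve_model!(m; log_level=3, calculate_duals=true, log_prefix="my_run: ")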
SpineOpt.add_event_handler!Function
add_event_handler!(fn, m, event)

Add an event handler for the given model. event must be a Symbol corresponding to an event. fn must be a function callable with the arguments corresponding to that event. Below is a table of events, their arguments, and when they fire.

event                          arguments   when it fires
:model_built                   m           Right after model m is built.
:model_about_to_solve          m           Right before model m is solved.
:model_solved                  m           Right after model m is solved.
:window_about_to_solve         (m, k)      Right before window k for model m is solved.
:window_solved                 (m, k)      Right after window k for model m is solved.
:master_model_built            m           Right after the Benders master model for model m is built.
:master_model_about_to_solve   (m, j)      Right before the Benders master model for model m and iteration j is solved.
:master_model_solved           (m, j)      Right after the Benders master model for model m and iteration j is solved.

Example

run_spineopt("sqlite:///path-to-input-db", "sqlite:///path-to-output-db") do m
     add_event_handler!(println, m, :model_built)  # Print the model right after it's built
end
source
SpineOpt.generate_temporal_structure!Function
generate_temporal_structure!(m)

Create the temporal structure for the given SpineOpt model. After this, you can call the following functions to query the generated structure:

  • time_slice
  • t_before_t
  • t_in_t
  • t_in_t_excl
  • t_overlaps_t
  • to_time_slice
  • current_window
source
SpineOpt.roll_temporal_structure!Function
roll_temporal_structure!(m[, window_number=1]; rev=false)

Roll the temporal structure of given SpineOpt model forward a period of time equal to the value of the roll_forward parameter. If roll_forward is an array, then window_number can be given either as an Integer or a UnitRange indicating the position or successive positions in that array.

If rev is true, then the structure is rolled backwards instead of forward.

source
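A sketch of rolling the structure, assuming m has a rolling horizon (i.e., the roll_forward parameter is defined):

roll_temporal_structure!(m)            # roll forward to the next window
roll_temporal_structure!(m; rev=true)  # roll back one window instead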
SpineOpt.rewind_temporal_structure!Function
rewind_temporal_structure!(m)

Rewind the temporal structure of given SpineOpt model back to the first window.

source
SpineOpt.time_sliceFunction
time_slice(m; temporal_block=anything, t=anything)

An Array of TimeSlices in model m.

Arguments

  • temporal_block::Union{Object,Vector{Object}}: only return TimeSlices in these blocks.
  • t::Union{TimeSlice,Vector{TimeSlice}}: only return TimeSlices that are also in this collection.
source
SpineOpt.t_before_tFunction
t_before_t(m; t_before=anything, t_after=anything)

An Array where each element is a Tuple of two consecutive TimeSlices in model m, i.e., the second starting when the first ends.

Arguments

  • t_before: if given, return an Array of TimeSlices that start when t_before ends.
  • t_after: if given, return an Array of TimeSlices that end when t_after starts.
source
SpineOpt.t_in_tFunction
t_in_t(m; t_short=anything, t_long=anything)

An Array where each element is a Tuple of two TimeSlices in model m, the second containing the first.

Keyword arguments

  • t_short: if given, return an Array of TimeSlices that contain t_short.
  • t_long: if given, return an Array of TimeSlices that are contained in t_long.
source
SpineOpt.t_in_t_exclFunction
t_in_t_excl(m; t_short=anything, t_long=anything)

Same as t_in_t but excludes tuples of the same TimeSlice.

Keyword arguments

  • t_short: if given, return an Array of TimeSlices that contain t_short (other than t_short itself).
  • t_long: if given, return an Array of TimeSlices that are contained in t_long (other than t_long itself).
source
SpineOpt.t_overlaps_tFunction
t_overlaps_t(m; t)

An Array of TimeSlices in model m that overlap the given t, where t must be in m.

source
SpineOpt.to_time_sliceFunction
to_time_slice(m; t)

An Array of TimeSlices in model m overlapping the given TimeSlice (where t may not be in m).

source
SpineOpt.current_windowFunction
current_window(m)

A TimeSlice corresponding to the current window of given model.

source
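The query functions above can be combined; a sketch, assuming the temporal structure has been generated for m:

w = current_window(m)          # TimeSlice covering the current window
ts = time_slice(m)             # all TimeSlices in the model
t1 = first(ts)
t_before_t(m; t_before=t1)     # TimeSlices that start when t1 ends
t_in_t(m; t_long=t1)           # TimeSlices contained in t1
t_overlaps_t(m; t=t1)          # TimeSlices overlapping t1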
SpineOpt.generate_stochastic_structure!Function
generate_stochastic_structure(m::Model)

Generate the stochastic structure for given SpineOpt model.

The stochastic structure is a directed acyclic graph (DAG) where the vertices are the stochastic_scenario objects, and the edges are given by the parent_stochastic_scenario__child_stochastic_scenario relationships.

After this, you can call active_stochastic_paths to slice the generated structure.

source
SpineOpt.active_stochastic_pathsFunction
active_stochastic_paths(m; stochastic_structure, t)

An Array where each element is itself an Array of stochastic_scenario Objects, corresponding to a path (i.e., a branch) of the stochastic DAG associated to model m.

Arguments

  • stochastic_structure::Union{Object,Vector{Object}}: only return paths of stochastic_scenarios within these structures.
  • t::Union{TimeSlice,Vector{TimeSlice}}: only return paths covering these TimeSlices.
source
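For example, one might collect the paths covering the current window's TimeSlices (a sketch; both keyword arguments act as filters):

paths = active_stochastic_paths(m; t=time_slice(m))
for path in paths
    println(path)  # a Vector of stochastic_scenario Objects, from root to leaf
end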
SpineOpt.write_model_fileFunction
write_model_file(m; file_name="model")

Write model file for given model.

source
SpineOpt.write_reportFunction
write_report(m, url_out; <keyword arguments>)

Write report(s) from given SpineOpt model to url_out. A new Spine database is created at url_out if one doesn't exist.

Arguments

  • alternative::String="": if non empty, write results to the given alternative in the output DB.

  • log_level::Int=3: an integer to control the log level.

source
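A sketch of writing results to a named alternative in the output DB:

write_report(m, "sqlite:///path-to-output-db"; alternative="my_run", log_level=3)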
SpineOpt.write_report_from_intermediate_resultsFunction
write_report_from_intermediate_results(intermediate_results_folder, default_url; <keyword arguments>)

Collect results generated on a previous, unsuccessful SpineOpt run from intermediate_results_folder, and write the corresponding report(s) to url_out. A new Spine database is created at url_out if one doesn't exist.

Arguments

  • alternative::String="": if non empty, write results to the given alternative in the output DB.

  • log_level::Int=3: an integer to control the log level.

source
SpineOpt.master_modelFunction
master_model(m)

The Benders master model for given model.

source
SpineOpt.stage_modelFunction
stage_model(m, stage_name)

A stage model associated to given model.

source
Missing docstring.

Missing docstring for upgrade_db. Check Documenter's build log for details.

SpineOpt.generate_forced_availability_factorFunction
generate_forced_availability_factor(url_in, url_out; <keyword arguments>)

Generate forced availability factors (due to outages) from the contents of url_in and write them to url_out. At least url_in must point to a valid Spine database. A new Spine database is created at url_out if one doesn't exist.

To generate forced availability factors for an entity, specify mean_time_to_failure and optionally mean_time_to_repair for that entity as a duration in the input DB.

Parameter forced_availability_factor will be written for those entities in the output DB, holding a time series with the availability factor due to forced outages.

Arguments

  • alternative::String="": if non empty, write results to the given alternative in the output DB.

  • filters::Dict{String,String}=Dict("tool" => "object_activity_control"): a dictionary to specify filters. Possible keys are "tool" and "scenario". Values should be a tool or scenario name in the input DB.

Example

using SpineOpt
 m = generate_forced_availability_factor(
     raw"sqlite:///C:\path\to\your\input_db.sqlite", 
     raw"sqlite:///C:\path\to\your\output_db.sqlite"
)
source

Sets

ind(*parameter*)

Tuple of all objects for which the parameter is defined.

t_before_t(t_after=t')

Set of timeslices that are directly before timeslice t'.

t_before_t(t_before=t')

Set of timeslices that are directly after timeslice t'.

t_in_t(t_short=t')

Set of timeslices that contain timeslice t'

t_in_t(t_long=t')

Set of timeslices that are contained in timeslice t'

t_overlaps_t(t')

Set of timeslices that overlap with timeslice t'

full_stochastic_paths

Set of all possible scenario branches

active_stochastic_paths(s)

Set of all active scenario branches, based on active scenarios s


Variables

binary_gas_connection_flow

Math symbol: $v^{binary\_gas\_connection\_flow}$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: binary_gas_connection_flow_indices

Binary variable for node $n$ over the connection $conn$ in the direction $to\_node$ for the stochastic scenario $s$ at timestep $t$, indicating whether the gas flow of a pressure-driven gas transfer goes in the indicated direction.

connection_flow

Math symbol: $v^{connection\_flow }$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: connection_flow_indices

Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

connection_intact_flow

Math symbol: $v^{connection\_intact\_flow}$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: connection_intact_flow_indices

Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$, computed as if the network were intact (i.e., disregarding contingencies)

connections_decommissioned

Math symbol: $v^{connections\_decommissioned}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of decommissioned connections $conn$ for the stochastic scenario $s$ at timestep $t$

connections_invested

Math symbol: $v^{connections\_invested}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of connections $conn$ invested in at timestep $t$ for the stochastic scenario $s$

connections_invested_available

Math symbol: $v^{connections\_invested\_available}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of invested connections $conn$ that are still available in the stochastic scenario $s$ at timestep $t$

mp_objective_lowerbound_indices

Math symbol: $v^{mp\_objective\_lowerbound\_indices}$

Indices: (t=t)

Indices function: mp_objective_lowerbound_indices

Lower bound on the objective of the Benders decomposition master problem

node_injection

Math symbol: $v^{node\_injection}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_injection_indices

Commodity injections at node $n$ for the stochastic scenario $s$ at timestep $t$

node_pressure

Math symbol: $v^{node\_pressure}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_pressure_indices

Pressure at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_pressure

node_slack_neg

Math symbol: $v^{node\_slack\_neg}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_slack_indices

Negative slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

node_slack_pos

Math symbol: $v^{node\_slack\_pos}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_slack_indices

Positive slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

node_state

Math symbol: $v^{node\_state}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_state_indices

Storage state at node $n$ for the stochastic scenario $s$ at timestep $t$

node_voltage_angle

Math symbol: $v^{node\_voltage\_angle}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_voltage_angle_indices

Voltage angle at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_voltage_angle

nonspin_units_shut_down

Math symbol: $v^{nonspin\_units\_shut\_down}$

Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

Indices function: nonspin_units_shut_down_indices

Number of units $u$ held available for non-spinning downward reserve provision via shutdown to node $n$ for the stochastic scenario $s$ at timestep $t$

nonspin_units_started_up

Math symbol: $v^{nonspin\_units\_started\_up}$

Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

Indices function: nonspin_units_started_up_indices

Number of units $u$ held available for non-spinning upward reserve provision via startup to node $n$ for the stochastic scenario $s$ at timestep $t$

storages_decommissioned

Math symbol: $v^{storages\_decommissioned}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of decommissioned storage nodes $n$ for the stochastic scenario $s$ at timestep $t$

storages_invested

Math symbol: $v^{storages\_invested}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of storage nodes $n$ invested in at timestep $t$ for the stochastic scenario $s$

storages_invested_available

Math symbol: $v^{storages\_invested\_available}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of invested storage nodes $n$ that are still available in the stochastic scenario $s$ at timestep $t$

unit_flow

Math symbol: $v^{unit\_flow}$

Indices: (unit=u, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: unit_flow_indices

Commodity flow associated with node $n$ over the unit $u$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

unit_flow_op

Math symbol: $v^{unit\_flow\_op}$

Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

Indices function: unit_flow_op_indices

Contribution of the unit flow associated with operating point $i$

unit_flow_op_active

Math symbol: $v^{unit\_flow\_op\_active}$

Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

Indices function: unit_flow_op_indices

Controls the activation of operating point $i$ of a unit

units_available

Math symbol: $v^{units\_available}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of available units $u$ for the stochastic scenario $s$ at timestep $t$

units_invested

Math symbol: $v^{units\_invested}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of units $u$ for the stochastic scenario $s$ invested in at timestep $t$

units_invested_available

Math symbol: $v^{units\_invested\_available}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of invested units $u$ that are still available in the stochastic scenario $s$ at timestep $t$

units_mothballed

Math symbol: $v^{units\_mothballed}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of units $u$ for the stochastic scenario $s$ mothballed at timestep $t$

units_on

Math symbol: $v^{units\_on}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of online units $u$ for the stochastic scenario $s$ at timestep $t$

units_shut_down

Math symbol: $v^{units\_shut\_down}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of units $u$ for the stochastic scenario $s$ that switched to offline status at timestep $t$

units_started_up

Math symbol: $v^{units\_started\_up}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of units $u$ for the stochastic scenario $s$ that switched to online status at timestep $t$

+Variables · SpineOpt.jl

Variables

binary_gas_connection_flow

Math symbol: $v^{binary\_gas\_connection\_flow}$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: binary_gas_connection_flow_indices

Binary variable with the indices node $n$ over the connection $conn$ in the direction $to\_node$ for the stochastic scenario $s$ at timestep $t$ describing if the direction of gas flow for a pressure drive gastransfer is in the indicated direction.

connection_flow

Math symbol: $v^{connection\_flow }$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: connection_flow_indices

Commodity flow associated with node $n$ over the connection $conn$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

connection_intact_flow

Math symbol: $v^{connection\_intact\_flow}$

Indices: (connection=conn, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: connection_intact_flow_indices

???

connections_decommissioned

Math symbol: $v^{connections\_decommissioned}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of decomissioned connections $conn$ for the stochastic scenario $s$ at timestep $t$

connections_invested

Math symbol: $v^{connections\_invested}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of connections $conn$ invested at timestep $t$ in for the stochastic scenario $s$

connections_invested_available

Math symbol: $v^{connections\_invested\_available}$

Indices: (connection=conn, stochastic_scenario=s, t=t)

Indices function: connections_invested_available_indices

Number of invested connections $conn$ that are available still the stochastic scenario $s$ at timestep $t$

mp_objective_lowerbound_indices

Math symbol: $v^{mp\_objective\_lowerbound\_indices}$

Indices: (t=t)

Indices function: mp_objective_lowerbound_indices

Updating lowerbound for master problem of Benders decomposition

node_injection

Math symbol: $v^{node\_injection}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_injection_indices

Commodity injections at node $n$ for the stochastic scenario $s$ at timestep $t$

node_pressure

Math symbol: $v^{node\_pressure}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_pressure_indices

Pressue at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_pressure

node_slack_neg

Math symbol: $v^{node\_slack\_neg}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_slack_indices

Positive slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

node_slack_pos

Math symbol: $v^{node\_slack\_pos}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_slack_indices

Negative slack variable at node $n$ for the stochastic scenario $s$ at timestep $t$

node_state

Math symbol: $v^{node\_state}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_state_indices

Storage state at node $n$ for the stochastic scenario $s$ at timestep $t$

node_voltage_angle

Math symbol: $v^{node\_voltage\_angle}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: node_voltage_angle_indices

Voltage angle at a node $n$ for a specific stochastic scenario $s$ and timestep $t$. See also: has_voltage_angle

nonspin_units_shut_down

Math symbol: $v^{nonspin\_units\_shut\_down}$

Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

Indices function: nonspin_units_shut_down_indices

Number of units $u$ held available for non-spinning downward reserve provision via shutdown to node $n$ for the stochastic scenario $s$ at timestep $t$

nonspin_units_started_up

Math symbol: $v^{nonspin\_units\_started\_up}$

Indices: (unit=u, node=n, stochastic_scenario=s, t=t)

Indices function: nonspin_units_started_up_indices

Number of units $u$ held available for non-spinning upward reserve provision via startup to node $n$ for the stochastic scenario $s$ at timestep $t$

storages_decommissioned

Math symbol: $v^{storages\_decommissioned}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of decomissioned storage nodes $n$ for the stochastic scenario $s$ at timestep $t$

storages_invested

Math symbol: $v^{storages\_invested}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of storage nodes $n$ invested in at timestep $t$ for the stochastic scenario $s$

storages_invested_available

Math symbol: $v^{storages\_invested\_available}$

Indices: (node=n, stochastic_scenario=s, t=t)

Indices function: storages_invested_available_indices

Number of invested storage nodes $n$ that are still available for the stochastic scenario $s$ at timestep $t$

unit_flow

Math symbol: $v^{unit\_flow}$

Indices: (unit=u, node=n, direction=d, stochastic_scenario=s, t=t)

Indices function: unit_flow_indices

Commodity flow associated with node $n$ over the unit $u$ in the direction $d$ for the stochastic scenario $s$ at timestep $t$

unit_flow_op

Math symbol: $v^{unit\_flow\_op}$

Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

Indices function: unit_flow_op_indices

Contribution of the unit flow associated with operating point $i$

unit_flow_op_active

Math symbol: $v^{unit\_flow\_op\_active}$

Indices: (unit=u, node=n, direction=d, i=i, stochastic_scenario=s, t=t)

Indices function: unit_flow_op_indices

Controls the activation of operating point $i$ of a unit

units_available

Math symbol: $v^{units\_available}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of available units $u$ for the stochastic scenario $s$ at timestep $t$

units_invested

Math symbol: $v^{units\_invested}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of units $u$ for the stochastic scenario $s$ invested in at timestep $t$

units_invested_available

Math symbol: $v^{units\_invested\_available}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of invested units $u$ that are still available for the stochastic scenario $s$ at timestep $t$

units_mothballed

Math symbol: $v^{units\_mothballed}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_invested_available_indices

Number of units $u$ for the stochastic scenario $s$ mothballed at timestep $t$

units_on

Math symbol: $v^{units\_on}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of online units $u$ for the stochastic scenario $s$ at timestep $t$

units_shut_down

Math symbol: $v^{units\_shut\_down}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of units $u$ for the stochastic scenario $s$ that switched to offline status at timestep $t$

units_started_up

Math symbol: $v^{units\_started\_up}$

Indices: (unit=u, stochastic_scenario=s, t=t)

Indices function: units_on_indices

Number of units $u$ for the stochastic scenario $s$ that switched to online status at timestep $t$
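The commitment variables above are tied together at each timestep. As a hedged sketch (consistent with standard unit-commitment formulations rather than quoted from the SpineOpt source), the transition balance and availability bound read:

```latex
v^{units\_on}_{u,s,t} - v^{units\_on}_{u,s,t-1}
  = v^{units\_started\_up}_{u,s,t} - v^{units\_shut\_down}_{u,s,t},
\qquad
v^{units\_on}_{u,s,t} \le v^{units\_available}_{u,s,t}
```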

    Granfors_pwr_plant Krångfors_pwr_plant Selsfors_pwr_plant Kvistforsen_pwr_plant
  • Go to Object tree (on the top left of the window, usually), right-click on unit and select Add objects from the context menu. This will open the Add objects dialog.
  • Select the first cell under the object name column and press Ctrl+V. This will paste the list of plant names from the clipboard into that column; the object class name column will be filled automatically with ‘unit’. The form should now look similar to this: image
  • Click Ok.
  • Back in the Spine DB Editor, under Object tree, double click on unit to confirm that the objects are effectively there.
  • Commit changes with the message ‘Add power plants’.
  • Add discharge and spillway connections by creating objects of class connection with the following names:

    RebnistoBergnäsdisch SadvatoBergnäsdisch BergnästoSlagnäsdisch SlagnästoBastuseldisch BastuseltoGrytforsdisch GrytforstoGallejaurdisch GallejaurtoVargforsdisch VargforstoRengårddisch RengårdtoBåtforsdisch BåtforstoFinnforsdisch FinnforstoGranforsdisch GranforstoKrångforsdisch KrångforstoSelsforsdisch SelsforstoKvistforsendisch Kvistforsentodownstreamdisch RebnistoBergnässpill SadvatoBergnässpill BergnästoSlagnässpill SlagnästoBastuselspill BastuseltoGrytforsspill GrytforstoGallejaurspill GallejaurtoVargforsspill VargforstoRengårdspill RengårdtoBåtforsspill BåtforstoFinnforsspill FinnforstoGranforsspill GranforstoKrångforsspill KrångforstoSelsforsspill SelsforstoKvistforsenspill Kvistforsentodownstreamspill

  • Add water nodes by creating objects of class node with the following names:

    Rebnisupper Sadvaupper Bergnäsupper Slagnäsupper Bastuselupper Grytforsupper Gallejaurupper Vargforsupper Rengårdupper Båtforsupper Finnforsupper Granforsupper Krångforsupper Selsforsupper Kvistforsenupper Rebnislower Sadvalower Bergnäslower Slagnäslower Bastusellower Grytforslower Gallejaurlower Vargforslower Rengårdlower Båtforslower Finnforslower Granforslower Krångforslower Selsforslower Kvistforsenlower

  • Next, create the following objects (all names in lower-case):

  • Finally, create the following objects to get results back from SpineOpt (again, all names in lower-case):

  • Note

    To modify an object after you enter it, right click on it and select Edit... from the context menu.

    Specifying object parameter values

    Establishing relationships

    Tip

    To enter the same text on several cells, copy the text into the clipboard, then select all target cells and press Ctrl+V.

    Note

    At this point, you might be wondering what's the purpose of the unit__node__node relationship class. Shouldn't it be enough to have unit__from_node and unit__to_node to represent the topology of the system? The answer is yes; but in addition to topology, we also need to represent the conversion process that happens in the unit, where the water from one node is turned into electricity for another node. And for this purpose, we use a relationship parameter value on the unit__node__node relationships (see Specifying relationship parameter values).
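As a sketch of that conversion parameter (using SpineOpt's fix_ratio_out_in_unit_flow as an example; the exact parameter used in this tutorial may differ), the relationship value fixes the ratio between the two flows:

```latex
v^{unit\_flow}_{u,\, n_{out},\, to\_node,\, s,\, t}
  = \alpha \cdot v^{unit\_flow}_{u,\, n_{in},\, from\_node,\, s,\, t}
```

where $\alpha$ is the conversion ratio set on the unit__node__node relationship.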

    Note

    At this point, you might be wondering what's the purpose of the connection__node__node relationship class. Shouldn't it be enough to have connection__from_node and connection__to_node to represent the topology of the system? The answer is yes; but in addition to topology, we also need to represent the delay in the river branches. And for this purpose, we use a relationship parameter value on the connection__node__node relationships (see Specifying relationship parameter values).

    Specifying parameter values of the relationships

    Using the Importer

    Additional Steps for Project Setup

    Importing the model

    Executing the workflow

    Once the workflow is defined and input data is in place, the project is ready to be executed. Hit the Execute project button image on the tool bar.

    You should see ‘Executing All Directed Acyclic Graphs’ printed in the Event log (on the lower left by default). SpineOpt output messages will appear in the Process Log panel in the middle. After some processing, ‘DAG 1/1 completed successfully’ appears and the execution is complete.

    Examining the results

    Select the output data store and open the Spine DB editor.

    image

    To check out the flow on the electricity load (i.e., the total electricity production in the system), go to Object tree, expand the unit object class, and select electricity_load, as illustrated in the picture above. Next, go to Relationship parameter value and double-click the first cell under value. The Parameter value editor will pop up. You should see something like this:

    image

    Note

    If you have used the importer to instantiate the model, you can easily modify the parameters in the model worksheet, run the project, and observe the differences in the results. If you need to make changes directly to the input database, in order for the importer not to overwrite them you will need to disassociate the importer from the input DB (right-click on the connecting yellow arrow between the two items and click remove).

Ramping constraints · SpineOpt.jl

    Ramping definition tutorial

    This tutorial provides a step-by-step guide to include ramping constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding ramping constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    The ramping limit refers to the maximum rate at which a power unit can increase or decrease its output flow over time. These limits are typically put in place to prevent sudden and destabilizing shifts in power output. However, they may also represent any other physical limitation a unit may have that is related to changes over time in its output flow.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 3-hour time series instead of a unique value
    • The power_plant_a has the following parameters:
      • Ramp limit of 10% for both up and down
      • Minimum operating point of 10% of its total capacity
      • Startup capacity limit of 10% of its total capacity
      • Shutdown capacity limit of 10% of its total capacity

    This tutorial includes a step-by-step guide to adding these parameters, helping you analyze the results in SpineOpt and understand the ramping constraint concepts.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch Spine Toolbox and select File and then Open Project, or use the keyboard shortcut Alt + O, to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add ramping constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 3-hour time series.

    Editing demand value

    • Still in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below.
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Notice that there are demand values only from 2000-01-01T00:00:00 to 2000-01-01T02:00:00. Therefore, we need to update the start and end of the model. But first, let's change the temporal block.

    Editing the temporal block

    You might or might not have noticed that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h to make it easier to follow the results.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [model] class, and select the simple from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Editing the model start and end

    Since the default resolution of the Simple System was 1D, the start and end dates of the model also need to be changed.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, select the model_start parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • Repeat the procedure for the model_end parameter, but now the value is 2000-01-01T03:00:00. The final values should look like the image below.

    image

    It's important to note that the model must finish in the third hour to account for all the periods of demand in the input data, which go until 2000-01-01T02:00:00.
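The relation between the model horizon and the covered timesteps can be sketched as follows (timestep_starts is a hypothetical helper for illustration, not part of Spine Toolbox or SpineOpt):

```python
from datetime import datetime, timedelta

# Hypothetical helper: list the timestep start times that a model horizon
# covers at a fixed resolution (model_end itself is excluded).
def timestep_starts(model_start, model_end, resolution):
    t, steps = model_start, []
    while t < model_end:
        steps.append(t)
        t += resolution
    return steps

steps = timestep_starts(datetime(2000, 1, 1, 0), datetime(2000, 1, 1, 3),
                        timedelta(hours=1))
print([t.isoformat() for t in steps])
# The last covered timestep starts at 2000-01-01T02:00:00, matching the final
# demand value -- which is why model_end must be 2000-01-01T03:00:00.
```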

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand in both hours, and then the power_plant_b (i.e., the more expensive unit) has zero production. This is the most economical dispatch since the problem has no extra constraints (so far!).

    Step 2 - Include the ramping limit

    Let's consider the input data where power_plant_a has a ramping limit of 10% in both directions (i.e., up and down), meaning that the change between two time steps can't be greater than 10MW (since plant 'a' has a unit capacity of 100MW). The ramping constraints also need the following parameters for their definition: minimum operating point, startup limit, and shutdown limit. For more details, please see the mathematical formulation in the SpineOpt documentation.
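In numbers, the fractional limits translate into MW bounds as follows (a minimal sketch using the capacity from this tutorial):

```python
# Ramp limits here are fractions of capacity per timestep; compute the
# implied MW bound for power_plant_a.
unit_capacity = 100.0    # MW, capacity of power_plant_a
ramp_up_limit = 0.1      # fraction of capacity per hour
ramp_down_limit = 0.1    # fraction of capacity per hour

max_ramp_up_mw = ramp_up_limit * unit_capacity      # maximum hourly increase
max_ramp_down_mw = ramp_down_limit * unit_capacity  # maximum hourly decrease
print(max_ramp_up_mw, max_ramp_down_mw)  # 10.0 10.0
```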

    Adding the new parameters

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table:

      • Select the ramp_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping up limit for power_plant_a.

      • Select the ramp_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping down limit for power_plant_a.

      • Select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point for power_plant_a.

      • Select the start_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the startup capacity limit for power_plant_a.

      • Select the shut_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the shutdown capacity limit for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results with ramp limits

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) output is limited by its ramp limits, so it can't follow the demand changes as before. For instance, the unit's power output is 45MW in the first hour, which is lower than the previous result of 50MW in the same hour. This is because the unit needs to gradually decrease its power output to reach 25MW in the last hour; due to the imposed ramp-down limit of 10MW, it cannot start from 50MW as before. Therefore, the power_plant_b (i.e., the more expensive unit) must produce to cover the demand that plant 'a' can't due to its ramping limitations. As shown here, ramping limits might lead to higher costs in power systems compared to the previous case.

    But... there is something more here... Can you tell what?

    It is important to note that the optimal solution we have calculated assumes that the unit 'a' was already producing electricity before the model_start parameter. This is because we have not defined an initial condition for the flow of the unit. Therefore, the flow at the first hour is the most cost-effective solution under this assumption. However, what if we changed this assumption and assumed that the unit had not produced any flow before the model_start parameter? If you are curious to know the answer, join me in the next section.

    Step 3 - Include an initial condition for the flow

    Adding the initial flow

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table, select the initial_unit_flow parameter and the Base alternative, and enter the value 0.0 as seen in the image below. This will set the initial flow for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits with initial conditions

    You know the drill! ;)

    Examining the results with ramp limits with initial conditions

    Create the Pivot table with the latest results. It will look something like the image below.

    image

    Here, we can see the impact of the initial condition: the unit's flow can no longer change by more than its ramp-up limit in the first hour. Therefore, the optimal solution under this assumption changes compared to the previous section.
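The first-hour bound can be sketched as follows (assuming the ramp-up constraint also binds across the model start once an initial flow is given):

```python
# With initial_unit_flow set to 0 and a ramp-up limit of 10 MW/h, the flow in
# the first modelled hour cannot exceed the initial flow plus one ramp step.
initial_flow = 0.0   # MW, from the initial_unit_flow parameter
ramp_up_mw = 10.0    # MW/h, 10% of the 100 MW capacity
first_hour_max = initial_flow + ramp_up_mw
print(first_hour_max)  # 10.0
```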

    This example highlights the importance of considering initial conditions as a crucial assumption in energy system modelling optimization.

    +Ramping constraints · SpineOpt.jl

    Ramping definition tutorial

    This tutorial provides a step-by-step guide to include ramping constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding ramping constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    The ramping constraint limit refers to the maximum rate at which a power unit can increase or decrease its output flow over time. These limits are typically put in place to prevent sudden and destabilizing shifts in power units. However, they may also represent any other physical limitations that a unit may have that is related to changes over time in its output flow.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 3-hour time series instead of a unique value
    • The power_plant_a has the following parameters:
      • Ramp limit of 10% for both up and down
      • Minimum operating point of 10% of its total capacity
      • Startup capacity limit of 10% of its total capacity
      • Shutdown capacity limit of 10% of its total capacity

    This tutorial includes a step-by-step guide to include the parameters to help analyze the results in SpineOpt and the ramping constraints concepts.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Alt + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add ramping constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 3-hour time series.

    Editing demand value

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below.
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Notice that there is only demand values for 2000-01-01T00:00:00 and 2000-01-01T02:00:00. Therefore, we need to update the start and end of the model. But first, let's change the temporal block.

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h to make easy to follow the results.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [model] class, and select the simple from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Editing the model start and end

    Since the default resolution of the Simple System was 1D, the start and end date of the model needs also to be changed.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, select the model_start parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • Repeat the procedure for the model_end parameter, but now the value is 2000-01-01T03:00:00. The final values should look like that the image below.

    image

    It's important to note that the model must finish in the third hour to account for all the periods of demand in input data, which goes until 2000-01-01T02:00:00.

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand in both hours, and then the power_plant_b (i.e., the more expensive unit) has zero production. This is the most economical dispatch since the problem has no extra constraints (so far!).

    Step 2 - Include the ramping limit

    Let's modify the input data so that power_plant_a has a ramping limit of 10% in both directions (i.e., up and down), meaning that the change between two time steps can't be greater than 10MW (since plant 'a' has a unit capacity of 100MW). The ramping constraints also need the following parameters for their definition: minimum operating point, startup limit, and shutdown limit. For more details, please visit the mathematical formulation in the following link
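A quick sanity check of the ramp arithmetic (illustrative only, not SpineOpt code): a ramp limit of 0.1 on a 100MW unit allows at most a 10MW change between consecutive time steps, so some flow profiles become infeasible.

```python
UNIT_CAPACITY = 100  # MW, power_plant_a
RAMP_LIMIT = 0.1     # fraction of capacity per time step

max_change = RAMP_LIMIT * UNIT_CAPACITY  # 10 MW

def ramp_feasible(flows, limit=max_change):
    """True if every step-to-step change stays within the ramp limit."""
    return all(abs(b - a) <= limit for a, b in zip(flows, flows[1:]))

print(ramp_feasible([45, 35, 25]))  # True: 10 MW down each hour
print(ramp_feasible([50, 40, 25]))  # False: the last step drops 15 MW
```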

    Adding the new parameters

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table:

      • Select the ramp_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping up limit for power_plant_a.

      • Select the ramp_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the ramping down limit for power_plant_a.

      • Select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point for power_plant_a.

      • Select the start_up_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the startup capacity limit for power_plant_a.

      • Select the shut_down_limit parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the shutdown capacity limit for power_plant_a.

    image
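The same five values can also be prepared as a JSON file and loaded via File -> Import... in the Spine DB editor. The snippet below sketches such a payload; the schema is an assumption based on the Spine importer format and field names may differ between Toolbox versions, so treat it as illustrative rather than canonical.

```python
import json

# Hypothetical importer payload: one row per ramping-related parameter,
# all on the power_plant_a | electricity_node relationship, Base alternative.
ramp_params = {
    "relationship_parameter_values": [
        ["unit__to_node", ["power_plant_a", "electricity_node"], name, 0.1, "Base"]
        for name in (
            "ramp_up_limit",
            "ramp_down_limit",
            "minimum_operating_point",
            "start_up_limit",
            "shut_down_limit",
        )
    ]
}

print(json.dumps(ramp_params, indent=2))
```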

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results with ramp limits

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    The image above shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) output is limited by its ramp limits, so it can't follow the demand changes as before. For instance, the unit's power output is 45MW in the first hour, which is lower than the previous result of 50MW in the same hour. This is because the unit needs to gradually decrease its power output and reach 25MW in the last hour. However, due to the imposed ramp-down limit of 10MW, it cannot start from 50MW as before. Therefore, the power_plant_b (i.e., the more expensive unit) must produce to cover the demand that plant 'a' can't due to its ramping limitations. As shown here, ramping limits might lead to higher costs in power systems compared to the previous case.

    But...there is something more here...Can you tell what? :anguished:

    It is important to note that the optimal solution we have calculated assumes that the unit 'a' was already producing electricity before the model_start parameter. This is because we have not defined an initial condition for the flow of the unit. Therefore, the flow at the first hour is the most cost-effective solution under this assumption. However, what if we changed this assumption and assumed that the unit had not produced any flow before the model_start parameter? If you are curious to know the answer, join me in the next section.

    Step 3 - Include an initial condition for the flow

    Adding the initial flow

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table, select the initial_unit_flow parameter and the Base alternative, and enter the value 0.0 as seen in the image below. This will set the initial flow for power_plant_a.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow with ramp limits and initial conditions

    You know the drill! ;)

    Examining the results with ramp limits and initial conditions

    Create the Pivot table with the latest results. It will look something like the image below.

    image

    Here, we can see the impact of the initial condition: the unit's flow can no longer change by more than its ramp-up limit in the first hour. Therefore, the optimal solution under this assumption changes compared to the previous section.
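The cold-start bound can be worked out directly (illustrative arithmetic, not SpineOpt code): with an initial flow of 0MW and both the start-up limit and ramp-up limit at 10MW (0.1 x 100MW), the output can grow by at most 10MW per hour.

```python
max_step = 0.1 * 100  # MW per hour: start-up and ramp-up limit

# Upper bound on reachable output in each of the three model hours,
# starting from an initial flow of 0 MW.
reachable = []
flow = 0.0
for hour in range(3):
    flow += max_step
    reachable.append(flow)

print(reachable)  # [10.0, 20.0, 30.0] -> far below the 45 MW of Step 2
```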

    This example highlights the importance of considering initial conditions as a crucial assumption in energy system modelling optimization.


    Reserve definition tutorial

    This tutorial provides a step-by-step guide to include reserve requirements in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding a new reserve node in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Reserves refer to the capacity or energy that is kept as a backup to ensure the power system's reliability. This reserve capacity can be brought online automatically or manually in the event of unforeseen system disruptions such as generation failure, transmission line failure, or a sudden increase in demand. Operating reserves are essential to ensure that there is always enough generation capacity available to meet demand, even in the face of unforeseen system disruptions.

    Model assumptions

    • The reserve node has a requirement of 20MW for upwards reserve
    • Power plants 'a' and 'b' can both provide reserve to this node

    image

    Guide

    Entering input data

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add a new reserve node to the Simple System.

    Creating objects

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Right click on the [node] class, and select Add objects from the context menu. The Add objects dialog will pop up.
    • Enter the names for the new reserve node as seen in the image below, then press Ok. This will create a new object of class node, called upward_reserve_node.

    image

    • Right click on the node class, and select Add object group from the context menu. The Add object group dialog will pop up. In the Group name field write upward_reserve_group to refer to this group. Then, add as members of the group the nodes electricity_node and upward_reserve_node, as shown in the image below; then press Ok.
    Note

    In SpineOpt, groups of nodes allow the user to create constraints that involve variables from its members. Later in this tutorial, the group named upward_reserve_group will help to link the flow variables for electricity production and reserve procurement.

    image

    Establishing relationships

    • Always in the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the unit__to_node class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Select the names of the two units and their receiving nodes, as seen in the image below; then press Ok. This will establish that both power_plant_a and power_plant_b release energy into both the upward_reserve_node and the upward_reserve_group.

    image

    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    • Enter report1 under report, and variable_om_costs under output. Repeat the same procedure in the second line to add the res_proc_costs under output as seen in the image below; then press Ok. This will write the total vom_cost and procurement reserve cost values in the objective function to the output database as a part of report1.

    image

    Specifying object parameter values

    • Back to Object tree, expand the node class and select upward_reserve_node.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table (typically at the top-center), select the following parameters as seen in the image below:
      • demand parameter and the Base alternative, and enter the value 20. This will establish that there's a demand of '20' at the reserve node.
      • is_reserve_node parameter and the Base alternative, and enter the value True. This will establish that it is a reserve node.
      • upward_reserve parameter and the Base alternative, then right-click on the value cell and, in the context menu, select 'Edit...' and select the option True. This will establish that the direction of the reserve is upwards.
      • nodal_balance_sense parameter and the Base alternative, and enter the value $\geq$. This will establish that the total reserve procurement must be greater than or equal to the reserve demand.

    image
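The $\geq$ balance sense can be illustrated with a small check (not SpineOpt code): the reserve provided by the units at the node must cover the 20MW requirement.

```python
RESERVE_DEMAND = 20  # MW, upward reserve requirement at the reserve node

def reserve_balance_ok(provisions):
    """nodal_balance_sense >= : total procurement must cover the demand."""
    return sum(provisions.values()) >= RESERVE_DEMAND

print(reserve_balance_ok({"power_plant_a": 0, "power_plant_b": 20}))  # True
print(reserve_balance_ok({"power_plant_a": 5, "power_plant_b": 10}))  # False
```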

    • Select upward_reserve_group in the Object tree.

    • In the Object parameter table, select the balance_type parameter and the Base alternative, and enter the value balance_type_none as seen in the image below. This will establish that there is no need to create an extra balance between the members of the group.

    image

    Specifying relationship parameter values

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the unit_capacity parameter and the Base alternative, and enter the value 100 as seen in the image below. This will set the capacity to provide reserve for power_plant_a.

    Note

    The value is equal to the unit capacity defined for the electricity node. However, the value can be lower if the unit cannot provide reserves with its total capacity.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the unit_capacity parameter and the Base alternative, and enter the value 200 as seen in the image below. This will set the capacity to provide reserve for power_plant_b.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_group.

    • In the Relationship parameter table (typically at the bottom-center), select the following parameter as seen in the image below:

      • unit_capacity parameter and the Base alternative, and enter the value 100. This will set the total capacity for power_plant_a in the group.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_group.

    • In the Relationship parameter table (typically at the bottom-center), select the following parameter as seen in the image below:

      • unit_capacity parameter and the Base alternative, and enter the value 200. This will set the total capacity for power_plant_b in the group.

    image

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.

    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.

    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.

    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.

    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image

    As anticipated, the power_plant_b is supplying the necessary reserve due to its surplus capacity, while power_plant_a is operating at full capacity. Additionally, in this model, we have not allocated a cost for reserve procurement. One way to double-check this is by selecting report__model under Relationship tree and looking at the costs in the Pivot table; see the image below.

    image

    So, is it possible to assign costs to this reserve procurement in SpineOpt? Yes, it is indeed possible.

    Specifying a reserve procurement cost value

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the reserve_procurement_cost parameter and the Base alternative, and enter the value 5 as seen in the image below. This will set the cost of providing reserve for power_plant_a.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_b | upward_reserve_node.

    • In the Relationship parameter table (typically at the bottom-center), select the reserve_procurement_cost parameter and the Base alternative, and enter the value 35 as seen in the image below. This will set the cost of providing reserve for power_plant_b.

    image

    Don't forget to commit the new changes to the database!

    Executing the workflow and examining the results again

    • Go back to Spine Toolbox's main window, and hit again the Execute project button as before.

    • Select the output data store and open the Spine DB editor. You can inspect results as before, which should look like the image below.

    image

    Since reserve procurement is much cheaper in power_plant_a than in power_plant_b, the optimal solution is to reduce the electricity production of power_plant_a and provide the reserve with this unit rather than with power_plant_b as before. By looking at the total costs, we can see that the reserve procurement costs are no longer zero.
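A back-of-the-envelope comparison of the two dispatch options supports this (illustrative only; it assumes the Simple System costs of 25 and 50 euro per fuel unit and conversion ratios of 0.7 and 0.8 MWh of electricity per fuel unit, so marginal electricity costs of 25/0.7 and 50/0.8 euro/MWh).

```python
mc_a, mc_b = 25 / 0.7, 50 / 0.8   # euro per MWh of electricity (assumed)
res_cost_a, res_cost_b = 5, 35    # reserve procurement costs from this step
reserve = 20                      # MW of upward reserve required

# Option 1: plant a at full output (100 MW), reserve bought from plant b.
cost_reserve_on_b = 100 * mc_a + 50 * mc_b + reserve * res_cost_b
# Option 2: plant a backs off to 80 MW and holds the reserve itself.
cost_reserve_on_a = 80 * mc_a + 70 * mc_b + reserve * res_cost_a

print(round(cost_reserve_on_b, 1), round(cost_reserve_on_a, 1))
# the second option comes out cheaper, matching the observed solution
```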

    image



    Simple System tutorial

    Welcome to Spine Toolbox's Simple System tutorial.

    This tutorial provides a step-by-step guide to set up a simple energy system with Spine Toolbox for SpineOpt. Spine Toolbox is used to create a workflow with databases and tools, and SpineOpt is the tool that simulates/optimizes the energy system.

    Introduction

    Model assumptions

    • Two power plants take fuel from a source node and release electricity to another node in order to supply a demand.
    • Power plant 'a' has a capacity of 100 MWh, a variable operating cost of 25 euro/fuel unit, and generates 0.7 MWh of electricity per unit of fuel.
    • Power plant 'b' has a capacity of 200 MWh, a variable operating cost of 50 euro/fuel unit, and generates 0.8 MWh of electricity per unit of fuel.
    • The demand at the electricity node is 150 MWh.
    • The fuel node is able to provide infinite energy.

    image
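The assumptions above already determine the expected dispatch, which can be sketched as a simple merit-order calculation (illustrative only, not SpineOpt code): the plant with the lower marginal cost per MWh of electricity runs first, and the other covers the rest.

```python
plants = [
    # name, capacity (MWh), vom cost (euro/fuel unit), MWh elec per fuel unit
    ("power_plant_a", 100, 25, 0.7),
    ("power_plant_b", 200, 50, 0.8),
]
demand = 150  # MWh at the electricity node

dispatch, total_cost = {}, 0.0
remaining = demand
# Sort by marginal cost of electricity: vom cost / conversion ratio.
for name, cap, vom, eff in sorted(plants, key=lambda p: p[2] / p[3]):
    elec = min(cap, remaining)  # electricity supplied by this plant
    fuel = elec / eff           # fuel drawn from the fuel node
    dispatch[name] = elec
    total_cost += fuel * vom
    remaining -= elec

print(dispatch)  # {'power_plant_a': 100, 'power_plant_b': 50}
```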

    Installation and upgrades

    If you haven't yet installed the tools or you are not sure whether you have the latest version, please follow the installation/upgrade guides:

    Guide

    Spine Toolbox workflow

    The workflow for this tutorial is quite simple: A SpineOpt tool that reads data from an input database, executes the simulation/optimization and writes the results to an output database.

    Creating the workflow is almost as simple as dragging these items (i.e. Data Store and Run SpineOpt) to the Design View and connecting them by dragging arrows between the blocks, but a few things need to be configured:

    • The databases need to be initialised. Once you select a database, you see the properties panel. Select the dialect of the database; here we choose sqlite. Then press the button 'new spine db' to create and save the database on your computer (Spine Toolbox will suggest a suitable folder).

    • Connecting tools with (yellow) arrows in the Toolbox does not mean that the tools will use these items. The arrows in the Toolbox view only make items (databases) available. To let SpineOpt know we want to use them, we need to go to the properties panel of Run SpineOpt and drag the available items to the tool arguments. The order of the items is first the input, then the output. See below for how the properties panel should look.

    image


    Simple System tutorial

    Welcome to Spine Toolbox's Simple System tutorial.

    This tutorial provides a step-by-step guide to set up a simple energy system with Spine Toolbox for SpineOpt. Spine Toolbox is used to create a workflow with databases and tools, and SpineOpt is the tool that simulates/optimizes the energy system.

    Introduction

    Model assumptions

    • Two power plants take fuel from a source node and release electricity to another node in order to supply a demand.
    • Power plant 'a' has a capacity of 100 MWh, a variable operating cost of 25 euro/fuel unit, and generates 0.7 MWh of electricity per unit of fuel.
    • Power plant 'b' has a capacity of 200 MWh, a variable operating cost of 50 euro/fuel unit, and generates 0.8 MWh of electricity per unit of fuel.
    • The demand at the electricity node is 150 MWh.
    • The fuel node is able to provide infinite energy.

    image
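Before building anything, it helps to know what answer to expect. The dispatch implied by these assumptions can be worked out with a few lines of merit-order arithmetic (an illustrative sketch, not SpineOpt code):

```python
# Merit-order dispatch of the simple system, computed by hand.
# Names and numbers come from the model assumptions above; the helper
# itself is illustrative, not part of SpineOpt.

plants = [
    # (name, capacity in MWh, vom cost per fuel unit, MWh electricity per fuel unit)
    ("power_plant_a", 100, 25, 0.7),
    ("power_plant_b", 200, 50, 0.8),
]
demand = 150  # MWh at the electricity node

# Cost of one MWh of electricity = fuel cost / conversion ratio.
by_cost = sorted(plants, key=lambda p: p[2] / p[3])

dispatch = {}
remaining = demand
for name, cap, cost, ratio in by_cost:
    gen = min(cap, remaining)  # cheapest plant runs first, up to capacity
    dispatch[name] = gen
    remaining -= gen

print(dispatch)  # power_plant_a runs at full capacity, b covers the rest
```

Electricity from plant 'a' costs 25/0.7 ≈ 35.7 euro/MWh versus 50/0.8 = 62.5 euro/MWh for plant 'b', so 'a' should run at its full 100 MWh and 'b' should cover the remaining 50 MWh.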

    Installation and upgrades

    If you haven't yet installed the tools or you are not sure whether you have the latest version, please follow the installation/upgrade guides:

    Guide

    Spine Toolbox workflow

    The workflow for this tutorial is quite simple: A SpineOpt tool that reads data from an input database, executes the simulation/optimization and writes the results to an output database.

    Creating the workflow is almost as simple as dragging these items (i.e. Data Store and Run SpineOpt) to the Design View and connecting them by dragging arrows between the blocks, but a few things still need to be configured:

    • The databases need to be initialised. Once you select a database, you see the properties panel. Select the dialect of the database; here we choose sqlite. Then press the 'new spine db' button to create and save the database on your computer (Spine Toolbox will suggest a suitable folder).

    • Connecting tools with (yellow) arrows in the Toolbox does not mean that the tools will use these items. The arrows in the Toolbox view make items (databases) available. To let SpineOpt know we want to use these items, we need to go to the properties panel of Run SpineOpt and drag the available items to the tool arguments. The order of the items is first the input, then the output. See below for how the property window should look.

    image

    • (optional) The Spine data stores are quite generic. For SpineOpt to be able to read the input database, we need to change its format from the Spine format to the SpineOpt format. Luckily, we can use templates for this. One of those templates is available as an item in Spine Toolbox: Load template. The other option is to load templates into the database using the Spine DB editor. The templates can also be used to pre-populate the database with some basic components. Here we briefly explain the use of the Load template block; later we show how to import a template and basic components with the Spine DB editor. To use the Load template block, drag it to the view and connect it to the input database. Just like for the Run SpineOpt block, we need to drag the available input database to the tool arguments.

    The result should look similar to this (+/- the Load template block):

    image

    That is it for the workflow. Now we can enter the data for the setup of the simple system into the input database, run the workflow and view the results in the output database.

    Entering input data

    Importing the SpineOpt database template

    • Download the SpineOpt database template and the basic SpineOpt model (right click on the links, then select Save link as...)

    • Double click the input Data Store item (or select the 'input' Data Store item in the Design View, go to Data Store Properties and hit Open editor). This will open the newly created database in the Spine DB editor, looking similar to this:

    image

    Note

    The Spine DB editor is a dedicated interface within Spine Toolbox for visualizing and managing Spine databases. The default view shows tables but for viewing energy system configurations it is nice to see a graph. Open the hamburger menu (or press Alt + F) and press the graph button. The graph view only shows what you select in the root menu and what your selected objects or relationships are connected to.

    • To import the templates to the database, click the hamburger menu (or press Alt + F), select File -> Import..., and then select the template file you previously downloaded (spineopt_template.json). The contents of that file will be imported into the current database, and you should then see classes like 'commodity', 'connection' and 'model' under the root node in the Object tree (on the left). Then import the second file (basic_model_template.json).

    • To save our changes, go again to the hamburger menu and select Session -> Commit. Enter a commit message, e.g. 'Import SpineOpt template', in the popup dialog and click Commit.

    Note

    The SpineOpt template contains the fundamental entity classes and parameter definitions that SpineOpt recognizes and expects. The SpineOpt basic model template contains some predefined entities for a common deterministic model with a 'flat' temporal structure.
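As a rough idea of what such a template contains, here is a hypothetical fragment in the style of the Spine importer JSON format. The key names and value shapes here are assumptions for illustration only; consult the downloaded spineopt_template.json for the real structure.

```python
import json

# A *hypothetical* fragment in the style of the Spine importer JSON
# format, only to illustrate the kind of content spineopt_template.json
# carries: entity class and parameter definitions, not model data.
# (Key names and shapes are assumptions; check the actual file.)
template_fragment = {
    "object_classes": [["unit", "An energy conversion device"],
                       ["node", "A point where an energy balance is enforced"]],
    "relationship_classes": [["unit__from_node", ["unit", "node"]]],
    "object_parameters": [["node", "demand", 0.0]],
}
print(sorted(template_fragment))
```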

    Creating objects

    • Still in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the root element if it is not expanded.

    • Right click on the node class, and select Add objects from the context menu. The Add objects dialog will pop up.

    • Enter the names for the system nodes as seen in the image below, then press Ok. This will create two objects of class node, called fuel_node and electricity_node.

    image

    • Right click on the unit class, and select Add objects from the context menu. The Add objects dialog will pop up.

    Note

    In SpineOpt, nodes are points where an energy balance takes place, whereas units are energy conversion devices that can take energy from nodes, and release energy to nodes.

    • Enter the names for the system units as seen in the image below, then press Ok. This will create two objects of class unit, called power_plant_a and power_plant_b.

    image

    Note

    To modify an object after you enter it, right click on it and select Edit... from the context menu.

    Establishing relationships

    • Still in the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if it is not expanded.

    • Right click on the unit__from_node class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    Note

    Alternatively, right click on an object in the graph view; Add relationships will show the available relationship classes. Note that this only works when the involved units/nodes/... are visible in the graph view. To make an object visible, simply click on it in the list of objects/object classes. You can select multiple objects with Ctrl or Shift.

    • Select the names of the two units and their sending nodes, as seen in the image below; then press Ok. This will establish that both power_plant_a and power_plant_b take energy from the fuel_node.

    image

    • Right click on the unit__to_node class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    • Select the names of the two units and their receiving nodes, as seen in the image below; then press Ok. This will establish that both power_plant_a and power_plant_b release energy into the electricity_node.

    image

    • Right click on the unit__node__node class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    • For each of the units enter the unit under unit, electricity_node under the first node and fuel_node under the second node. These relationships will define the relation (or behavior) between the output and input of the unit.

    image

    Note

    The unit__node__node relationship is necessary to limit the flow (flows are unbound by default) and to define an efficiency. The order of the nodes is important for that definition (see later on). It may seem unintuitive to define an efficiency through a three-way relationship instead of a property of a unit, but this approach allows you to define efficiencies between any flow(s) coming in and out of the unit (e.g. CHP).

    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.

    • Enter report1 under report, and unit_flow under output, as seen in the image below; then press Ok. This will tell SpineOpt to write the value of the unit_flow optimization variable to the output database, as part of report1.

    image

    Note

    In SpineOpt, outputs represent optimization variables that can be written to the output database as part of a report.

    Specifying object parameter values

    • Back to Object tree, expand the node class and select electricity_node.

    • Locate the Object parameter table (typically at the top-center).

    • In the Object parameter table, select the demand parameter and the Base alternative, and enter the value 150 as seen in the image below. This will establish that there's a demand of 150 at the electricity node.

    image

    Note

    The alternative name is not optional. If you don't select Base (or another name) you will not be able to save your data. Speaking of which, when is the last time you saved/committed?

    • Select fuel_node in the Object tree.

    • In the Object parameter table, select the balance_type parameter and the Base alternative, and enter the value balance_type_none as seen in the image below. This will establish that the fuel node is not balanced, and can thus provide as much fuel as needed.

    image

    Specifying relationship parameter values

    • In Relationship tree, expand the unit__from_node class and select power_plant_a | fuel_node.

    • In the Relationship parameter table (typically at the bottom-center), select the vom_cost parameter and the Base alternative, and enter the value 25 as seen in the image below. This will set the operating cost for power_plant_a.

    image

    • Select power_plant_b | fuel_node in the Relationship tree.

    • In the Relationship parameter table, select the vom_cost parameter and the Base alternative, and enter the value 50 as seen in the image below. This will set the operating cost for power_plant_b.

    image

    • In Relationship tree, expand the unit__to_node class and select power_plant_a | electricity_node.

    • In the Relationship parameter table, select the unit_capacity parameter and the Base alternative, and enter the value 100 as seen in the image below. This will set the capacity for power_plant_a.

    image

    • Select power_plant_b | electricity_node in the Relationship tree.

    • In the Relationship parameter table, select the unit_capacity parameter and the Base alternative, and enter the value 200 as seen in the image below. This will set the capacity for power_plant_b.

    image

    • In Relationship tree, select the unit__node__node class, and come back to the Relationship parameter table.

    • In the Relationship parameter table, select power_plant_a | electricity_node | fuel_node under object name list, fix_ratio_out_in_unit_flow under parameter name, Base under alternative name, and enter 0.7 under value. Repeat the operation for power_plant_b, but this time enter 0.8 under value. This will set the conversion ratio from fuel to electricity for power_plant_a and power_plant_b to 0.7 and 0.8, respectively. It should look like the image below.

    image

    Note

    The order of the nodes is important for the fix_ratio_out_in_unit_flow parameter. If you have swapped the nodes or inverted the efficiency values, the Run SpineOpt tool will run into errors.
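The role of the node order can be checked with simple arithmetic: fix_ratio_out_in_unit_flow fixes the flow to the first (output) node as ratio × the flow from the second (input) node. A small sketch (illustrative helper, not SpineOpt code):

```python
# fix_ratio_out_in_unit_flow: flow_out = ratio * flow_in, where the
# first node in the relationship is the output and the second the input.

def fuel_needed(electricity_out, fix_ratio_out_in):
    """Fuel drawn from the second (input) node for a given output."""
    return electricity_out / fix_ratio_out_in

# power_plant_a at full output with the tutorial's ratio of 0.7:
fuel_a = fuel_needed(100, 0.7)          # ≈ 142.86 fuel units in for 100 out
# Swapping the nodes (i.e. effectively using 1/0.7 as the ratio) would
# make the unit output more energy than it takes in:
fuel_wrong = fuel_needed(100, 1 / 0.7)  # = 70 fuel units for 100 out
print(round(fuel_a, 2), round(fuel_wrong, 2))
```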

    When you're ready, save/commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button in the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables or use a pivot table.
    • For the pivot table, press Alt + F for the shortcut to the hamburger menu, and select Pivot -> Index.
    • Select report__unit__node__direction__stochastic_scenario under Relationship tree, and the first cell under alternative in the Frozen table.
    • Under alternative in the Frozen table, you can choose results from different runs. Pick the run you want to view. If the workflow has been run several times, the most recent run will usually be found at the bottom.
    • The Pivot table will be populated with results from the SpineOpt run. It will look something like the image below.

    image
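As a sanity check on the report, the optimal unit_flow values can be reproduced by hand. Assuming vom_cost applies to the fuel flow (it was set on the unit__from_node relationships), the expected solution is (illustrative computation, not SpineOpt output):

```python
# Expected optimal solution for the simple system, useful for eyeballing
# the unit_flow values in report1 (hand computation, not SpineOpt output).

ratio = {"power_plant_a": 0.7, "power_plant_b": 0.8}   # fix_ratio_out_in_unit_flow
vom = {"power_plant_a": 25, "power_plant_b": 50}       # vom_cost on the fuel flow
electricity = {"power_plant_a": 100, "power_plant_b": 50}  # MWh, from merit order

# Fuel flow needed for each unit's electricity output.
fuel = {u: e / ratio[u] for u, e in electricity.items()}
total_cost = sum(fuel[u] * vom[u] for u in fuel)
print({u: round(f, 2) for u, f in fuel.items()}, round(total_cost, 2))
```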

    `value:` Right click > Edit:
    `Parameter Type: Array`
    `Value type: Duration`
    `Value: 1, 1, 1, 2`

    The array should look like this:

    image

    In the Object tree window, expand "not_flat"

    `model: simple`

    Now you have seen how to define a varying temporal resolution. You could give "not_flat" a model__default_temporal_block relationship to change the entire model to this variable resolution - but instead we're going to assign it to a specific entity to show how you can mix resolutions in the same model.

    Assigning an entity a unique resolution

    In the Object tree window:

    `node: fuel_node`

    Running the model & viewing results

    See how the yellow line (fuel demand of Powerplant A) now ends at a value of 50, which is equal to the last two demand values averaged over the 2hr window (70 + 30) / 2 = 50.

    image
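The averaging behaviour can be sketched in a few lines: a coarser time step sees the mean of the finer values it covers. The first three demand values below are made up for illustration; the last two (70 and 30) are the ones from the plot.

```python
# How a coarser temporal_block sees a finer time series: values are
# averaged over each block. Sketch with the tutorial's resolution array
# [1h, 1h, 1h, 2h]; the first three demand values are assumed.

resolution = [1, 1, 1, 2]             # hours per time step
hourly_demand = [10, 20, 40, 70, 30]  # illustrative hourly values

blocks, i = [], 0
for dur in resolution:
    blocks.append(sum(hourly_demand[i:i + dur]) / dur)  # average over the block
    i += dur

print(blocks)  # last block: (70 + 30) / 2 = 50, matching the plot
```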



    Hydro Power Planning

    Welcome to this Spine Toolbox tutorial for building hydro power planning models. The tutorial guides you through the implementation of different ways of modelling hydrologically coupled hydropower systems.

    Introduction

    This tutorial demonstrates how we can model a hydropower system in Spine (SpineOpt.jl and Spine Toolbox) with different assumptions and goals. It starts off by setting up a simple model of a system of two hydropower plants and gradually introduces additional features. The goal of the model is to capture the combined operation of two hydropower plants (Språnget and Fallet) that operate on the same river, as shown in the picture below. Each power plant has its own reservoir and generates electricity by discharging water. The plants might need to spill water, i.e., release water from their reservoirs without generating electricity, for various reasons. The water discharged or spilled by the upstream power plant follows the river route and becomes available to the downstream power plant.

    A system of two hydropower plants.

    In order to run this tutorial, you must first execute some preliminary steps from the Simple System tutorial. Specifically, execute all steps from the guide, up to and including the step of importing the SpineOpt database template. It is advisable to go through the whole tutorial in order to familiarise yourself with Spine.

    Note

    Just remember to give a different name to the Spine Project of the hydropower tutorial (e.g., 'Two_hydro') in the corresponding step, so as not to mix up the Spine Toolbox projects!

    That is all you need at the moment, you can now start inserting the data.

    Setting up a Basic Hydropower Model

    For creating a SpineOpt model you need to create Objects, Relationships (associating the objects) and, in some cases, parameter values accompanying them. To do this, open the input database using the Spine DB Editor (double click on the input database in the Design View pane of Spine Toolbox).

    Note

    To save your work in the Spine DB Editor you need to commit your changes (please check the Simple System tutorial for how to do that). As a good practice, you should commit often as you enter the data in the model to avoid data loss.

    Defining objects

    Commodities

    Since we are modelling a hydropower system we will have to define two commodities, water and electricity. In the Spine DB editor, locate the Object tree, expand the root element if required, right click on the commodity class, and select Add objects from the context menu. In the Add objects dialogue that should pop up, enter the object names for the commodities as you see in the image below and then press Ok.

    image

    Defining commodities.

    Nodes

    Follow a similar path to add nodes, right click on the node class, and select Add objects from the context menu. In the dialogue, enter the node names as shown:

    image

    Defining nodes.

    Nodes in SpineOpt are used to balance commodities. As you noticed, we defined two nodes for each hydropower station (water nodes) and a single electricity node. This is one possible way to model the hydropower plant operation. This will become clearer in the next steps, but in a nutshell, the upper node represents the water arriving at each plant, while the lower node represents the water that is discharged and becomes available to the next plant.

    Connections

    Similarly, add connections, right click on the connection class, select Add objects from the context menu and add the following connections:

    image

    Defining connections.

    Connections enable the nodes to interact. Since for each plant we need to model both the amount of water that is discharged and the amount that is spilled, we must define two connections accordingly. When defining relationships, we shall associate the connections with the nodes.

    Units

    To convert from one type of commodity associated with one node to another, you need a unit. You guessed it! Right click on the unit class, select Add objects from the context menu and add the following units:

    image

    Defining units.

    We have defined one unit for each hydropower plant that converts water to electricity and an additional unit that we will use to model the income from selling the electricity production in the electricity market.

    Relationships

    Assigning commodities to nodes

    Since we have defined more than one commodity, we need to assign them to nodes. In the Spine DB editor, locate the Relationship tree, expand the root element if required, right click on the node__commodity class, and select Add relationships from the context menu. In the Add relationships dialogue, enter the relationships as you see in the image below and then press Ok.

    image

    Introducing node__commodity relationships.

    Associating connections to nodes

    The next step is to define the topology of flows between the nodes. To do that, insert the following relationships in the connection__from_node class:

    image

    Introducing connection__from_node relationships.

    as well as the following connection__node__node relationships, as you see in the figure:

    Introducing connection__node__node relationships.

    Placing the units in the model

    To define the topology of the units and be able to introduce their parameters later on, you need to define the following relationships in the unit__from_node class:

    Introducing unit__from_node relationships.

    in the unit__node__node class:

    Introducing unit__node__node relationships.

    and in the unit__to_node class as you see in the following figure:

    Introducing unit__to_node relationships.

    Defining the report outputs

    To force Spine to export the optimal values of the optimization variables to the output database, you need to specify them in the form of report__output relationships. Add the following relationships to the report__output class:

    Introducing report outputs with report__output relationships.

    Objects and Relationships parameter values

    Defining model parameter values

    To specify the modelling properties of both objects and relationships, you need to introduce respective parameter values. To introduce object parameter values, first select the model class in the Object tree and enter the following values in the Object parameter value pane:

    Defining model execution parameters.

    Observe the difference between the Object parameter value and the Object parameter definition sub-panes of the Object parameter value pane. The first one is for the modeller to introduce values for specific parameters, while the second one holds the definition of all available parameters with their default values (these are overwritten when the user introduces their own values). Feel free to explore the different parameters and their default values. While entering data in each row you will also observe that, in most cases, clicking on each cell activates a drop-down list of elements that the user must choose from. In the case of the value cells, however, unless you need to input a scalar value or a string, you should right-click on the cell and select Edit to specify the data type of the parameter value. As you see in the figure above, the duration_unit parameter is of type string, while the model_start and model_end parameters are of type Date time. The Date time parameters can be edited by right-clicking on the corresponding value cells, selecting Edit, and then inserting the Date time values that you see in the figure above in the Datetime field using the correct format.
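To make the roles of model_start, model_end and an hourly resolution concrete, here is a sketch of how a model horizon expands into time steps (the dates below are assumed for illustration, not taken from the figure):

```python
from datetime import datetime, timedelta

# Sketch of how Date time model_start/model_end parameters and an
# hourly resolution expand into discrete time steps.
# The dates are illustrative, not the tutorial's actual values.
model_start = datetime(2021, 1, 1, 0, 0)
model_end = datetime(2021, 1, 1, 6, 0)
resolution = timedelta(hours=1)

steps = []
t = model_start
while t < model_end:  # model_end itself is excluded
    steps.append(t)
    t += resolution

print(len(steps))  # 6 hourly steps
```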

    Defining node parameter values

    Going back to hydropower modelling, we need to specify several parameters for the nodes of the systems. In the same pane as before, but this time selecting the node class from the Object tree, we need to add the following entries:

    Defining model execution parameters.


    Before we go through the interpretation of each parameter, click on the following link for each fix_node_state parameter (Node state Språnget, Node state Fallet), select all, copy the data and then paste them directly in the respective parameter value cell. Spine should automatically detect and input the timeseries data as a parameter value. The data type for those entries should be Timeseries as shown in the figure above. Alternatively, you can select the data type as Timeseries and manually insert the data (values with their corresponding datetimes).
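
    Under the hood, a Timeseries value is stored in the database as a JSON object. The sketch below shows the general shape only; the field layout follows Spine's parameter-value JSON format and the numbers are invented, so verify against the Spine documentation before relying on it:

```python
import json

# Hypothetical fix_node_state value in Spine's time-series JSON shape.
# None marks hours left unconstrained (the tutorial uses nan for these).
fix_node_state = {
    "type": "time_series",
    "data": {
        "2021-01-01T00:00:00": 5.0,   # fixed initial reservoir content
        "2021-01-01T01:00:00": None,  # free during the horizon
        "2021-01-02T00:00:00": 5.0,   # fixed final reservoir content
    },
}
blob = json.dumps(fix_node_state)  # what ends up in the value cell
restored = json.loads(blob)
```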

    To model the reservoirs of each hydropower plant, we leverage the state feature that a node can have to represent storage capability. We only need to do this for one of the two nodes that we have used to model each plant, and we choose the upper level node. To define storage, we set the value of the parameter has_state to True (be careful not to set it as a string but select the boolean true value by right-clicking and selecting Edit in the respective cells). This activates the storage capability of the node. Then, we need to set the capacity of the reservoir by setting the node_state_cap parameter value. Finally, we fix the initial and final values of the reservoir by setting the parameter fix_node_state to the respective values (we introduce nan values for the time steps where we don't want to impose such constraints). To model the local inflow we use the demand parameter with the negated value of the actual inflow, since the parameter is defined in Spine as a demand.
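
    The storage logic just described can be sketched in a few lines. This is a simplification of SpineOpt's node-state balance, with purely illustrative numbers, but it shows why the inflow is entered as a negated demand:

```python
# Reservoir balance for a storage node: the state changes by the net of
# inflow (entered as negative demand), discharge and spill.
inflow = [2.0, 3.0, 1.0]        # natural inflow per hour (illustrative)
demand = [-x for x in inflow]   # what is entered in Spine's demand parameter
discharge = [1.0, 2.0, 2.0]
spill = [0.0, 1.0, 0.0]

state = [10.0]                  # initial node_state, fixed via fix_node_state
for t in range(len(inflow)):
    state.append(state[-1] - demand[t] - discharge[t] - spill[t])
```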

    Defining the temporal resolution of the model

    Spine automates the creation of the temporal resolution of the optimization model and even supports different temporal resolutions for different parts of the model. To define a model with an hourly resolution we select the temporal_block class in the Object tree and we set the resolution parameter value to 1h as shown in the figure:

    Setting the temporal resolution of the model.

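
    A resolution of 1h simply slices the model horizon into hourly steps; conceptually (dates illustrative):

```python
from datetime import datetime, timedelta

def timesteps(start, end, resolution):
    """Yield the time slices a temporal_block with this resolution implies."""
    t = start
    while t < end:
        yield t
        t += resolution

steps = list(timesteps(datetime(2021, 1, 1), datetime(2021, 1, 2), timedelta(hours=1)))
```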

    Defining connection parameter values

    The water that is discharged from Språnget will flow from the Språnget_lower node to Fallet_upper through the Språnget_to_Fallet_disc connection, while the water that is spilled will flow from Språnget_upper directly to Fallet_upper through the Språnget_to_Fallet_spill connection. To model this we need to select the connection__node_node class in the Relationship tree and add the following entries in the Relationship parameter value pane, as shown next:

    Defining discharge and spillage ratio flows.


    Defining unit parameter values

    Similarly, for each one of the unit__from_node, unit__node_node, and unit__to_node relationship classes we need to add the maximal amount of water that can be discharged by each hydropower plant:

    Setting the maximal water discharge of each plant.


    To define the income from selling the produced electricity we use the vom_cost parameter and negate the values of the electricity prices. To automatically insert the timeseries data in Spine, click on the Electricity prices timeseries, select all values, copy, and paste them, after having selected the value cell of the corresponding row. You can plot and edit the timeseries data by double clicking on the same cell afterwards:

    Previewing and editing the electricity prices timeseries.

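
    The sign convention is worth spelling out: a negative vom_cost turns the cost term into revenue. A sketch with invented prices and flows:

```python
# Income from sales is modelled as a negative variable operating cost.
prices = [30.0, 25.0, 41.5]      # electricity prices (invented)
vom_cost = [-p for p in prices]  # what is entered in Spine

flow = [100.0, 100.0, 100.0]     # unit_flow into electricity_load per hour
cost_term = sum(c * f for c, f in zip(vom_cost, flow))  # negative => revenue
```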

    Carrying on with our hydropower model, we must define the conversion ratios between the nodes. Assuming that water is not "lost" from the upper node toward the lower node, and that electricity is produced from the discharged water with a given efficiency, we define the following parameter values for each hydropower plant in the unit__node_node class:

    Defining conversion efficiencies.

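
    The ratio parameter ties an output flow of a unit to an input flow, flow_out = ratio * flow_in. With an illustrative efficiency (not the tutorial's figure):

```python
water_in = 50.0    # discharge taken from the upper node
efficiency = 0.75  # hypothetical electricity produced per unit of water

electricity_out = efficiency * water_in  # flow to the electricity node
water_out = 1.0 * water_in               # water is not "lost": ratio 1 to the lower node
```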

    Lastly, we can define the maximal electricity production of each plant by inserting the following unit__to_node relationship parameter values:

    Setting the maximal electricity production of each plant.


    Hooray! You can now commit the database, close the Spine DB Editor and run your model! Go to the main Spine window and click on Execute.

    Examining the results

    Select the output data store and open the Spine DB editor. To quickly plot some results, you can expand the unit class in the Object tree and select the electricity_load unit. In the Relationship parameter value pane double click on the value cell of

    report1 | electricity_load | electricity_node | from_node | realization

    object name. This will open a plotting window from where you can examine the data more closely and retrieve them, as shown in the next figure. The unit_flow variable of the electricity_load unit represents the total electricity production in the system:

    Total electricity produced in the system.


    Now, take a minute to reflect on how you could retrieve the data representing the water that is discharged by each hydropower plant, as shown in the next figure:

    Water discharge of Språnget hydropower plant.


    The right answer is that you need to select some hydropower plant (e.g., Språnget) and then double-click on the value cell of the object name

    report1 | Språnget_pwr_plant | Språnget_lower | to_node | realization

    or

    report1 | Språnget_pwr_plant | Språnget_upper | from_node | realization

    It could be useful to also reflect on why these objects give the same results, and on what the results from the third element represent. (Hint: observe the to_ or from_ directions in the object names.) As an exercise, you can try to retrieve the timeseries data for spilled water as well as the water levels at the reservoir of each hydropower plant.

    You can further explore the model, or make changes in the input database to observe how these affect the results; e.g., you can use different electricity prices, different values for the reservoir capacity (and initialization points), or change the temporal resolution of the model. All you need to do is commit the changes and run your model. Every time you run the model, your results are appended to the output database with an execution timestamp. You can, however, filter your results per execution by selecting the Alternative that you want from the Alternative/Scenario tree pane. You can also use the exporter to export specific variables to an Excel sheet. Alternatively, you can export all the data of the output database by going to the main menu (press Alt + F to display it), selecting File -> Export, then selecting the items that you want, clicking OK, and exporting the data in Excel or JSON format.

    In the following, we extend this simple hydropower system to include more elaborate modelling choices.

    Note

    In each of the next sections, we perform incremental changes to the initial simple hydropower model. If you want to keep the database that you created, you can duplicate the database file (right-click on the input database and select Duplicate and duplicate files) and perform the changes in the new database. You need to configure the workflow accordingly in order to run the database you want (please check the Simple System tutorial for how to do that).

    Maximisation of Stored Water

    Instead of fixing the water content of the reservoirs at the end of the planning period, we can consider that the remaining water in the reservoirs has a value, and then maximize that value along with the revenues for producing electricity within the planning horizon. This objective term is often called the Value of stored water and we can approximate it by assuming that this water will be used to generate electricity in the future that would be sold at a forecasted price. The water stored in the upstream hydropower plant will also become available to the downstream plant and this should be taken into account.
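
    The resulting objective term can be sketched as follows; all figures are illustrative, and the summed efficiency reflects that upstream water can be run through both plants:

```python
future_price = 35.0                  # assumed electricity value after the horizon
eff_spranget, eff_fallet = 0.7, 0.8  # hypothetical plant efficiencies

stored_spranget = 4.0                # end-of-horizon reservoir contents
stored_fallet = 6.0

# Språnget's water passes through both plants on its way downstream,
# Fallet's water only through Fallet:
value_of_stored_water = future_price * (
    stored_spranget * (eff_spranget + eff_fallet) + stored_fallet * eff_fallet
)
```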

    To model the value of stored water we need to make some additions and modifications to the initial model.

    • First, add a new node (see adding nodes) and give it a name (e.g., stored_water). This node will accumulate the water stored in the reservoirs at the end of the planning horizon. Associate the node with the water commodity (see node__commodity).

    • Add three more units (see adding units); two will transfer the water at the end of the planning horizon into the new node that we just added (e.g., Språnget_stored_water, Fallet_stored_water), and one will be used as a sink introducing the value of stored water in the objective function (e.g., value_stored_water).

    • To establish the topology of the new units and nodes (see adding unit relationships):

      • add one unit__from_node relationship between the value_stored_water unit and the stored_water node, another one between the Språnget_stored_water unit and the Språnget_upper node, and one between Fallet_stored_water and Fallet_upper.
      • add one unit__node__node relationship between the Språnget_stored_water unit with the stored_water and Språnget_upper nodes and another one for Fallet_stored_water unit with the stored_water and Fallet_upper nodes,
      • add a unit__to_node relationship between the Fallet_stored_water and the stored_water node and another one between the Språnget_stored_water unit and the stored_water node.
    • Now we need to make some changes in object parameter values.

      • Extend the planning horizon of the model by one hour, i.e., change the model_end parameter value to 2021-01-02T01:00:00 (right-click on the value cell, click edit and paste the new datetime in the popup window).
      • Remove the fix_node_state parameter values for the end of the optimization horizon as you see in the following figure: double click on the value cell of the Språnget_upper and Fallet_upper nodes, select the third data row, right-click, select Remove rows, and click OK.
      • Add an electricity price for the extra hour. Enter the parameter vom_cost on the unit__from_node relationship between the electricity_node and the electricity_load and set 0 as the price of electricity for the last hour 2021-01-02T00:00:00. The price is set to zero to ensure no electricity is sold during this hour.

      Modify the fix_node_state parameter value of Språnget_upper and Fallet_upper nodes.

    • Finally, we need to add some relationship parameter values for the new units:

      • Add a vom_cost parameter value on a value_stored_water|stored_water instance of a unit__from_node relationship, as you see in the figure below. For the timeseries you can copy-paste the data directly from this link. If you examine the timeseries data you'll notice that we have imposed a zero cost over the whole optimisation horizon, while we use an assumed future electricity value for the additional time step at the end of the horizon.

      Adding vom_cost parameter value on the value_stored_water unit.

      • Add two fix_ratio_out_in_unit_flow parameter values as you see in the figure below. The efficiency of Fallet_stored_water is the same as that of Fallet_pwr_plant, as the water in Fallet's reservoir will be used to produce electricity by the Fallet plant only. On the other hand, the water from Språnget's reservoir will be used by both the Fallet and Språnget plants, therefore we use the sum of the two efficiencies in the parameter value of Språnget_stored_water.

      Adding fix_ratio_out_in_unit_flow parameter values on the Språnget_stored_water and Fallet_stored_water units.

    You can now commit your changes in the database, execute the project and examine the results! As an exercise, try to retrieve the value of stored water as it is calculated by the model.

    Spillage Constraints - Minimisation of Spilt Water

    It might be the case that we need to impose certain limits to the amount of water that is spilt on each time step of the planning horizon, e.g., for environmental reasons, there can be a minimum and a maximum spillage level. At the same time, to avoid wasting water that could be used for producing electricity, we could explicitly impose the spillage minimisation to be added in the objective function.
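
    These limits can be sketched as bounds on the spill flow plus a penalty term (all values illustrative):

```python
unit_capacity = 20.0            # maximum spillage
minimum_operating_point = 0.25  # minimum spillage as a fraction of capacity
spill_penalty = 0.5             # vom_cost per unit of spilt water

def spill_ok(spill):
    """True if the spill flow respects the min/max limits."""
    return minimum_operating_point * unit_capacity <= spill <= unit_capacity

penalty = spill_penalty * 5.0   # objective cost of spilling 5 units
```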

    • Add one unit (see adding units) to impose the spillage constraints to each plant and name it (for example Språnget_spill).

    • Remove the Språnget_to_Fallet_spill connection (in the Object tree expand the connection class, right-click on Språnget_to_Fallet_spill, and then click Remove).

    • To establish the topology of the unit (see adding unit relationships):

      • Add a unit__from_node relationship between the Språnget_spill unit and the Språnget_upper node,
      • add a unit__node__node relationship between the Språnget_spill unit with the Fallet_upper and Språnget_upper nodes,
      • add a unit__to_node relationship between the Språnget_spill and the Fallet_upper node,
    • Add the relationship parameter values for the new units:

      • Set the unit_capacity (to apply a maximum), the minimum_operating_point (defined as a percentage of the unit_capacity) to impose a minimum, and the vom_cost to penalise the water that is spilt:

      Setting minimum (the minimal value is defined as percentage of capacity), maximum, and spillage penalty.


    • For the Språnget_spill unit, set the fix_ratio_out_in_unit_flow parameter value of the Språnget_spill|Fallet_upper|Språnget_upper relationship to 1 (see adding unit relationships).

    Commit your changes in the database, execute the project and examine the results! As an exercise, you can perform this process for the Fallet plant as well (you would also need to add another water node, downstream of Fallet).

    Follow Contracted Load Curve

    It is often the case that a system of hydropower plants should follow a given production profile. To model this in the given system, all we have to do is set a demand in the form of a timeseries to the electricity_node.

    Commit your changes in the database, execute the project and examine the results!

    This concludes the tutorial, we hope that you enjoyed building hydropower systems in Spine as much as we do!

    Two hydro plants · SpineOpt.jl

    Hydro Power Planning

    Welcome to this Spine Toolbox tutorial for building hydro power planning models. The tutorial guides you through the implementation of different ways of modelling hydrologically-coupled hydropower systems.

    Introduction

    This tutorial aims at demonstrating how we can model a hydropower system in Spine (SpineOpt.jl and Spine Toolbox) with different assumptions and goals. It starts off by setting up a simple model of a system of two hydropower plants and gradually introduces additional features. The goal of the model is to capture the combined operation of two hydropower plants (Språnget and Fallet) that operate on the same river, as shown in the picture below. Each power plant has its own reservoir and generates electricity by discharging water. The plants might need to spill water, i.e., release water from their reservoirs without generating electricity, for various reasons. The water discharged or spilled by the upstream power plant follows the river route and becomes available to the downstream power plant.

    A system of two hydropower plants.


    In order to run this tutorial you must first execute some preliminary steps from the Simple System tutorial. Specifically, execute all steps from the guide, up to and including the step of importing-the-spineopt-database-template. It is advisable to go through the whole tutorial in order to familiarise yourself with Spine.

    Note

    Just remember to give a different name to the Spine Project of the hydropower tutorial (e.g., ‘Two_hydro’) in the corresponding step, so as not to mix up the Spine Toolbox projects!

    That is all you need at the moment, you can now start inserting the data.

    Setting up a Basic Hydropower Model

    For creating a SpineOpt model you need to create Objects, Relationships (associating the objects), and in some cases parameter values accompanying them. To do this, open the input database using the Spine DB Editor (double click on the input database in the Design View pane of Spine Toolbox).
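
    As an aside, the same entities can also be created programmatically with spinedb_api's import helpers instead of clicking through the editor. The sketch below only shows the data layout; the import_data call is commented out, and its exact signature should be checked against the spinedb_api documentation:

```python
# Objects and relationships as (class, name) / (class, object names) tuples,
# mirroring what is entered in the Add objects / Add relationships dialogues.
objects = [
    ("commodity", "water"),
    ("commodity", "electricity"),
    ("node", "Språnget_upper"),
    ("node", "Språnget_lower"),
    ("unit", "Språnget_pwr_plant"),
]
relationships = [
    ("node__commodity", ("Språnget_upper", "water")),
    ("unit__from_node", ("Språnget_pwr_plant", "Språnget_upper")),
]
# With spinedb_api (not imported here; check its documentation):
# import_data(db_map, objects=objects, relationships=relationships)
```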

    Note

    To save your work in the Spine DB Editor you need to commit your changes (please check the Simple System tutorial for how to do that). As a good practice, you should commit often as you enter the data in the model to avoid data loss.

    Defining objects

    Commodities

    Since we are modelling a hydropower system we will have to define two commodities, water and electricity. In the Spine DB editor, locate the Object tree, expand the root element if required, right click on the commodity class, and select Add objects from the context menu. In the Add objects dialogue that should pop up, enter the object names for the commodities as you see in the image below and then press Ok.


    Defining commodities.

    Nodes

    Follow a similar path to add nodes, right click on the node class, and select Add objects from the context menu. In the dialogue, enter the node names as shown:


    Defining nodes.

    Nodes in SpineOpt are used to balance commodities. As you noticed, we defined two nodes for each hydropower station (water nodes) and a single electricity node. This is one possible way to model the hydropower plant operation. This will become clearer in the next steps, but in a nutshell, the upper node represents the water arriving at each plant, while the lower node represents the water that is discharged and becomes available to the next plant.

    Connections

    Similarly, add connections, right click on the connection class, select Add objects from the context menu and add the following connections:


    Defining connections.

    Connections enable the nodes to interact. Since for each plant we need to model both the amount of water that is discharged and the amount that is spilled, we must define two connections accordingly. When defining relationships, we shall associate the connections with the nodes.
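
    The role of the two connections can be summed up in one balance: everything the upstream plant discharges or spills, plus the local inflow, arrives at the downstream plant's upper node. With illustrative numbers:

```python
discharged_upstream = 3.0  # water leaving Språnget_lower towards Fallet_upper
spilled_upstream = 1.0     # water spilled from Språnget_upper towards Fallet_upper
local_inflow = 2.0         # Fallet's own natural inflow

available_at_fallet_upper = discharged_upstream + spilled_upstream + local_inflow
```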

    Units

    To convert from one type of commodity associated with one node to another, you need a unit. You guessed it! Right click on the unit class, select Add objects from the context menu and add the following units:


    Defining units.

    We have defined one unit for each hydropower plant that converts water to electricity and an additional unit that we will use to model the income from selling the electricity production in the electricity market.

    Relationships

    Assigning commodities to nodes

    Since we have defined more than one commodity, we need to assign them to nodes. In the Spine DB editor, locate the Relationship tree, expand the root element if required, right click on the node__commodity class, and select Add relationships from the context menu. In the Add relationships dialogue, enter the following relationships as you see in the image below and then press Ok.


    Introducing node__commodity relationships.

    Associating connections to nodes

    The next step is to define the topology of flows between the nodes. To do that, insert the following relationships in the connection__from_node class:


    Introducing connection__from_node relationships.

    as well as the following connection__node_node relationships as you see in the figure:

    Introducing connection__node_node relationships.


    Placing the units in the model

    To define the topology of the units and be able to introduce their parameters later on, you need to define the following relationships in the unit__from_node class:

    Introducing unit__from_node relationships.


    in the unit__node_node class:

    Introducing unit__node_node relationships.


    and in the unit__to_node class as you see in the following figure:

    Introducing unit__to_node relationships.


    Defining the report outputs

    To force Spine to export the optimal values of the optimization variables to the output database you need to specify them in the form of report_output relationships. Add the following relationships to the report_output class:

    Introducing report outputs with report_output relationships.


    Objects and Relationships parameter values

    Defining model parameter values

    To specify the modelling properties of both objects and relationships, you need to introduce the respective parameter values. To introduce object parameter values, first select the model class in the Object tree and enter the following values in the Object parameter value pane:

    Defining model execution parameters.


    Observe the difference between the Object parameter value and the Object parameter definition sub-panes of the Object parameter value pane. The first is where the modeller introduces values for specific parameters, while the second holds the definitions of all available parameters with their default values (these are overridden when the user introduces their own values). Feel free to explore the different parameters and their default values. While entering data in each row you will also observe that, in most cases, clicking on a cell activates a drop-down list of elements that the user must choose from. In the case of the value cells, however, unless you need to input a scalar value or a string, you should right-click on the cell and select Edit to specify the data type of the parameter value. As you see in the figure above, the duration_unit parameter is of type string, while the model_start and model_end parameters are of type Date time. The Date time parameters can be edited by right-clicking on the corresponding value cells, selecting Edit, and then inserting the Date time values that you see in the figure above in the Datetime field using the correct format.

    Defining node parameter values

    Going back to hydropower modelling, we need to specify several parameters for the nodes of the system. In the same pane as before, but this time selecting the node class from the Object tree, we need to add the following entries:

    Defining model execution parameters.


    Before we go through the interpretation of each parameter, click on the following link for each fix_node_state parameter (Node state Språnget, Node state Fallet), select all, copy the data and then paste them directly in the respective parameter value cell. Spine should automatically detect and input the timeseries data as a parameter value. The data type for those entries should be Timeseries as shown in the figure above. Alternatively, you can select the data type as Timeseries and manually insert the data (values with their corresponding datetimes).

    To model the reservoirs of each hydropower plant, we leverage the state feature that a node can have to represent storage capability. We only need to do this for one of the two nodes that we have used to model each plant, and we choose the upper level node. To define storage, we set the value of the parameter has_state to True (be careful not to set it as a string but select the boolean true value by right-clicking and selecting Edit in the respective cells). This activates the storage capability of the node. Then, we need to set the capacity of the reservoir by setting the node_state_cap parameter value. Finally, we fix the initial and final values of the reservoir by setting the parameter fix_node_state to the respective values (we introduce nan values for the time steps where we don't want to impose such constraints). To model the local inflow we use the demand parameter with the negated value of the actual inflow, since the parameter is defined in Spine as a demand.

    Defining the temporal resolution of the model

    Spine automates the creation of the temporal resolution of the optimization model and even supports different temporal resolutions for different parts of the model. To define a model with an hourly resolution we select the temporal_block class in the Object tree and we set the resolution parameter value to 1h as shown in the figure:

    Setting the temporal resolution of the model.


    Defining connection parameter values

    The water that is discharged from Språnget will flow from the Språnget_lower node to Fallet_upper through the Språnget_to_Fallet_disc connection, while the water that is spilled will flow from Språnget_upper directly to Fallet_upper through the Språnget_to_Fallet_spill connection. To model this we need to select the connection__node_node class in the Relationship tree and add the following entries in the Relationship parameter value pane, as shown next:

    Defining discharge and spillage ratio flows.


    Defining unit parameter values

    Similarly, for each one of the unit__from_node, unit__node_node, and unit__to_node relationship classes we need to add the maximal amount of water that can be discharged by each hydropower plant:

    Setting the maximal water discharge of each plant.


    To define the income from selling the produced electricity we use the vom_cost parameter and negate the values of the electricity prices. To automatically insert the timeseries data in Spine, click on the Electricity prices timeseries, select all values, copy, and paste them, after having selected the value cell of the corresponding row. You can plot and edit the timeseries data by double clicking on the same cell afterwards:

    Previewing and editing the electricity prices timeseries.


    Carrying on with our hydropower model, we must define the conversion ratios between the nodes. Assuming that water is not "lost" from the upper node toward the lower node, and that electricity is produced from the discharged water with a given efficiency, we define the following parameter values for each hydropower plant in the unit__node_node class:

    Defining conversion efficiencies.


    Lastly, we can define the maximal electricity production of each plant by inserting the following unit__to_node relationship parameter values:

    Setting the maximal electricity production of each plant.


    Hooray! You can now commit the database, close the Spine DB Editor and run your model! Go to the main Spine window and click on Execute.

    Examining the results

    Select the output data store and open the Spine DB editor. To quickly plot some results, you can expand the unit class in the Object tree and select the electricity_load unit. In the Relationship parameter value pane double click on the value cell of

    report1 | electricity_load | electricity_node | from_node | realization

    object name. This will open a plotting window from where you can examine the data more closely and retrieve them, as shown in the next figure. The unit_flow variable of the electricity_load unit represents the total electricity production in the system:

    Total electricity produced in the system.


    Now, take a minute to reflect on how you could retrieve the data representing the water that is discharged by each hydropower plant, as shown in the next figure:

    Water discharge of Språnget hydropower plant.


    The right answer is that you need to select some hydropower plant (e.g., Språnget) and then double-click on the value cell of the object name

    report1 | Språnget_pwr_plant | Språnget_lower | to_node | realization

    or

    report1 | Språnget_pwr_plant | Språnget_upper | from_node | realization

    It could be useful to also reflect on why these objects give the same results, and on what the results from the third element represent. (Hint: observe the to_ or from_ directions in the object names.) As an exercise, you can try to retrieve the timeseries data for spilled water as well as the water levels at the reservoir of each hydropower plant.

    You can further explore the model, or make changes in the input database to observe how these affect the results; e.g., you can use different electricity prices, different values for the reservoir capacity (and initialization points), or change the temporal resolution of the model. All you need to do is commit the changes and run your model. Every time you run the model, your results are appended to the output database with an execution timestamp. You can, however, filter your results per execution by selecting the Alternative that you want from the Alternative/Scenario tree pane. You can also use the exporter to export specific variables to an Excel sheet. Alternatively, you can export all the data of the output database by going to the main menu (press Alt + F to display it), selecting File -> Export, then selecting the items that you want, clicking OK, and exporting the data in Excel or JSON format.

    In the following, we extend this simple hydropower system to include more elaborate modelling choices.

    Note

    In each of the next sections, we perform incremental changes to the initial simple hydropower model. If you want to keep the database that you created, you can duplicate the database file (right-click on the input database and select Duplicate and duplicate files) and perform the changes in the new database. You need to configure the workflow accordingly in order to run the database you want (please check the Simple System tutorial for how to do that).

    Maximisation of Stored Water

    Instead of fixing the water content of the reservoirs at the end of the planning period, we can consider that the remaining water in the reservoirs has a value, and then maximize that value along with the revenues for producing electricity within the planning horizon. This objective term is often called the Value of stored water and we can approximate it by assuming that this water will be used to generate electricity in the future that would be sold at a forecasted price. The water stored in the upstream hydropower plant will also become available to the downstream plant and this should be taken into account.

    To model the value of stored water we need to make some additions and modifications to the initial model.

    • First, add a new node (see adding nodes) and give it a name (e.g., stored_water). This node will accumulate the water stored in the reservoirs at the end of the planning horizon. Associate the node with the water commodity (see node__commodity).

    • Add three more units (see adding units); two will transfer the water at the end of the planning horizon into the new node that we just added (e.g., Språnget_stored_water, Fallet_stored_water), and one will be used as a sink introducing the value of stored water in the objective function (e.g., value_stored_water).

    • To establish the topology of the new units and nodes (see adding unit relationships):

      • add one unit__from_node relationship between the value_stored_water unit and the stored_water node, another between the Språnget_stored_water unit and the Språnget_upper node, and one between Fallet_stored_water and Fallet_upper.
      • add one unit__node__node relationship between the Språnget_stored_water unit with the stored_water and Språnget_upper nodes and another one for Fallet_stored_water unit with the stored_water and Fallet_upper nodes,
      • add a unit__to_node relationship between the Fallet_stored_water and the stored_water node and another one between the Språnget_stored_water unit and the stored_water node.
    • Now we need to make some changes in object parameter values.

      • Extend the planning horizon of the model by one hour, i.e., change the model_end parameter value to 2021-01-02T01:00:00 (right-click on the value cell, click edit and paste the new datetime in the popup window).
      • Remove the fix_node_state parameter values for the end of the optimization horizon as shown in the following figure: double click on the value cell of the Språnget_upper and Fallet_upper nodes, select the third data row, right-click, select Remove rows, and click OK.
      • Add an electricity price for the extra hour. Enter the parameter vom_cost on the unit__from_node relationship between the electricity_node and the electricity_load and set 0 as the price of electricity for the last hour 2021-01-02T00:00:00. The price is set to zero to ensure no electricity is sold during this hour.

      Modify the fix_node_state parameter value of the Språnget_upper and Fallet_upper nodes.

    • Finally, we need to add some relationship parameter values for the new units:

      • Add a vom_cost parameter value on a value_stored_water|stored_water instance of a unit__from_node relationship, as you see in the figure below. For the timeseries you can copy-paste the data directly from this link. If you examine the timeseries data you'll notice that we have imposed a zero cost for the whole optimisation horizon, while we use an assumed future electricity value for the additional time step at the end of the horizon.

      Adding a vom_cost parameter value on the value_stored_water unit.

      • Add two fix_ratio_out_in_unit_flow parameter values as you see in the figure below. The efficiency of Fallet_stored_water is the same as that of Fallet_pwr_plant, since the water in Fallet's reservoir will be used to produce electricity by the Fallet plant only. On the other hand, the water from Språnget's reservoir will be used by both the Fallet and Språnget plants, therefore we use the sum of the two efficiencies in the parameter value of Språnget_stored_water.

      Adding fix_ratio_out_in_unit_flow parameter values on the Språnget_stored_water and Fallet_stored_water units.

    You can now commit your changes in the database, execute the project and examine the results! As an exercise, try to retrieve the value of stored water as it is calculated by the model.
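    As a sanity check for that exercise, the valuation logic can be sketched with simple arithmetic. This is an illustrative sketch, not SpineOpt code; the efficiencies, volumes, and future price below are hypothetical placeholders:

    ```python
    # Illustrative sketch (not SpineOpt code): approximating the value of the
    # water left in the two reservoirs at the end of the horizon.
    # All numbers are hypothetical placeholders.

    def stored_water_value(volume, conversion_factor, future_price):
        """Value of remaining water: the electricity it could still generate
        (volume * conversion factor), sold at a forecasted future price."""
        return volume * conversion_factor * future_price

    eff_spranget = 0.25  # MWh of electricity per unit of water through Språnget
    eff_fallet = 0.5     # MWh of electricity per unit of water through Fallet
    future_price = 30.0  # forecasted electricity price, EUR/MWh

    # Water in Fallet's reservoir only passes through the Fallet plant,
    # so its conversion factor is Fallet's efficiency alone.
    value_fallet = stored_water_value(100.0, eff_fallet, future_price)

    # Water in Språnget's reservoir flows through Språnget AND then Fallet,
    # so its conversion factor is the sum of the two efficiencies.
    value_spranget = stored_water_value(100.0, eff_spranget + eff_fallet, future_price)

    print(value_fallet)    # 1500.0
    print(value_spranget)  # 2250.0
    ```

    This mirrors why the Språnget_stored_water unit uses the sum of the two efficiencies in its fix_ratio_out_in_unit_flow value.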

    Spillage Constraints - Minimisation of Spilt Water

    It might be the case that we need to impose certain limits to the amount of water that is spilt on each time step of the planning horizon, e.g., for environmental reasons, there can be a minimum and a maximum spillage level. At the same time, to avoid wasting water that could be used for producing electricity, we could explicitly impose the spillage minimisation to be added in the objective function.

    • Add one unit (see adding units) to impose the spillage constraints to each plant and name it (for example Språnget_spill).

    • Remove the Språnget_to_Fallet_spill connection (in the Object tree expand the connection class, right-click on Språnget_to_Fallet_spill, and then click Remove).

    • To establish the topology of the unit (see adding unit relationships):

      • Add a unit__from_node relationship, between the Språnget_spill unit from the Språnget_upper node,
      • add a unit__node__node relationship between the Språnget_spill unit with the Fallet_upper and Språnget_upper nodes,
      • add a unit__to_node relationship between the Språnget_spill and the Fallet_upper node,
    • Add the relationship parameter values for the new units:

      • Set the unit_capacity (to apply a maximum), the minimum_operating_point (defined as a percentage of the unit_capacity) to impose a minimum, and the vom_cost to penalise the water that is spilt:

      Setting the minimum (defined as a percentage of capacity), the maximum, and the spillage penalty.

    • For the Språnget_spill unit define the fix_ratio_out_in_unit_flow parameter value of the min_spillage|Fallet_upper|Språnget_upper relationship to 1 (see adding unit relationships).
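    Together, these three parameters bound and penalise the spill flow. A minimal sketch of the logic (not SpineOpt code; all numbers are hypothetical placeholders, and the spill unit is assumed always online):

    ```python
    # Illustrative sketch (not SpineOpt code): how unit_capacity,
    # minimum_operating_point and vom_cost shape the spill flow.
    # Numbers are hypothetical placeholders.

    unit_capacity = 50.0           # maximum spill per time step
    minimum_operating_point = 0.2  # fraction of capacity, i.e. minimum spill
    vom_cost = 10.0                # penalty per unit of spilt water

    def spill_bounds_ok(spill):
        """A spill value is feasible if it lies between the minimum
        (a fraction of capacity) and the capacity."""
        return minimum_operating_point * unit_capacity <= spill <= unit_capacity

    def spill_penalty(spill):
        """The objective picks up vom_cost for every unit of water spilt."""
        return vom_cost * spill

    print(spill_bounds_ok(5.0))   # False: below the 10.0 minimum
    print(spill_bounds_ok(25.0))  # True
    print(spill_penalty(25.0))    # 250.0
    ```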

    Commit your changes in the database, execute the project and examine the results! As an exercise, you can perform this process for the Fallet plant as well (you would also need to add another water node, downstream of Fallet).

    Follow Contracted Load Curve

    It is often the case that a system of hydropower plants should follow a given production profile. To model this in the given system, all we have to do is set a demand in the form of a timeseries to the electricity_node.

    Commit your changes in the database, execute the project and examine the results!

    This concludes the tutorial, we hope that you enjoyed building hydropower systems in Spine as much as we do!


    Unit commitment constraints tutorial

    This tutorial provides a step-by-step guide to include unit commitment constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding unit commitment constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 24-hour time series instead of a unique value
    • The power_plant_b has new parameters to account for the unit commitment constraints, such as minimum operating point, minimum uptime, and minimum downtime
    • The optimization uses mixed-integer programming (MIP) to account for the binary nature of the unit commitment decision variables

    This tutorial gives a step-by-step guide to adding these parameters and to analyzing the resulting unit commitment behaviour in SpineOpt.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add unit commitment constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 24-hour time series.

    Editing demand value

    • Still in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below. You can copy and paste the values from the file: ucelectricitynode_demand.csv
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h since our unit commitment case study is for a day-ahead dispatch of 24 hours.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Establishing new output relationships

    Since we will have the new unit commitment variables, we want to see the results of these variables and their total cost in the objective function. So, we will create new relationships to report these results:

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Enter report1 under report, and units_on under output. Repeat the same procedure for the following outputs as seen in the image below; then press OK.
    • This will write the unit commitment variable values and costs in the objective function to the output database as a part of report1.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand first until its maximum capacity, and then the power_plant_b (i.e., the more expensive unit) covers the demand that is left. This is the most economical dispatch since the problem has no extra constraints (so far!).

    image
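    The dispatch pattern described above is plain merit order, and can be sketched with a tiny function. This is an illustrative sketch (not SpineOpt code); the capacities and costs are placeholders in the spirit of the Simple System:

    ```python
    # Illustrative sketch (not SpineOpt code): merit-order dispatch, i.e.
    # fill demand from the cheapest plant first. Numbers are placeholders.

    def merit_order_dispatch(demand, plants):
        """Dispatch plants in ascending variable-cost order until demand is met."""
        flows = {}
        remaining = demand
        for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
            flow = min(capacity, remaining)
            flows[name] = flow
            remaining -= flow
        return flows

    plants = [
        # (name, capacity in MW, variable cost)
        ("power_plant_a", 100.0, 25.0),  # cheap unit, dispatched first
        ("power_plant_b", 200.0, 50.0),  # expensive unit, covers the rest
    ]

    # The cheap plant runs at full capacity; the expensive one covers the gap.
    print(merit_order_dispatch(150.0, plants))
    ```

    Without any commitment constraints, this is exactly what the optimizer does for every hour of the day.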

    To explore the cost results, the pivot table view shows a more user-friendly option to analyze the results. Remember that you can find a description of how to create the pivot table view in the Simple System tutorial here. The cost components in the objective function are shown in the image below. As expected, all the costs are associated with the variable_om_costs since we haven't included the unit-commitment constraints yet.

    image

    Step 2 - Include the minimum operating point

    Let's assume that the power_plant_b has a minimum operating point of 10%, meaning that if the power plant is on, it must produce at least 20MW.

    Adding the minimum operating point

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • In Relationship tree, expand the unit__to_node class and select power_plant_b | electricity_node.
    • In the Relationship parameter table (typically at the bottom-center), select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point of power_plant_b when producing electricity.

    image
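    This parameter effectively adds flow bounds tied to the commitment variable. A minimal sketch of the logic (not SpineOpt code; the capacity value is a hypothetical placeholder):

    ```python
    # Illustrative sketch (not SpineOpt code): the flow bounds implied by a
    # minimum operating point once a binary on/off variable exists.
    # capacity is a hypothetical placeholder.

    capacity = 200.0
    minimum_operating_point = 0.1

    def flow_ok(flow, units_on):
        """With units_on in {0, 1}, the flow must satisfy
           min_op_point * capacity * units_on <= flow <= capacity * units_on,
        so an 'off' unit has zero flow and an 'on' unit produces at least
        10% of capacity."""
        lower = minimum_operating_point * capacity * units_on
        upper = capacity * units_on
        return lower <= flow <= upper

    print(flow_ok(0.0, 0))    # True: unit off, no flow
    print(flow_ok(10.0, 1))   # False: on, but below the minimum
    print(flow_ok(150.0, 1))  # True
    ```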

    Adding the unit commitment costs and initial states

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the following parameter as seen in the image below:
      • online_variable_type parameter and the Base alternative, and select the value unit_online_variable_type_binary. This will define that the unit commitment variables will be binary. SpineOpt identifies this situation from the input data and internally changes the model from LP to MIP.
      • shut_down_cost parameter and the Base alternative, and enter the value 7. This will establish that there's a cost of '7' EUR per shutdown.
      • start_up_cost parameter and the Base alternative, and enter the value 5. This will establish that there's a cost of '5' EUR per startup.
      • units_on_cost parameter and the Base alternative, and enter the value 3. This will establish that there's a cost of '3' EUR per hour that the unit is on (e.g., an idling cost).
      • initial_units_on parameter and the Base alternative, and enter the value 0. This will establish that there are no units 'on' before the first time step.

    image
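    These three cost parameters enter the objective together with the commitment variables. A minimal sketch of the bookkeeping (not SpineOpt code), assuming hourly steps and the values entered above:

    ```python
    # Illustrative sketch (not SpineOpt code): summing commitment costs
    # for a given hourly on/off schedule, using the tutorial's values.

    start_up_cost = 5.0
    shut_down_cost = 7.0
    units_on_cost = 3.0

    def commitment_costs(units_on, initial_units_on=0):
        """Derive start-ups and shutdowns from the units_on schedule,
        then sum start-up, shutdown and online costs."""
        cost = 0.0
        prev = initial_units_on
        for on in units_on:
            started = max(on - prev, 0)
            shut = max(prev - on, 0)
            cost += start_up_cost * started + shut_down_cost * shut + units_on_cost * on
            prev = on
        return cost

    # One start-up, four hours on, one shutdown: 5 + 4*3 + 7 = 24
    print(commitment_costs([0, 1, 1, 1, 1, 0]))  # 24.0
    ```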

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum operating point

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    • Do you notice something different in your solver log? Depending on the solver, the output might change, but you should be able to see that the solver is using MIP to solve the problem. For instance, if you are using the solver HiGHS (i.e., the default solver in SpineOpt), then you will see something like "Solving MIP model with:" and the Branch and Bound (B&B) tree solution. Since this is a tiny problem, sometimes the solver can find the optimal solution from the presolve step, avoiding going into the B&B step.

    Examining the results including the minimum operating point

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Any difference? What happened to the flows in power_plant_a and power_plant_b?

    image

    • Let's take a look at the units_on and units_started_up variables in the image below to get a wider perspective.

    image

    • So, since power_plant_b needs to produce at least 20MW when it is 'on', power_plant_a needs to reduce its output even though it has the lower variable cost, making the total system cost (i.e., the objective function) more expensive than in the previous run. The image below shows the cost components, where we can see the costs of having power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to the flow changes.

    image

    Step 3 - Include the minimum uptime

    Let's assume that the power_plant_b also has a minimum uptime of 8 hours, meaning that if the power plant starts up, it must stay on for at least eight hours.

    Adding the minimum uptime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_up_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter type as Duration and enter the value 8h. This will establish that the minimum uptime is eight hours.

    image
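    What min_up_time demands of a units_on schedule can be sketched with a small check. This is an illustrative sketch (not SpineOpt code), counting whole-hour steps:

    ```python
    # Illustrative sketch (not SpineOpt code): check that every 'on' run
    # starting with a start-up lasts at least min_up_time steps.

    def respects_min_up_time(units_on, min_up_time, initial_units_on=0):
        prev = initial_units_on
        run = 0
        for on in units_on:
            if on and not prev:
                run = 1                      # a start-up begins a new on-run
            elif on:
                run += 1                     # the on-run continues
            elif prev and run < min_up_time:
                return False                 # shut down too early
            prev = on
        return True

    print(respects_min_up_time([0, 1, 1, 1, 0], 8))      # False: only 3 hours on
    print(respects_min_up_time([0] + [1] * 8 + [0], 8))  # True
    ```

    With a 24-hour horizon, this is why the optimizer may start power_plant_b before the demand actually exceeds power_plant_a's capacity.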

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum uptime

    You know the drill, go ahead :wink:

    Examining the results including the minimum uptime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Interesting. Don't you think?

    image

    • Let's take another look at the units_on and units_started_up variables in the image below.

    image

    • So, since power_plant_b needs to produce at least 20MW when it is 'on' and also needs to stay 'on' for at least 8h each time it starts, power_plant_b starts even before the demand exceeds the capacity of power_plant_a. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the costs of having power_plant_b on, its start-up and shutdown costs, and the increase in the variable_om_costs due to the flow changes.

    image

    Step 4 - Include the minimum downtime

    Let's assume that the power_plant_b also has a minimum downtime of 8 hours, meaning that if the power plant shuts down, it must stay off for at least eight hours.

    Adding the minimum downtime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_down_time parameter, the Base alternative, and then right click on the value and select the Edit option in the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Select the Parameter type as Duration and enter the value 8h. This will establish that the minimum downtime is eight hours.

    image
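    The min_down_time constraint is the mirror image of the uptime one. An illustrative sketch (not SpineOpt code), again in whole-hour steps:

    ```python
    # Illustrative sketch (not SpineOpt code): check that every 'off' run
    # starting with a shutdown lasts at least min_down_time steps.

    def respects_min_down_time(units_on, min_down_time, initial_units_on=0):
        prev = initial_units_on
        off_run = 0
        for on in units_on:
            if not on and prev:
                off_run = 1                  # a shutdown begins an off-run
            elif not on:
                off_run += 1                 # the off-run continues
            elif not prev and 0 < off_run < min_down_time:
                return False                 # started up again too early
            prev = on
        return True

    print(respects_min_down_time([1, 0, 0, 1], 8))          # False: only 2 hours off
    print(respects_min_down_time([1] + [0] * 8 + [1], 8))   # True
    ```

    With both constraints active, shutting down mid-day and restarting later can become infeasible, which is what forces the single start-up seen in the results below.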

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum downtime

    One last time, don't give up!

    Examining the results including the minimum downtime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Wow! This result is even more interesting :stuck_out_tongue_winking_eye:. Do you know what happened?

    image

    • Let's take another look at the units_on and units_started_up variables in the image below. Instead of two start-ups, power_plant_b only starts once. Why?

    image

    • Since power_plant_b needs to produce at least 20MW when it is 'on', must stay 'on' for at least 8h each time it starts, and must stay 'off' for at least 8h if it shuts down, power_plant_b never shuts down and stays 'on' after it starts, because that is the only way to fulfil the unit commitment constraints. Therefore, power_plant_a needs to reduce its output even further, making the total system cost more expensive than in the previous runs. The image below shows the cost components, where we can see the costs of having power_plant_b on, its start-up cost, its shutdown cost (zero this time, since the unit never shuts down), and the increase in the variable_om_costs due to the flow changes.

    image

    If you have completed this tutorial, congratulations! You have mastered the basic concepts of unit commitment using SpineToolbox and SpineOpt. Keep up the good work!

    +Unit Commitment · SpineOpt.jl

    Unit commitment constraints tutorial

    This tutorial provides a step-by-step guide to include unit commitment constraints in a simple energy system with Spine Toolbox for SpineOpt.

    Introduction

    Welcome to our tutorial, where we will walk you through the process of adding unit commitment constraints in SpineOpt using Spine Toolbox. To get the most out of this tutorial, we suggest first completing the Simple System tutorial, which can be found here.

    Model assumptions

    This tutorial is built on top of the Simple System. The main changes to that system are:

    • The demand at electricity_node is a 24-hour time series instead of a unique value
    • The power_plant_b has new parameters to account for the unit commitment constraints, such as minimum operating point, minimum uptime, and minimum downtime
    • The optimization is done a mixed-integer programming (MIP) to account for the binary nature of the unit commitment decision variables

    This tutorial includes a step-by-step guide to include the parameters to help analyze the results in SpineOpt and the unit commitment concepts.

    Step 1 - Update the demand

    Opening the Simple System project

    • Launch the Spine Toolbox and select File and then Open Project or use the keyboard shortcut Ctrl + O to open the desired project.
    • Locate the folder that you saved in the Simple System tutorial and click Ok. This will prompt the Simple System workflow to appear in the Design View section for you to start working on.
    • Select the 'input' Data Store item in the Design View.
    • Go to Data Store Properties and hit Open editor. This will open the database in the Spine DB editor.

    In this tutorial, you will learn how to add unit commitment constraints to the Simple System using the Spine DB editor, but first let's start by updating the electricity demand from a single value to a 24-hour time series.

    Editing demand value

    • Always in the Spine DB editor, locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [node] class, and select the electricity_node from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the demand parameter which should have a 150 value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Parameter type to Time series fixed resolution, Resolution to 1h, and the demand values to the time series as in the image below. You can copy and paste the values from the file: ucelectricitynode_demand.csv
    • Finish by pressing OK in the Edit value menu. In the Object parameter table you will see that the value of the demand has changed to Time series.

    image

    Editing the temporal block

    You might or might not notice that the Simple System has, by default, a temporal block resolution of 1D (i.e., one day); wait, what! Yes, by default, it has 1D in its template. So, we want to change that to 1h since our unit commitment case study is for a day-ahead dispatch of 24 hours.

    • Locate again the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [temporal_block] class, and select the flat from the expanded tree.
    • Locate the Object parameter table (typically at the top-center).
    • In the Object parameter table, identify the resolution parameter which should have a 1D value from the Simple System first run.
    • Right click on the value cell and then select edit from the context menu. The Edit value dialog will pop up.
    • Change the Duration from 1D to 1h as shown in the image below.

    image

    Establishing new output relationships

    Since we will have the new unit commitment variables, we want to see the results of these variables and their total cost in the objective function. So, we will create new relationships to report these results:

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • Right click on the report__output class, and select Add relationships from the context menu. The Add relationships dialog will pop up.
    • Enter report1 under report, and units_on under output. Repete the same procedure for the following outputs as seen in the image below; then press OK.
    • This will write the unit commitment variable values and costs in the objective function to the output database as a part of report1.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    Examining the results

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also activate the table view by pressing Alt + F for the shortcut to the hamburger menu, and select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double click in the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. As expected, the power_plant_a (i.e., the cheapest unit) always covers the demand first until its maximum capacity, and then the power_plant_b (i.e., the more expensive unit) covers the demand that is left. This is the most economical dispatch since the problem has no extra constraints (so far!).

    image

    To explore the cost results, the pivot table view shows a more user-friendly option to analyze the results. Remember that you can find a description of how to create the pivot table view in the Simple System tutorial here. The cost components in the objective function are shown in the image below. As expected, all the costs are associated with the variable_om_costs since we haven't included the unit-commitment constraints yet.

    image

    Step 2 - Include the minimum operating point

    Let's assume that the power_plant_b has a minimum operating point of 10%, meaning that if the power plant is on, it must produce at least 20MW.

    Adding the minium operating point

    • In the Spine DB editor, locate the Relationship tree (typically at the bottom-left). Expand the root element if not expanded.
    • In Relationship tree, expand the unit__to_node class and select power_plant_b | electricity_node.
    • In the Relationship parameter table (typically at the bottom-center), select the minimum_operating_point parameter and the Base alternative, and enter the value 0.1 as seen in the image below. This will set the minimum operating point of power_plant_b when producing electricity.

    image

    Adding the unit commitment costs and initial states

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the following parameter as seen in the image below:
      • online_variable_type parameter and the Base alternative, and select the value unit_online_variable_type_binary. This will define that the unit commitment variables will be binary. SpineOpt identifies this situation from the input data and internally changes the model from LP to MIP.
      • shut_down_cost parameter and the Base alternative, and enter the value 7. This will establish that there's a cost of '7' EUR per shutdown.
      • start_up_cost parameter and the Base alternative, and enter the value 5. This will establish that there's a cost of '5' EUR per startup.
      • units_on_cost parameter and the Base alternative, and enter the value 3. This will establish that there's a cost of '3' EUR per units on (e.g., idling cost).
      • initial_units_on parameter and the Base alternative, and enter the value 0. This will establish that there are no units 'on' before the first time step.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum operating point

    • Go back to Spine Toolbox's main window, and hit the Execute project button image from the tool bar. You should see 'Executing All Directed Acyclic Graphs' printed in the Event log (at the bottom left by default).

    • Select the 'Run SpineOpt' Tool. You should see the output from SpineOpt in the Julia Console after clicking the object activity control.

    • Do you notice something different in your solver log? Depending on the solver, the output might change, but you should be able to see that the solver is using MIP to solve the problem. For instance, if you are using the solver HiGHS (i.e., the default solver in SpineOpt), then you will see something like "Solving MIP model with:" and the Branch and Bound (B&B) tree solution. Since this is a tiny problem, sometimes the solver can find the optimal solution from the presolve step, avoiding going into the B&B step.

    Examining the results including the minimum operating point

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also switch to the table view: press Alt + F to open the hamburger menu, then select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double-click the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Do you notice any difference? What happened to the flows in power_plant_a and power_plant_b?

    image

    • Let's take a look at units_on and units_started_up in the image below to get a wider perspective.

    image

    • So, since power_plant_b must produce at least 20 MW when it is 'on', power_plant_a needs to reduce its output even though it has the lower variable cost, making the total system cost (i.e., the objective function) higher than in the previous run. The image below shows the cost components, where we can see the cost of keeping power_plant_b on, its start-up and shutdown costs, and the increase in variable_om_costs due to the flow changes.

    image
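    The behaviour above follows from the minimum-operating-point constraint, which ties the flow variable to the commitment status. The sketch below is a hypothetical Python illustration of that feasibility check; the 100 MW capacity and the 0.2 fraction are assumptions chosen so that the minimum comes out at the tutorial's 20 MW, and this is not SpineOpt's formulation.

```python
# Hypothetical sketch of the minimum-operating-point feasibility check.
# The 100 MW capacity and 0.2 fraction are ASSUMED values chosen so the
# minimum output is the tutorial's 20 MW; this is not SpineOpt code.
unit_capacity = 100.0          # MW (assumed)
minimum_operating_point = 0.2  # fraction of capacity (assumed) -> 20 MW

def flow_is_feasible(flow, units_on):
    """When on, flow must lie in [min_op * capacity, capacity]; when off, 0."""
    if units_on == 0:
        return flow == 0.0
    return minimum_operating_point * unit_capacity <= flow <= unit_capacity

print(flow_is_feasible(20.0, 1))  # True: exactly at the 20 MW minimum
print(flow_is_feasible(10.0, 1))  # False: below the minimum operating point
print(flow_is_feasible(0.0, 0))   # True: the unit is off
```

    This is why power_plant_b cannot simply produce a small top-up amount: once committed, its output must jump to at least the minimum operating point, displacing some of power_plant_a's cheaper generation.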

    Step 3 - Include the minimum uptime

    Let's assume that power_plant_b also has a minimum uptime of 8 hours, meaning that once the power plant starts up, it must stay on for at least eight hours.

    Adding the minimum uptime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_up_time parameter, the Base alternative, and then right-click the value and select the Edit option from the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Set the Parameter_type to Duration and enter the value 8h. This establishes that the minimum uptime is eight hours.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum uptime

    You know the drill, go ahead!

    Examining the results including the minimum uptime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also switch to the table view: press Alt + F to open the hamburger menu, then select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double-click the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Interesting, don't you think?

    image

    • Let's take another look at units_on and units_started_up in the image below.

    image

    • So, since power_plant_b must produce at least 20 MW when it is 'on' and must also stay 'on' for at least 8 h each time it starts, power_plant_b starts even before the demand exceeds the capacity of power_plant_a. Therefore, power_plant_a needs to reduce its output even further, making the total system cost higher than in the previous runs. The image below shows the cost components, where we can see the cost of keeping power_plant_b on, its start-up and shutdown costs, and the increase in variable_om_costs due to the flow changes.

    image
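    Conceptually, the minimum up-time constraint says that every unit started within the last min_up_time hours must still be on. The following hypothetical Python sketch illustrates that check over a per-hour commitment schedule; it is illustrative only and not SpineOpt's actual formulation.

```python
# Hypothetical sketch of a minimum up-time check over an hourly schedule.
# Illustrative only; not SpineOpt's actual constraint formulation.
min_up_time = 8  # hours, as entered in the tutorial

def min_up_time_satisfied(units_on, units_started_up):
    """For each hour t, units started in the last 8 hours must still be on."""
    for t in range(len(units_on)):
        recent_starts = sum(units_started_up[max(0, t - min_up_time + 1):t + 1])
        if units_on[t] < recent_starts:
            return False
    return True

# A unit that starts at hour 2 but shuts down after only 4 hours violates it:
on      = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
started = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(min_up_time_satisfied(on, started))  # False
```

    Staying on for the full eight hours after the start (on = [0, 0] + [1] * 8) would satisfy the check, which is why power_plant_b's commitment blocks stretch to at least eight hours in the results.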

    Step 4 - Include the minimum downtime

    Let's assume that power_plant_b also has a minimum downtime of 8 hours, meaning that if the power plant shuts down, it must stay off for at least eight hours.

    Adding the minimum downtime

    • Locate the Object tree (typically at the top-left). Expand the [root] element if not expanded.
    • Expand the [unit] class, and select the power_plant_b from the expanded tree.
    • In the Object parameter table (typically at the top-center), select the min_down_time parameter, the Base alternative, and then right-click the value and select the Edit option from the context menu, as shown in the image below.

    image

    • The Edit value dialog will pop up. Set the Parameter_type to Duration and enter the value 8h. This establishes that the minimum downtime is eight hours.

    image

    When you're ready, commit all changes to the database.

    Executing the workflow including the minimum downtime

    One last time, don't give up!

    Examining the results including the minimum downtime

    • Select the output data store and open the Spine DB editor. You can already inspect the fields in the displayed tables.
    • You can also switch to the table view: press Alt + F to open the hamburger menu, then select View -> Table.
    • Remember to select the latest run in the Alternative tree. Expand the Output element if not expanded.
    • In the Relationship parameter value table, double-click the Time series values to explore the results of the different variables.
    • The image below shows the electricity flow results for both power plants. Wow! This result is even more interesting. Do you know what happened?

    image

    • Let's take another look at units_on and units_started_up in the image below. Instead of two start-ups, power_plant_b starts only once. Why?

    image

    • Since power_plant_b must produce at least 20 MW when it is 'on', must stay 'on' for at least 8 h each time it starts, and must stay 'off' for at least 8 h if it shuts down, power_plant_b never shuts down and stays 'on' after it starts, because that is the only way to fulfil the unit commitment constraints. Therefore, power_plant_a needs to reduce its output even further, making the total system cost higher than in the previous runs. The image below shows the cost components, where we can see the cost of keeping power_plant_b on, its start-up and shutdown costs (zero this time), and the increase in variable_om_costs due to the flow changes.

    image
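    The minimum down-time constraint mirrors the up-time one: every unit shut down within the last min_down_time hours must still be off. The hypothetical Python sketch below (illustrative only, not SpineOpt's formulation) shows why cycling off and back on within eight hours is infeasible for power_plant_b.

```python
# Hypothetical sketch of a minimum down-time check over an hourly schedule.
# Illustrative only; not SpineOpt's actual constraint formulation.
min_down_time = 8  # hours, as entered in the tutorial

def min_down_time_satisfied(units_on, units_shut_down, units_available=1):
    """For each hour t, units shut down in the last 8 hours must still be off."""
    for t in range(len(units_on)):
        recent_stops = sum(units_shut_down[max(0, t - min_down_time + 1):t + 1])
        if units_on[t] > units_available - recent_stops:
            return False
    return True

# Restarting only 4 hours after a shutdown violates the rule, which is why
# power_plant_b stays on instead of cycling off and back on:
on   = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
stop = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(min_down_time_satisfied(on, stop))  # False
```

    Combined with the minimum up-time and minimum operating point, this leaves a single commitment block as the only feasible (and hence optimal) schedule for power_plant_b.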

    If you have completed this tutorial, congratulations! You have mastered the basic concepts of unit commitment using SpineToolbox and SpineOpt. Keep up the good work!


    Webinars

    The SpineOpt tutorial covers the entire process, from installing Spine Toolbox and SpineOpt to creating and running a model with these tools and manipulating databases.
