
Releases: cloudposse/atmos

v1.88.1

04 Sep 19:09
0f0fef3

🚀 Enhancements

BUG: fix error with affected stack uploads @mcalhoun (#691)

what

  • Fix an error with affected stack uploads to Atmos Pro
  • Fix an error with URL extraction for locally cloned repos
  • Add better JSON formatting for affected stacks
  • Add additional debugging info

v1.88.0

02 Sep 17:33
b52762e
Update the `Go` YAML lib from `gopkg.in/yaml.v2` to `gopkg.in/yaml.v3`. Support YAML v1.2 (latest version) @aknysh (#690)

what
  • Update the Go YAML lib from gopkg.in/yaml.v2 to gopkg.in/yaml.v3
  • Support YAML v1.2 (latest version)
  • Support YAML explicit typing (denoted with a tag using the exclamation point (“!”) symbol)
  • Improve the code, e.g. consolidate the YAML wrappers in one yaml_utils file (which imports gopkg.in/yaml.v3) to control all YAML marshaling and unmarshaling from one place
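
For example, explicit typing overrides the tag a scalar would otherwise resolve to (the keys and values below are illustrative, not from Atmos):

```yaml
# Explicit typing with YAML tags (illustrative example)
port: !!int "3000"     # force the quoted scalar to be an integer
enabled: !!str true    # force the plain scalar to be the string "true", not a boolean
ratio: !!float "0.75"  # force a float
```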

why

gopkg.in/yaml.v3

The main differences between gopkg.in/yaml.v3 and gopkg.in/yaml.v2 include enhancements in functionality, bug fixes, and improvements in performance. Here's a summary of key distinctions:

1. Better Conformance to YAML 1.2 Specification:

  • gopkg.in/yaml.v3 offers improved support for the YAML 1.2 specification. This includes better handling of complex YAML features such as core schema, block styles, and anchors.
  • gopkg.in/yaml.v2 is more aligned with YAML 1.1, meaning it might not fully support some of the YAML 1.2 features.

2. Node API Changes:

  • gopkg.in/yaml.v3 introduced a new Node API, which provides more control and flexibility over parsing and encoding YAML documents. This API is more comprehensive and allows for detailed inspection and manipulation of YAML content.
  • gopkg.in/yaml.v2 has a simpler node structure and API, which might be easier to use for simple use cases but less powerful for advanced needs.

3. Error Handling:

  • gopkg.in/yaml.v3 offers improved error messages and better context for where an error occurs during parsing. This makes it easier to debug and correct YAML syntax errors.
  • gopkg.in/yaml.v2 has less detailed error reporting, which can make debugging more challenging.

4. Support for Line and Column Numbers:

  • gopkg.in/yaml.v3 includes support for tracking line and column numbers of nodes, which can be useful when dealing with large or complex YAML files.
  • gopkg.in/yaml.v2 does not provide this level of detail in terms of tracking where nodes are located within the YAML document.

5. Performance:

  • gopkg.in/yaml.v3 has various performance improvements, particularly in the encoding and decoding process. However, these improvements might not be significant in all scenarios.
  • gopkg.in/yaml.v2 might be slightly faster in certain cases, particularly when dealing with very simple YAML documents, due to its simpler feature set.

6. Deprecation of Legacy Functions:

  • gopkg.in/yaml.v3 deprecates some older functions that were available in v2, encouraging developers to use more modern and efficient alternatives.
  • gopkg.in/yaml.v2 retains these older functions, which may be preferred for backward compatibility in some projects.

7. Anchors and Aliases:

  • gopkg.in/yaml.v3 has better handling of YAML anchors and aliases, making it more robust in scenarios where these features are heavily used.
  • gopkg.in/yaml.v2 supports anchors and aliases but with less robustness and flexibility.

8. API Changes and Compatibility:

  • gopkg.in/yaml.v3 introduces some API changes that are not backward-compatible with v2. This means that upgrading from v2 to v3 might require some code changes.
  • gopkg.in/yaml.v2 has been widely used and is stable, so it may be preferable for projects where stability and long-term support are critical.

YAML v1.2

YAML v1.1 and YAML v1.2 differ in several key aspects, particularly in terms of specification, syntax, and data type handling. Here's a breakdown of the most significant differences:

1. Specification and Goals:

  • YAML 1.1 was designed with some flexibility in its interpretation of certain constructs, aiming to be a human-readable data serialization format that could also be easily understood by machines.
  • YAML 1.2 was aligned more closely with the JSON specification, aiming for better interoperability with JSON and standardization. YAML 1.2 is effectively a superset of JSON.

2. Boolean Values:

  • YAML 1.1 has a wide range of boolean literals, including y, n, yes, no, on, off, true, and false. This flexibility could sometimes lead to unexpected interpretations.
  • YAML 1.2 standardizes boolean values to true and false, aligning with JSON. This reduces ambiguity and ensures consistency.

3. Integers with Leading Zeros:

  • YAML 1.1 interprets integers with leading zeros as octal (base-8) numbers. For example, 012 would be interpreted as 10 in decimal.
  • YAML 1.2 no longer interprets numbers with leading zeros as octal. Instead, they are treated as standard decimal numbers, which aligns with JSON. This change helps avoid confusion.

4. Null Values:

  • YAML 1.1 allows a variety of null values, including null, ~, and empty values (e.g., an empty string).
  • YAML 1.2 standardizes the null value to null (or an empty value), aligning with JSON's null representation.
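
The boolean and integer differences above can be sketched as two scalar-resolution rules. This is a minimal illustration of the schemas, not actual parser code; both function names are invented, and the YAML 1.2 sketch omits the capitalized `True`/`FALSE` spellings the core schema also accepts:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// yaml11Bools is the wide YAML 1.1 boolean vocabulary.
var yaml11Bools = map[string]bool{
	"y": true, "yes": true, "on": true, "true": true,
	"n": false, "no": false, "off": false, "false": false,
}

// resolveScalar11 sketches YAML 1.1 resolution: many boolean spellings,
// and a leading zero makes an integer octal (e.g. "012" -> 10).
func resolveScalar11(s string) interface{} {
	if b, ok := yaml11Bools[strings.ToLower(s)]; ok {
		return b
	}
	if len(s) > 1 && s[0] == '0' {
		if n, err := strconv.ParseInt(s[1:], 8, 64); err == nil {
			return n
		}
	}
	if n, err := strconv.ParseInt(s, 10, 64); err == nil {
		return n
	}
	return s // plain string
}

// resolveScalar12 sketches YAML 1.2 core-schema resolution: only
// true/false are booleans, and leading zeros are plain decimal digits.
func resolveScalar12(s string) interface{} {
	switch s {
	case "true":
		return true
	case "false":
		return false
	}
	if n, err := strconv.ParseInt(s, 10, 64); err == nil {
		return n
	}
	return s
}

func main() {
	fmt.Println(resolveScalar11("yes"), resolveScalar12("yes")) // true yes
	fmt.Println(resolveScalar11("012"), resolveScalar12("012")) // 10 12
}
```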

5. Tag Handling:

  • YAML 1.1 uses an unquoted !! syntax for tags (e.g., !!str for a string). The tag system is more complex and includes non-standard tags that can be confusing.
  • YAML 1.2 simplifies tag handling and uses a more JSON-compatible syntax, with less emphasis on non-standard tags. Tags are optional and less intrusive in most use cases.

6. Floating-Point Numbers:

  • YAML 1.1 supports special floating-point values like .inf, -.inf, and .nan with a dot notation.
  • YAML 1.2 (core schema) keeps the .inf, -.inf, and .nan notation. Note that JSON itself has no representation for these values, so they fall outside the JSON-compatible subset.

7. Direct JSON Compatibility:

  • YAML 1.2 is designed to be a strict superset of JSON, meaning any valid JSON document is also a valid YAML 1.2 document. This was not the case in YAML 1.1, where certain JSON documents could be interpreted differently.
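
For instance, a document written entirely in JSON syntax parses unchanged as YAML 1.2 (the keys below are illustrative):

```yaml
# Valid JSON and, unchanged, valid YAML 1.2
{"component": "vpc", "enabled": true, "dependents": []}
```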

8. Indentation and Line Breaks:

  • YAML 1.2 introduced more consistent handling of line breaks and indentation. While YAML 1.1 was flexible, it sometimes led to ambiguities in how line breaks and whitespace were interpreted.
  • YAML 1.2 has clearer rules, reducing the potential for misinterpretation of line breaks and indentation.

9. Miscellaneous Syntax Changes:

  • YAML 1.2 introduced some syntax changes for better clarity and alignment with JSON. For instance, YAML 1.2 removed support for single-quoted multiline strings, which were present in YAML 1.1.

10. Core Schema vs. JSON Schema:

  • YAML 1.2 introduced the concept of schemas, particularly the Core schema, which aims to be YAML's native schema, and the JSON schema, which strictly follows JSON's data types and structures.
  • YAML 1.1 did not have this formal schema distinction, leading to more flexible but sometimes less predictable data handling.

Summary:

  • YAML 1.2 is more standardized, consistent, and aligned with JSON, making it more predictable and easier to interoperate with JSON-based systems.
  • YAML 1.1 offers more flexibility and a wider range of literal values, but this flexibility can sometimes lead to ambiguities and unexpected behavior.


v1.87.1

01 Sep 01:33
61389b2
Support `atmos terraform apply --from-plan` with additional flags @duncanaf (#684)

what

  • Change argument generation for atmos terraform apply to make sure the plan-file arg is placed after any flags specified by the user

why

  • Terraform is very picky about the order of flags and args, and requires all args (e.g. plan-file) to come after any flags (e.g. --parallelism=1), or it crashes.
  • atmos terraform apply accepts a plan-file arg, or generates one when --from-plan is used. When this happens, it currently puts the plan-file arg first, before any additional flags specified by the user.
  • This breaks when additional flags are specified, e.g. atmos terraform apply --from-plan -- -parallelism=1. In this example, atmos tries to call terraform apply <planfile> -parallelism=1 and terraform crashes with Error: Too many command line arguments
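
The fix amounts to ordering the generated argument list so flags precede the positional plan-file arg. A minimal Go sketch (illustrative only; `buildApplyArgs` is an invented name, not the actual Atmos code):

```go
package main

import "fmt"

// buildApplyArgs sketches the fix: Terraform requires all flags to come
// before positional args, so any user-supplied flags are placed first and
// the plan file goes last.
func buildApplyArgs(userFlags []string, planFile string) []string {
	args := []string{"apply"}
	args = append(args, userFlags...)
	return append(args, planFile)
}

func main() {
	fmt.Println(buildApplyArgs([]string{"-parallelism=1"}, "plan.tfplan"))
	// Output: [apply -parallelism=1 plan.tfplan]
}
```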

v1.87.0

22 Aug 04:01
fec62aa
Update/improve Atmos UX @aknysh (#679)

what & why

  • Improve error messages in the cases when the atmos.yaml CLI config file is not found, and when it is found but points to an Atmos stacks directory that does not exist

(screenshots)


  • When executing the atmos command to show the terminal UI, do not evaluate the Go templates in Atmos stack manifests, making the command faster; the TUI just shows the names of the components and stacks and does not need the components' configurations

  • Fix/restore the TUI borders for the commands atmos and atmos workflow around the selected columns. The BorderStyle functionality was changed in the latest releases of the charmbracelet/lipgloss library, preventing the borders around the selected column from showing


atmos (screenshot)

atmos workflow (screenshot)


Announce Cloud Posse's Refarch Docs @osterman (#680)

what

  • Announce Cloud Posse's reference architecture (refarch) docs

Add atmos pro stack locking @mcalhoun (#678)

what

  • Add the atmos pro lock and atmos pro unlock commands

v1.86.2

17 Aug 21:30
b7f8e30
Improve logging for the template function `atmos.Component`. Generate backend config and provider override files in `atmos.Component` function @aknysh (#674)

what

  • Improve logging for the template function atmos.Component
  • Generate backend config and provider override files in atmos.Component function
  • Update docs

why

  • Add more debugging information, and fix issues with the initial implementation of the atmos.Component function: the backend config file backend.tf.json (if enabled in atmos.yaml) and the provider overrides file providers_override.tf.json (if configured in the providers section) were not generated, which prevented atmos.Component from returning the outputs of the component when executing in GitHub Actions

  • When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template function atmos.Component will log the execution flow and the results of template evaluation - useful for debugging

     ATMOS_LOGS_LEVEL=Trace atmos describe component <component> -s <stack>
     ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>
     ATMOS_LOGS_LEVEL=Trace atmos terraform apply <component> -s <stack>
bump http context timeout for upload to atmos pro @mcalhoun (#656)

what

Increase the maximum HTTP timeout when uploading to Atmos Pro

why

When there are a large number of stacks and workflows, this call can exceed 10 seconds

v1.86.1

15 Aug 20:24
a34048a
Improve logging for the template function `atmos.Component` @aknysh (#672)

what

  • Improve logging for the template function atmos.Component
  • Update Go to the latest version, 1.23

why

  • When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template functions atmos.Component and atmos.GomplateDatasource will log the execution flow and the results of template evaluation - useful for debugging

     ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>

This PR adds more debugging information and shows the results of the atmos.Component execution, and shows if the result was found in the cache:

Found component 'template-functions-test' in the stack 'tenant1-ue2-prod' in the stack manifest 'orgs/cp/tenant1/prod/us-east-2'
ProcessTmplWithDatasources(): template 'all-atmos-sections' - evaluation 1

Converting the variable 'test_list' with the value
[
      "list_item_1",
      "list_item_2",
      "list_item_3"
    ]
from JSON to 'Go' data type

Converted the variable 'test_list' with the value
[
      "list_item_1",
      "list_item_2",
      "list_item_3"
    ]
from JSON to 'Go' data type
Result: [list_item_1 list_item_2 list_item_3]

Converting the variable 'test_map' with the value
{
      "a": 1,
      "b": 2,
      "c": 3
    }
from JSON to 'Go' data type

Converted the variable 'test_map' with the value
{
      "a": 1,
      "b": 2,
      "c": 3
    }
from JSON to 'Go' data type
Result: map[a:1 b:2 c:3]

Converting the variable 'test_label_id' with the value
"cp-ue2-prod-test"
from JSON to 'Go' data type

Converted the variable 'test_label_id' with the value
"cp-ue2-prod-test"
from JSON to 'Go' data type
Result: cp-ue2-prod-test

Executed template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'

'outputs' section:
test_label_id: cp-ue2-prod-test
test_list:
- list_item_1
- list_item_2
- list_item_3
test_map:
  a: 1
  b: 2
  c: 3

Executing template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)'
Found the result of the template function 'atmos.Component(template-functions-test, tenant1-ue2-prod)' in the cache
'outputs' section:
test_label_id: cp-ue2-prod-test
test_list:
- list_item_1
- list_item_2
- list_item_3
test_map:
  a: 1
  b: 2
  c: 3
add install instructions for atmos on windows/scoop @dennisroche (#649)

what

add option/documentation for installing atmos using scoop.sh on windows.

scoop install atmos

scoop manifests will check GitHub releases and automatically update. no additional maintenance required for anyone 🥳.

why

needed an easy way for my team to install atmos


Fix docker build `ATMOS_VERSION` support `v*` tags @goruha (#671)

what

  • Remove the v prefix from ATMOS_VERSION on docker build

why

  • Cloud Posse changed the tag template policy: the tag is now always prefixed with v. The tag is passed as the ATMOS_VERSION docker build argument


v1.86.0

14 Aug 16:49
e5680c7
Add `--process-templates` flag to `atmos describe stacks` and `atmos describe component` commands. Update docs @aknysh (#669)

what
  • Add logging to the template functions atmos.Component and atmos.GomplateDatasource
  • Add --process-templates flag to atmos describe stacks and atmos describe component commands
  • Update docs

why

  • When the environment variable ATMOS_LOGS_LEVEL is set to Trace, the template functions atmos.Component and atmos.GomplateDatasource will log the execution flow and the results of template evaluation - useful for debugging

     ATMOS_LOGS_LEVEL=Trace atmos terraform plan <component> -s <stack>
  • Enable/disable processing of Go templates in Atmos stacks manifests when executing the commands

  • For atmos describe component <component> -s <stack> command, use the --process-templates flag to see the component configuration before and after the templates are processed. If the flag is not provided, it's set to true by default

# Process `Go` templates in stack manifests and show the final values
atmos describe component <component> -s <stack>

# Process `Go` templates in stack manifests and show the final values
atmos describe component <component> -s <stack> --process-templates=true

# Do not process `Go` templates in stack manifests and show the template tokens in the output
atmos describe component <component> -s <stack> --process-templates=false
  • For atmos describe stacks command, use the --process-templates flag to see the stack configurations before and after the templates are processed. If the flag is not provided, it's set to true by default

    # Process `Go` templates in stack manifests and show the final values
    atmos describe stacks
    
    # Process `Go` templates in stack manifests and show the final values
    atmos describe stacks --process-templates=true
    
    # Do not process `Go` templates in stack manifests and show the template tokens in the output
    atmos describe stacks --process-templates=false

    The command atmos describe stacks --process-templates=false can also be used in Atmos custom commands that just list Atmos stacks and do not require template processing. This significantly speeds up custom command execution. For example, the custom command atmos list stacks just outputs the top-level stack names and might not require template processing. It will execute much faster if implemented like this (using the --process-templates=false flag with the atmos describe stacks command):

      - name: list
        commands:
          - name: stacks
            description: |
              List all Atmos stacks.
            steps:
              - >
                atmos describe stacks --process-templates=false --sections none | grep -e "^\S" | sed s/://g
fix: Atmos Affected GitHub Action Documentation @milldr (#661)

what

  • Update affected-stacks job outputs and matrix integration

why

  • The affected step was missed when the plan example was updated


Updated Documentation for GHA Versions @milldr (#657)

what

  • Update documentation for Atmos GitHub Action version management

why

  • New major releases for both actions

references

  1. https://github.com/cloudposse/github-action-atmos-terraform-plan/releases/tag/v3.0.0
  2. https://github.com/cloudposse/github-action-atmos-terraform-drift-detection/releases/tag/v2.0.0

v1.85.0

18 Jul 13:58
db0ac7c
Update `atmos describe affected` and `atmos terraform` commands @aknysh (#654)

what
  • Update atmos describe affected command
  • Update atmos terraform command
  • Allow Gomplate, Sprig and Atmos template functions in imports in Atmos stack manifests

why

  • The atmos describe affected command had an issue with calculating the included_in_dependents field for all combinations of the affected components with their dependencies. Now it's correctly calculated for all affected components

  • In atmos describe affected command, if the Git config core.untrackedCache is enabled, it breaks the command execution. We disable this option if it is set

  • The atmos terraform command now respects the TF_WORKSPACE environment variable. If the environment variable is set by the caller, Atmos will not calculate and set a Terraform workspace for the component in the stack, but instead will let Terraform use the workspace provided in the TF_WORKSPACE environment variable

  • Allow Gomplate, Sprig and Atmos template functions in imports in Atmos stack manifests. All functions are allowed now in Atmos stacks manifests and in the import templates

v1.84.0

11 Jul 15:16
8060adb
Add Atmos Pro integration to `atmos.yaml`. Add caching to `atmos.Component` template function. Implement `atmos.GomplateDatasource` template function @aknysh (#647)

what
  • Add Atmos Pro integration to atmos.yaml
  • Add caching to atmos.Component template function
  • Implement atmos.GomplateDatasource template function
  • Update docs

why

  • Add Atmos Pro integration to atmos.yaml. This is in addition to the functionality added in Add --upload flag to atmos describe affected command. If the Atmos Pro configuration is present in the integrations.pro section in atmos.yaml, it will be added in the config section when executing the atmos describe affected --upload=true command for further processing on the server

    {
      "base_sha": "6746ba4df9e87690c33297fe740011e5ccefc1f9",
      "head_sha": "5360d911d9bac669095eee1ca1888c3ef5291084",
      "owner": "cloudposse",
      "repo": "atmos",
      "config": {
        "timeout": 3,
        "events": {
          "pull_request": [
            {
              "on": ["open", "synchronize", "reopen"],
              "workflow": "atmos-plan.yml",
              "dispatch_only_top_level_stacks": true
            },
            {
              "on": ["merged"],
              "workflow": "atmos-apply.yaml"
            }
          ],
          "release": []
        }
      },
      "stacks": [
        {
          "component": "vpc",
          "component_type": "terraform",
          "component_path": "components/terraform/vpc",
          "stack": "plat-ue2-dev",
          "stack_slug": "plat-ue2-dev-vpc",
          "affected": "stack.vars",
          "included_in_dependents": false,
          "dependents": []
        }
      ]
    }

  • Add caching to atmos.Component template function

    Atmos caches (in memory) the results of atmos.Component template function execution. If you call the function for the same component in a stack more than once, the first call will produce the result and cache it, and all the consecutive calls will just use the cached data. This is useful when you use the atmos.Component function for the same component in a stack in multiple places in Atmos stack manifests. It will speed up the function execution and stack processing.

    For example:

    components:
      terraform:
        test2:
          vars:
            tags:
              test: '{{ (atmos.Component "test" .stack).outputs.id }}'
              test2: '{{ (atmos.Component "test" .stack).outputs.id }}'
              test3: '{{ (atmos.Component "test" .stack).outputs.id }}'

    In the example, the test2 Atmos component uses the outputs (remote state) of the test Atmos component from the same stack. The template function {{ atmos.Component "test" .stack }} is executed three times (once for each tag).

    After the first execution, Atmos caches the result in memory (all the component sections, including the outputs), and reuses it in the next two calls to the function. The caching makes the stack processing about three times faster in this particular example. In a production environment where many components are used, the speedup can be even more significant.
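
The caching behavior described above can be sketched as memoization keyed by component and stack. This is an illustration of the idea, not the actual Atmos implementation; the `componentCache` type and its method are invented names:

```go
package main

import (
	"fmt"
	"sync"
)

// componentCache memoizes results per "<component>:<stack>" key, mirroring
// the idea behind the atmos.Component in-memory cache.
type componentCache struct {
	mu    sync.Mutex
	store map[string]map[string]any
	calls int // number of real (non-cached) lookups performed
}

func (c *componentCache) Component(component, stack string, fetch func() map[string]any) map[string]any {
	c.mu.Lock()
	defer c.mu.Unlock()
	key := component + ":" + stack
	if v, ok := c.store[key]; ok {
		return v // cache hit: the expensive fetch is skipped
	}
	c.calls++
	v := fetch()
	c.store[key] = v
	return v
}

func main() {
	cache := &componentCache{store: map[string]map[string]any{}}
	fetch := func() map[string]any {
		return map[string]any{"id": "cp-ue2-prod-test"} // stands in for reading remote state
	}
	// Three template evaluations for the same component in the same stack...
	for i := 0; i < 3; i++ {
		_ = cache.Component("test", "tenant1-ue2-prod", fetch)
	}
	// ...but only the first performs a real lookup.
	fmt.Println("real lookups:", cache.calls) // real lookups: 1
}
```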


  • Implement atmos.GomplateDatasource template function

    The atmos.GomplateDatasource template function wraps the Gomplate Datasources and caches the results, allowing executing the same datasource many times without calling the external endpoint multiple times. It speeds up the datasource execution and stack processing, and can eliminate other issues with calling an external endpoint, e.g. timeouts and rate limiting.

    Usage

      {{ (atmos.GomplateDatasource "<alias>").<attribute> }}

    Caching the result of atmos.GomplateDatasource function

    Atmos caches (in memory) the results of atmos.GomplateDatasource template function execution. If you execute the function for the same datasource alias more than once, the first execution will call the external endpoint, produce the result and cache it. All the consecutive calls will just use the cached data. This is useful when you use the atmos.GomplateDatasource function for the same datasource alias in multiple places in Atmos stack manifests. It will speed up the function execution and stack processing.

    For example:

    settings:
      templates:
        settings:
          gomplate:
            timeout: 5
            datasources:
              ip:
                url: "https://api.ipify.org?format=json"
                headers:
                  accept:
                    - "application/json"
    components:
      terraform:
        test:
          vars:
            tags:
              test1: '{{ (datasource "ip").ip }}'
              test2: '{{ (atmos.GomplateDatasource "ip").ip }}'
              test3: '{{ (atmos.GomplateDatasource "ip").ip }}'
              test4: '{{ (atmos.GomplateDatasource "ip").ip }}'

    In the example, we define a gomplate datasource ip and specify an external endpoint in the url parameter.

    We use the Gomplate datasource function in the tag test1, and the atmos.GomplateDatasource wrapper for the same datasource alias ip in the other tags. The atmos.GomplateDatasource wrapper will call the same external endpoint, but will cache the result and reuse it between the datasource invocations.

    When processing the component test from the above example, Atmos does the following:

    • Executes the {{ (datasource "ip").ip }} template. It calls the external endpoint using the HTTP protocol and assigns the ip attribute from the result to the tag test1

    • Executes the {{ (atmos.GomplateDatasource "ip").ip }} template. It calls the external endpoint again, caches the result in memory, and assigns the ip attribute from the result to the tag test2

    • Executes the {{ (atmos.GomplateDatasource "ip").ip }} template two more times for the tags test3 and test4. It detects that the result for the same datasource alias ip is already present in the memory cache and reuses it without calling the external endpoint two more times

    The datasource result caching makes the stack processing much faster and significantly reduces the load on external endpoints, preventing such issues as timeouts and rate limiting.

v1.83.1

27 Jun 16:12
798bdfb
Auto completion for zsh devcontainer @osterman (#639)

what

  • Add autocompletion while you type

why

  • Better DX (less typing)

demo

(screenshot)

Add docker image @osterman (#627)

what

  • Add a docker image for Atmos
  • Bundle typical dependencies
  • Multi-architecture build for ARM64 and AMD64

why

  • Make it easier to get up and running with Atmos
Introduce License Check @osterman (#638)

what

Check for approved licenses

why

Avoid accidentally introducing code that is non-permissively licensed

Fix Codespace url @osterman (#637)

what
  • Update to Codespace URL for main branch

why

  • It pointed to an older reorg branch

Reorganize Documentation For a Better Learning Journey @osterman (#612)

what

  • Rename top menu items to "Learn" and "Reference"
  • Move community to the left, remove discussions and add contributing
  • Introduce sidebar sections, so that content is further left-justified
  • Consolidate Terraform content into one section to tell a better story about how to use Terraform with Atmos

why

  • Reorganize Atmos Docs to better help developers on their learning journey


🚀 Enhancements

Don't copy unix sockets in `atmos describe affected` command @aknysh (#640)

what
  • Don't copy unix sockets when executing atmos describe affected command
  • Fix some links (left over after renaming examples/quick-start to examples/quick-start-advanced)

why

  • Sockets are not regular files, and if someone uses tools like git-fsmonitor and executes the atmos describe affected command, the following error is thrown:
open .git/fsmonitor--daemon.ipc: operation not supported on socket