CDKTF convert Errors on array on Nutanix Provider/Virtual Machine Resource, Synthing manually made Python cdktf leaves array fields blank #3170
Comments
I'll leave it for the rest of the team to comment on the technical details, but FWIW this looks like a duplicate of #3111 (though this one has a lot more detail and should help the team with debugging; thank you!).
We are willing to work with whomever we need to in order to get this resolved; we use CDKTF extensively and this is a blocker for us. Thank you @xiehan
@DanielMSchmidt could we schedule a time to meet up? I'm available whenever you are.
This has been sitting quite a while. @xiehan, can we get a status update by chance?
I tried to reproduce this and for me it worked fine with

```python
#!/usr/bin/env python
from constructs import Construct
from cdktf import App, TerraformStack
from cdktf import Token
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.nutanix.data_nutanix_cluster import DataNutanixCluster
from imports.nutanix.data_nutanix_clusters import DataNutanixClusters
from imports.nutanix.data_nutanix_image import DataNutanixImage
from imports.nutanix.data_nutanix_subnet import DataNutanixSubnet
from imports.nutanix.provider import NutanixProvider
from imports.nutanix.virtual_machine import VirtualMachine


class MyStack(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        NutanixProvider(self, "nutanix",
            endpoint="10.6.100.20",
            insecure=True,
            password="",
            port=Token.as_string(9440),
            username="terraform_admin"
        )
        cluster1 = DataNutanixCluster(self, "cluster1",
            cluster_id="0005f342-2700-****-467a-*********"
        )
        DataNutanixClusters(self, "clusters")
        s2019 = DataNutanixImage(self, "s2019",
            image_id="e"
        )
        vlan618 = DataNutanixSubnet(self, "VLAN_618",
            subnet_id="cd38c43c-fa7a-****-9bf0-***********"
        )
        VirtualMachine(self, "ggtest01",
            cluster_uuid=Token.as_string(cluster1.cluster_id),
            description="demo Frontend Web Server",
            disk_list=[{
                "data_source_reference": {
                    "kind": "image",
                    "uuid": Token.as_string(s2019.image_id)
                },
                "device_properties": {
                    "device_type": "DISK",
                    "disk_address": {
                        "adapter_type": "SCSI",
                        "device_index": Token.as_string(0)
                    }
                },
                "storage_config": {
                    "storage_container_reference": [{
                        "kind": "storage_container",
                        "uuid": "17a6c666-db20-4179-9a7c-*********"
                    }]
                }
            }],
            memory_size_mib=16000,
            name="test01",
            nic_list=[{
                "subnet_uuid": Token.as_string(vlan618.subnet_id)
            }],
            num_sockets=1,
            num_vcpus_per_socket=2
        )


app = App()
MyStack(app, "tmp.axnj99KS6U")
app.synth()
```

And I got this synthed JSON, which to me also looks correct:

```json
{
  "//": {
    "metadata": {
      "backend": "local",
      "stackName": "tmp.axnj99KS6U",
      "version": "0.20.1"
    },
    "outputs": {
    }
  },
  "data": {
    "nutanix_cluster": {
      "cluster1": {
        "//": {
          "metadata": {
            "path": "tmp.axnj99KS6U/cluster1",
            "uniqueId": "cluster1"
          }
        },
        "cluster_id": "0005f342-2700-****-467a-*********"
      }
    },
    "nutanix_clusters": {
      "clusters": {
        "//": {
          "metadata": {
            "path": "tmp.axnj99KS6U/clusters",
            "uniqueId": "clusters"
          }
        }
      }
    },
    "nutanix_image": {
      "s2019": {
        "//": {
          "metadata": {
            "path": "tmp.axnj99KS6U/s2019",
            "uniqueId": "s2019"
          }
        },
        "image_id": "e"
      }
    },
    "nutanix_subnet": {
      "VLAN_618": {
        "//": {
          "metadata": {
            "path": "tmp.axnj99KS6U/VLAN_618",
            "uniqueId": "VLAN_618"
          }
        },
        "subnet_id": "cd38c43c-fa7a-****-9bf0-***********"
      }
    }
  },
  "provider": {
    "nutanix": [
      {
        "endpoint": "10.6.100.20",
        "insecure": true,
        "password": "",
        "port": 9440,
        "username": "terraform_admin"
      }
    ]
  },
  "resource": {
    "nutanix_virtual_machine": {
      "ggtest01": {
        "//": {
          "metadata": {
            "path": "tmp.axnj99KS6U/ggtest01",
            "uniqueId": "ggtest01"
          }
        },
        "cluster_uuid": "${data.nutanix_cluster.cluster1.cluster_id}",
        "description": "demo Frontend Web Server",
        "disk_list": [
          {
          }
        ],
        "memory_size_mib": 16000,
        "name": "test01",
        "nic_list": [
          {
          }
        ],
        "num_sockets": 1,
        "num_vcpus_per_socket": 2
      }
    }
  },
  "terraform": {
    "backend": {
      "local": {
        "path": "/private/var/folders/m4/673s3vwn1_g7c72bmvq521g00000gn/T/tmp.axnj99KS6U/terraform.tmp.axnj99KS6U.tfstate"
      }
    },
    "required_providers": {
      "nutanix": {
        "source": "nutanix/nutanix",
        "version": "1.9.5"
      }
    }
  }
}
```

Could you try again with the current cdktf version?
@DanielMSchmidt if you notice, your synthesized JSON output has an empty `disk_list` and `nic_list`. Those fields should not be empty, as you'll see when you try to run that via a plan or apply.
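For anyone trying to confirm this on their own install, here is a minimal sketch of how one might assert against the synthesized JSON, assuming the `MyStack` class from the reproduction above and cdktf's built-in `Testing` helper:

```python
import json

from cdktf import Testing

# MyStack is the stack class from the reproduction above;
# import it from wherever it is defined.
app = Testing.app()
stack = MyStack(app, "repro")

# Testing.synth returns the synthesized Terraform JSON as a string.
synthesized = json.loads(Testing.synth(stack))

vm = synthesized["resource"]["nutanix_virtual_machine"]["ggtest01"]
# On affected versions these print [{}] instead of the populated blocks.
print(json.dumps(vm["disk_list"], indent=2))
print(json.dumps(vm["nic_list"], indent=2))
assert vm["disk_list"] and vm["disk_list"][0], "disk_list synthesized empty"
```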
Looking for a status update, as it's been months. @DanielMSchmidt
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues. If you've found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Expected Behavior
TL;DR
After converting the Nutanix provider by following the provided docs, converting a VM resource from Terraform using the `cdktf convert` tool errors during conversion; it is expected to convert the provider and the `main.tf` file. Running

```sh
cat main.tf | cdktf convert --provider 'nutanix/nutanix' --language python > test.py
```

results in an array error from `provider-generator/.../resource-parser.js`.
This lines up with the behavior experienced if one converts the provider themselves and constructs a Terraform stack in Python code by hand. Using the Terraform stack in this way results in the array parameter fields on the Nutanix VM being empty in the cdktf.out.json.
The cdktf conversion process should be able to convert this Nutanix VM Terraform manifest, following the instructions from the `cdktf convert` subcommand.
When using cdktf from Python and writing the stack code by hand, synthesizing a Nutanix Terraform stack should emit all fields of a VM, with none left blank.
In these two different yet connected scenarios, we believe the array types for `disk_list` and `nic_list` are being converted incorrectly.
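One thing worth trying for the manually written stack is the typed struct classes that `cdktf get` generates next to the resource, instead of raw dicts. A sketch under that assumption; the struct class names below follow cdktf's usual naming convention for generated bindings but are hypothetical and unverified against the actual `imports.nutanix` package, and `self`, `cluster1`, `s2019`, and `vlan618` come from the reproduction stack above:

```python
# Hypothetical struct class names; verify against the generated bindings.
from imports.nutanix.virtual_machine import (
    VirtualMachine,
    VirtualMachineDiskList,
    VirtualMachineNicList,
)

VirtualMachine(self, "ggtest01",
    cluster_uuid=Token.as_string(cluster1.cluster_id),
    name="test01",
    memory_size_mib=16000,
    num_sockets=1,
    num_vcpus_per_socket=2,
    # Typed structs instead of raw dicts, so jsii knows the target type.
    disk_list=[VirtualMachineDiskList(
        data_source_reference={
            "kind": "image",
            "uuid": Token.as_string(s2019.image_id),
        },
    )],
    nic_list=[VirtualMachineNicList(
        subnet_uuid=Token.as_string(vlan618.subnet_id),
    )],
)
```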
Actual Behavior
See steps to reproduce.
Steps to Reproduce
Manual Conversion of the Provider and Running the Created Stack Code:
The manual conversion process is documented in an issue in the Nutanix GitHub space:
nutanix/terraform-provider-nutanix#624
After we contacted Nutanix, they told us to go through HashiCorp support because they do not support CDKTF themselves. Nutanix is a trusted partner according to their registry page.
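For context, the manual generation step boils down to pointing `cdktf get` at the provider in `cdktf.json`. A minimal illustrative config (not the reporter's actual file, which is not reproduced here; the version pin is an example):

```json
{
  "language": "python",
  "app": "python main.py",
  "terraformProviders": ["nutanix/nutanix@~> 1.9.0"]
}
```

With that in place, `cdktf get` generates the Python bindings under `imports/nutanix` that the stack code above imports.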
Automated Conversion Steps
Output
Note
I tested this on Node 18 through Node 20 using nvm.
Files:
cdktf.json
main.tf
Versions
cdktf debug
language: python
cdktf-cli: 0.18.0
node: v20.2.0
terraform: 1.5.6
arch: arm64
os: darwin 21.3.0
Providers
┌─────────────────┬──────────────────┬───────┬────────────┬──────────────┬─────────────────┐
│ Provider Name   │ Provider Version │ CDKTF │ Constraint │ Package Name │ Package Version │
├─────────────────┼──────────────────┼───────┼────────────┼──────────────┼─────────────────┤
│ nutanix/nutanix │ 1.9.3            │       │            │              │                 │
└─────────────────┴──────────────────┴───────┴────────────┴──────────────┴─────────────────┘
Gist
No response
Possible Solutions
We believe the struct that Nutanix has defined in their provider sets the type incorrectly, or that the `Sequence` type is not being picked up by jsii.
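A quick, standard-library way to check what jsii actually sees for those parameters is to inspect the generated constructor's annotations; a sketch, assuming only the `imports.nutanix` bindings generated earlier:

```python
import inspect

from imports.nutanix.virtual_machine import VirtualMachine

# If disk_list/nic_list are not annotated as a Sequence of the generated
# struct types, jsii has no type to map the nested dicts onto.
sig = inspect.signature(VirtualMachine.__init__)
for name in ("disk_list", "nic_list"):
    print(name, "->", sig.parameters[name].annotation)
```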
Converted code using `cdktf get`:

Workarounds

There are none: we can't convert this module and use it with the provided conversion tools, and trying to create the Python code manually results in unexpected blank `Sequence` fields.
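(One possible escape hatch, untested here, is cdktf's resource-level `add_override`, which writes raw values straight into the synthesized JSON and bypasses the struct mapping entirely; a sketch, reusing names from the reproduction stack above:)

```python
# Sketch: bypass the generated struct types via cdktf's escape hatch.
# Untested against this provider; names come from the repro stack above.
vm = VirtualMachine(self, "ggtest01",
    cluster_uuid=Token.as_string(cluster1.cluster_id),
    name="test01",
    memory_size_mib=16000,
    num_sockets=1,
    num_vcpus_per_socket=2,
)
vm.add_override("disk_list", [{
    "data_source_reference": {
        "kind": "image",
        "uuid": Token.as_string(s2019.image_id),
    },
}])
vm.add_override("nic_list", [{
    "subnet_uuid": Token.as_string(vlan618.subnet_id),
}])
```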
Anything Else?
We also tested conversion using JavaScript instead of Python, with VERY similar errors.
References
No response
Help Wanted
Community Note