Add bigquery subscription type support #476
base: master
Conversation
Signed-off-by: Viktor Danyliuk <[email protected]>
Force-pushed from 1d6055f to e03ae44
Hey @POD666, thanks for opening the PR!
Did you mean testing with the unit tests or manual testing in GCP? Can you give me more details to help you?
@Feggah I failed to run e2e locally. Should I check it manually in GCP somehow?
Got it, I think testing manually is a good idea. You can test the following cases:
You can also check if the import is working:
I have tried to install it in my GCP project but it fails with:
not sure what it means. Here are my steps:
HEALTHY=False
I also tried setting this image in the provider.packages list of values.yaml for Crossplane. It feels like I'm using the wrong build, or it needs to be published somewhere other than my Docker registry. Are there any simple instructions I could follow to install my local build? Maybe I missed them.
I find it easier to just run the code locally when I'm developing. You can build, push and install the controller image, but it's slower than just running locally. Try this:
Now you can interact normally with your local cluster (creating, editing and deleting manifests) and the code will be running in your terminal. If you prefer to build the image and install it, try this:
Well, that was pretty challenging. I had to create a new cluster because it's impossible to install two different gcp-providers 😅 After all, I found a bug, so it was worth testing manually. I had to rename BigQueryConfig -> BigqueryConfig because of undesired underscores in the CamelCase to snake_case conversion.
to avoid undesired underscores in CamelCase to snake_case conversion
Signed-off-by: Viktor Danyliuk <[email protected]>
Force-pushed from ac93b49 to 2c67e5a
Seems to be ready to merge. Is anything else required?
Thanks for your effort, @POD666!
I added some small comments in the code.
In the test file, I think you don't need to create new Test functions, nor a new function to create the Parameters or Subscription struct. You can just add the BigQuery fields to the existing functions and the tests will cover the rest (the same way all the other fields are being tested).
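As a rough sketch of that suggestion (the helper and type names here are illustrative, not the repo's actual test code), the existing parameters builder would simply gain the new BigQuery fields so the current table-driven cases exercise them:

func params(m ...func(*v1alpha1.SubscriptionParameters)) *v1alpha1.SubscriptionParameters {
	p := &v1alpha1.SubscriptionParameters{
		// Existing fields stay as they are; only the BigQuery config is added,
		// so the existing test cases cover it without new Test functions.
		BigqueryConfig: &v1alpha1.BigqueryConfig{
			Table:             "my-project.my_dataset.my_table",
			UseTopicSchema:    true,
			WriteMetadata:     true,
			DropUnknownFields: false,
		},
	}
	for _, f := range m {
		f(p)
	}
	return p
}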
type BigqueryConfig struct {
	// Bigquery table to deliver messages to.
	Table string `json:"table,omitempty"`

	// When enabled, the topic schema will be used when writing to BigQuery. Else,
	// writes the message bytes to a column called data in BigQuery.
	UseTopicSchema bool `json:"useTopicSchema,omitempty"`

	// When enabled, the metadata of each message is written to additional columns in
	// the BigQuery table. Else, the metadata is not written to the BigQuery table.
	// https://cloud.google.com/pubsub/docs/bigquery?hl=ru#write-metadata
	WriteMetadata bool `json:"writeMetadata,omitempty"`

	// When enabled along with the "Use topic schema" option, any field that is present in
	// the topic schema but not in the BigQuery schema will be dropped. Else, messages with extra fields are not written
	// and remain in the subscription backlog.
	DropUnknownFields bool `json:"dropUnknownFields,omitempty"`
}
All these fields are optional, right? So I think we should add the // +optional marker and change all types to be pointers.
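A sketch of what that would look like (comments abbreviated; a nil pointer then means "not set in the manifest", which is distinct from an explicit false):

type BigqueryConfig struct {
	// Bigquery table to deliver messages to.
	Table string `json:"table,omitempty"`

	// When enabled, the topic schema will be used when writing to BigQuery;
	// otherwise the message bytes are written to a column called data.
	// +optional
	UseTopicSchema *bool `json:"useTopicSchema,omitempty"`

	// When enabled, the metadata of each message is written to additional
	// columns in the BigQuery table.
	// +optional
	WriteMetadata *bool `json:"writeMetadata,omitempty"`

	// When enabled along with UseTopicSchema, fields present in the topic
	// schema but missing from the BigQuery schema are dropped.
	// +optional
	DropUnknownFields *bool `json:"dropUnknownFields,omitempty"`
}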
Agreed, I will check.
It works, but it changes existing values to "false". I mean, I had this:
bigqueryConfig:
  table: "..."
  useTopicSchema: true
  writeMetadata: true
  dropUnknownFields: false
changed to
bigqueryConfig:
  table: "..."
  useTopicSchema: false
and all 3 flags became disabled.
Is that expected? I expected writeMetadata to stay enabled, but maybe I'm wrong.
I didn't get it. Do you mean that you specified true in the managed resource manifest and in the interface it was changed to Disabled (false)?
Not sure. The existing parameters define PushConfig, and it should not be used together with BigQueryConfig. Am I missing anything?
So from 3 comments:
Signed-off-by: Viktor Danyliuk <[email protected]>
Agreed. Looks good to me.
I saw that you added the
Hmm, that seems to make sense. There are other places where pointers are not used for optional fields: provider-gcp/apis/pubsub/v1alpha1/subscription_types.go, lines 80 to 95 (at 8446ff3).
You might need to add a linter to prevent it. I have tried to use pointers, and it forces me to use
Is using pointers here just a memory optimization, or is it required?
That's a pattern that we try to follow for all providers. There are some fields that don't follow this specification, but that could just be a mistake from the reviewer of that PR. Could you please fix it? It is pretty straightforward 🙏🏼
The error that you are getting is about import grouping. Check whether your imports follow the example below; you can't have package groups mixed together.
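For illustration, the expected grouping looks roughly like this (package names are only examples, and the exact groups enforced by the repo's tooling may differ): standard library first, then external dependencies, then this project's own packages, with a blank line between groups.

import (
	"context"
	"fmt"

	"github.com/google/go-cmp/cmp"
	"k8s.io/apimachinery/pkg/types"

	"github.com/crossplane-contrib/provider-gcp/apis/pubsub/v1alpha1"
)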
Signed-off-by: Viktor Danyliuk <[email protected]>
Ah, that was just import ordering! I thought it was some kind of cyclic import. My VS Code was sorting import statements by itself, so I was sure it was good.
LGTM. Just added a quick suggestion that would be helpful for users (I needed to search myself to get this working).
// Bigquery table to deliver messages to.
Table string `json:"table,omitempty"`
To make the Table field required (when bigqueryConfig is specified) you need to remove the omitempty JSON tag.
Suggested change:
// Bigquery table to deliver messages to.
// Must be in the format {projectId}.{datasetId}.{tableId}
Table string `json:"table"`
Actually, I was testing a little more and I got these strange behaviors:
Regarding the fields DropUnknownFields, UseTopicSchema and WriteMetadata not being LateInitialized: I believe that's because the if condition will only be entered if bigqueryConfig is nil. If you define any value inside it, the rest won't be LateInitialized (and we should LateInitialize everything that is not defined in the manifest but is defined in the external API). Roughly, field-level late-initialization could look like the sketch below.
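A minimal sketch of field-level late-initialization, assuming pointer fields in the spec and illustrative names for the external API struct (not necessarily the repo's actual helpers):

func lateInitializeBigqueryConfig(spec *v1alpha1.BigqueryConfig, observed *pubsub.BigQueryConfig) *v1alpha1.BigqueryConfig {
	if observed == nil {
		return spec
	}
	if spec == nil {
		spec = &v1alpha1.BigqueryConfig{}
	}
	// Fill only the fields the user left unset, instead of skipping the whole
	// block whenever bigqueryConfig is non-nil in the manifest.
	if spec.Table == "" {
		spec.Table = observed.Table
	}
	if spec.UseTopicSchema == nil {
		spec.UseTopicSchema = &observed.UseTopicSchema
	}
	if spec.WriteMetadata == nil {
		spec.WriteMetadata = &observed.WriteMetadata
	}
	if spec.DropUnknownFields == nil {
		spec.DropUnknownFields = &observed.DropUnknownFields
	}
	return spec
}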
Regarding the PushConfig field being LateInitialized, that's an error, right? It shouldn't exist if the subscription type is BigQuery. Maybe it's the cause of the Update loop; I suggest you start investigating there.
You can run your code locally with make run and some fmt.Println() calls to understand what is going wrong and which conditions aren't behaving as expected.
Or you can watch the container logs as well. See the documentation about how to run the provider container in debug mode.
Signed-off-by: Viktor Danyliuk <[email protected]>
I have only a little context around the expected behavior, but an Update loop of 7 iterations feels wrong. I guess that is something that should be covered by TestLateInitialize, so I extended its test cases; see my latest commit. Anyway, I copied the PushConfig code when implementing BigqueryConfig, so the issue might not be related to my PR (it might require refactoring the whole LateInitialize function).
Description of your changes
Trying to add support for bigquery subscriptions. See feature request: #469
Fixes #469
I have:
Run make reviewable test to ensure this PR is ready for review.

How has this code been tested
✅ make reviewable test
🔴 make e2e fails with timeouts; it seems to expect a Linux build, but I'm on OSX.
🔴 build/run make -j4 fails with unknown flag: --load here: https://github.com/crossplane-contrib/provider-gcp/blob/master/cluster/images/provider-gcp/Makefile#L16
I'm not sure how to proceed with testing.