MGMT-16108: introduce the Resource server #12
Conversation
/cc @jhernand
Force-pushed from 264bc97 to 166a477
"github.com/openshift-kni/oran-o2ims/internal/exit" | ||
"github.com/openshift-kni/oran-o2ims/internal/logging" | ||
"github.com/openshift-kni/oran-o2ims/internal/service" | ||
searchmodel "github.com/stolostron/search-v2-api/graph/model" |
Is there any chance that we can do without importing this? I ask because that means adding all the dependencies of that project, not just the model. I had bad experiences with that, due to dependency conflicts. Can we maybe copy the required files?
yeah, makes sense. done.
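For reference, a minimal sketch of what a local copy of the required types could look like; the field names below are assumptions based on the search GraphQL schema, not the actual files that were copied:

// Hypothetical local copy of the search input types, so that the whole
// search-v2-api module does not have to be imported.
package searchapi

// SearchFilter matches items whose property has one of the given values.
type SearchFilter struct {
    Property string    `json:"property"`
    Values   []*string `json:"values"`
}

// SearchInput describes a single search request.
type SearchInput struct {
    Keywords     []*string       `json:"keywords,omitempty"`
    Filters      []*SearchFilter `json:"filters,omitempty"`
    RelatedKinds []*string       `json:"relatedKinds,omitempty"`
    Limit        *int            `json:"limit,omitempty"`
}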
} `json:"filters"`
Relatedkinds []string `json:"relatedKinds"`
} `json:"input"`
}
This is very similar to the SearchInput struct in the searchmodel package imported above. What is the reason for the additional struct?
Just a leftover... removed.
logger *slog.Logger,
cloudID, backendURL, backendToken string,
graphqlQuery string, graphqlVars *searchmodel.SearchInput) ResourcePoolFetcher {
return &resourcePoolFetcher{
Can we use the "builder" approach to create this object? Otherwise, when this already long list of parameters grows, it will be hard to understand.
sure, used the builder pattern instead.
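For reference, a minimal sketch of such a builder, assuming the parameters shown above and the existing resourcePoolFetcher struct and ResourcePoolFetcher interface; the setter and field names are illustrative, not necessarily those used in the final code:

import (
    "errors"
    "log/slog"
)

// ResourcePoolFetcherBuilder collects the parameters needed to create the
// fetcher, so callers set them one by one instead of passing a long
// positional argument list.
type ResourcePoolFetcherBuilder struct {
    logger     *slog.Logger
    cloudID    string
    backendURL string
}

func NewResourcePoolFetcherBuilder() *ResourcePoolFetcherBuilder {
    return &ResourcePoolFetcherBuilder{}
}

func (b *ResourcePoolFetcherBuilder) SetLogger(value *slog.Logger) *ResourcePoolFetcherBuilder {
    b.logger = value
    return b
}

func (b *ResourcePoolFetcherBuilder) SetCloudID(value string) *ResourcePoolFetcherBuilder {
    b.cloudID = value
    return b
}

func (b *ResourcePoolFetcherBuilder) SetBackendURL(value string) *ResourcePoolFetcherBuilder {
    b.backendURL = value
    return b
}

// ... analogous fields and setters for the backend token and the GraphQL
// query and variables ...

// Build validates the configuration and creates the fetcher.
func (b *ResourcePoolFetcherBuilder) Build() (result ResourcePoolFetcher, err error) {
    if b.logger == nil {
        err = errors.New("logger is mandatory")
        return
    }
    result = &resourcePoolFetcher{
        logger:     b.logger,
        cloudID:    b.cloudID,
        backendURL: b.backendURL,
        // ... remaining fields ...
    }
    return
}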
go.mod (outdated)
@@ -4,20 +4,25 @@ go 1.21

require (
    github.com/PaesslerAG/jsonpath v0.1.1
    github.com/golang/mock v1.6.0
We are already using go.uber.org/mock, which is a maintained fork of github.com/golang/mock. Please make sure to use it.
oh, missed it! changed.
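For reference, the switch is mostly a matter of the module path and of running the fork's mockgen; a hedged sketch (the version shown is just an example):

// In go.mod, depend on the maintained fork instead of github.com/golang/mock:
//
//     require go.uber.org/mock v0.4.0
//
// and generate the mocks with its mockgen command, for example:
//
//go:generate go run go.uber.org/mock/mockgen -source=resource_object_mapper.go -package=service -destination=resource_object_mapper_mock.go
//
// Test code then imports "go.uber.org/mock/gomock" instead of
// "github.com/golang/mock/gomock"; the gomock API is compatible.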
// ResourceServerCommand contains the data and logic needed to run the `start
// resource-server` command.
type ResourceServerCommand struct {
}
I see that you are going to use the logger in some methods of this type. In that case please add a logger *slog.Logger field here, initialize it in the run method, and avoid passing it as a parameter to methods. That is the reason for the existence of this type: to be able to do such things.
right, changed.
ctx := cmd.Context()

// Get the dependencies from the context:
logger := internal.LoggerFromContext(ctx)
What I mentioned above about the logger means doing something like this:
c.logger = internal.LoggerFromContext(ctx)
done
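Putting the two comments above together, a minimal sketch of the suggested shape, assuming a cobra-based command and the existing internal.LoggerFromContext helper; the run signature and the rest of the method are elided guesses:

import (
    "log/slog"

    "github.com/spf13/cobra"

    "github.com/openshift-kni/oran-o2ims/internal"
)

// ResourceServerCommand contains the data and logic needed to run the `start
// resource-server` command.
type ResourceServerCommand struct {
    logger *slog.Logger
}

// run executes the command.
func (c *ResourceServerCommand) run(cmd *cobra.Command, argv []string) error {
    // Take the dependencies from the context once; other methods of the type
    // can then use c.logger directly instead of receiving it as a parameter.
    ctx := cmd.Context()
    c.logger = internal.LoggerFromContext(ctx)

    // ... rest of the server start-up elided ...
    return nil
}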
func (r *resourcePoolFetcher) getSearchResults(ctx context.Context) (result []any, err error) {
tr := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
Using InsecureSkipVerify: true is not acceptable for production environments. It is OK for now, but we need to address it. Please open a ticket explaining that we need to have command line options to configure the trusted certificate authorities.
Used transportWrapper as in DeploymentManagerCollectionHandler:
InsecureSkipVerify: true,
Will open a ticket to handle it in CLI.
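For the follow-up ticket, the eventual fix would presumably load a CA bundle given on the command line and verify the backend certificate against it instead of skipping verification; a rough sketch (the --backend-ca-file flag name and the wiring are assumptions):

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "os"
)

// makeTLSConfig builds a TLS configuration that trusts the CAs in the given
// PEM file. The file would come from a hypothetical --backend-ca-file option.
func makeTLSConfig(caFile string) (*tls.Config, error) {
    if caFile == "" {
        // No file given: fall back to the system trust store.
        return &tls.Config{}, nil
    }
    caPEM, err := os.ReadFile(caFile)
    if err != nil {
        return nil, fmt.Errorf("failed to read CA file '%s': %w", caFile, err)
    }
    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM(caPEM) {
        return nil, fmt.Errorf("no certificates found in CA file '%s'", caFile)
    }
    return &tls.Config{RootCAs: pool}, nil
}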
// Set GraphQL query and vars
request := graphql.NewRequest(r.graphqlQuery)
request.Var("input", *r.graphqlVars)
What is the advantage of using the graphql package here? Can't we do this with a plain HTTP client? My concern is that it looks like this graphql package will store the whole result set in memory before returning control. That means that we can't process the results in a streaming approach. Is there any way to do that with the graphql package? Otherwise it negates the advantages of using streams.
Mainly, the code is less clean without it. It can be removed, though.
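A plain net/http version is not much longer and keeps the response body available as a stream; a rough sketch, where the httpClient field and the exact request/response shapes of the search API are assumptions:

import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
)

// postGraphQL sends the search query with a plain HTTP client and returns the
// response body as a reader, so the caller can decode the results in a
// streaming fashion instead of materializing them first.
func (r *resourcePoolFetcher) postGraphQL(ctx context.Context) (io.ReadCloser, error) {
    // A GraphQL request is just a JSON document with the query text and its
    // variables:
    body, err := json.Marshal(map[string]any{
        "query":     r.graphqlQuery,
        "variables": map[string]any{"input": r.graphqlVars},
    })
    if err != nil {
        return nil, err
    }
    request, err := http.NewRequestWithContext(ctx, http.MethodPost, r.backendURL, bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    request.Header.Set("Content-Type", "application/json")
    request.Header.Set("Authorization", "Bearer "+r.backendToken)
    response, err := r.httpClient.Do(request)
    if err != nil {
        return nil, err
    }
    if response.StatusCode != http.StatusOK {
        response.Body.Close()
        return nil, fmt.Errorf("search query failed with status %d", response.StatusCode)
    }
    return response.Body, nil
}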
// Fetch resource pools
resourcePools, err := NewResourcePoolFetcher(
h.logger, h.cloudID, h.backendURL, h.backendToken, h.graphqlQuery, h.graphqlVars,
).FetchItems(ctx)
Is the resource pool fetcher stateless? Can we create one instance and reuse it for all calls to fetchObject?
Yeah, it is only called once per single-item request. Moved it to the Get func to make it clearer.
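With the builder sketched earlier, that looks roughly like the following; the handler, request and response type names are assumptions:

// Inside the handler's Get method the fetcher only lives for the duration of
// the call.
func (h *ResourcePoolHandler) Get(ctx context.Context, request *GetRequest) (response *GetResponse, err error) {
    fetcher, err := NewResourcePoolFetcherBuilder().
        SetLogger(h.logger).
        SetCloudID(h.cloudID).
        SetBackendURL(h.backendURL).
        // ... token, query and variables set the same way ...
        Build()
    if err != nil {
        return
    }
    resourcePools, err := fetcher.FetchItems(ctx)
    if err != nil {
        return
    }
    // ... locate the requested pool in resourcePools and build the response ...
    _ = resourcePools
    return
}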
)

//go:generate mockgen -source=resource_object_mapper.go -package=service -destination=resource_object_mapper_mock.go
type ResourceObjectMapper interface {
There is already a Mapper type here:
oran-o2ims/internal/streaming/stream.go
Lines 145 to 146 in ccd39ba
// Mapper is a function that transforms one object into another.
type Mapper[F, T any] func(context.Context, F) (T, error)
It is a simple function, so I think we can use that directly instead of creating a new interface and a mock for it.
If using the generic type is uncomfortable we can add an alias in the data package:
type Mapper = stream.Mapper[data.Object, data.Object]
As these are indeed simple functions, I just moved them to the fetcher.
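For reference, a plain method with the matching signature already satisfies the generic Mapper type, so no interface or generated mock is needed; a small sketch using the suggested alias (the streaming import alias and the compile-time check are illustrative):

// Alias suggested above, so the generic parameters do not have to be spelled
// out at every use:
type Mapper = streaming.Mapper[data.Object, data.Object]

// A fetcher method with the matching signature can be passed directly
// wherever a Mapper is expected:
func (r *resourcePoolFetcher) mapNodeItem(ctx context.Context,
    from data.Object) (to data.Object, err error) {
    // ... build the O2 Resource object from the Node search result ...
    return
}

// Compile-time check that the method value has the Mapper signature:
var _ Mapper = (&resourcePoolFetcher{}).mapNodeItem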
// Map Cluster to an O2 ResourcePool object.
func (r *resourcePoolFetcher) mapClusterItem(ctx context.Context,
from data.Object) (to data.Object, err error) {
to, err = NewResourcePoolObjectMapper(r.cloudID, r.resources).MapItem(ctx, from)
Can we create the resource pool mapper only once and reuse it?
Moved the logic.
// Map a Node to an O2 Resource object.
func (rr *resourcePoolFetcher) mapNodeItem(ctx context.Context,
from data.Object) (to data.Object, err error) {
to, err = NewResourceObjectMapper().MapItem(ctx, from)
Can we create the resource object mapper only once and reuse it?
Moved the logic.
}

// Converts a specified data stream of O2 Resources (Nodes) to an array.
func (r *resourcePoolObjectMapper) getResourceArray(ctx context.Context, items data.Stream, resourcePoolID string) (result []map[string]any) {
In general we should try to avoid things that return arrays, because they negate the value of using streams. But if we do then we can combine the Collect and Select methods. For example:
selector := func(ctx context.Context, item data.Object) (result bool, err error) {
    result = item["resourcePoolID"] == resourcePoolID
    return
}
result = streams.Collect(streams.Select(items, selector))
Changed.
Added an initial implementation for the resource server. The server is serving the 'resourcePools' endpoint which fetches data from the search query API and converts the results into O2 objects.

In a nutshell, the mapping is done as follows:
* Node object from search API --> O2 Resource object
* Cluster object from search API --> O2 ResourcePool object

As some objects/properties don't have direct mapping, for now this PR does *not* include the following:
* Handling for ResourceType objects
* Mapping for these properties:
  * ResourcePool.location
  * ResourcePool.globalLocationID
  * ResourcePool.description
  * Resource.resourceTypeID
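To make the described mapping concrete, a rough sketch of the Cluster → ResourcePool direction; the search-result keys and the shape of data.Object are assumptions, while the O2 attribute names follow the description above:

// mapClusterItem maps a Cluster object returned by the search API to an O2
// ResourcePool object. Location, description and globalLocationID are not
// mapped yet, as noted above.
func (r *resourcePoolFetcher) mapClusterItem(ctx context.Context,
    from data.Object) (to data.Object, err error) {
    to = data.Object{
        // Assumed: the cluster name identifies the pool in the search results.
        "resourcePoolID": from["name"],
        "name":           from["name"],
        "oCloudID":       r.cloudID,
    }
    return
}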
Force-pushed from 166a477 to 3aac8f9
Looks good to me, thanks @danielerez!