
SpanKind support for badger #6376

Open
wants to merge 26 commits into base: main

Conversation

Manik2708
Contributor

Which problem is this PR solving?

Description of the changes

  • Queries with span kind will now be supported for Badger

How was this change tested?

  • Writing unit tests

Checklist

@Manik2708 Manik2708 requested a review from a team as a code owner December 17, 2024 07:43
@Manik2708 Manik2708 requested a review from jkowall December 17, 2024 07:43
@dosubot dosubot bot added the enhancement and storage/badger labels Dec 17, 2024
@Manik2708
Contributor Author

Manik2708 commented Dec 17, 2024

I have changed the structure of the cache, which leads to these concerns:

  1. Will a 3D map be a viable option for production?
  2. The cache will never be able to retrieve operations of old data! When kind is not sent by the user, all operations related to new data will be sent. I have a probable solution for this: we might have to introduce a boolean which, when true, loads the cache from old data (the old index key) and marks all of those spans as kind UNSPECIFIED.
  3. To maintain consistency, we must take the service name from the newly created index, but extracting the service name from serviceName+operationName+kind is the challenge. The solution I have thought of is reserving the last 7 places of the new index for len(serviceName)+len(operationName)+kind. The issue is that we then have to limit the length of serviceName and operationName to 999. This way we can also get rid of the c.services map. Removing this map is optional and a matter of discussion, because it means deciding between storage and iteration: removing the map leads to extra iterations in GetServices. I also thought of a solution for this:
data map[string]serviceEntry

// Here this struct can be defined as (serviceEntry is an illustrative name):
type serviceEntry struct {
	expiryTime uint64
	operations map[trace.SpanKind]map[string]uint64
}
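For contrast, the "3D map" from point 1 would have roughly this shape (a sketch; the field name is illustrative, not the actual cache code):

// operations[serviceName][spanKind][operationName] = expiry time
operations map[string]map[trace.SpanKind]map[string]uint64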

Once the correct approach is discussed, I will handle some more edge cases and make the e2e tests pass (making GetOperationsMissingSpanKind: false).


codecov bot commented Dec 17, 2024

Codecov Report

Attention: Patch coverage is 92.05021% with 19 lines in your changes missing coverage. Please review.

Project coverage is 96.18%. Comparing base (27af7b0) to head (36e8517).

Files with missing lines | Patch % | Lines
plugin/storage/badger/spanstore/writer.go | 90.00% | 5 Missing and 2 partials ⚠️
plugin/storage/badger/spanstore/reader.go | 91.04% | 4 Missing and 2 partials ⚠️
plugin/storage/badger/spanstore/kind.go | 80.00% | 4 Missing and 1 partial ⚠️
plugin/storage/badger/spanstore/cache.go | 98.52% | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6376      +/-   ##
==========================================
- Coverage   96.29%   96.18%   -0.11%     
==========================================
  Files         372      373       +1     
  Lines       21282    21464     +182     
==========================================
+ Hits        20493    20645     +152     
- Misses        603      625      +22     
- Partials      186      194       +8     
Flag Coverage Δ
badger_v1 11.15% <43.51%> (+0.48%) ⬆️
badger_v2 2.73% <0.00%> (-0.05%) ⬇️
cassandra-4.x-v1-manual 16.24% <0.00%> (-0.28%) ⬇️
cassandra-4.x-v2-auto 2.67% <0.00%> (-0.05%) ⬇️
cassandra-4.x-v2-manual 2.67% <0.00%> (-0.05%) ⬇️
cassandra-5.x-v1-manual 16.24% <0.00%> (-0.28%) ⬇️
cassandra-5.x-v2-auto 2.67% <0.00%> (-0.05%) ⬇️
cassandra-5.x-v2-manual 2.67% <0.00%> (-0.05%) ⬇️
elasticsearch-6.x-v1 19.83% <0.00%> (-0.35%) ⬇️
elasticsearch-7.x-v1 19.92% <0.00%> (-0.33%) ⬇️
elasticsearch-8.x-v1 20.07% <0.00%> (-0.34%) ⬇️
elasticsearch-8.x-v2 2.73% <0.00%> (-0.05%) ⬇️
grpc_v1 12.10% <0.00%> (-0.22%) ⬇️
grpc_v2 8.93% <0.00%> (-0.17%) ⬇️
kafka-3.x-v1 10.18% <0.00%> (-0.18%) ⬇️
kafka-3.x-v2 2.73% <0.00%> (-0.05%) ⬇️
memory_v2 2.73% <0.00%> (-0.05%) ⬇️
opensearch-1.x-v1 19.96% <0.00%> (-0.34%) ⬇️
opensearch-2.x-v1 19.96% <0.00%> (-0.33%) ⬇️
opensearch-2.x-v2 2.73% <0.00%> (-0.04%) ⬇️
tailsampling-processor 0.50% <0.00%> (-0.01%) ⬇️
unittests 95.07% <92.05%> (-0.10%) ⬇️

Flags with carried forward coverage won't be shown.


@Manik2708
Contributor Author

@yurishkuro Please review the approach and problems!

@Manik2708
Contributor Author

@yurishkuro I have added more changes which reduce the iterations in prefill to 1, but this limits the serviceName to a length of 999. Please review!

@Manik2708
Contributor Author

Manik2708 commented Dec 19, 2024

I have an idea for handling old data without using the migration script! We can store the old data in two other data structures in the cache (without kind). But then the only question that arises is: what to return when no span kind is given by the user? Operations of new data of all kinds, operations of old data (kind marked as unspecified), or an addition of both?

@yurishkuro yurishkuro added the changelog:new-feature label Dec 20, 2024
@yurishkuro
Member

What to return when no span kind is given by the user?

Then we should return all operations, regardless of the span kind.

@Manik2708
Contributor Author

What to return when no span kind is given by the user?

Then we should return all operations, regardless of the span kind.

That means also including all spans of old data (whose kind is not in the cache)?

@Manik2708 Manik2708 marked this pull request as draft December 22, 2024 14:04
@Manik2708 Manik2708 marked this pull request as ready for review December 22, 2024 19:16
@dosubot dosubot bot added the area/storage label Dec 22, 2024
@Manik2708
Contributor Author

My current approach is leading to errors in the unit tests of factory_test.go. Badger is throwing this error repeatedly:

runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1700, retrying
badger 2024/12/23 01:12:11 ERROR: error flushing memtable to disk: error while creating table err: while creating table: /tmp/badger116881967/000002.sst error: open /tmp/badger116881967/000002.sst: no such file or directory
unable to open: /tmp/badger116881967/000002.sst
github.com/dgraph-io/ristretto/v2/z.OpenMmapFile

This is probably because f.Close is called before prefill completes, which implies that creating the new index for old data is slow. Hence I think we have only one way left, if we want to skip even auto-migration, and that is using this function:

// getSpanKind probes the tag index for each of the 6 possible span kinds and
// returns the first kind for which an index entry exists for this span.
func getSpanKind(txn *badger.Txn, service string, timestampAndTraceId string) model.SpanKind {
	for i := 0; i < 6; i++ {
		// Reconstruct the tag-index key: tagIndexKey<service+span.kind+kind><startTime><traceID>
		value := service + model.SpanKindKey + model.SpanKind(i).String()
		valueBytes := []byte(value)
		operationKey := make([]byte, 1+len(valueBytes)+8+sizeOfTraceID)
		operationKey[0] = tagIndexKey
		copy(operationKey[1:], valueBytes)
		copy(operationKey[1+len(valueBytes):], timestampAndTraceId)
		// A successful Get means this span was indexed with kind i.
		if _, err := txn.Get(operationKey); err == nil {
			return model.SpanKind(i)
		}
	}
	return model.SpanKindUnspecified
}

The only problem is that during prefilling, 6*NumberOfOperations Get queries will be called. Please review this approach @yurishkuro; I think we need to discuss whether to auto-create the new index, or to skip creating any new index and use the function given above.

@Manik2708 Manik2708 requested a review from yurishkuro December 23, 2024 19:28
@Manik2708 Manik2708 marked this pull request as draft December 26, 2024 02:07
@Manik2708 Manik2708 marked this pull request as ready for review December 26, 2024 05:22
@Manik2708
Contributor Author

@yurishkuro I finally got rid of migration and now I think it's ready for review! Please ignore my previous comments; the current commit has no linkage to them.

Signed-off-by: Manik2708 <[email protected]>
Signed-off-by: Manik2708 <[email protected]>
Signed-off-by: Manik2708 <[email protected]>
Member

@yurishkuro yurishkuro left a comment


Can you revisit the tests by using the API methods of the cache instead of manually manipulating its internal data structures? Tests should validate the behavior that a user of the cache expects. The only time it's acceptable to go into internal details is when some error conditions cannot be tested purely through the external API.

@Manik2708
Contributor Author

Can you revisit the tests by using the API methods of the cache instead of manually manipulating its internal data structures? Tests should validate the behavior that a user of the cache expects. The only time it's acceptable to go into internal details is when some error conditions cannot be tested purely through the external API.

I have fixed all the tests except those for Update and Prefill; even those do not manipulate the data structures, they are only used to check whether the cache stores entries via update or prefill.

@Manik2708 Manik2708 requested a review from yurishkuro December 30, 2024 08:19
@Manik2708
Contributor Author

@yurishkuro Can you please review?

@Manik2708 Manik2708 marked this pull request as draft January 2, 2025 09:31
Signed-off-by: Manik2708 <[email protected]>
@Manik2708 Manik2708 marked this pull request as ready for review January 2, 2025 16:09
@Manik2708 Manik2708 requested a review from yurishkuro January 2, 2025 16:12
@yurishkuro
Member

Q: do we have to maintain two indices forever, or is this only a side-effect of having to be backwards compatible with the existing data?

For example, one way I could see this working is:

  • we only write the new index with kind
  • when reading, we do a dual lookup, first in the new index then in the old (if the old exists)
  • we have a config option to turn off the dual-reading behavior. The motivation here is that people rarely keep tracing data for very long, so in 4 months (4 releases) the old index is likely going to be TTLed out anyway.
    • In the first release of the feature this option could be defaulted to ON
    • Then a couple releases down the road we can default it to OFF
    • Then 2 more releases down the road we deprecate the option and remove the old index reading code.
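A minimal, self-contained sketch of the dual-lookup behavior described in this list (type and function names are illustrative assumptions, not the actual Jaeger storage API):

// Operation is a simplified stand-in for the storage API's operation type.
type Operation struct {
	Name     string
	SpanKind string
}

// getOperations consults the new kind-aware index first and only falls
// back to the old index while dual-reading is still enabled.
func getOperations(newIdx, oldIdx func(service string) []Operation, service string, readOldIndex bool) []Operation {
	if ops := newIdx(service); len(ops) > 0 {
		return ops // new index entries already carry the span kind
	}
	if !readOldIndex {
		return nil
	}
	ops := oldIdx(service)
	for i := range ops {
		ops[i].SpanKind = "unspecified" // old index stores no kind information
	}
	return ops
}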

@Manik2708
Contributor Author

Q: do we have to maintain two indices forever, or is this only a side-effect of having to be backwards compatible with the existing data?

For example, one way I could see this working is:

  • we only write the new index with kind

  • when reading, we do a dual lookup, first in the new index then in the old (if the old exists)

  • we have a config option to turn off the dual-reading behavior. The motivation here is that people rarely keep tracing data for very long, so in 4 months (4 releases) the old index is likely going to be TTLed out anyway.

    • In the first release of the feature this option could be defaulted to ON
    • Then a couple releases down the road we can default it to OFF
    • Then 2 more releases down the road we deprecate the option and remove the old index reading code.

The key serviceName+Kind+OperationName+Time+TraceId can't be used in the reader to find trace IDs, because while finding trace IDs we might not know the kind. We can avoid dual lookups while prefilling via your suggested roadmap. This key schema was also discussed in the issue, and it was asked about in the comment #1922 (comment). If we want to use this key schema permanently, then we should employ a different key: serviceName+OperationName+kind+Time+TraceId. While scanning the indexes we have to create this key from the service and operation, so when a TraceQueryParameter has only a service name and operation name, we have to append 6 keys while scanning so as to fetch all trace IDs. Please have a look at this:

serviceName := "service"
operationName := "operation"
// So in the scanning we have to create the following 6 keys:
key1 := "serviceoperation0"
key2 := "serviceoperation1"
// ...

Then finding the trace IDs would also work fine. So either we have to create an extra index or do this extra scanning!
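A minimal sketch of that extra scanning, assuming the kind is encoded as a single trailing digit after serviceName+operationName (the helper name is illustrative):

// kindScanPrefixes builds one scan prefix per possible span kind
// (0 through 5, including unspecified) so that a service+operation
// query can cover every kind bucket.
func kindScanPrefixes(service, operation string) []string {
	prefixes := make([]string, 0, 6)
	for kind := 0; kind < 6; kind++ {
		prefixes = append(prefixes, service+operation+string(rune('0'+kind)))
	}
	return prefixes
}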

@yurishkuro
Member

yurishkuro commented Jan 2, 2025

serviceName+OperationName+kind+Time+TraceId

This index doesn't make sense to me. It cannot effectively support a query that only includes service+operation; you must always know the kind to get to the desired time range.

Wouldn't it make more sense to append the kind after the Time? Then we have the following two queries:

  1. user does not specify kind - we scan everything within the given time range
  2. user does specify kind - we still scan everything within the given time range and discard entries with the wrong kind. As you mentioned earlier, the probability of having different kinds for the same service+operation is quite low, so even if it does happen, in the worst case we'd have to scan 5x more entries (kind can have 5 different values), but that worst case will almost never happen because in most cases it will be exactly 1 value.
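A minimal sketch of that filter step, assuming the kind is stored as a single byte between the 8-byte timestamp and the 16-byte trace ID (a layout still being negotiated below; the function name is illustrative):

// matchesKind reports whether an index key passes the span-kind filter.
// keepKind == 0 (unspecified) means the user gave no kind: keep everything.
func matchesKind(key []byte, keepKind byte) bool {
	if keepKind == 0 {
		return true
	}
	kindPos := len(key) - 16 - 1 // 16 bytes of trace ID, preceded by the kind byte
	return key[kindPos] == keepKind
}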

@Manik2708
Contributor Author

serviceName+OperationName+kind+Time+TraceId

This index doesn't make sense to me. It cannot effectively support a query that only includes service+operation; you must always know the kind to get to the desired time range.

Wouldn't it make more sense to append the kind after the Time? Then we have the following two queries:

  1. user does not specify kind - we scan everything within the given time range
  2. user does specify kind - we still scan everything within the given time range and discard entries with the wrong kind. As you mentioned earlier, the probability of having different kinds for the same service+operation is quite low, so even if it does happen, in the worst case we'd have to scan 5x more entries (kind can have 5 different values), but that worst case will almost never happen because in most cases it will be exactly 1 value.

We can try this, but then we need to remember that it will break these conventions:

  1. The last 16 bytes of the key are the trace ID
  2. The 8 bytes before that are the timestamp

Only this key will break these conventions. Also, this key need not be present when tags are there, so we need to prepare two separate logics for scanning and parsing.

@yurishkuro
Member

Why is it "breaking" if kind is introduced after Time, but not "breaking" when it's before Time?

Whatever we do the changes must be backwards compatible.

@Manik2708
Contributor Author

Manik2708 commented Jan 2, 2025

Why is it "breaking" if kind is introduced after Time, but not "breaking" when it's before Time?

Whatever we do the changes must be backwards compatible.

Please see this:

func createIndexKey(indexPrefixKey byte, value []byte, startTime uint64, traceID model.TraceID) []byte {
	// KEY: indexKey<indexValue><startTime><traceId> (traceId is last 16 bytes of the key)
	key := make([]byte, 1+len(value)+8+sizeOfTraceID)
	key[0] = (indexPrefixKey & indexKeyRange) | spanKeyPrefix
	pos := len(value) + 1
	copy(key[1:pos], value)
	binary.BigEndian.PutUint64(key[pos:], startTime)
	pos += 8 // sizeOfTraceID / 2
	binary.BigEndian.PutUint64(key[pos:], traceID.High)
	pos += 8 // sizeOfTraceID / 2
	binary.BigEndian.PutUint64(key[pos:], traceID.Low)
	return key
}

This is how we currently create a key: when service+operation+kind is used, it is passed as the value here, but appending the kind after the time will break this.
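For illustration, a hypothetical variant that places the kind byte between the timestamp and the trace ID (the layout under discussion; the name and signature are assumptions, reusing the identifiers from the function above, not actual Jaeger code):

// KEY: indexKey<serviceName+operationName><startTime><kind><traceId>
func createKindIndexKey(indexPrefixKey byte, value []byte, startTime uint64, kind byte, traceID model.TraceID) []byte {
	key := make([]byte, 1+len(value)+8+1+sizeOfTraceID)
	key[0] = (indexPrefixKey & indexKeyRange) | spanKeyPrefix
	pos := 1 + copy(key[1:], value)
	binary.BigEndian.PutUint64(key[pos:], startTime)
	pos += 8
	key[pos] = kind // the kind byte sits after the timestamp, before the trace ID
	pos++
	binary.BigEndian.PutUint64(key[pos:], traceID.High)
	binary.BigEndian.PutUint64(key[pos+8:], traceID.Low)
	return key
}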

@yurishkuro
Member

Why does it matter? We're creating an index with a different layout; we don't have to be restricted by how that specific function is implemented, especially since we are introducing a different lookup process (it seems all other indices do a direct lookup by the prefix up to the timestamp and then scan/parse).

@Manik2708
Contributor Author

Why does it matter? We're creating an index with a different layout; we don't have to be restricted by how that specific function is implemented, especially since we are introducing a different lookup process (it seems all other indices do a direct lookup by the prefix up to the timestamp and then scan/parse).

Ok, will give it a try and get back to you! Thanks for your time!

@Manik2708
Contributor Author

@yurishkuro I have tried to take care of all the edge cases, please review!

Successfully merging this pull request may close these issues.

Badger storage plugin: query service to support spanKind when retrieve operations for a given service.