
gRPC clients need to always set a deadline/timeout #1193

Closed
jpkrohling opened this issue Jun 25, 2020 · 0 comments · Fixed by #1386
Labels: bug (Something isn't working)

Comments

@jpkrohling
Member

Describe the bug
While debugging #1192, I found that the OTLP exporter's gRPC client doesn't set a timeout. Because the connection in my setup is broken, the trace is never exported. As a general rule, all blocking operations need a deadline; at a minimum, we should ensure that every exporter has a deadline/timeout.

Something like this would be sufficient for gRPC-based connections, as gRPC will cancel the RPC once the deadline expires:

	// Bound the export call so a broken connection cannot block it forever.
	ctxWithTimeout, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	request := &otlptrace.ExportTraceServiceRequest{
		ResourceSpans: pdata.TracesToOtlp(td),
	}
	// gRPC aborts the RPC with DeadlineExceeded once the deadline passes.
	if err := exporter.exportTrace(ctxWithTimeout, request); err != nil {
		return err
	}

Perhaps an even better approach would be to set a specific budget for each batch, applicable to the whole set of exporters.
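A rough sketch of that idea (illustrative only; the traceExporter type, exportTraces method, and budget parameter are hypothetical, not the collector's actual interfaces) would derive a single deadline per batch and reuse the same context for every exporter:

	// Hypothetical sketch: one deadline per batch, shared by all exporters.
	func exportBatch(ctx context.Context, exporters []traceExporter, td pdata.Traces, budget time.Duration) error {
		batchCtx, cancel := context.WithTimeout(ctx, budget)
		defer cancel()
		for _, exp := range exporters {
			// every exporter draws from the same remaining time budget
			if err := exp.exportTraces(batchCtx, td); err != nil {
				return err
			}
		}
		return nil
	}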

Steps to reproduce
See #1192

What did you expect to see?
n/a

What did you see instead?
n/a

What version did you use?
Version: 4eca960a4eb02104694324cf161ad9ec944c44c9

What config did you use?
Config: see #1192

Environment
OS: (e.g., "Ubuntu 20.04")
Compiler (if manually compiled): (e.g., "go 1.14.2")

Additional context
n/a

jpkrohling added the bug label on Jun 25, 2020
bogdandrutu added a commit to bogdandrutu/opentelemetry-collector that referenced this issue Jul 16, 2020
Changed only the OTLP exporter for the moment to use the new settings.

Timeout is enabled for all the exporters. Fixes open-telemetry#1193

There are some missing features that will be added in a follow-up PR:
1. Enforcing errors. For the moment, the Throttle error was added as a hack to keep backwards compatibility with OTLP.
2. Enabling queueing and retry for all exporters.
3. Fixing the observability metrics for the case where requests are dropped because the queue is full.
bogdandrutu added a commit that referenced this issue Jul 17, 2020
* Add support for queued retry in the exporter helper.

Changed only the OTLP exporter for the moment to use the new settings.

Timeout is enabled for all the exporters. Fixes #1193

There are some missing features that will be added in a follow-up PR:
1. Enforcing errors. For the moment, the Throttle error was added as a hack to keep backwards compatibility with OTLP.
2. Enabling queueing and retry for all exporters.
3. Fixing the observability metrics for the case where requests are dropped because the queue is full.

* First round of comments addressed
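For reference, the general timeout pattern can be sketched as follows (a simplified illustration, not the actual exporterhelper code): wrap an exporter's send function so that every export call carries a deadline.

	// Simplified sketch, not the actual exporterhelper implementation.
	// sendFunc stands in for an exporter's export call.
	type sendFunc func(ctx context.Context) error

	// withTimeout decorates a send function so each call gets its own deadline;
	// gRPC aborts the RPC with DeadlineExceeded once the deadline expires.
	func withTimeout(send sendFunc, timeout time.Duration) sendFunc {
		return func(ctx context.Context) error {
			ctxWithTimeout, cancel := context.WithTimeout(ctx, timeout)
			defer cancel()
			return send(ctxWithTimeout)
		}
	}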
Troels51 pushed a commit to Troels51/opentelemetry-collector that referenced this issue Jul 5, 2024