Make frontend drain traffic time configurable #3934
Conversation
service/frontend/service.go (Outdated)

	logger.Info("ShutdownHandler: Updating gRPC health status to ShuttingDown")
	s.healthServer.Shutdown()

	logger.Info("ShutdownHandler: Waiting for others to discover I am unhealthy")
	time.Sleep(failureDetectionTime)
	time.Sleep(10 * time.Second)
Should we have another dynamic config for this? This actually seems like the one that's more dependent on the environment (the external health-check frequency). The requestDrainTime can be fixed at 5s or 10s, since we use 5s or 10s timeouts on RPCs.
+1 for having a separate knob for this.
The timeout comes from the client, which can be quite long, I think? Or maybe I misunderstood something?
Sure, will add a different config.
My understanding is that during this first sleep, any RPCs that end up here are still handled, but we expect some external system to do a health check, notice the "shutting down" response, and stop sending RPCs here. That timeout is controlled by the load balancer.
The second sleep is when we stop accepting RPCs but continue processing ones that have already come in. That one depends on how long we expect our operations to take, which we have more control over. For long-polls, we can just fail and let them get retried. For everything else, if a request takes more than 10s something is probably going wrong, so failing it seems okay. But there's no harm in making that configurable too.
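The two-phase drain described above can be sketched roughly as follows. This is an illustration, not the actual Temporal implementation: the function and parameter names (`shutdownHandler`, `failHealthCheckDuration`, `drainDuration`) are stand-ins, and the real code works against a gRPC health server rather than plain callbacks.

```go
package main

import (
	"fmt"
	"time"
)

// shutdownHandler drains traffic in two phases:
//  1. start failing gRPC health checks, then wait for external health
//     checkers (e.g. a load balancer) to notice and stop routing here;
//  2. stop accepting new RPCs, then give in-flight requests time to finish.
func shutdownHandler(
	failHealthCheck func(),
	stopAccepting func(),
	failHealthCheckDuration time.Duration, // depends on the environment's health-check frequency
	drainDuration time.Duration, // depends on expected RPC duration
) {
	// Phase 1: report unhealthy; new RPCs are still handled during this window.
	failHealthCheck()
	time.Sleep(failHealthCheckDuration)

	// Phase 2: refuse new RPCs, then let already-accepted ones drain.
	stopAccepting()
	time.Sleep(drainDuration)
}

func main() {
	shutdownHandler(
		func() { fmt.Println("health: NOT_SERVING") },
		func() { fmt.Println("listener: closed") },
		10*time.Millisecond, // stand-in for a production value like 10s
		10*time.Millisecond,
	)
	fmt.Println("shutdown complete")
}
```

The key point of the discussion is that the first duration is environment-dependent (how often the load balancer probes health), while the second is bounded by the service's own RPC timeouts.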
common/dynamicconfig/constants.go (Outdated)

@@ -202,6 +202,8 @@ const (
	FrontendThrottledLogRPS = "frontend.throttledLogRPS"
	// FrontendShutdownDrainDuration is the duration of traffic drain during shutdown
	FrontendShutdownDrainDuration = "frontend.shutdownDrainDuration"
	// FrontendMembershipFailureDetectionDuration is the duration of membership failure detection
	FrontendMembershipFailureDetectionDuration = "frontend.membershipFailureDetectionDuration"
This is about gRPC health checks (as done by an external load balancer or similar component), not membership. I don't think ringpop uses gRPC health checks, does it?

Suggested change:
-	FrontendMembershipFailureDetectionDuration = "frontend.membershipFailureDetectionDuration"
+	FrontendShutdownFailHealthcheckDuration = "frontend.membershipShutdownFailHealthcheckDuration"
No, it doesn't.
Although, maybe it also makes sense to add a call to membershipMonitor.EvictSelf() at the same time we start failing health checks? That would make workers stop sending RPCs to this frontend, I think.
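The suggestion above could look roughly like the sketch below. Only the method name `EvictSelf` comes from this conversation; the `Monitor` interface, `ringpopMonitor` type, and `beginDrain` function are hypothetical stand-ins for the real membership monitor API.

```go
package main

import "fmt"

// Monitor is a hypothetical slice of the membership monitor's API,
// containing only the call discussed in this review.
type Monitor interface {
	EvictSelf() error
}

// ringpopMonitor is a stand-in implementation for illustration.
type ringpopMonitor struct{}

func (ringpopMonitor) EvictSelf() error {
	fmt.Println("membership: evicted self from ring")
	return nil
}

// beginDrain fails gRPC health checks and evicts this node from membership,
// so both load balancers and ring-aware peers (e.g. workers) stop routing
// RPCs to this frontend.
func beginDrain(healthShutdown func(), m Monitor) error {
	healthShutdown() // health server starts reporting NOT_SERVING
	return m.EvictSelf()
}

func main() {
	_ = beginDrain(func() { fmt.Println("health: NOT_SERVING") }, ringpopMonitor{})
}
```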
service/frontend/service.go (Outdated)

@@ -207,6 +208,7 @@ func NewConfig(dc *dynamicconfig.Collection, numHistoryShards int32, enableReadF
	BlobSizeLimitWarn:                dc.GetIntPropertyFilteredByNamespace(dynamicconfig.BlobSizeLimitWarn, 256*1024),
	ThrottledLogRPS:                  dc.GetIntProperty(dynamicconfig.FrontendThrottledLogRPS, 20),
	ShutdownDrainDuration:            dc.GetDurationProperty(dynamicconfig.FrontendShutdownDrainDuration, 0*time.Second),
	ShutdownFailureDetectionDuration: dc.GetDurationProperty(dynamicconfig.FrontendShutdownFailHealthcheckDuration, 10*time.Second),
Name this field the same as the dynamic-config property too?
What changed?
Make frontend drain traffic time configurable
Why?
Make frontend drain traffic time configurable
How did you test it?
Potential risks
Is hotfix candidate?