Make frontend drain traffic time configurable #3934

Merged · 4 commits · Feb 10, 2023
Changes from 1 commit
2 changes: 2 additions & 0 deletions common/dynamicconfig/constants.go
@@ -202,6 +202,8 @@ const (
FrontendThrottledLogRPS = "frontend.throttledLogRPS"
// FrontendShutdownDrainDuration is the duration of traffic drain during shutdown
FrontendShutdownDrainDuration = "frontend.shutdownDrainDuration"
+// FrontendMembershipFailureDetectionDuration is the duration of membership failure detection
+FrontendMembershipFailureDetectionDuration = "frontend.membershipFailureDetectionDuration"
Member:
This is about gRPC health checks (as done by an external load balancer or similar component), not membership. I don't think ringpop uses gRPC health checks, does it?

Suggested change:
-FrontendMembershipFailureDetectionDuration = "frontend.membershipFailureDetectionDuration"
+FrontendShutdownFailHealthcheckDuration = "frontend.membershipShutdownFailHealthcheckDuration"
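
For context, the probe the comment describes — the standard gRPC health check an external load balancer runs against a backend — might look like the client sketch below. The address and wiring are hypothetical; the grpc-go health APIs are real.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/health/grpc_health_v1"
)

// probe issues one standard gRPC health check, the way an external load
// balancer would. After the frontend calls healthServer.Shutdown(), this
// starts returning NOT_SERVING and the balancer drains the node.
func probe(addr string) (grpc_health_v1.HealthCheckResponse_ServingStatus, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return grpc_health_v1.HealthCheckResponse_UNKNOWN, err
	}
	defer conn.Close()

	resp, err := grpc_health_v1.NewHealthClient(conn).Check(ctx, &grpc_health_v1.HealthCheckRequest{})
	if err != nil {
		return grpc_health_v1.HealthCheckResponse_UNKNOWN, err
	}
	return resp.Status, nil
}

func main() {
	status, err := probe("127.0.0.1:7233") // hypothetical frontend address
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("health status:", status)
}
```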

Contributor Author:

No, it doesn't.

Member:

Although, maybe it also makes sense to add a call to membershipMonitor.EvictSelf() at the same time we start failing health checks? That will make workers stop sending RPCs to this frontend, I think.
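
A rough sketch of that suggestion: fail health checks and leave the ring in the same step. The Monitor interface below is a stand-in for Temporal's membership.Monitor, which exposes EvictSelf; the wiring is illustrative, not this PR's code.

```go
package frontend

import (
	"log"

	"google.golang.org/grpc/health"
)

// Monitor is a stand-in for Temporal's membership.Monitor; the real
// interface exposes EvictSelf to leave the ringpop ring early.
type Monitor interface {
	EvictSelf() error
}

// beginDrain fails gRPC health checks (so external load balancers stop
// routing here) and evicts this node from the membership ring (so internal
// callers such as workers stop picking this frontend) in the same step.
func beginDrain(healthServer *health.Server, monitor Monitor) {
	healthServer.Shutdown() // all registered services now report NOT_SERVING
	if err := monitor.EvictSelf(); err != nil {
		log.Printf("failed to evict self from membership ring: %v", err)
	}
}
```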

// FrontendMaxBadBinaries is the max number of bad binaries in namespace config
FrontendMaxBadBinaries = "frontend.maxBadBinaries"
// SendRawWorkflowHistory is whether to enable raw history retrieving
7 changes: 5 additions & 2 deletions service/frontend/service.go
@@ -85,6 +85,7 @@ type Config struct {
WorkerBuildIdSizeLimit dynamicconfig.IntPropertyFn
DisallowQuery dynamicconfig.BoolPropertyFnWithNamespaceFilter
ShutdownDrainDuration dynamicconfig.DurationPropertyFn
+MembershipFailureDetectionDuration dynamicconfig.DurationPropertyFn

MaxBadBinaries dynamicconfig.IntPropertyFnWithNamespaceFilter

@@ -207,6 +208,7 @@ func NewConfig(dc *dynamicconfig.Collection, numHistoryShards int32, enableReadF
BlobSizeLimitWarn: dc.GetIntPropertyFilteredByNamespace(dynamicconfig.BlobSizeLimitWarn, 256*1024),
ThrottledLogRPS: dc.GetIntProperty(dynamicconfig.FrontendThrottledLogRPS, 20),
ShutdownDrainDuration: dc.GetDurationProperty(dynamicconfig.FrontendShutdownDrainDuration, 0*time.Second),
+MembershipFailureDetectionDuration: dc.GetDurationProperty(dynamicconfig.FrontendMembershipFailureDetectionDuration, 10*time.Second),
EnableNamespaceNotActiveAutoForwarding: dc.GetBoolPropertyFnWithNamespaceFilter(dynamicconfig.EnableNamespaceNotActiveAutoForwarding, true),
SearchAttributesNumberOfKeysLimit: dc.GetIntPropertyFilteredByNamespace(dynamicconfig.SearchAttributesNumberOfKeysLimit, 100),
SearchAttributesSizeOfValueLimit: dc.GetIntPropertyFilteredByNamespace(dynamicconfig.SearchAttributesSizeOfValueLimit, 2*1024),
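
Note that GetDurationProperty returns a function rather than a value: the property is re-evaluated on each call, which is what makes the drain and failure-detection times tunable at runtime without a restart. A simplified sketch of that pattern (a map-backed stand-in, not Temporal's actual Collection):

```go
package dcexample

import "time"

// DurationPropertyFn mirrors dynamicconfig.DurationPropertyFn: callers hold
// the function and invoke it at use time, picking up the latest value.
type DurationPropertyFn func() time.Duration

// getDurationProperty returns a property function backed by a simple map;
// the real dynamicconfig.Collection reads from a pluggable config client.
func getDurationProperty(store map[string]time.Duration, key string, defaultValue time.Duration) DurationPropertyFn {
	return func() time.Duration {
		if v, ok := store[key]; ok {
			return v
		}
		return defaultValue
	}
}
```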
@@ -334,18 +336,19 @@ func (s *Service) Stop() {

// initiate graceful shutdown:
// 1. Fail rpc health check, this will cause client side load balancer to stop forwarding requests to this node
-// 2. wait for 10 seconds failure detection time
+// 2. wait for failure detection time
// 3. stop taking new requests by returning InternalServiceError
// 4. Wait for X second
// 5. Stop everything forcefully and return

requestDrainTime := util.Max(time.Second, s.config.ShutdownDrainDuration())
+failureDetectionTime := util.Max(0, s.config.MembershipFailureDetectionDuration())

logger.Info("ShutdownHandler: Updating gRPC health status to ShuttingDown")
s.healthServer.Shutdown()

logger.Info("ShutdownHandler: Waiting for others to discover I am unhealthy")
-time.Sleep(10 * time.Second)
+time.Sleep(failureDetectionTime)

s.handler.Stop()
s.operatorHandler.Stop()
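
Putting the sequence together: step 1 relies on grpc-go's standard health service, and the new setting controls the step-2 wait. A minimal server-side sketch — the surrounding wiring is illustrative; the health registration, Shutdown, and GracefulStop APIs are grpc-go's:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	"google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	grpcServer := grpc.NewServer()
	healthServer := health.NewServer()
	grpc_health_v1.RegisterHealthServer(grpcServer, healthServer)

	// ... register services and serve traffic ...

	// Step 1: flip every registered service to NOT_SERVING so external
	// load balancers stop forwarding new requests to this node.
	healthServer.Shutdown()

	// Step 2: give health probes time to observe the failure before we
	// start rejecting requests; this wait is what the PR makes
	// configurable (default 10s in the diff above).
	time.Sleep(10 * time.Second)

	// Steps 3-5: stop handlers, drain in-flight requests, then stop hard.
	grpcServer.GracefulStop()
}
```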