
server: allow 0s grpc-timeout header values, as java is known to be able to send them #8439

Merged
dfawley merged 1 commit into grpc:master from dfawley:0timeout on Jul 9, 2025

Conversation

@dfawley (Member) commented Jul 8, 2025

grpc/grpc-java#12201 was recently created to prevent this, but older versions will still do it. This reverts to our old behavior of allowing 0-second timeouts (pre-#8290).

I would propose that we do not revert the change in http_util, and instead allow the RPC to time out instantly rather than throwing an internal error.
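
For illustration, a minimal sketch of what accepting a 0s timeout means (an assumption about the intended behavior, not the actual http2_server.go change): a "grpc-timeout: 0S" header parses to a zero duration, which yields a stream context whose deadline has already passed, so the RPC fails with DeadlineExceeded instead of being rejected with an internal error.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// A "grpc-timeout: 0S" header parses to a zero duration.
	timeout := 0 * time.Second

	// The stream context is created with a deadline that has already passed.
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// The handler (or the server itself) observes the expired context, so the
	// RPC fails with DeadlineExceeded rather than an internal error.
	<-ctx.Done()
	fmt.Println(ctx.Err()) // context deadline exceeded
}
```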

cc @ejona86

RELEASE NOTES:

  • server: allow 0s grpc-timeout header values, which older gRPC-Java versions could send

@dfawley dfawley added this to the 1.75 Release milestone Jul 8, 2025
@dfawley dfawley requested a review from arjan-bal July 8, 2025 21:56
@dfawley dfawley added the Type: Behavior Change Behavior changes not categorized as bugs label Jul 8, 2025

@codecov bot commented Jul 8, 2025

Codecov Report

Attention: Patch coverage is 0% with 11 lines in your changes missing coverage. Please review.

Project coverage is 82.34%. Comparing base (a21e374) to head (3059358).
Report is 2 commits behind head on master.

Files with missing lines             Patch %   Lines
internal/transport/http2_server.go   0.00%     10 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #8439      +/-   ##
==========================================
+ Coverage   82.26%   82.34%   +0.07%     
==========================================
  Files         414      414              
  Lines       40410    40418       +8     
==========================================
+ Hits        33243    33281      +38     
+ Misses       5796     5774      -22     
+ Partials     1371     1363       -8     
Files with missing lines             Coverage Δ
internal/transport/http_util.go      94.44% <ø> (-0.07%) ⬇️
internal/transport/http2_server.go   90.14% <0.00%> (-0.64%) ⬇️

... and 18 files with indirect coverage changes


@arjan-bal arjan-bal assigned dfawley and unassigned arjan-bal Jul 9, 2025
@dfawley dfawley merged commit 12f9d9c into grpc:master Jul 9, 2025
17 checks passed
dfawley added a commit to dfawley/grpc-go that referenced this pull request Jul 9, 2025
dfawley added a commit that referenced this pull request Jul 9, 2025
dimpavloff pushed a commit to dimpavloff/grpc-go that referenced this pull request Aug 22, 2025

@colega commented Oct 9, 2025

Please note that this change altered the behavior of ServerInHandle (the grpc.InTapHandle option).

Previously, when ServerInHandle returned no error, the request would be processed, so any resource changes performed in ServerInHandle (like an in-flight request counter) could later be undone through a stats handler.

Now, however, there is no longer such a guarantee.

I wonder if it would be possible to move this early return before the inTapHandle call?
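
For context, a rough sketch of the tap handler + stats handler pattern being described (names and structure are illustrative, not the commenter's actual code): the tap handler increments an in-flight counter, and the stats handler undoes it when the RPC ends.

```go
package taplimiter

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc"
	"google.golang.org/grpc/stats"
	"google.golang.org/grpc/tap"
)

type limiter struct {
	inflight atomic.Int64
}

// tapHandle runs before the handler and bumps the in-flight counter.
func (l *limiter) tapHandle(ctx context.Context, _ *tap.Info) (context.Context, error) {
	l.inflight.Add(1)
	return ctx, nil
}

// HandleRPC undoes the increment once the RPC finishes.
func (l *limiter) HandleRPC(_ context.Context, s stats.RPCStats) {
	if _, ok := s.(*stats.End); ok {
		l.inflight.Add(-1)
	}
}

func (l *limiter) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context { return ctx }
func (l *limiter) HandleConn(context.Context, stats.ConnStats)                     {}
func (l *limiter) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

// NewServer wires both pieces into a grpc.Server.
func NewServer() *grpc.Server {
	l := &limiter{}
	return grpc.NewServer(grpc.InTapHandle(l.tapHandle), grpc.StatsHandler(l))
}
```

If the server returns early after the tap handler runs but before the RPC reaches the stats handler, the counter never gets decremented, which is the gap being raised here.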

@dfawley (Member, Author) commented Oct 13, 2025

The reason I put this after the inTapHandle call is that the tap handler is able to change the context to one that has already expired.

I'm also curious about your use case. Can you not track in-flight requests entirely within the stats handler (increment on Begin)? Or if you need to restrict parallelism, note that you can also do that via MaxConcurrentStreams, which will let the client know about your limit as well.
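
For reference, a minimal sketch of the stats-handler-only approach suggested above (illustrative, not code from the grpc-go repo): increment on *stats.Begin and decrement on *stats.End, with no tap handler involved.

```go
package statslimiter

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc/stats"
)

// handler tracks in-flight RPCs entirely within the stats handler.
type handler struct{ inflight atomic.Int64 }

func (h *handler) HandleRPC(_ context.Context, s stats.RPCStats) {
	switch s.(type) {
	case *stats.Begin:
		h.inflight.Add(1)
	case *stats.End:
		h.inflight.Add(-1)
	}
}

func (h *handler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context   { return ctx }
func (h *handler) HandleConn(context.Context, stats.ConnStats)                       {}
func (h *handler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
```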

@colega commented Dec 10, 2025

Sorry for the delayed response, this has been sitting on my TODO list for a while.

We do want to restrict parallelism, but we want a global limit, not a per-client one. We want to restrict the total number of connections that a given server may handle, and MaxConcurrentStreams doesn't seem to help with that.

@dfawley (Member, Author) commented Dec 10, 2025

Do you want to limit the total number of streams on a server across all connections? Or just the total number of connections? The latter can be accomplished via https://pkg.go.dev/golang.org/x/net/netutil#LimitListener.

You could also use interceptors instead of the tap/stats combo you are using now. That is the approach that would translate to other gRPC language implementations, which don't have anything like our tap handler. There is a small amount of extra CPU overhead with interceptors vs. the tap handler, but it shouldn't be too bad.
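
A rough sketch of that interceptor-based alternative (the limit value, package name, and helper names are illustrative assumptions): a unary interceptor rejects RPCs with RESOURCE_EXHAUSTED once a global in-flight limit is reached; a matching stream interceptor would cover streaming RPCs.

```go
package interceptlimit

import (
	"context"

	"golang.org/x/sync/semaphore"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// maxInflight is an illustrative global limit across all connections.
const maxInflight = 1000

var sem = semaphore.NewWeighted(maxInflight)

// limitUnary rejects new RPCs once maxInflight unary RPCs are in flight.
func limitUnary(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (any, error) {
	if !sem.TryAcquire(1) {
		return nil, status.Error(codes.ResourceExhausted, "too many in-flight requests")
	}
	defer sem.Release(1)
	return handler(ctx, req)
}

// NewServer installs the limiting interceptor.
func NewServer() *grpc.Server {
	return grpc.NewServer(grpc.ChainUnaryInterceptor(limitUnary))
}
```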

But also, unless your clients are looking at the error and doing some kind of custom backoff logic, I'm not sure the general approach is ideal. Using MaxConcurrentStreams and a LimitListener instead would help you protect your server and also advertise the stream limit to clients, so they would naturally get pushback when they try to create too many streams. This approach also prevents one client from stealing all of the available streams from the other clients.
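
A minimal sketch of that MaxConcurrentStreams plus LimitListener combination (the address and the connection/stream limits are illustrative):

```go
package main

import (
	"log"
	"net"

	"golang.org/x/net/netutil"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	// Cap the number of simultaneous connections accepted by the listener.
	lis = netutil.LimitListener(lis, 100)

	// Cap the number of concurrent streams per connection; the limit is
	// advertised to clients via HTTP/2 SETTINGS, so they get natural pushback.
	srv := grpc.NewServer(grpc.MaxConcurrentStreams(50))

	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```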

