efficiency improvements in receive call flow #294

Open
jeffallen opened this issue Jan 11, 2018 · 1 comment
@jeffallen (Contributor):

It should be possible to (on average) receive frames with two system calls per frame (size and msg), no further copies, and no allocations.

In package network, file tcp.go, Receive and receiveRaw could be improved to (re)use a []byte which is stored in the TCPConn. This would reduce garbage created per frame.

When the length is known, the []byte should be stretched if size <= cap, or reallocated.
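A minimal sketch of that stretch-or-realloc step (the grow helper and the idea of storing the buffer on the TCPConn are assumptions for illustration, not the real tcp.go API):

```go
package main

import "fmt"

// grow returns a slice of length size, reusing buf's backing array
// when its capacity allows, and allocating a fresh one otherwise.
// In tcp.go this buffer would be a field stored on the TCPConn.
func grow(buf []byte, size int) []byte {
	if cap(buf) >= size {
		return buf[:size] // stretch: no allocation
	}
	return make([]byte, size) // realloc: capacity too small
}

func main() {
	b := make([]byte, 0, 64)
	b = grow(b, 32) // reuses the 64-byte backing array
	fmt.Println(len(b), cap(b)) // 32 64
	b = grow(b, 128) // forces a fresh allocation
	fmt.Println(len(b), cap(b)) // 128 128
}
```

On the steady state (frames smaller than the largest seen so far) the stretch branch runs and no garbage is created.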

The read loop should be reading directly into the per-connection []byte. It should try to minimise the number of calls to Read (it already does a good job of this).

The []byte returned by receiveRaw would be a slice on the per-connection buffer not including the size. Then Unmarshal should stop using bytes.Buffer, since it doesn't need it. Instead, it can copy 16 bytes into a UUID, and then pass a slice onwards to protobuf (which is already careful to work on the buffer it was given efficiently).
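The bytes.Buffer-free split might be sketched as below. splitFrame is a hypothetical helper, and the 16-byte type ID at the front of the frame is taken from the description above; the real Unmarshal in package network differs in detail:

```go
package main

import "fmt"

// splitFrame separates the leading 16-byte message-type ID from the
// protobuf body without an intermediate bytes.Buffer. The only copy
// is the 16 ID bytes; the body stays a zero-copy slice of the input.
func splitFrame(frame []byte) (id [16]byte, body []byte, ok bool) {
	if len(frame) < 16 {
		return id, nil, false
	}
	copy(id[:], frame[:16]) // copy 16 bytes into the UUID
	return id, frame[16:], true // pass this slice on to protobuf
}

func main() {
	frame := make([]byte, 20)
	frame[0] = 0xAB
	frame[19] = 0xCD
	id, body, ok := splitFrame(frame)
	fmt.Println(ok, id[0], len(body), body[3]) // true 171 4 205
}
```

Because body aliases the per-connection buffer, protobuf decodes in place and no per-frame garbage is produced by this step.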

The per-connection []byte can be checked before Receive returns, and freed if it has grown too large. (Letting rarely used connections hold a lot of RAM is not good for scalability.)
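That check is a one-liner; the threshold below is an assumed value that would need tuning:

```go
package main

import "fmt"

// maxIdleBuf is an assumed cap; the real limit would be tuned.
const maxIdleBuf = 1 << 16

// release drops the per-connection buffer if it has grown too large,
// so an idle connection does not pin a big backing array between
// frames; otherwise it keeps the buffer (emptied) for reuse.
func release(buf []byte) []byte {
	if cap(buf) > maxIdleBuf {
		return nil // let the GC reclaim the big backing array
	}
	return buf[:0] // keep the small buffer for the next frame
}

func main() {
	small := make([]byte, 1024)
	big := make([]byte, 1<<20)
	fmt.Println(cap(release(small)), cap(release(big))) // 1024 0
}
```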

Of course, like all performance changes, this needs to be measured before and after. It should be pretty easy to measure that allocs > 1 before and allocs == 1 after, which is already a huge win because it reduces GC pressure.
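One way to get those alloc counts is testing.AllocsPerRun (or go test -benchmem on a real benchmark). The toy comparison below contrasts a fresh-buffer path with a reused-buffer path; the real measurement would drive Receive over a loopback connection:

```go
package main

import (
	"fmt"
	"testing"
)

var sink []byte // package-level sink so the allocation escapes to the heap

// measure returns allocations per frame for a fresh-buffer path
// versus a reused-buffer path.
func measure() (perFresh, perReuse float64) {
	reused := make([]byte, 0, 1024)
	perFresh = testing.AllocsPerRun(100, func() {
		sink = make([]byte, 512) // one allocation per frame
	})
	perReuse = testing.AllocsPerRun(100, func() {
		sink = reused[:512] // stretch the stored buffer: no allocation
	})
	return perFresh, perReuse
}

func main() {
	fresh, reuse := measure()
	fmt.Println(fresh, reuse) // 1 0
}
```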

@jeffallen (Contributor, Author):

At a bare minimum, it is unclear why receiveRaw is copying bytes into a bytes.Buffer. The code change in #532 (comment) removes this. It needs more testing, but it should be checked in as a first step.
