net/http: cannot assign requested address #16012
If I write the benchmark without concurrency:

```go
package benchhttp

import (
	"io"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
)

func Benchmark(b *testing.B) {
	data := []byte("Foobar")
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Write(data)
	}))
	defer srv.Close()
	for i := 0; i < b.N; i++ {
		resp, err := http.Get(srv.URL)
		if err != nil {
			b.Fatal(err)
		}
		io.Copy(ioutil.Discard, resp.Body)
		resp.Body.Close()
	}
}
```

It works (no crash), and the value displayed by `ss -a | wc -l` stays low.
I suspect you are exceeding the number of local socket connections permitted by your OS.
You are probably right, but I can't explain why. I performed the test again with different GOMAXPROCS values and displayed the number of network connections: most connections are in the TIME_WAIT state. Should I configure something on my system, or fix my code?
The point of a parallel benchmark is to run as many iterations of the function, in parallel, as will complete in 1 second. The benchmark will keep ramping up the number of iterations until it finds the answer. On your system, it seems that the answer is: more than the system can handle. I would suggest that you put a limit in your code on the number of simultaneous open connections. I'm going to close this issue because at this point I don't see anything to be fixed in Go.
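A minimal sketch of that suggestion (not from the original thread; the cap of 64 in-flight requests is an arbitrary illustrative value): gate the request loop inside RunParallel with a buffered-channel semaphore.

```go
package benchhttp

import (
	"io"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
)

func BenchmarkLimited(b *testing.B) {
	data := []byte("Foobar")
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Write(data)
	}))
	defer srv.Close()

	// Buffered channel used as a counting semaphore: at most 64 requests
	// are in flight at once, no matter how many goroutines RunParallel starts.
	sem := make(chan struct{}, 64)

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			sem <- struct{}{}
			resp, err := http.Get(srv.URL)
			if err != nil {
				<-sem
				b.Error(err)
				continue
			}
			io.Copy(ioutil.Discard, resp.Body)
			resp.Body.Close()
			<-sem
		}
	})
}
```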
As far as I know, there are only 8 simultaneous HTTP requests in my code. Does …
Try adding this line to your benchmark:
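The suggested line is not preserved in this extract; given the follow-up below saying that MaxIdleConnsPerHost helped, it was presumably something along these lines (the value 100 is illustrative, not quoted from the thread):

```go
// Assumed reconstruction: raise the default transport's per-host idle
// connection limit so benchmark requests reuse keep-alive connections
// instead of opening new sockets that pile up in TIME_WAIT.
http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 100
```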
It works!
ss -s should show a high number of TCP connections, many of them in TIME_WAIT. Add these 2 lines to your sysctl.conf: net.ipv4.tcp_tw_recycle = 1
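Only the first of the two sysctl lines survives in this extract; the second was presumably net.ipv4.tcp_tw_reuse = 1, the setting usually paired with tcp_tw_recycle. A sketch of the suggested sysctl.conf addition under that assumption (note the next comment reports problems with this approach, and tcp_tw_recycle has since been removed from modern Linux kernels):

```
# Presumed pair of settings; the second line is an assumption,
# since only the first is quoted in the thread.
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
```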
The sysctl way causes issues. The MaxIdleConnsPerHost setting is what helped.
http connection pooling in golang seems broken according to golang/go#16012 (comment). This works around cases where we have many parallel requests.
The idea here is for Pilosa to behave better under high query load where a node might be making many connections to the other nodes in the cluster in order to support lots of concurrent batches of SetBit queries (for example). By allowing for more idle connections and more idle connections per host, we reduce connection churn, and allow more connections to be reused rather than creating new ones and potentially having many stale sockets in the TIME_WAIT state. See golang/go#16012
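A minimal sketch of that kind of tuning (the field values are illustrative assumptions, not taken from Pilosa or from this issue): build a shared http.Client whose Transport allows more idle connections overall and per host, so parallel requests reuse keep-alive connections instead of churning through new sockets.

```go
package pooled

import (
	"net/http"
	"time"
)

// NewClient returns an http.Client tuned for many concurrent requests
// to a small set of hosts. The limits below are illustrative only.
func NewClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			// Keep a larger pool of idle (keep-alive) connections overall...
			MaxIdleConns: 256,
			// ...and per host, so bursts of requests to the same node reuse
			// sockets rather than leaving many behind in TIME_WAIT.
			MaxIdleConnsPerHost: 64,
			// Drop idle connections that have not been reused for a while.
			IdleConnTimeout: 90 * time.Second,
		},
		Timeout: 30 * time.Second,
	}
}
```

Requests then go through this shared client rather than http.DefaultClient. (MaxIdleConns and IdleConnTimeout were added in Go 1.7.)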
What version of Go are you using (go version)?
1.6.2 and tip
What operating system and processor architecture are you using (go env)?

What did you do?
Run this benchmark:
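The benchmark source is not reproduced in this extract; based on the non-concurrent variant quoted earlier in the thread and the discussion of parallel benchmarking, it presumably looked roughly like this:

```go
package benchhttp

import (
	"io"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
)

func Benchmark(b *testing.B) {
	data := []byte("Foobar")
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Write(data)
	}))
	defer srv.Close()

	// RunParallel distributes b.N iterations across GOMAXPROCS goroutines;
	// each iteration performs one GET against the local test server.
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			resp, err := http.Get(srv.URL)
			if err != nil {
				b.Error(err)
				continue
			}
			io.Copy(ioutil.Discard, resp.Body)
			resp.Body.Close()
		}
	})
}
```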
With:
go test -bench=. -benchmem -benchtime=10s
What did you expect to see?
It should work.
What did you see instead?
It takes a long time and crashes:
During the benchmark, the value displayed by watch "ss -a | wc -l" increases really quickly (to around 30-40k).