Incorrect use of low-level Index API may cause goroutine leak #958

Open
PrettyABoy opened this issue Feb 26, 2025 · 0 comments
PrettyABoy commented Feb 26, 2025

go 1.22
go-elasticsearch/v8 v8.16.0

According to the documentation:

[screenshot of the Index API documentation]

I wrote my code like this:

	client, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: config.Addresses,
		Username:  config.Username,
		Password:  config.Password,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				InsecureSkipVerify: true,
			},
		},
	})
	if err != nil {
		return err
	}
	// err is already declared above, so this must be `=`, not `:=`.
	// The response is discarded here, which (it turns out) leaks its Body.
	_, err = client.Index(index, bytes.NewReader(data))
	if err != nil {
		return err
	}

Then the memory usage began to grow, and pprof showed this goroutine profile:

(pprof) top
Showing nodes accounting for 10158, 100% of 10160 total
Dropped 111 nodes (cum <= 50)
      flat  flat%   sum%        cum   cum%
     10158   100%   100%      10158   100%  runtime.gopark
         0     0%   100%       5057 49.77%  net/http.(*persistConn).readLoop
         0     0%   100%       5057 49.77%  net/http.(*persistConn).writeLoop
         0     0%   100%      10138 99.78%  runtime.selectgo
(pprof) 

and this heap profile:

(pprof) top
Showing nodes accounting for 94.79MB, 83.48% of 113.54MB total
Dropped 26 nodes (cum <= 0.57MB)
Showing top 10 nodes out of 106
      flat  flat%   sum%        cum   cum%
   33.64MB 29.63% 29.63%    33.64MB 29.63%  bytes.growSlice
   17.57MB 15.47% 45.10%    17.57MB 15.47%  bufio.NewWriterSize (inline)
   17.57MB 15.47% 60.57%    17.57MB 15.47%  bufio.NewReaderSize (inline)
       6MB  5.29% 65.86%    51.14MB 45.04%  net/http.(*Transport).dialConn
       5MB  4.41% 70.27%        5MB  4.41%  crypto/tls.Client (inline)
       4MB  3.52% 73.79%        4MB  3.52%  runtime.malg
    3.50MB  3.08% 76.88%     3.50MB  3.08%  crypto/aes.(*aesCipherGCM).NewGCM
    2.50MB  2.20% 79.08%     2.50MB  2.20%  crypto/tls.(*Config).Clone
    2.50MB  2.20% 81.28%    10.08MB  8.88%  github.com/elastic/go-elasticsearch/v8/esapi.IndexRequest.Do
    2.50MB  2.20% 83.48%     2.50MB  2.20%  net/http.(*persistConn).roundTrip

and a lot of goroutines parked like this:

...
goroutine 676455 [select, 221 minutes]:
net/http.(*persistConn).writeLoop(0xc006a00000)
	/snap/go/10828/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 676453
	/snap/go/10828/src/net/http/transport.go:1800 +0x1585

goroutine 660868 [select, 239 minutes]:
net/http.(*persistConn).readLoop(0xc00691eb40)
	/snap/go/10828/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 660761
	/snap/go/10828/src/net/http/transport.go:1799 +0x152f
...
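
(For completeness: the profiles above come from net/http/pprof; a minimal setup, assuming port 6060 is free, looks like this. This is my sketch, not part of the original reproduction.)

	import (
		"log"
		"net/http"
		_ "net/http/pprof" // registers /debug/pprof/ handlers on the default mux
	)

	func init() {
		go func() {
			// Profiles are then available at http://localhost:6060/debug/pprof/
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()
	}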

It took me a while to confirm that this was the problem. Each request leaks two goroutines: one readLoop and one writeLoop.

I briefly looked at the source code and found that transport.Perform(req) may read from the response Body. Although I haven't had time to track down the exact cause of the leak, manually closing the Body prevents it:

	resp, err := client.Index(index, reader)
	if err != nil {
		return err
	}
	// Closing the Body releases the underlying connection and its goroutines.
	defer resp.Body.Close()
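
A fuller variant (my sketch, not something the client requires) also drains any unread bytes first, so the keep-alive connection can be reused instead of torn down, and checks the response status:

	res, err := client.Index(index, reader)
	if err != nil {
		return err
	}
	defer func() {
		// Drain the body so net/http can reuse the connection, then close it.
		io.Copy(io.Discard, res.Body)
		res.Body.Close()
	}()
	if res.IsError() {
		return fmt.Errorf("indexing failed: %s", res.Status())
	}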

By the way, I'm not sure if other APIs have the same problem.
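
For illustration, every esapi call returns an *esapi.Response, so the same close discipline presumably applies elsewhere too; a sketch with the Search API (the match_all query body is just a placeholder):

	res, err := client.Search(
		client.Search.WithIndex(index),
		client.Search.WithBody(strings.NewReader(`{"query":{"match_all":{}}}`)),
	)
	if err != nil {
		return err
	}
	defer res.Body.Close()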
