Only encode the streaming content once.
In order to calculate the contentLength, we must encode the data
first. The encoded data is written to a buffer using a
ByteArrayOutputStream implementation, and we use that to figure out
how many bytes of data are to be sent.

Previously, this data was thrown away and the content was re-encoded
when it was actually time to send the data.

Instead, we now replace the content with a ByteArrayContent which
contains the buffer we wrote to when calculating the size.

We implemented a new CachingByteArrayOutputStream so that we can access
the byte buffer directly rather than copying into a new byte array (for
memory performance).
chingor13 committed May 24, 2019
commit 331e250f242f4ca8939adffd06f5045e64398c53
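To make this concrete, here is a minimal sketch of the single-pass idea in
plain Java (illustrative only; the names below are stand-ins, not the actual
google-http-java-client internals):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class EncodeOnceSketch {

  /** Minimal stand-in for the library's streaming content abstraction. */
  interface StreamingContent {
    void writeTo(OutputStream out) throws IOException;
  }

  /**
   * Previously the content was encoded once just to measure it, the result was
   * discarded, and the content was encoded again when the request was sent.
   * Here the content is encoded once into a buffer, which is then reused for
   * both the content length and the request body.
   */
  static byte[] encodeOnce(StreamingContent content) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    content.writeTo(buffer);                // single encoding pass
    byte[] encoded = buffer.toByteArray();  // the real change avoids even this copy
    long contentLength = encoded.length;    // Content-Length comes from the buffered bytes
    // ... set the Content-Length header to contentLength and send `encoded` as the body ...
    return encoded;
  }
}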
com/google/api/client/http/HttpRequest.java
@@ -15,7 +15,7 @@
package com.google.api.client.http;

import com.google.api.client.util.Beta;
- import com.google.api.client.util.IOUtils;
+ import com.google.api.client.util.CachingByteArrayOutputStream;
import com.google.api.client.util.LoggingStreamingContent;
import com.google.api.client.util.ObjectParser;
import com.google.api.client.util.Preconditions;
@@ -932,7 +932,14 @@ public HttpResponse execute() throws IOException {
} else {
  contentEncoding = encoding.getName();
  streamingContent = new HttpEncodingStreamingContent(streamingContent, encoding);
-  contentLength = contentRetrySupported ? IOUtils.computeLength(streamingContent) : -1;
+  if (contentRetrySupported) {
+    CachingByteArrayOutputStream outputStream = new CachingByteArrayOutputStream();
Contributor (inline review comment; see the sketch after this diff):
IOUtils checks the size via streaming, and doesn't actually store the data in memory. The same is true for sending the data. A large object currently never has to be held fully in memory.

This change may cause an unexpected memory spike for users with large objects.

+    streamingContent.writeTo(outputStream);
+    contentLength = outputStream.getContentLength();
+    streamingContent = new ByteArrayContent(contentType, outputStream.getBuffer());
+  } else {
+    contentLength = -1;
+  }
}
// append content headers to log buffer
if (loggable) {
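For contrast with the reviewer's point above, here is a rough sketch of measuring
the encoded size by streaming, without holding the payload in memory (illustrative
only; this is not the actual IOUtils.computeLength implementation):

import java.io.OutputStream;

/** Counts bytes as they are written and discards the data, keeping memory flat. */
class ByteCountingSketch extends OutputStream {
  long count;

  @Override
  public void write(int b) {
    count++;  // count the byte, store nothing
  }

  @Override
  public void write(byte[] b, int off, int len) {
    count += len;  // bulk writes are counted without being buffered
  }
}

Streaming into a counter like this keeps memory usage constant regardless of payload
size, whereas buffering the encoded bytes (as this commit does) holds the entire
payload in memory, which is the trade-off the reviewer is flagging.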
com/google/api/client/util/CachingByteArrayOutputStream.java
@@ -0,0 +1,43 @@
/*
* Copyright 2019 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
* the License.
*/

package com.google.api.client.util;

import java.io.ByteArrayOutputStream;

/**
* Output stream that extends the built-in {@link ByteArrayOutputStream} to return the internal
* byte buffer rather than creating a copy.
*/
public class CachingByteArrayOutputStream extends ByteArrayOutputStream {

  /**
   * Returns the content length of the buffer.
   *
   * @return the content length of the buffer.
   */
  public int getContentLength() {
    return count;
  }

  /**
   * Returns the buffer where the byte data is stored.
   *
   * @return the buffer where the byte data is stored.
   */
  public byte[] getBuffer() {
    return buf;
  }
}
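A short usage sketch (illustrative only): getBuffer() exposes the live internal
array, which is typically larger than the number of bytes written, so callers must
pair it with getContentLength() and treat the array as read-only; toByteArray()
would return a trimmed copy, which is exactly the copy this class avoids.

import com.google.api.client.util.CachingByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class CachingBufferExample {
  public static void main(String[] args) {
    CachingByteArrayOutputStream out = new CachingByteArrayOutputStream();
    byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
    out.write(data, 0, data.length);      // in-memory write, no checked exception

    int length = out.getContentLength();  // 5: the bytes actually written
    byte[] buffer = out.getBuffer();      // live internal array; buffer.length may exceed length
    // Only buffer[0 .. length-1] is valid data; the array must not be modified.
    System.out.println(length + " of " + buffer.length + " bytes used");
  }
}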