I fetch a URL on Android and convert the downloaded data stream to a Base64 string with this code:
URL url = new URL("http://iranassistance.com/images/sos-logo.png");
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
urlConnection.setRequestMethod("GET");
//urlConnection.setDoOutput(true);
urlConnection.connect();
File SDCardRoot = Environment.getExternalStorageDirectory().getAbsoluteFile();
String filename="downloadedFile.png";
Log.i("Local filename:",""+filename);
File file = new File(SDCardRoot,filename);
if (!file.exists()) {
    file.createNewFile();
}
FileOutputStream fileOutput = new FileOutputStream(file);
InputStream inputStream = urlConnection.getInputStream();
byte[] imageBytes = new byte[urlConnection.getContentLength()];
inputStream.read(imageBytes, 0, imageBytes.length);
inputStream.close();
String base64Image = Base64.encodeToString(imageBytes, Base64.DEFAULT);
But the base64Image result is not complete and contains something like this:
......nUTJaJnb7PLyscfBMQLLiexyKSEh/o2RfctcZtc8Hr5xcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA........
The repeated 'A' characters show that something is wrong and the image is not complete! Why doesn't this work properly?
Simple:
inputStream.read(imageBytes, 0, imageBytes.length);
You assume that the above always reads all bytes in one shot.
Wrong. This method reads as many bytes as it chooses to, and that is why it returns the number of bytes it actually read. See its javadoc:
Returns: the total number of bytes read into the buffer, or -1 if there is no more data because the end of the stream has been reached.
In other words: you have to loop and accumulate those counts until you have read exactly the number of bytes you are looking for!
And that is where the A characters come from: your array is initially all 0. As explained, you are only filling part of that array, so the rest of it still contains 0s, which encode to runs of 'A' in Base64.
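For illustration, a fill-the-buffer loop based on the question's variables might look like this (just a sketch; in practice a growing buffer such as ByteArrayOutputStream, as in the accepted follow-up below, is simpler):
int contentLength = urlConnection.getContentLength();
byte[] imageBytes = new byte[contentLength];
int offset = 0;
while (offset < contentLength) {
    // read() may return fewer bytes than requested, so accumulate until the buffer is full
    int n = inputStream.read(imageBytes, offset, contentLength - offset);
    if (n == -1) {
        break; // the stream ended earlier than Content-Length promised
    }
    offset += n;
}
String base64Image = Base64.encodeToString(imageBytes, Base64.DEFAULT);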
You can use the following function to convert an image into Base64; just pass in your Bitmap:
private String encodeImage(Bitmap mphoto)
{
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    mphoto.compress(Bitmap.CompressFormat.JPEG, 100, baos);
    byte[] b = baos.toByteArray();
    String encImage = Base64.encodeToString(b, Base64.DEFAULT);
    return encImage;
}
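A possible way to use it for the question's download, assuming the response is first decoded into a Bitmap (the variable names here are illustrative); note that this re-compresses the pixels as JPEG, so the Base64 text will not match the original PNG bytes:
// hypothetical usage: decode the downloaded stream into a Bitmap, then encode it
Bitmap downloaded = BitmapFactory.decodeStream(urlConnection.getInputStream());
String base64Image = encodeImage(downloaded);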
GhostCat gave the correct answer.
I changed my code as below and it worked fine:
InputStream is = null;
try {
    URL url = new URL("http://iranassistance.com/images/sos-logo.png");
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    is = url.openStream();
    byte[] byteChunk = new byte[4096];
    int n;
    while ((n = is.read(byteChunk)) > 0) {
        baos.write(byteChunk, 0, n);
    }
    String base64Image2 = Base64.encodeToString(baos.toByteArray(), Base64.DEFAULT);
    db.UpdateImage64(base64Image2);
    productModel pd = db.GetProductById(2);
}
catch (IOException e) {
    e.printStackTrace();
}
finally {
    if (is != null) {
        try {
            is.close();
        }
        catch (IOException s) {
        }
    }
}
Related
I am trying to compress an array of bytes into another array of bytes using GZIPOutputStream (in Java).
This is my code:
@Test
public void testCompressBytes() throws IOException {
    final byte[] uncompressed = RandomStringUtils.randomAlphanumeric(100000 /* ~100 kB */).getBytes();
    // compress
    byte[] compressed;
    try (InputStream is = new ByteArrayInputStream(uncompressed);
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         OutputStream os = new GZIPOutputStream(baos)) {
        IOUtils.copy(is, os); // org.apache.commons.io
        os.flush();
        compressed = baos.toByteArray();
    }
    System.out.println("Size before compression = " + uncompressed.length + ", after = " + compressed.length);
    // decompress back
    byte[] decompressedBack;
    try (InputStream is = new GZIPInputStream(new ByteArrayInputStream(compressed));
         ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
        IOUtils.copy(is, baos); // EXCEPTION THROWN HERE
        baos.flush();
        decompressedBack = baos.toByteArray();
    }
    assertArrayEquals(uncompressed, decompressedBack);
}
And this is the output I'm getting:
Size before compression = 100000, after = 63920
java.io.EOFException: Unexpected end of ZLIB input stream
What could I be doing wrong?
You need to call GZIPOutputStream::close before calling ByteArrayOutputStream::toByteArray, so that GZIPOutputStream flushes its remaining compressed data and writes the GZIP trailer.
In your current code you are calling ByteArrayOutputStream::toByteArray before GZIPOutputStream::close (try-with-resources only closes it when the block ends), which is why it doesn't work.
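As a sketch of that fix (reusing the uncompressed array from the test), nesting the try-with-resources blocks ensures the GZIP stream is closed before toByteArray is called:
byte[] compressed;
try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
    try (OutputStream os = new GZIPOutputStream(baos)) {
        os.write(uncompressed);          // or IOUtils.copy(is, os)
    }                                    // closing the GZIP stream here writes the trailer
    compressed = baos.toByteArray();     // safe: the GZIP stream is already closed
}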
Thanks, everybody!
Although calling GZIPOutputStream::finish() before ByteArrayOutputStream::toByteArray() seems to do the trick, I believe it's better to completely close the GZIP stream first, which in turn forces us to keep ByteArrayOutputStream outside the try-with-resources clause.
So, my reworked compression part now looks like this:
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
try (InputStream is = new ByteArrayInputStream(uncompressed);
     GZIPOutputStream gzos = new GZIPOutputStream(baos)) {
    IOUtils.copy(is, gzos);
} catch (final IOException e) {
    throw new RuntimeException(e);
}
IOUtils.closeQuietly(baos);
final byte[] compressed = baos.toByteArray();
I need to convert an image chosen from the gallery into a Base64 string.
Then I pass the Base64 string as a parameter in an API request.
There is only one problem: when I use NetBeans it works, but when I use Android Studio it doesn't. I found that the problem is the Base64 string output. I don't know why, with the exact same image, the output is different.
Maybe the problem happens because I have to read the image file with the same exact method...?
That's my code in NetBeans (working):
InputStream inputStream = new FileInputStream("testImage.jpg");
byte[] bytes;
byte[] buffer = new byte[8192];
int bytesRead;
ByteArrayOutputStream output = new ByteArrayOutputStream();
try {
    while ((bytesRead = inputStream.read(buffer)) != -1) {
        output.write(buffer, 0, bytesRead);
    }
} catch (IOException e) {
    e.printStackTrace();
}
bytes = output.toByteArray();
String encodedFile = Base64.getEncoder().encodeToString(bytes);
And that's the code in Android Studio:
Bitmap bitmap = ((BitmapDrawable) image.getDrawable()).getBitmap();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] imageData = baos.toByteArray();
InputStream inputStream = getContentResolver().openInputStream(imageUri);
byte[] buffer_new = new byte[8192];
int bytesRead;
ByteArrayOutputStream output = new ByteArrayOutputStream();
try {
    while ((bytesRead = inputStream.read(buffer_new)) != -1) {
        output.write(buffer_new, 0, bytesRead);
    }
} catch (IOException e) {
    e.printStackTrace();
}
byte[] bytes = output.toByteArray();
String encodedImage = Base64.encodeToString(bytes, Base64.DEFAULT);
Log.v("encodedImage", encodedImage);
The output strings are the same only for the first few characters; after that they differ. With the Android Studio encoded string, I get this error when I try to use the API:
BAD_ARGUMENTS:<key>
Error while parsing some arguments. This error may be caused by illegal type or length of argument.
What should I use to get the same base64 string?
P.S. In NetBeans the image is a file in the same folder as the project; in Android Studio the user loads a picture from the gallery.
I can't use ImageIO.read() because of my own restrictions. I can only load bytes after a GET request, and I need to save these bytes to a file as an image. But it seems to me that some extra data is also loaded, which a browser usually filters out (maybe response headers). So I get an array of raw bytes that I can't even open as an image.
What should I do with these bytes?
Example:
byte[] buf = ContentLoader.loadBytes(new URL("http://images.visitcanberra.com.au/images/canberra_hero_image.jpg"));
try {
    FileOutputStream fileOutputStream = new FileOutputStream(new File("D:\\image.jpg"));
    fileOutputStream.write(buf);
    fileOutputStream.flush();
} catch (IOException e) {
    e.printStackTrace();
}
loadBytes() method:
public static byte[] loadBytes(URL url) {
    ByteArrayOutputStream boutArray = new ByteArrayOutputStream();
    try {
        URLConnection connection = url.openConnection();
        BufferedInputStream bin = new BufferedInputStream(connection.getInputStream());
        byte[] buffer = new byte[1024 * 16];
        while (bin.read(buffer) != -1) {
            boutArray.write(buffer);
            boutArray.flush();
        }
        bin.close();
    } catch (Exception e) {
        return null;
    }
    return boutArray.toByteArray();
}
Usual problems. The standard way to copy a stream in Java is:
int count;
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
out.close();
in.close();
Note that you need to store the result returned by read() into a variable; that you need to use it in the next write() call; that you shouldn't flush() inside a loop; and that you need to close the input and output streams.
And why you're using a ByteArrayOutputStream at all is a mystery. It's just a waste of time and space. Read directly from the URL input stream, and write directly to the FileOutputStream.
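Putting that advice together for the loadBytes() example, a corrected copy might look roughly like this (a sketch only, reusing the URL and file path from the question):
URL url = new URL("http://images.visitcanberra.com.au/images/canberra_hero_image.jpg");
try (InputStream in = url.openConnection().getInputStream();
     OutputStream out = new FileOutputStream("D:\\image.jpg")) {
    byte[] buffer = new byte[1024 * 16];
    int count;
    while ((count = in.read(buffer)) > 0) {
        out.write(buffer, 0, count); // write only the bytes actually read
    }
}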
The following code works for me:-
URL url = new URL("my url...");
InputStream is = url.openStream();
OutputStream os = new FileOutputStream("img.jpg");
byte[] b = new byte[2048];
int length;
while ((length = is.read(b)) != -1) {
os.write(b, 0, length);
}
is.close();
os.close();
I am getting an error when trying to download large data using HttpGet:
String uri = "";
getMethod = executeGet(uri);
httpClient.executeMethod(getMethod);
InputStream istream = getMethod.getResponseBodyAsStream();
byte[] data = IOUtils.toByteArray(istream);
FileUtils.writeByteArraytoFile(new File("xxx.zip"),data)
You are reading the whole response into a temporary byte array, which might be the cause of the problem.
You can write the content of the stream directly to your file instead:
String uri = "";
getMethod = executeGet(uri);
httpClient.executeMethod(getMethod);
InputStream istream = getMethod.getResponseBodyAsStream();
IOUtils.copy(istream, new FileOutputStream(new File("xxx.zip")));
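One caveat: the FileOutputStream should still be closed once the copy finishes. A hedged variant of the same call, wrapped in try-with-resources:
try (OutputStream out = new FileOutputStream(new File("xxx.zip"))) {
    IOUtils.copy(istream, out); // streams the response body to disk without holding it all in memory
}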
You're reading the entire response into the byte[] (in memory). Instead, you could stream the output to disk as you read it from istream, with something like:
File f = new File("xxx.zip");
try (OutputStream os = new BufferedOutputStream(new FileOutputStream(f))) {
    int c = -1;
    while ((c = istream.read()) != -1) {
        os.write(c);
    }
} catch (Exception e) {
    e.printStackTrace();
}
I'm trying to read an image from a URL (with the Java package java.net.URL) into a byte[]. "Everything" works fine, except that the content isn't being entirely read from the stream (the image is corrupt; it doesn't contain all the image data)... The byte array is being persisted in a database (BLOB). I really don't know what the correct approach is; maybe you can give me a tip. :)
This is my first approach (code formatted, removed unnecessary information...):
URL u = new URL("http://localhost:8080/images/anImage.jpg");
int contentLength = u.openConnection().getContentLength();
InputStream openStream = u.openStream();
byte[] binaryData = new byte[contentLength];
openStream.read(binaryData);
openStream.close();
My second approach was this one (as you'll see, the content length is fetched another way):
URL u = new URL(content);
openStream = u.openStream();
int contentLength = openStream.available();
byte[] binaryData = new byte[contentLength];
openStream.read(binaryData);
openStream.close();
Both pieces of code result in a corrupted image...
I already read this post from Stack Overflow.
There's no guarantee that the content length you're provided is actually correct. Try something akin to the following:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
InputStream is = null;
try {
    is = url.openStream();
    byte[] byteChunk = new byte[4096]; // Or whatever size you want to read in at a time.
    int n;
    while ((n = is.read(byteChunk)) > 0) {
        baos.write(byteChunk, 0, n);
    }
}
catch (IOException e) {
    System.err.printf("Failed while reading bytes from %s: %s", url.toExternalForm(), e.getMessage());
    e.printStackTrace();
    // Perform any other exception handling that's appropriate.
}
finally {
    if (is != null) { is.close(); }
}
You'll then have the image data in baos, from which you can get a byte array by calling baos.toByteArray().
This code is untested (I just wrote it in the answer box), but it's a reasonably close approximation to what I think you're after.
Just extending Barnard's answer with commons-io. Separate answer because I cannot format code in comments.
InputStream is = null;
try {
    is = url.openStream();
    byte[] imageBytes = IOUtils.toByteArray(is);
}
catch (IOException e) {
    System.err.printf("Failed while reading bytes from %s: %s", url.toExternalForm(), e.getMessage());
    e.printStackTrace();
    // Perform any other exception handling that's appropriate.
}
finally {
    if (is != null) { is.close(); }
}
http://commons.apache.org/io/api-1.4/org/apache/commons/io/IOUtils.html#toByteArray(java.io.InputStream)
Here's a clean solution:
private byte[] downloadUrl(URL toDownload) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    try {
        byte[] chunk = new byte[4096];
        int bytesRead;
        InputStream stream = toDownload.openStream();
        while ((bytesRead = stream.read(chunk)) > 0) {
            outputStream.write(chunk, 0, bytesRead);
        }
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
    return outputStream.toByteArray();
}
I am very surprised that nobody here has mentioned the problem of connection and read timeout. It could happen (especially on Android and/or with some crappy network connectivity) that the request will hang and wait forever.
The following code (which also uses Apache Commons IO) takes this into account and waits at most 5 seconds before failing:
public static byte[] downloadFile(URL url)
{
    try {
        URLConnection conn = url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.connect();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        IOUtils.copy(conn.getInputStream(), baos);
        return baos.toByteArray();
    }
    catch (IOException e)
    {
        // Log error and return null, some default, or throw a runtime exception
        return null;
    }
}
byte[] b = IOUtils.toByteArray((new URL( )).openStream()); //idiom
Note however, that stream is not closed in the above example.
If you want the output chunked into 76-character lines (using Commons Codec):
byte[] b = Base64.encodeBase64(IOUtils.toByteArray((new URL( )).openStream()), true);
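Regarding the note that the stream is not closed: one way to handle it, assuming the same IOUtils call (the URL here is just the example one used earlier in this thread):
byte[] b;
try (InputStream in = new URL("http://localhost:8080/images/anImage.jpg").openStream()) {
    b = IOUtils.toByteArray(in); // try-with-resources closes the stream for us
}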
Use commons-io IOUtils.toByteArray(URL):
String url = "http://localhost:8080/images/anImage.jpg";
byte[] fileContent = IOUtils.toByteArray(new URL(url));
Maven dependency:
<dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.6</version>
</dependency>
The content length is just an HTTP header. You cannot trust it. Just read everything you can from the stream.
available() is definitely wrong; it is just the number of bytes that can be read without blocking.
Another issue is your resource handling. Closing the stream has to happen in every case; try/catch/finally (or try-with-resources) will do that, as sketched below.
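A sketch of that resource handling using try-with-resources (the modern equivalent of try/catch/finally), reusing the URL u from the question:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
try (InputStream in = u.openStream()) { // closed automatically, even if an exception is thrown
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) {
        baos.write(buffer, 0, n);
    }
}
byte[] binaryData = baos.toByteArray();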
It's important to specify timeouts, especially when the server is slow to respond. In pure Java, without using any dependency:
public static byte[] copyURLToByteArray(final String urlStr,
        final int connectionTimeout, final int readTimeout)
        throws IOException {
    final URL url = new URL(urlStr);
    final URLConnection connection = url.openConnection();
    connection.setConnectTimeout(connectionTimeout);
    connection.setReadTimeout(readTimeout);
    try (InputStream input = connection.getInputStream();
         ByteArrayOutputStream output = new ByteArrayOutputStream()) {
        final byte[] buffer = new byte[8192];
        for (int count; (count = input.read(buffer)) > 0;) {
            output.write(buffer, 0, count);
        }
        return output.toByteArray();
    }
}
Using dependencies, e.g., HC Fluent:
public byte[] copyURLToByteArray(final String urlStr,
        final int connectionTimeout, final int readTimeout)
        throws IOException {
    return Request.Get(urlStr)
            .connectTimeout(connectionTimeout)
            .socketTimeout(readTimeout)
            .execute()
            .returnContent()
            .asBytes();
}