Closed. This question needs debugging details. It is not currently accepting answers.
Closed 7 months ago.
I'm using a specific library (unfortunately it can't be avoided) that writes some information from a class to a file using a utility function that takes a DataOutputStream as input.
I would like to get the resulting file's content as a String without actually creating a file and writing to it, as the writing can be fairly expensive (1000+ lines).
Is this possible by using a dummy DataOutputStream or some other method, without resorting to creating a temporary file and reading the result back from it?
P.S.: the final method that actually writes to the DataOutputStream changes from time to time, so I would prefer not to copy-paste it and redo it every time.
Since java.io.DataOutputStream wraps any other java.io.OutputStream (you have to pass an instance to its constructor), I would recommend using a java.io.ByteArrayOutputStream to collect the data in memory, and then getting the String out of it with its toString() method.
Example:
ByteArrayOutputStream inMemoryOutput = new ByteArrayOutputStream();
DataOutputStream dataOutputStream = new DataOutputStream(inMemoryOutput);
// use dataOutputStream here as intended
// and then get the String data
System.out.println(inMemoryOutput.toString());
If the encoding of the collected bytes does not match the platform's default encoding, you may have to pass a charset name to toString.
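Putting the pieces together, a self-contained sketch might look like this; the writeBytes call stands in for the library's utility function, and the explicit UTF-8 charset is an assumption about the data:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class InMemoryCapture {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream inMemoryOutput = new ByteArrayOutputStream();
        try (DataOutputStream dataOutputStream = new DataOutputStream(inMemoryOutput)) {
            // stand-in for the library's utility function that takes a DataOutputStream
            dataOutputStream.writeBytes("hello from the library");
        }
        // decode the collected bytes explicitly rather than relying on the platform default
        System.out.println(inMemoryOutput.toString("UTF-8"));
    }
}
```

Note that this only yields readable text if the library writes character data; methods like writeInt or writeUTF emit binary framing that will not round-trip cleanly through toString.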
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Is it good practice to write to a file with Formatter?
For example:
int x = 3;
try (Formatter g = new Formatter("data.out")) {
    // the format string handles the conversion; no need for Integer.toString
    g.format("abc%d", x);
} catch (FileNotFoundException e) {
    // with the original null-initialized Formatter, a failure here would
    // lead to a NullPointerException on the later format() call
    e.printStackTrace();
}
I have been writing to files with Formatter all semester, only to find out that one of the common ways to write to files is with PrintWriter. Is there any chance that some of my homework programs won't write what they were meant to write? Thanks!
I wouldn't say there's anything "wrong" with it per se. You simply gain C-style formatting options for your output, though you do lose some compatibility, since Formatter does not extend Writer.
Based on the documentation, I wouldn't expect your output to be wrong for your homework. If you really wanted to be sure, though, Formatter happens to have a constructor that accepts Appendable objects, and PrintWriter, by virtue of extending Writer, happens to implement that interface. This means you could construct a Formatter so that it uses a PrintWriter to do its dirty work, instead of the BufferedWriter it would normally use internally to output to a file. However, this would be overkill: while PrintWriter and BufferedWriter differ internally in exception handling, exposed methods, and performance, the printed output would ultimately be identical, since Formatter interacts with both in the same way.
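To illustrate the constructor mentioned above, here is a minimal sketch; the StringWriter target is just for demonstration, so the output can be inspected without touching the filesystem:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Formatter;

public class FormatterOverPrintWriter {
    public static void main(String[] args) {
        StringWriter sink = new StringWriter();   // in-memory target, for demonstration only
        PrintWriter pw = new PrintWriter(sink);
        // Formatter(Appendable) accepts a PrintWriter because Writer implements Appendable
        try (Formatter g = new Formatter(pw)) {
            int x = 3;
            g.format("abc%d", x);
        }
        System.out.println(sink);                 // prints: abc3
    }
}
```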
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
We added a new feature in our web application that has the following code, which basically decompresses the input stream and creates a new String with UTF-8 encoding:
....
// is is an instance of java.util.zip.GZIPInputStream
byte[] payloadBuf = org.apache.commons.compress.utils.IOUtils.toByteArray(is);
String plainPayload = new String(payloadBuf, CharEncoding.UTF_8);
...
When we run an intensive load test that triggers this path many times, we see an abnormal increase in non-heap memory in the JVM. Can anyone give a hint on how to interpret this? Even better, is there a way to avoid it? Thanks a lot.
There is nothing abnormal about your results:
If you call this code in a tight loop, you are creating lots and lots of short-lived objects: three byte[] instances (all objects) as well as a ByteArrayOutputStream for every call, and for no apparent reason.
So you are creating and copying a bunch of byte[] instances around, and then the String constructor creates at least one more byte[] and copies that as well, all for nothing.
You are not accomplishing what you think you are:
You are not creating a new String with UTF-8 encoding; you are creating a new String by interpreting the byte[] as UTF-8.
Java stores all String objects in memory as UTF-16, so you are not creating a new String with UTF-8 encoding.
Solution:
You should just read the stream into a String to begin with and be done with it; you are creating the intermediate byte[] for nothing!
Here are a couple of examples using Guava:
final String text = CharStreams.toString(new InputStreamReader(is,Charsets.UTF_8));
or
final ByteSource source ...
final String text = source.asCharSource(Charsets.UTF_8).read();
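If you would rather not pull in Guava, a plain-JDK sketch of the same idea, decoding while reading with one buffer and one StringBuilder, and no whole-payload byte[] copies, might look like this:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class StreamToString {
    // decodes the stream incrementally instead of materializing the payload as a byte[] first
    static String toString(InputStream is) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (Reader reader = new InputStreamReader(is, StandardCharsets.UTF_8)) {
            char[] buf = new char[8192];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
        }
        return sb.toString();
    }
}
```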
Opinion:
That org.apache.commons stuff is crap, with all its cancerous dependencies; it is not doing anything special to begin with, and it still makes you deal with a checked exception on top of it all!
public static byte[] toByteArray(final InputStream input) throws IOException {
    final ByteArrayOutputStream output = new ByteArrayOutputStream();
    copy(input, output);
    return output.toByteArray();
}
If you follow the rabbit hole, you will find out that one call to .toByteArray() creates at least three byte[] instances and a couple of ByteArrayOutputStream objects, all of which end up as garbage, just to get to a String.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
So I am having problems reading from a serialized file.
More specifically, I have serialized an object to a file written in a hexadecimal format. The problem occurs when I want to read one line at a time from this file. For example, the file can look like this:
aced 0005 7372 0005 5465 7374 41f2 13c1
215c 9734 6b02 0000 7870
However, the code underneath reads the whole file (instead of just the first line). Also, it automatically converts the hexadecimal data into something more readable: ¬ísrTestAòÁ
....
try (BufferedReader file = new BufferedReader(new FileReader(fileName))) {
read(file);
} catch ...
....
public static void read(BufferedReader in) throws IOException{
String line = in.readLine();
System.out.println(line); // PROBLEM: This prints every line
}
}
This code works perfectly fine if I have a normal text file with some random words; it only prints the first line. My guess is that the problem lies in the serialization format. I read somewhere (probably in the API docs) that the file is supposed to be binary (even though my file looks like hexadecimal?).
What should I do to be able to read one line at a time from this file?
EDIT: I have gotten quite a few of answers, which I am thankful for. I never wanted to deserialize the object - only be able to read every hexadecimal line (one at a time) so I could analyze the serialized object. I am sorry if the question was unclear.
Now I have realized that the file is actually not written in hexadecimal but in binary. Furthermore, it is not even divided into lines. The problem I am facing now is reading every byte and converting it into hexadecimal. Basically, I want the data to look like the hexadecimal example above.
UPDATE:
immibis comments helped me solve this.
"Use FileInputStream (or a BufferedInputStream wrapping one) and call read() repeatedly - each call returns one byte (from 0 to 255) or -1 if there are no more bytes in the file. This is the simplest, but not the most efficient, way (reading an array is usually faster)"
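A sketch of that suggestion, reading each byte and rendering it as hex grouped like the sample above (the file name serialized.bin is a placeholder):

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class HexDump {
    // reads raw bytes and renders them as hex, two bytes per group, eight groups per line
    static String toHex(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b, count = 0;
        while ((b = in.read()) != -1) {          // each read() returns 0-255, or -1 at end of file
            sb.append(String.format("%02x", b));
            count++;
            if (count % 2 == 0) {
                sb.append(count % 16 == 0 ? '\n' : ' ');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream("serialized.bin"))) {
            System.out.print(toHex(in));
        }
    }
}
```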
The file does not contain hexadecimal text and is not separated into lines.
Whatever program you are using to edit the file is "helpfully" converting it into hexadecimal for you, since it would be gibberish if displayed directly.
If you are writing the file using ObjectOutputStream and FileOutputStream, then you need to read it using ObjectInputStream and FileInputStream.
Your question doesn't make any sense. Serialized data is binary. It doesn't contain lines. You can't read lines from it. You should either read bytes, with an InputStream, or objects, with an ObjectInputStream.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
What is the most efficient way to read input from a file?
I have a very, very large file which contains a list of words separated by newlines,
e.g.
computer
science
is
fun
really
I was thinking about using a BufferedReader object; however, I was confused by this line in the documentation:
"In general, each read request made of a Reader causes a corresponding read request to be made of the underlying character or byte stream. It is therefore advisable to wrap a BufferedReader around any Reader whose read() operations may be costly, such as FileReaders and InputStreamReaders. For example,
BufferedReader in = new BufferedReader(new FileReader("foo.in"));
will buffer the input from the specified file. Without buffering, each invocation of read() or readLine() could cause bytes to be read from the file, converted into characters, and then returned, which can be very inefficient."
Can some please explain this to me?
On a second read, I am starting to believe that BufferedReader is my best bet. Is there a better way?
This post may help.
BufferedReader is a good choice, and Java 8 lets you turn a BufferedReader into a java.util.stream.Stream.
Parsing a large CSV file for instance with java.util.stream package:
InputStream is = new FileInputStream(new File("persons.csv"));
BufferedReader br = new BufferedReader(new InputStreamReader(is));
List<Person> persons = br.lines()
    .skip(1)                                // skip the header line
    .map(mapToPerson)                       // mapToPerson is a Function<String, Person>
    .filter(person -> person.getAge() > 17)
    .limit(50)
    .collect(Collectors.toList());
Unlike collections, which are in-memory data structures that hold their elements, streams compute their elements on demand and allow parallel processing.
Moreover, Streams also support Pipelining and Internal Iterations.
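For the word-per-line file in the question, a complete, runnable version of this pattern might look like the following (the file name words.txt is a placeholder):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class WordReader {
    public static void main(String[] args) throws IOException {
        // Files.newBufferedReader gives a BufferedReader over the file, decoded as UTF-8 by default
        try (BufferedReader br = Files.newBufferedReader(Paths.get("words.txt"))) {
            List<String> words = br.lines()
                    .map(String::trim)
                    .filter(w -> !w.isEmpty())   // drop blank lines
                    .collect(Collectors.toList());
            System.out.println(words.size() + " words read");
        }
    }
}
```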
This question already has answers here:
Get an OutputStream into a String
(6 answers)
Closed 9 years ago.
I think this has been answered but I can't seem to find it.
I have an instance method which writes some contents to an output stream
writeTo(OutputStream out) {
    // class-specific logic
}
I want to get these contents into a StringBuilder. I could do this via a temporary file, but that does not seem right. I want to do something like:
StringBuilder sb = /* */;
OutputStream os = outForStringBuilder(sb); // not sure how to do this
instance.writeTo(os); // this should write the contents to the StringBuilder
Use a ByteArrayOutputStream and then call toString(charsetName) on it; there is no need for a StringBuilder.
So you want output written to the stream to go to a StringBuffer instead. I am assuming you are doing this because an OutputStream is required somewhere else. You could use a ByteArrayOutputStream, but if you want to preserve the StringBuffer behavior, you might simply wrap a StringBuffer in a subclass of OutputStream, like the code here:
http://geronimo.apache.org/maven/specs/geronimo-javamail_1.4_spec/1.6/apidocs/src-html/org/apache/geronimo/mail/util/StringBufferOutputStream.html#line.31
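For reference, a minimal adapter in the spirit of that linked class might look like this; note that it appends each byte as a char, which is only safe for single-byte text (for multi-byte encodings, prefer ByteArrayOutputStream plus toString(charsetName)):

```java
import java.io.OutputStream;

// minimal sketch: an OutputStream whose bytes are appended to a StringBuilder;
// each byte becomes one char, so this only round-trips single-byte encodings
public class StringBuilderOutputStream extends OutputStream {
    private final StringBuilder sb;

    public StringBuilderOutputStream(StringBuilder sb) {
        this.sb = sb;
    }

    @Override
    public void write(int b) {
        sb.append((char) (b & 0xFF));
    }
}
```

Usage would then be `instance.writeTo(new StringBuilderOutputStream(sb));`, after which the accumulated text can be read from `sb`.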