Why people use BufferedReader to read post data - java

People sometimes use BufferedReader to read post data.
BufferedReader bReader;
String postData = null;
try {
    bReader = request.getReader();
    char[] buf = new char[1024];
    int len;
    StringBuilder sBuilder = new StringBuilder();
    while ((len = bReader.read(buf)) != -1) {
        sBuilder.append(buf, 0, len);
    }
    postData = sBuilder.toString();
} catch (IOException e) {
    bReader = null;
}
When should I use this to get parameters, and how about request.getParameter()?

As EJP notes, this approach is used when the request's POST data consists of something other than request parameters.
So ...
When should I use this to get parameters, and how about request.getParameter()?
You use it when you are expecting the request POST body to be a document. But it may not be adequate, as explained below.
That code is not particularly efficient, and it could be problematic in other respects.
On the efficiency side, the code is using a BufferedReader AND reading into a large(-ish) character buffer before transferring into a StringBuilder.
Using a BufferedReader and a char[] is kind of pointless. If you are going to do block reads, it is (marginally) better to read from the underlying Reader.
Reading the entire POST data into a StringBuilder (without limiting its length) could leave you open to denial of service attacks aimed at triggering OOMEs. (You will get the same problem if the long requests are legitimate ...).
There are also larger issues:
Should you process the POST data as a character stream rather than trying to create a single String?
Is it correct to treat the POST data as characters at all? (See the Content-type header.)
Are you using the correct encoding scheme to decode the characters? (See the Content-type header, etcetera)
Should you be using the Content-length header as a hint for sizing things and/or for enforcing request size limits?
In short, the code that you are asking us about looks to be too simplistic to be a general solution to the problem of reading POST data.
If the POST data costs lots of memory, will it be sent in two or more parts?
Probably not. Indeed, unless you (the developer of the webapp) implement a scheme which allows the client side to send in smaller chunks, the client may have no choice but to send one big document in the POST data. Of course, depending on what the document is and how it needs to be processed, you may not need to assemble it all in memory. The other point is that you should not rely on the client "doing the right thing" in terms of what it sends you. Your server needs to defend itself at some point.
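Putting those points together, here is a minimal sketch (not a general solution) of a bounded read: it uses the Content-length header only as a sizing hint, lets the container decode with the request's declared charset, and aborts past a cap. MAX_BODY_CHARS and the readBody name are made up for illustration.
// import java.io.IOException; import java.io.Reader;
// import javax.servlet.http.HttpServletRequest;
static final int MAX_BODY_CHARS = 1 << 20; // an assumed cap; choose your own

static String readBody(HttpServletRequest request) throws IOException {
    int hint = request.getContentLength(); // -1 if unknown
    StringBuilder body = new StringBuilder(hint > 0 ? Math.min(hint, 8192) : 8192);
    Reader reader = request.getReader(); // decodes using the request's charset
    char[] buf = new char[4096];
    int len;
    while ((len = reader.read(buf)) != -1) {
        if (body.length() + len > MAX_BODY_CHARS) {
            throw new IOException("POST body larger than " + MAX_BODY_CHARS + " chars");
        }
        body.append(buf, 0, len);
    }
    return body.toString();
}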

Sometimes POST data isn't name-value pairs. Sometimes for example it is an XML document.

I do not know much about BufferedReader, but you could try creating an ArrayList of Strings, shoving each message into the list as a post arrives, and putting a limit on its size; otherwise you will end up with a memory problem, depending on the resources of the designated server.

Related

Does Guava have an overload that aborts the stream if it is too large?

I have a servlet that clients will post XML or JSON data to.
Currently I am reading the posted content using Guava:
String string = CharStreams.toString( new InputStreamReader( inputStream, "UTF-8" ) );
I want to be able to abort my entire operation of reading the posted file if it is larger than n in size.
Is there a way to do this using Guava, or do I have to implement my own function to do this?
I don't see anything that aborts, but you can use ByteStreams#limit(InputStream, long) to set a maximum number of bytes to read. The InputStream returned will simply return -1 on any read(..) that goes over the limit.
If you really want abort behavior, you could write your own InputStream wrapper that throws an exception if you go above some number of bytes read.
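For illustration, a minimal sketch of such a wrapper; the class name and exception message are invented, and you would wrap the stream before handing it to CharStreams.toString:
// import java.io.FilterInputStream; import java.io.IOException; import java.io.InputStream;
class AbortingLimitInputStream extends FilterInputStream {
    private final long limit;
    private long count;

    AbortingLimitInputStream(InputStream in, long limit) {
        super(in);
        this.limit = limit;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1 && ++count > limit) {
            throw new IOException("Input exceeds limit of " + limit + " bytes");
        }
        return b;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = super.read(b, off, len);
        if (n > 0 && (count += n) > limit) {
            throw new IOException("Input exceeds limit of " + limit + " bytes");
        }
        return n;
    }
}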

Java: Faster alternative to String(byte[])

I am developing a Java-based downloader for binary data. This data is transferred via a text-based protocol (UU-encoded). For the networking task the netty library is used. The binary data is split by the server into many thousands of small packets and sent to the client (i.e. the Java application).
From netty I receive a ChannelBuffer object every time a new message (data) is received. Now I need to process that data; among other tasks I need to check the header of the packet coming from the server (like the HTTP status line). To do so I call ChannelBuffer.array() to get a byte[] array. This array I can then convert into a string via new String(byte[]) and easily check (e.g. compare) its content (again, like comparison to the "200" status message in HTTP).
The software I am writing is using multiple threads/connections, so that I receive multiple packets from netty in parallel.
This usually works fine, however, while profiling the application I noticed that when the connection to the server is good and data comes in very fast, then this conversion to the String object seems to be a bottleneck. The CPU usage is close to 100% in such cases, and according to the profiler very much time is spent in calling this String(byte[]) constructor.
I searched for a better way to get from the ChannelBuffer to a String, and noticed the former also has a toString() method. However, that method is even slower than the String(byte[]) constructor.
So my question is: does any of you know a better alternative to achieve what I am doing?
Perhaps you could skip the String conversion entirely? You could have constants holding byte arrays for your comparison values and check array-to-array instead of String-to-String.
Here's some quick code to illustrate. Currently you're doing something like this:
String http200 = "200";
// byte[] -> String conversion happens every time
String input = new String(ChannelBuffer.array());
return input.equals(http200);
Maybe this is faster:
// Ideally only convert String -> byte[] once. Store these
// arrays somewhere and look them up instead of recalculating.
final byte[] http200 = "200".getBytes("UTF-8"); // select the correct charset!

// Input doesn't have to be converted!
byte[] input = channelBuffer.array();
return Arrays.equals(input, http200);
Some of the checking you are doing might only look at part of the buffer. If you can use the alternate form of the String constructor:
new String(byteArray, offset, length)
that might mean far fewer bytes get converted to a string.
Your example of looking for "200" within the message would be a case in point.
You might find that you can use the length of the byte array as a clue. If some messages are long and you are looking for a short one, ignore the long ones and don't convert them to characters. Or something like that.
Along with what @EricGrunzke said, partially inspect the byte buffer to filter out some messages and find that you don't need to convert them from bytes to characters.
If your bytes are ASCII characters, the conversion to characters might be quicker if you use the charset "ASCII" instead of whatever the default is for your server:
new String(bytes, "ASCII")
might be faster in that case. In fact, you might be able to pick and choose the charset for byte-to-character conversion in some organized fashion that speeds things up.
Depending on what you are trying to do there are a few options:
If you are just trying to get the response status, can't you just call getStatus()? That would probably be faster than getting the string out.
If you are trying to convert the buffer then, assuming you know it will be ASCII (which it sounds like you do), just leave the data as a byte[] and convert your UUDecode method to work on a byte[] instead of a String.
The biggest cost of the string conversion is most likely the copying of the data from the byte array into the internal char array of the String; combined with the conversion itself, that is most likely just a bunch of work that you don't need to do.
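Pulling those ideas together, a rough sketch that inspects the bytes at a known offset without any String conversion; matchesAt is a hypothetical helper, and the offset is whatever your protocol dictates (e.g. the status token of "HTTP/1.1 200 OK" starts at index 9):
static final byte[] STATUS_200 = {'2', '0', '0'}; // built once, reused

// Returns true if 'token' occurs in 'input' starting at 'offset'.
static boolean matchesAt(byte[] input, int offset, byte[] token) {
    if (offset < 0 || offset + token.length > input.length) return false;
    for (int i = 0; i < token.length; i++) {
        if (input[offset + i] != token[i]) return false;
    }
    return true;
}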

Most Robust way of reading a file or stream using Java (to prevent DoS attacks)

Currently I have the below code for reading an InputStream. I am storing the whole file into a StringBuilder variable and processing this string afterwards.
public static String getContentFromInputStream(InputStream inputStream)
// public static String getContentFromInputStream(InputStream inputStream,
//     int maxLineSize, int maxFileSize)
{
    StringBuilder stringBuilder = new StringBuilder();
    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
    String lineSeparator = System.getProperty("line.separator");
    String fileLine;
    boolean firstLine = true;
    try {
        // Expect some function which checks for a line size limit,
        // e.g. reading character by character into a char array and checking for
        // line size in a loop until a line feed is encountered.
        // If the max line size limit is passed, then throw an exception.
        // If a line feed is encountered, append the char array to a StringBuilder.
        // After appending, check the size of the StringBuilder;
        // if the file size exceeds the max file limit, then throw an exception.
        fileLine = bufferedReader.readLine();
        while (fileLine != null) {
            if (!firstLine) stringBuilder.append(lineSeparator);
            stringBuilder.append(fileLine);
            fileLine = bufferedReader.readLine();
            firstLine = false;
        }
    } catch (IOException e) {
        // TODO: throw or handle the exception
    }
    // TODO: close the stream
    return stringBuilder.toString();
}
The code went for a review with the Security team and the following comments were received:
BufferedReader.readLine is susceptible to DoS (Denial of Service) attacks (a line of infinite length, a huge file containing no line feed/carriage return)
Resource exhaustion for the StringBuilder variable (cases where the file contains more data than the available memory)
Below are the solutions I could think of:
Create an alternate implementation of the readLine method (readLine(int limit)) which checks the number of bytes read and, if it exceeds the specified limit, throws a custom exception.
Process the file line by line without loading the file in its entirety. (pure non-Java solution :) )
Please suggest if there are any existing libraries which implement the above solutions.
Also suggest any alternate solutions which offer more robustness or are more convenient to implement than the proposed ones. Though performance is also a major requirement, security comes first.
Updated Answer
You want to avoid all sorts of DoS attacks (on lines, on the size of the file, etc.). But at the end of the function, you're trying to convert the entire file into one single String! Assume that you limit each line to 8 KB; what happens if somebody sends you a file consisting of thousands of 8 KB lines? The line-reading part will pass, but when you finally combine everything into a single string, the String will choke all available memory.
So, since you're ultimately converting everything into one single String, limiting the line size neither matters nor is safe on its own. You have to limit the total size of the file.
Secondly, what you're basically trying to do is read data in chunks. So you're using BufferedReader and reading it line by line. But what you really want in the end is some way of reading the file piece by piece. Instead of reading one line at a time, why not read 2 KB at a time?
BufferedReader - by its name - has a buffer inside it. You can configure that buffer. Let's say you create a BufferedReader with buffer size of 2 KB:
BufferedReader reader = new BufferedReader(..., 2048);
Now if the InputStream that you pass to BufferedReader has 100 KB of data, BufferedReader will automatically read it 2 KB at a time. So it will read the stream 50 times, 2 KB each (50 x 2 KB = 100 KB). Similarly, if you create BufferedReader with a 10 KB buffer size, it will read the input 10 times (10 x 10 KB = 100 KB).
BufferedReader already does the work of reading your file chunk by chunk. So you don't want to add an extra layer of line-by-line reading above it. Just focus on the end result: if your file is bigger than the available RAM, how are you going to convert it into a String at the end?
One better way is to just pass things around as a CharSequence. That's what Android does: throughout the Android APIs, you will see that they return CharSequence everywhere. Since StringBuilder also implements CharSequence, Android will internally use either a String, a StringBuilder, or some other optimized string class based on the size/nature of the input. So you could directly return the StringBuilder object itself once you've read everything, rather than converting it to a String. This would be safer against large data. StringBuilder also maintains the same concept of buffers inside it, and it will internally allocate multiple buffers for large strings, rather than one long string.
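A minimal illustration of that point, reusing the method from the question; only the return type changes:
public static CharSequence getContentFromInputStream(InputStream inputStream) {
    StringBuilder stringBuilder = new StringBuilder();
    // ... fill stringBuilder exactly as before ...
    return stringBuilder; // a StringBuilder already is a CharSequence; no final copy
}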
So overall:
Limit the overall file size since you're going to deal with the entire content at some point. Forget about limiting or splitting lines
Read in chunks
Using Apache Commons IO, here is how you would read data from a BoundedInputStream into a StringBuilder, splitting by 2 KB blocks instead of lines:
// import org.apache.commons.io.output.StringBuilderWriter;
// import org.apache.commons.io.input.BoundedInputStream;
// import org.apache.commons.io.IOUtils;
BoundedInputStream boundedInput = new BoundedInputStream(originalInput, <max-file-size>);
BufferedReader reader = new BufferedReader(new InputStreamReader(boundedInput), 2048);
StringBuilder output = new StringBuilder();
StringBuilderWriter writer = new StringBuilderWriter(output);
IOUtils.copy(reader, writer); // copies data from "reader" => "writer"
return output;
Original Answer
Use BoundedInputStream from the Apache Commons IO library. Your work becomes much easier.
The following code will do what you want:
public static String getContentFromInputStream(InputStream inputStream) {
    inputStream = new BoundedInputStream(inputStream, <number-of-bytes>);
    // the rest of the code stays the same
You simply wrap your InputStream with a BoundedInputStream and specify a maximum size. BoundedInputStream will take care of limiting reads to that maximum size.
Or you can do this when you're creating the reader:
BufferedReader bufferedReader = new BufferedReader(
        new InputStreamReader(
                new BoundedInputStream(inputStream, <no-of-bytes>)
        )
);
Basically, what we're doing here is limiting the read size at the InputStream layer itself, rather than doing that when reading lines. So you end up with a reusable component like BoundedInputStream which limits reading at the InputStream layer, and which you can use wherever you want.
Edit: Added footnote
Edit 2: Added updated answer based on comments
There are basically 4 ways to do file processing:
Stream-Based Processing (the java.io.InputStream model): Optionally put a BufferedReader around the stream, iterate and read the next available text from the stream (if no text is available, block until some becomes available), and process each piece of text independently as it's read (catering for widely varying sizes of text pieces)
Chunk-Based Non-Blocking Processing (the java.nio.channels.Channel model): Create a set of fixed-sized buffers (representing the "chunks" to be processed), read into each of the buffers in turn without blocking (nio API delegates to native IO, using fast O/S-level threads), your main processing thread picks each buffer in turn once it is filled and processes the fixed-size chunk, as other buffers continue to be asynchronously loaded.
Part File Processing (including line-by-line processing) (can leverage (1) or (2) to isolate or build up each "part"): break your file format down into semantically meaningful sub-parts (if possible! breaking into lines could be possible!), iterate through stream pieces or chunks, and build up content in memory until the next part is completely built, then process each part as soon as it's built
Entire File Processing (the java.nio.file.Files model): Read the entire file into memory in one operation, process the complete contents
Which one should you use?
It depends - on your file contents and the type of processing you require.
From a resource-use efficiency perspective, the order (best to worst) is: 1, 2, 3, 4.
From a processing speed and efficiency perspective, the order (best to worst) is: 2, 1, 3, 4.
From an ease-of-programming perspective, the order (best to worst) is: 4, 3, 1, 2.
However, some types of processing might require more than the smallest piece of text (ruling out 1, and maybe 2) and some file formats may not have internal parts (ruling out 3).
You're doing 4. I suggest you shift to 3 (or lower), if you can.
Under 4, there's only one way to avoid DoS: limit the size before it's read into memory (or, for that matter, copied to your file system). It's too late once it's been read in. If this is not possible, then try 3, 2 or 1.
Limiting File Size
Often the file is uploaded via a HTML form.
If uploading using the Servlet @MultipartConfig annotation and request.getPart().getInputStream(), you have control over how much data you read from the stream. Also, request.getPart().getSize() returns the file size in advance, and if it's small enough you can do request.getPart().write(path) to write the file to disk (a small sketch follows below).
If uploading using JSF, then JSF 2.2 (very new) has the standard html component <h:inputFile> (javax.faces.component.html.InputFile), which has an attribute for maxLength; pre-JSF 2.2 implementations have similar custom components (e.g. Tomahawk has <t:InputFileUpload> with maxLength attribute; PrimeFaces has <p:FileUpload> with sizeLimit attribute).
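For the servlet route, a minimal sketch (assuming the Servlet 3.0 javax.servlet API; the limits and names are arbitrary examples):
// import java.io.IOException;
// import javax.servlet.ServletException;
// import javax.servlet.annotation.MultipartConfig;
// import javax.servlet.http.*;
@MultipartConfig(
    maxFileSize = 1024 * 1024,       // cap each uploaded part at 1 MB
    maxRequestSize = 5 * 1024 * 1024 // cap the whole request at 5 MB
)
public class UploadServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Part part = request.getPart("file"); // the container enforces the limits above
        long size = part.getSize();          // size known before you read the stream
        // ...
    }
}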
Alternatives to Read Entire File
Your code which uses InputStream, StringBuilder, etc, is an efficient way to read the entire file, but is not necessarily the simplest way (least lines of code).
Junior/average developers could get the misapprehension that you're doing efficient stream-based processing when you're actually processing the entire file, so include appropriate comments.
If you want less code, you could try one of the following:
List<String> stringList = java.nio.file.Files.readAllLines(path, charset);
or
byte[] byteContents = java.nio.file.Files.readAllBytes(path);
But they require care, or they could be inefficient in resource usage. If you use readAllLines and then concatenate the List elements into a single String, then you would consume double the memory (for the List elements + the concatenated String). Similarly, if you use readAllBytes, followed by encoding to String (new String(byteContents, charset)), then again, you're using "double" the memory. So best to process directly against List<String> or byte[], unless you limit your files to a small enough size.
Instead of readLine, use read, which reads a given number of chars.
In each loop iteration, check how much data has been read; if it's more than a certain amount (more than the maximum expected input), stop, return an error, and log it.
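A small sketch of that suggestion; the limit is yours to choose, and readBounded is just an illustrative name:
// import java.io.IOException; import java.io.Reader;
static String readBounded(Reader reader, int maxTotalChars) throws IOException {
    StringBuilder sb = new StringBuilder();
    char[] buf = new char[2048];
    int n;
    while ((n = reader.read(buf)) != -1) {
        if (sb.length() + n > maxTotalChars) { // over the maximum expected input
            throw new IOException("Input exceeds " + maxTotalChars + " chars");
        }
        sb.append(buf, 0, n);
    }
    return sb.toString();
}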
I faced a similar issue when copying a huge binary file (which generally does not contain newline characters). Doing a readLine() leads to reading the entire binary file into one single string, causing an OutOfMemoryError on heap space.
Here is a simple JDK alternative:
public static void main(String[] args) throws Exception
{
    byte[] array = new byte[1024];
    FileInputStream fis = new FileInputStream(new File("<Path-to-input-file>"));
    FileOutputStream fos = new FileOutputStream(new File("<Path-to-output-file>"));
    int length = 0;
    while ((length = fis.read(array)) != -1)
    {
        fos.write(array, 0, length);
    }
    fis.close();
    fos.close();
}
Things to note:
The above example copies the file using a buffer of 1 KB. However, if you are doing this copy over the network, you may want to tweak the buffer size.
If you would like to use FileChannel or libraries like Commons IO, just make sure that the implementation boils down to something like the above.
This worked for me without any problems.
char[] charArray = new char[MAX_BUFFER_SIZE];
int i = 0;
int c = 0;
while ((c = br.read()) != -1 && i < MAX_BUFFER_SIZE) {
    char character = (char) c;
    charArray[i++] = character;
}
return Arrays.copyOfRange(charArray, 0, i);
I cannot think of a solution other than Apache Commons IO's FileUtils. It's pretty simple with the FileUtils class, as the so-called DoS attack won't come directly from the top layer.
Reading and writing a file is very simple, as you can do it with just one line of code, like
String content = FileUtils.readFileToString(new File(filePath));
You can explore more about this.
There is a class EntityUtils under Apache HttpCore. Use the toString() method of this class to get a String from the response content.
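For reference, a one-line sketch (assuming Apache HttpClient/HttpCore 4.x, where response is the HttpResponse you already have):
// import org.apache.http.util.EntityUtils;
String body = EntityUtils.toString(response.getEntity(), "UTF-8");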
Here is a recommendation from a Fortify scan. You can adapt the InputStream to other resources, such as an HTTP request InputStream.
InputStream zipInput = zipFile.getInputStream(zipEntry);
Reader zipReader = new InputStreamReader(zipInput);
BufferedReader br = new BufferedReader(zipReader);
StringBuffer sb = new StringBuffer();
int intC;
while ((intC = br.read()) != -1) {
    char c = (char) intC;
    if (c == '\n') {
        break;
    }
    if (sb.length() >= MAX_STR_LEN) {
        throw new Exception("Input too long");
    }
    sb.append(c);
}
String line = sb.toString();

Read binary data from a socket

I'm trying to connect to a server and then send it an HTTP request (GET in this case). The idea is to request a file and then receive it from the server.
It should work with both text files and binary files (images, for example). I have no problem with text files, it works perfectly, but I'm having some trouble with binary files.
First, I declare a BufferedReader (for reading the header and text files) and a DataInputStream:
BufferedReader in_text = new BufferedReader(
new InputStreamReader(socket.getInputStream()));
DataInputStream in_binary = new DataInputStream(
new BufferedInputStream(socket.getInputStream()));
Then I read the header with in_text and discover whether it's a text file or a binary file. If it's a text file, I read it correctly into a StringBuilder. If it's a binary file, I declare a byte[filesize] and read the remaining content of in_binary into it.
byte[] bindata = new byte[filesize];
in_binary.readFully(bindata);
And it doesn't work. I get an EOFException.
I thought that maybe in_binary was still at the first position of the stream, so it hadn't read the header yet. So I captured the length of the header and skipped that many bytes in in_binary.
byte[] bindata = new byte[filesize];
in_binary.reset();
in_binary.skip(headersize);
in_binary.readFully(bindata);
And still the same.
What could be happening?
Thanks!
PS: I know I could use URLConnection and all of that. That's not the problem.
BufferedReader buffers data (hence the name) - it will almost certainly have read more data from the socket than just the header. Therefore, when you try to read the actual data some has already been read from the socket. If you try reading just a few bytes you'll probably see that they aren't the first bytes of the actual response data.
If you know how to use URLConnection, I have to wonder what reason you have for not using it.
As soon as you use any subclass of Reader, you aren't reading binary. You are converting from bytes to characters, using the default encoding of the JVM. If you really want bytes of binary, you need to stick to streams, not readers. Creating both stacks at once is asking for trouble.
Use Apache Commons IO's IOUtils.toByteArray() to read the entire content into memory as a byte[] and then decide what to do with it; unless you have a gigantic amount of data, in which case you should set up the buffered input stream, decide what to do, and only construct the reader after you push back.
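To make the streams-only idea concrete, a rough sketch; parseContentLength is a hypothetical helper you would write to pull the size out of the header text:
// import java.io.*; import java.net.Socket;
static byte[] readBinaryBody(Socket socket) throws IOException {
    DataInputStream in = new DataInputStream(new BufferedInputStream(socket.getInputStream()));

    // Read header bytes until the blank line (CRLF CRLF) that ends an HTTP header.
    ByteArrayOutputStream headerBytes = new ByteArrayOutputStream();
    int b, newlines = 0;
    while (newlines < 2 && (b = in.read()) != -1) {
        headerBytes.write(b);
        if (b == '\n') newlines++;        // count consecutive line endings
        else if (b != '\r') newlines = 0; // any other byte resets the count
    }
    String header = headerBytes.toString("ISO-8859-1"); // headers are ASCII-safe

    int filesize = parseContentLength(header); // hypothetical helper you supply
    byte[] bindata = new byte[filesize];
    in.readFully(bindata); // no Reader touched the stream, so no bytes were swallowed
    return bindata;
}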

Extract first valid line of string from byte array

I am writing a utility in Java that reads a stream which may contain both text and binary data. I want to avoid I/O wait. To do that I create a thread that keeps reading the data (and waiting for it), putting it into a buffer, so the clients can check availability and terminate the waiting whenever they want (by closing the input stream, which will generate an IOException and stop the waiting). This works very well as far as reading bytes out of it, i.e. as binary, is concerned.
Now, I also want to make it easy for the client to read a line out of it, like hasNextLine() and readLine(). Without using an I/O-wait stream like a buffered stream, (Q1) how can I check whether a byte[] contains a valid Unicode line (in the form of the length of the first line)? I looked around the String/Charset API but could not find it (or did I miss it?). (NOTE: if possible I don't want to use a non-built-in library.)
Since I could not find one, I tried to create one. Without being too complicated, here is my algorithm.
1) I look from the start of the byte array until I find '\n', or '\r' not followed by '\n'.
2) Then I cut the byte array from the start to that point and use it to create a string (with a Charset if specified) using new String(byte[]) or new String(byte[], Charset).
3) If that succeeds without an exception, we have found the first valid line and return it.
4) Otherwise, these bytes may not be a string, so I look further for another '\n', or '\r' without '\n', and the process repeats.
5) If the search ends at the end of the available bytes, I stop and return null (no valid line found).
My question is (Q2): is the above algorithm adequate?
Just when I was about to implement it, I searched on Google and found that there are many other code points for a new line, for example U+2424, U+0085, U+000C, U+2028 and U+2029.
So my last question is (Q3): do I really need to detect these code points? If I do, will it increase the chance of false alarms?
I am well aware that recognizing something from binary is not absolute. I am just trying to find the best balance.
To sum up, I have an array of bytes and I want to extract the first valid string line from it, with or without a specific Charset. This must be done in Java, avoiding any non-built-in library.
Thank you all in advance.
I am afraid your problem is not well-defined. You write that you want to extract the "first valid string line" from your data. But whether some byte sequence is a "valid string" depends on the encoding. So you must decide which encoding(s) you want to use in testing.
Sensible choices would be:
the platform default encoding (Java property "file.encoding")
UTF-8 (as it is most common)
a list of encodings you know your clients will use (such as several Russian or Chinese encodings)
What makes sense will depend on the data, there's no general answer.
Once you have your encodings, the problem of line termination should follow, as most encodings have rules on what terminates a line. In ASCII or Latin-1, LF, CR-LF and LF-CR would suffice. In Unicode, you need all the ones you listed above.
But again, there's no general answer, as newline codes are not strictly regulated. Again, it would depend on your data.
First of all, let me ask you a question: is the data you are trying to process legacy data? In other words, are you responsible for the format of the input stream you are trying to consume here?
If you are indeed controlling the input format, then you probably want to take the Binary vs. Text decision out of the Q1 algorithm. For me, this algorithm has one troubling part:
4) Otherwise, these bytes may not be a string, so I look further for another '\n', or '\r' without '\n', and the process repeats.
Are you dismissing the input prior to the line terminator and taking the bytes that start immediately after it, or are you re-evaluating the string with what are now two line terminators? If the former, you may have broken up binary data; if the latter, you may still not parse the text correctly.
I think having well defined markers for binary data and text data in your stream will simplify your algorithm a lot.
A couple of words on the String constructor: new String(byte[], Charset) will not generate any exception if the byte array is not in that particular Charset; instead it will create a string full of question marks (probably not what you want). If you want an exception to be generated, you should use CharsetDecoder.
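A minimal sketch of the CharsetDecoder approach; with REPORT, malformed input raises a CharacterCodingException instead of being silently replaced:
// import java.nio.ByteBuffer;
// import java.nio.charset.*;
static String strictDecode(byte[] bytes, Charset charset) throws CharacterCodingException {
    CharsetDecoder decoder = charset.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT);
    return decoder.decode(ByteBuffer.wrap(bytes)).toString();
}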
Also note that in Java 6 there are two constructors that take a charset:
String(byte[] bytes, String charsetName) and String(byte[] bytes, Charset charset). I did some simple performance tests a while ago, and the constructor that takes a String charsetName is magnitudes faster than the one that takes a Charset object (question to Sun: bug or feature?).
I would try this:
make the IO reader put strings/lines into a thread-safe collection (for example, some implementation of BlockingQueue)
the main code holds only a reference to the synced collection and checks for new data when needed, e.g. with queue.peek(). It doesn't need to know about the IO thread or the stream.
Some pseudo Java code (missing exception & IO handling, generics, imports++):
class IORunner extends Thread {
    private final BufferedReader reader;
    private final BlockingQueue<String> outputQueue;

    IORunner(InputStream in, BlockingQueue<String> outputQueue) throws IOException {
        this.reader = new BufferedReader(new InputStreamReader(in, "utf-8"));
        this.outputQueue = outputQueue;
    }

    public void run() {
        try {
            String line;
            while ((line = reader.readLine()) != null)
                this.outputQueue.put(line);
        } catch (IOException | InterruptedException e) {
            // stream closed or thread interrupted: stop reading
        }
    }
}
class Main {
    public static void main(String[] args) throws Exception {
        ...
        BlockingQueue<String> dataQueue = new LinkedBlockingQueue<>();
        new IORunner(myStreamFromSomewhere, dataQueue).start();
        while (true) {
            if (!dataQueue.isEmpty()) { // can also use .peek() != null
                System.out.println(dataQueue.take());
            }
            Thread.sleep(1000);
        }
    }
}
The collection decouples the input (stream) from the main code. You can also limit the number of lines stored / the memory used by creating the queue with a limited capacity (see the BlockingQueue docs).
The BufferedReader handles the checking of new lines for you :) The InputStreamReader handles the charset (I recommend setting one yourself, since the default changes depending on the OS, etc.).
The java.text package is designed for this sort of natural-language operation. The BreakIterator.getLineInstance() static method returns an iterator that detects line breaks. You do need to know the locale and encoding for best results, though.
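A quick sketch of how BreakIterator is typically driven; decoded is assumed to be text you have already decoded from the byte array:
// import java.text.BreakIterator; import java.util.Locale;
BreakIterator it = BreakIterator.getLineInstance(Locale.getDefault());
it.setText(decoded);
int end = it.next(); // index after the first break opportunity, or BreakIterator.DONE
String firstLine = (end == BreakIterator.DONE) ? decoded : decoded.substring(0, end);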
Q2: The method you use seems reasonable enough to work.
Q1: Can't think of something better than the algorithm that you are using
Q3: I believe it will be enough to test for \r and \n. The others are too exotic for usual text files.
I just solved this while getting a test stub working for Datagram. I did byte[] varName = someString.getBytes(); then final int len = varName.length; then sent the int via DataOutputStream, followed by the byte array; on the receiving end, just do readInt() and then read that many bytes.
Not a lib, and not hard to do either. Just read up on readUTF() and do what they did for the bytes.
The string should construct from the byte array recovered that way; if not, you have other problems. If the string can be reconstructed, it can be buffered ... no?
You may be able to just use read/writeUTF() in DataStream, why not?
{ edit: per OP's request }
// Sending end
String data = new String("fdsfjal;sajssaafe8e88e88aa"); // fingers pounding keyboard
DataOutputStream dataOutputStream = new DataOutputStream(destination); // wrap your OutputStream here
final int length = data.length();
dataOutputStream.writeInt(length);
dataOutputStream.write(data.getBytes());
dataOutputStream.flush();
dataOutputStream.close();
// Receiving end
DataInputStream dataInputStream = new DataInputStream(source);
final int sizeToRead = dataInputStream.readInt();
byte[] datasink = new byte[sizeToRead];
dataInputStream.readFully(datasink);
dataInputStream.close();
try
{
    // constructor
    // String(byte[] bytes, int offset, int length)
    final String result = new String(datasink, 0, sizeToRead);
    // continue coding here
Do me a favor and keep the heat off of me. This was written very fast, right in the posting tool, so the code probably contains substantial errors; it's faster for me to just explain it by writing Java. There will be others who can translate it to other languages, which you can do too if you want it in another codebase. You will need exception trapping and so on; just do a compile and start fixing errors. When you get a clean compile, start over from the beginning and look for blunders. (That's what a blunder is called in engineering: a blunder.)
