What is the use of System.in.read()? - java

What is the use of System.in.read() in Java?
Please explain this.

Two and a half years late is better than never, right?
int System.in.read() reads the next byte of data from the input stream. But I am sure you already knew that, because it is trivial to look up. So, what you are probably asking is:
Why is it declared to return an int when the documentation says that it reads a byte?
and why does it appear to return garbage? (I type '9', but it returns 57.)
It returns an int because besides all the possible values of a byte, it also needs to be able to return an extra value to indicate end-of-stream. So, it has to return a type which can express more values than a byte can.
Note: they could have made it a short, but they opted for int instead, possibly as a tip of the hat to the historical significance of C, whose getc() function also returns an int. More importantly, short is a bit cumbersome to work with (the language offers no means of specifying a short literal, so you have to specify an int literal and cast it to short), and on certain architectures int has better performance than short.
It appears to return garbage because when you view a character as an integer, what you are looking at is the ASCII(*) value of that character. So, a '9' appears as 57. But if you cast that integer to a character, you get '9', so all is well.
Think of it this way: if you typed the character '9' it is nonsensical to expect System.in.read() to return the number 9, because then what number would you expect it to return if you had typed an 'a'? Obviously, characters must be mapped to numbers. ASCII(*) is a system of mapping characters to numbers. And in this system, character '9' maps to number 57, not number 9.
(*) Not necessarily ASCII; it may be some other encoding, like UTF-16; but in the vast majority of encodings, and certainly in all popular encodings, the first 128 values are the same as ASCII. And this includes all English alphanumeric characters and popular symbols.
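A short sketch tying both points together: the int return value leaves room for the -1 end-of-stream sentinel, and casting recovers the character behind the numeric code (the class name is made up for the example):
import java.io.IOException;

public class ReadDemo {
    public static void main(String[] args) throws IOException {
        int value; // declared as int so that -1 can signal end-of-stream
        while ((value = System.in.read()) != -1) {
            // cast the numeric code back to the character it represents
            System.out.println(value + " -> '" + (char) value + "'");
        }
    }
}
Typing 9 prints 57 -> '9'; the loop ends at end-of-stream (Ctrl+D on Unix, Ctrl+Z on Windows).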

Maybe this example will help you.
import java.io.IOException;

public class MainClass {
    public static void main(String[] args) {
        int inChar;
        System.out.println("Enter a Character:");
        try {
            inChar = System.in.read();
            System.out.print("You entered ");
            System.out.println(inChar);
        } catch (IOException e) {
            System.out.println("Error reading from user");
        }
    }
}

System is a final class in the java.lang package.
Here is a code sample from the source code of the API:
public final class System {
    /**
     * The "standard" input stream. This stream is already
     * open and ready to supply input data. Typically this stream
     * corresponds to keyboard input or another input source specified by
     * the host environment or user.
     */
    public final static InputStream in = nullInputStream();
}
read() is an abstract method of the abstract class InputStream:
/**
 * Reads the next byte of data from the input stream. The value byte is
 * returned as an <code>int</code> in the range <code>0</code> to
 * <code>255</code>. If no byte is available because the end of the stream
 * has been reached, the value <code>-1</code> is returned. This method
 * blocks until input data is available, the end of the stream is detected,
 * or an exception is thrown.
 *
 * <p> A subclass must provide an implementation of this method.
 *
 * @return     the next byte of data, or <code>-1</code> if the end of the
 *             stream is reached.
 * @exception  IOException  if an I/O error occurs.
 */
public abstract int read() throws IOException;
In short, from the API (this passage actually describes the read(byte[] b) overload):
Reads some number of bytes from the input stream and stores them into
the buffer array b. The number of bytes actually read is returned as
an integer. This method blocks until input data is available, end of
file is detected, or an exception is thrown.
from InputStream.html#read()
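Since the quoted passage is for the buffered overload, a tiny sketch of using it (the buffer size and class name are arbitrary choices for the demo):
import java.io.IOException;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[64];        // arbitrary size for the demo
        int count = System.in.read(buffer);  // blocks, then returns bytes read, or -1 at end of stream
        System.out.println("Read " + count + " bytes");
    }
}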

import java.io.IOException;

class ExamTest {
    public static void main(String args[]) throws IOException {
        int sn = System.in.read();
        System.out.println(sn);
    }
}
If you want to get char input, you have to cast it like this: char sn = (char) System.in.read()
The byte value is returned as an int in the range 0 to 255. Note, however, that System.in.read() reads only a single byte at a time.

Just to complement the accepted answer, you can also use System.in.read() like this:
class Example {
    public static void main(String args[]) throws java.io.IOException {
        // This works! No need for try { ... } catch (IOException ex) { ... }
        System.out.println("Type a letter: ");
        char letter = (char) System.in.read();
        System.out.println("You typed the letter " + letter);
    }
}

System.in.read() reads from the standard input.
The standard input can be used to get input from the user in a console environment but, since such a user interface has no editing facilities, interactive use of standard input is mostly confined to courses that teach programming.
Most production use of standard input is in programs designed to work inside Unix command-line pipelines. In such programs the payload that the program is processing is coming from the standard input and the program's result gets written to the standard output. In that case the standard input is never written directly by the user, it is the redirected output of another program or the contents of a file.
A typical pipeline looks like this:
# list files and directories ordered by increasing size
du -s * | sort -n
sort reads its data from the standard input, which is in fact the output of the du command. The sorted data is written to the standard output of sort, which ends up on the console by default, and can be easily redirected to a file or to another command.
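For illustration, a minimal sketch of a Java filter that could sit in such a pipeline (the class name is made up; it upper-cases whatever arrives on standard input):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// e.g.:  du -s * | sort -n | java UpperCaseFilter
public class UpperCaseFilter {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) { // null marks the end of the piped input
            System.out.println(line.toUpperCase());
        }
    }
}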
As such, the standard input is comparatively rarely used in Java.

This example should help? Along with the comments, of course >:)
WARNING: "MAN" IS AN OVERUSED WORD IN THIS POST
Overall I recommend using the Scanner class, since you can input large sentences; I'm not entirely sure System.in.read has such aspects. If possible, please correct me.
public class InputApp {
    // Don't worry, passing in args in the main method as one of the arguments isn't required MAN
    public static void main(String[] argumentalManWithAManDisorder) {
        char inputManAger;
        System.out.println("Input Some Crap Man: ");
        try {
            // If you forget to cast char you will FAIL YOUR TASK MAN
            inputManAger = (char) System.in.read();
            System.out.print("You entered " + inputManAger + " MAN");
        } catch (Exception e) {
            System.out.println("ELEMENTARY SCHOOL MAN");
        }
    }
}
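On the Scanner point above: System.in.read() really does deliver a single byte per call, so for whole sentences Scanner.nextLine() is the comfortable route. A quick sketch (class name made up):
import java.util.Scanner;

public class LineInputDemo {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Input a whole sentence man: ");
        String sentence = scanner.nextLine(); // reads everything up to the newline
        System.out.println("You entered: " + sentence);
    }
}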

It allows you to read from the standard input (mainly, the console). This SO question may help you.

System.in.read() is a read method of System.in, which is the "standard input" stream (file descriptor 0 in conventional operating systems).

The System.in.read() method reads a byte and returns it as an integer, but if you enter a digit between 0 and 9, it will return a value offset by 48, because in the ASCII table the digits '0' to '9' have the values 48 to 57.
Hope it helps.

Related

Java error : "Exception in thread "main" java.util.InputMismatchException"

I am a newbie with Java and I'm making a very simple Java program like this:
package exercise7;

import java.util.Scanner;

public class Exercise7 {
    public static void main(String[] args) {
        // TODO code application logic here
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Enter a number between 0.0 and 1.0.");
        keyboard.nextDouble();
    }
}
My problem is, when I enter 0.1 for example, I get this error:
Exception in thread "main" java.util.InputMismatchException
    at java.util.Scanner.throwFor(Scanner.java:864)
    at java.util.Scanner.next(Scanner.java:1485)
    at java.util.Scanner.nextDouble(Scanner.java:2413)
    at exercise7.Exercise7.main(Exercise7.java:30)
C:\Users\Anh Bui\AppData\Local\NetBeans\Cache\8.2\executor-snippets\run.xml:53: Java returned: 1
BUILD FAILED (total time: 7 seconds)
I am very confused, because I thought that nextDouble was meant for values like 0.1. Could you please help me? Thank you very much.
The reason for this is the difference between using 1.1 and 1,1 in different regions.
The way Scanner reads nextDouble is tied to your Locale or zone setting (which, if not overridden, is loaded from the system).
For example, I'm in Poland and my separator for decimal or floating-point numbers is ,, so even though the Java literal syntax uses a dot (.), if I input a number with . I also get InputMismatchException, but when I use , (e.g. 0,6) it finishes flawlessly.
At the same time, regardless of the zone or Locale, the valueOf and parseDouble methods of the Double class use . as the decimal separator.
Scanner methods starting with next... are based on built-in patterns for recognizing the proper values. The recognition pattern is built from the common way of writing numbers/values (of the specified type) in your country/locale.
This is very similar to dates, whose formats differ even more across regions.
One of the ways of handling InputMismatchException here is to catch this exception and require the user to provide the number again, similarly to this question.
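A sketch of such a retry loop (one possible shape, not the only one; note that the bad token has to be discarded before retrying):
import java.util.InputMismatchException;
import java.util.Scanner;

public class RetryDemo {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        double value = 0.0;
        boolean valid = false;
        while (!valid) {
            System.out.println("Enter a number between 0.0 and 1.0.");
            try {
                value = keyboard.nextDouble();
                valid = true;
            } catch (InputMismatchException e) {
                keyboard.nextLine(); // discard the rejected input before asking again
            }
        }
        System.out.println("Got: " + value);
    }
}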
Your case is a bit different from the input of a natural number, because both 1.1 and 1,1 should indicate a floating-point number.
The simplest way of handling both separators for decimal numbers would be to read the input as a String, replace the wrong separator, and parse to double, like:
static double getDouble(String doubleS) {
    doubleS = doubleS.replace(',', '.');
    try {
        return Double.parseDouble(doubleS);
    } catch (NumberFormatException e) {
        e.printStackTrace();
        throw e;
    }
}
Thanks to @Andreas' comment: the most appropriate way of handling this would be to require the user to provide a decimal number with the . separator, because the printed message (before input) shows that separator in its numbers.
This could be done by setting the Scanner object's locale before the number input, like:
Scanner keyboard = new Scanner(System.in);
keyboard.useLocale(Locale.US); // this sets a locale which uses the . decimal separator
System.out.println("Enter a number between 0.0 and 1.0.");

Java - How to handle special characters when compressing bytes (Huffman encoding)?

I am writing a Huffman Compression/Decompression program. I have started writing my compression method and I am stuck. I am trying to read all bytes in the file and then put all of the bytes into a byte array. After putting all bytes into the byte array I create an int[] array that will store all the frequencies of each byte (with the index being the ASCII code).
It does include the extended ASCII table, since the size of the int array is 256. However, I encounter issues as soon as I read a special character in my file (i.e. characters with a value higher than 127). I understand that a byte is signed and will wrap around to a negative value as soon as it crosses the 127 limit (and an array index obviously can't be negative), so I tried to counter this by treating it as an unsigned value when I specify my index into the array (array[myByte & 0xFF]).
This kind of worked but it gave me the wrong ASCII value (for example if the correct ASCII value for the character is 134 I instead got 191 or something). The even more annoying part is that I noticed that special characters are split into 2 separate bytes, which I feel will cause problems later (for example when I try to decompress).
How do I make my program compatible with every single type of character (this program is supposed to be able to compress/decompress pictures, mp3s, etc.)?
Maybe I am taking the wrong approach to this, but I don't know what the right approach is. Please give me some tips for structuring this.
Tree:
package CompPck;

abstract class Tree implements Comparable<Tree> {
    public final int frequency; // the frequency of this tree

    public Tree(int freq) { frequency = freq; }

    // compares on the frequency
    public int compareTo(Tree tree) {
        return frequency - tree.frequency;
    }
}

class Leaf extends Tree {
    public final int value; // the character this leaf represents

    public Leaf(int freq, int val) {
        super(freq);
        value = val;
    }
}

class Node extends Tree {
    public final Tree left, right; // subtrees

    public Node(Tree l, Tree r) {
        super(l.frequency + r.frequency);
        left = l;
        right = r;
    }
}
Build tree method:
public static Tree buildTree(int[] charFreqs) {
    PriorityQueue<Tree> trees = new PriorityQueue<Tree>();
    for (int i = 0; i < charFreqs.length; i++) {
        if (charFreqs[i] > 0) {
            trees.offer(new Leaf(charFreqs[i], i));
        }
    }
    // assert trees.size() > 0;
    while (trees.size() > 1) {
        Tree a = trees.poll();
        Tree b = trees.poll();
        trees.offer(new Node(a, b));
    }
    return trees.poll();
}
Compression method:
public static void compress(File file) {
    try {
        Path path = Paths.get(file.getAbsolutePath());
        byte[] content = Files.readAllBytes(path);
        TreeMap<Integer, String> treeMap = new TreeMap<Integer, String>();
        File nF = new File(file.getName() + "_comp");
        nF.createNewFile();
        BitFileWriter bfw = new BitFileWriter(nF);

        int[] charFreqs = new int[256];
        // read each byte and record the frequencies
        for (byte b : content) {
            charFreqs[b & 0xFF]++;
            System.out.println(b & 0xFF);
        }

        // build tree
        Tree tree = buildTree(charFreqs);

        // build TreeMap
        fillEncodeMap(tree, new StringBuffer(), treeMap);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Encodings matter
If I take the character "ö" and read it in my file it will now be
represented by 2 different values (191 and 182 or something like that)
when its actual ASCII table value is 148.
That really depends on which encoding was used to create your text file. Encodings determine how text is stored.
In UTF-8 the ö is stored as hex [0xc3, 0xb6] or [195, 182]
In ISO/IEC 8859-1 (= "Latin-1") it would be stored as hex [0xf6], or [246]
In Mac OS Central European, it would be hex [0x9a] or [154]
Please note, that the basic ASCII table itself doesn't really describe anything for that kind of character. ASCII only uses 7 bits, and by doing so only maps 128 codes.
Part of the problem is that, in layman's terms, "ASCII" is sometimes used to describe extensions of ASCII as well (e.g. Latin-1).
History
There's actually a bit of history behind that. Originally ASCII was a very limited set of characters. When those weren't enough, each region started using the 8th bit to add its language-specific characters, leading to all kinds of compatibility issues.
Then there was a consortium that made an inventory of all characters in all possible languages (and beyond). That set is called "Unicode". It contains not just 128 or 256 characters, but thousands of them.
From that point on you would need more advanced encodings to cover them. UTF-8 is one of those encodings that covers the entire Unicode set, and it does so while being kind of backwards compatible with ASCII.
Each ASCII character is still mapped in the same way, but when one byte isn't enough, the 8th bit is used to indicate that a second byte will follow, which is the case for the ö character.
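You can verify this from Java itself; a tiny sketch (StandardCharsets is Java 7+; the class name is made up):
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EncodingDemo {
    public static void main(String[] args) {
        String s = "\u00f6"; // the character ö, written as an escape to avoid source-encoding issues
        System.out.println(Arrays.toString(s.getBytes(StandardCharsets.UTF_8)));      // [-61, -74] = 0xC3, 0xB6
        System.out.println(Arrays.toString(s.getBytes(StandardCharsets.ISO_8859_1))); // [-10]      = 0xF6
    }
}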
Tools
If you're using a more advanced text editor like Notepad++, then you can select your encoding from the drop-down menu.
In programming
Having said that, your current Java source reads bytes; it's not reading characters. And I would think it's a plus when it works on byte level, because then it can support all encodings. Maybe you don't need to work on character level at all.
However, it may matter for your specific algorithm. Let's say you've written an algorithm that is only supposed to handle Latin-1 encoding; then it's really going to work on character level, not on byte level. In that case, consider reading directly into a String or char[].
Java can do the heavy lifting for you there. There are readers in Java that will let you read a text file directly into Strings/char[], but in those cases you should of course specify the encoding when you use them. Internally a single Java character can contain up to 2 bytes of data.
Trying to convert bytes to characters manually is tricky business, unless you're working with plain old ASCII of course. The moment you see a value above 0x7F (127) (which is represented by a negative value in a byte), you're no longer working with simple ASCII. Then consider using something like new String(bytes, StandardCharsets.UTF_8); there's no need to write a decoding algorithm from scratch.
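As for the sign issue in the question: masking with & 0xFF is the standard way to treat a Java byte as an unsigned value, e.g. for indexing a 256-entry frequency table. A minimal sketch (class name made up):
public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xF6;   // ö in Latin-1; stored as -10 in Java's signed byte
        int index = b & 0xFF;   // masks off the sign extension, giving 246
        int[] freqs = new int[256];
        freqs[index]++;         // safe: index is always in 0..255
        System.out.println(b + " -> " + index);
    }
}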

java how does readObject in objectinputstream knows how many bytes to read?

In socket I/O, may I ask how ObjectInputStream's readObject knows how many bytes to read? Is the content length encapsulated in the bytes themselves, or does it simply read all the bytes available in the buffer?
I am asking this because I was referring to the Python socket how-to and it says
Now if you think about that a bit, you’ll come to realize a
fundamental truth of sockets: messages must either be fixed length
(yuck), or be delimited (shrug), or indicate how long they are (much
better), or end by shutting down the connection. The choice is
entirely yours, (but some ways are righter than others).
However, in another SO answer, @DavidCrawshaw mentioned that:
So readObject() does not know how much data it will read, so it does
not know how many objects are available.
I am interested to know how it works...
You're over-interpreting the answer you cited. readObject() doesn't know how many bytes it will read, ahead of time, but once it starts reading it is just parsing an input stream according to a protocol, that consists of tags, primitive values, and objects, which in turn consist of tags, primitive values, and other objects. It doesn't have to know ahead of time. Consider the similar-ish case of XML. You don't know how long the document will be ahead of time, or each element, but you know when you've read it all, because the protocol tells you.
The readObject() method uses a BlockDataInputStream to read the bytes. If you check readObject of ObjectInputStream, it calls
readObject0(false):
private Object readObject0(boolean unshared) throws IOException {
    boolean oldMode = bin.getBlockDataMode();
    if (oldMode) {
        int remain = bin.currentBlockRemaining();
        if (remain > 0) {
            throw new OptionalDataException(remain);
        } else if (defaultDataEnd) {
            /*
             * Fix for 4360508: stream is currently at the end of a field
             * value block written via default serialization; since there
             * is no terminating TC_ENDBLOCKDATA tag, simulate
             * end-of-custom-data behavior explicitly.
             */
            throw new OptionalDataException(true);
        }
        bin.setBlockDataMode(false);
    }

    byte tc;
    while ((tc = bin.peekByte()) == TC_RESET) {
        bin.readByte();
        handleReset();
    }
It reads from the stream using bin.readByte(); bin is a BlockDataInputStream, which in turn uses a PeekInputStream to read. That class finally uses InputStream.read().
From the description of the read method:
/**
 * Reads the next byte of data from the input stream. The value byte is
 * returned as an <code>int</code> in the range <code>0</code> to
 * <code>255</code>. If no byte is available because the end of the stream
 * has been reached, the value <code>-1</code> is returned. This method
 * blocks until input data is available, the end of the stream is detected,
 * or an exception is thrown.
 */
So basically it reads byte after byte until it encounters -1. So, as EJP mentioned, it never knows ahead of time how many bytes there are to read. Hope this helps you understand it.

Determining input for Overloaded Method

I'm running into a bit of an issue with determining whether the user input is an int or a double.
Here's a sample:
public static int Square(int x) {
    return x * x;
}

public static double Square(double x) {
    return x * x;
}
I need to figure out how to determine based on the Scanner if the input is a int or double for the above methods. However since this is my first programming class, I'm not allowed to use anything that hasn't been taught - which in this case, has been the basics.
Is there anyway of possibly taking the input as a String and checking to see if there is a '.' involved and then storing that into an int or double?
Lastly, I'm not asking for you to program it out, but rather help me think of a way of getting a solution. Any help is appreciated :)
The Scanner has a bunch of methods like hasNextInt, hasNextDouble, etc. which tell you whether the "next token read by the Scanner can be interpreted as a (whatever)".
Since you mentioned you've learned about the Scanner class, I assume its methods are available for your use. In that case, you can detect whether an input is an integer or a double, or just obtain an entire line. The methods you would be most interested in here are hasNextDouble() (returns a boolean indicating whether or not the next token in the Scanner can be read as a double) and nextDouble() (if the next token in the Scanner is in fact a double, parse it from the Scanner as one). This is probably the best direction for determining input types from a file or standard input.
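Putting that together with the overloads from the question, a sketch of the dispatch might look like this (hasNextInt() is checked first, since an int token would also parse as a double; the class name is made up):
import java.util.Scanner;

public class SquareReader {
    public static int Square(int x) { return x * x; }
    public static double Square(double x) { return x * x; }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Enter a number:");
        if (in.hasNextInt()) {            // the next token parses as an int
            System.out.println(Square(in.nextInt()));
        } else if (in.hasNextDouble()) {  // otherwise, see if it parses as a double
            System.out.println(Square(in.nextDouble()));
        } else {
            System.out.println("Not a number: " + in.next());
        }
    }
}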
Another option is to use the wrapper classes' static conversion methods. These are generally named like Integer.parseInt(str) or Double.parseDouble(str), and will convert a given String into the corresponding primitive type. See the Double class's parseDouble(String s) method for more details. It could be used in this way:
String value = "123.45"
double convertedValue = 0.0;
try {
convertedValue = Double.parseDouble(value);
} catch (NuberFormatException nfe) {
System.err.println("Not a double");
}
This method is probably best used for values that exist within the application already and need to be verified (it would be overkill to construct a Scanner on one small String for this purpose).
Finally, yet another potential (but not very clean, straightforward, or probably correct) technique is looking at the String directly and trying to find out whether it contains a decimal point or other indicators that it is in fact a double. You may be able to use indexOf(String substr) to determine whether it ever appears in the String. I suspect this method has a lot of potential problems, though (say, what if the String has multiple '.' characters?). I wouldn't suggest this route because it is error-prone and hard to follow. It might be an option if that's what the constraints are, however.
So, IMHO, your options should go as follow:
Use the Scanner methods hasNextDouble() and nextDouble()
Use the wrapper class methods Double.parseDouble(String s)
Use String methods to try and identify the value (avoid this technique at all costs if either of the above options are available).
Since you think you won't be allowed to use the Scanner methods, there are a number of alternatives you can try. You mentioned checking to see if a String contains a '.'. To do this you could use the contains method on String.
"Some words".contains("or") // evaluates to true
The problem with this approach is that there are many Strings that contain '.' but aren't floating-point numbers, for example sentences, URLs and IP addresses. However, I doubt your lecturer is trying to catch you out and will probably just be giving you ints and doubles.
So instead you could try casting. Casting a double to an int results in the decimal portion of the number being discarded.
double doubleValue = 2.7;
int castedDoubleValue = (int) doubleValue; // evaluates to 2
double integerValue = 3.0;
int castedIntegerValue = (int) integerValue; // evaluates to 3
Hopefully, that should be enough to get you started on writing a solution to the problem.
It can be checked like this:
if (scanner.hasNextDouble()) {
    System.out.println("Is double");
}

Extract first valid line of string from byte array

I am writing a utility in Java that reads a stream which may contain both text and binary data. I want to avoid having I/O wait. To do that I create a thread that keeps reading the data (and waiting for it), putting it into a buffer, so the clients can check availability and terminate the waiting whenever they want (by closing the input stream, which will generate an IOException and stop the waiting). This works very well as far as reading bytes, i.e. binary data, is concerned.
Now, I also want to make it easy for the client to read lines out of it, like .hasNextLine() and .readLine(). Without using an I/O-wait stream like a buffered stream, (Q1) how can I check whether a byte[] contains a valid Unicode line (in the form of the length of the first line)? I looked around the String/Charset API but could not find it (or did I miss it?). (NOTE: if possible I don't want to use a non-built-in library.)
Since I could not find one, I tried to create one. Without being too complicated, here is my algorithm:
1) I look from the start of the byte array until I find '\n', or '\r' without '\n'.
2) Then I cut the byte array from the start to that point and use it to create a string (with a Charset if specified) using new String(byte[]) or new String(byte[], Charset).
3) If that succeeds without an exception, we have found the first valid line and return it.
4) Otherwise, these bytes may not be a string, so I look further for another '\n', or '\r' without '\n', and the process repeats.
5) If the search ends at the end of the available bytes, I stop and return null (no valid line found).
My second question is (Q2): is the above algorithm adequate?
Just when I was about to implement it, I searched on Google and found that there are many other codes for new lines, for example U+2424, U+0085, U+000C, U+2028 and U+2029.
So my last question is (Q3): do I really need to detect these codes? If I do, will it increase the chance of false alarms?
I am well aware that recognizing something from binary is not absolute. I am just trying to find the best balance.
To sum up, I have an array of bytes and I want to extract the first valid string line from it, with or without a specific Charset. This must be done in Java, avoiding any non-built-in library.
Thank you all in advance.
I am afraid your problem is not well-defined. You write that you want to extract the "first valid string line" from your data, but whether some byte sequence is a "valid string" depends on the encoding. So you must decide which encoding(s) you want to use in testing.
Sensible choices would be:
the platform default encoding (Java property "file.encoding")
UTF-8 (as it is most common)
a list of encodings you know your clients will use (such as several Russian or Chinese encodings)
What makes sense will depend on the data, there's no general answer.
Once you have your encodings, the problem of line termination should follow, as most encodings have rules on what terminates a line. In ASCII or Latin-1, LF, CR-LF and LF-CR would suffice. In Unicode, you need all the ones you listed above.
But again, there's no general answer, as new line codes are not strictly regulated. Again, it would depend on your data.
First of all, let me ask you a question: is the data you are trying to process legacy data? In other words, are you responsible for the format of the input stream that you are trying to consume here?
If you are indeed controlling the input format, then you probably want to take the binary-vs-text decision out of the Q1 algorithm. For me this algorithm has one troubling part:
`4). Otherwise, these bytes may not be a string, so I look further to
another '\n' or '\r' w/o '\n'. and this process repeat.`
Are you dismissing the input prior to the line terminator and taking the bytes that start immediately after, or trying to re-evaluate the string with what are now two line terminators? If the former, you may have broken the binary data; if the latter, you may still not parse the text correctly.
I think having well defined markers for binary data and text data in your stream will simplify your algorithm a lot.
A couple of words on the String constructor: new String(byte[], Charset) will not throw any exception if the byte array is not valid in that particular Charset; instead it will create a string full of replacement characters, often shown as question marks (probably not what you want). If you want an exception to be thrown, you should use a CharsetDecoder.
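A sketch of the CharsetDecoder approach, which does throw on malformed input (StandardCharsets is Java 7+; Charset.forName("UTF-8") works on older versions; the class name is made up):
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictDecodeDemo {
    public static void main(String[] args) {
        byte[] bytes = { (byte) 0xF6 };  // valid Latin-1, but malformed as UTF-8
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            String s = decoder.decode(ByteBuffer.wrap(bytes)).toString();
            System.out.println("Decoded: " + s);
        } catch (CharacterCodingException e) {
            System.out.println("Not valid UTF-8: " + e);
        }
    }
}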
Also note that in Java 6 there are two constructors that take a charset:
String(byte[] bytes, String charsetName) and String(byte[] bytes, Charset charset). I did some simple performance tests a while ago, and the constructor taking the String charsetName is magnitudes faster than the one that takes a Charset object (question to Sun: bug or feature?).
I would try this:
make the IO reader put strings/lines into a thread-safe collection (for example some implementation of BlockingQueue)
the main code keeps only a reference to the synced collection and checks for new data when needed, e.g. with queue.peek(). It doesn't need to know about the IO thread or the stream.
Some pseudo-Java code (imports and the actual stream source still omitted):
class IORunner extends Thread {
    private final BufferedReader reader;
    private final BlockingQueue<String> outputQueue;

    IORunner(InputStream in, BlockingQueue<String> outputQueue) throws IOException {
        this.reader = new BufferedReader(new InputStreamReader(in, "utf-8"));
        this.outputQueue = outputQueue;
    }

    public void run() {
        try {
            String line;
            while ((line = reader.readLine()) != null)
                this.outputQueue.put(line);
        } catch (IOException | InterruptedException e) {
            // stream closed or thread interrupted: stop reading
        }
    }
}

class Main {
    public static void main(String args[]) throws Exception {
        // ...
        BlockingQueue<String> dataQueue = new LinkedBlockingQueue<String>();
        new IORunner(myStreamFromSomewhere, dataQueue).start();
        while (true) {
            if (!dataQueue.isEmpty()) { // can also use .peek() != null
                System.out.println(dataQueue.take());
            }
            Thread.sleep(1000);
        }
    }
}
The collection decouples the input (stream) from the main code. You can also limit the number of lines stored / memory used by creating the queue with a limited capacity (see the BlockingQueue docs).
The BufferedReader handles the checking of new lines for you :) The InputStreamReader handles the charset (I recommend setting one yourself, since the default changes depending on the OS, etc.).
The java.text package is designed for this sort of natural-language operation. The static method BreakIterator.getLineInstance() returns an iterator that detects line breaks. You do need to know the locale and encoding for best results, though.
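A minimal sketch of that API (note the line instance finds permissible wrap positions, so you would still check for actual terminator characters; the class name is made up):
import java.text.BreakIterator;
import java.util.Locale;

public class BreakDemo {
    public static void main(String[] args) {
        String text = "first line\nsecond line";
        BreakIterator it = BreakIterator.getLineInstance(Locale.US);
        it.setText(text);
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            System.out.println("[" + text.substring(start, end) + "]");
        }
    }
}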
Q2: The method you use seems reasonable enough to work.
Q1: Can't think of something better than the algorithm that you are using
Q3: I believe it will be enough to test for \r and \n. The others are too exotic for usual text files.
I just solved this to get a test stub working for Datagram: I did byte[] varName = someString.getBytes(); then final int len = varName.length; then sent the int with DataOutputStream, followed by the byte array; on the receiving end, just do readInt(), then read that many bytes.
Not a lib, and not hard to do either. Just read up on readUTF and do what they did for the bytes.
The string should construct from the byte array recovered that way; if not, you have other problems. If the string can be reconstructed, it can be buffered... no?
You may be able to just use readUTF()/writeUTF() on DataStreams, why not?
{ edit: per OP's request }
// Sending end
String data = "fdsfjal;sajssaafe8e88e88aa"; // fingers pounding keyboard
DataOutputStream dataOutputStream = new DataOutputStream(destination); // wraps whatever OutputStream you send on
final byte[] payload = data.getBytes();
dataOutputStream.writeInt(payload.length); // length header first
dataOutputStream.write(payload);           // then the bytes themselves
dataOutputStream.flush();
dataOutputStream.close();

// Receiving end
DataInputStream dataInputStream = new DataInputStream(source);
final int sizeToRead = dataInputStream.readInt();
byte[] datasink = new byte[sizeToRead];
dataInputStream.readFully(datasink, 0, sizeToRead);
dataInputStream.close();

// constructor: String(byte[] bytes, int offset, int length)
final String result = new String(datasink, 0, sizeToRead);
// continue coding here
Do me a favor, keep the heat off of me. This was written very fast right in the posting tool; the code probably contains substantial errors. It's faster for me just to explain it by writing Java; there will be others who can translate it to other languages, which you can too if you wish it in another codebase. You will need exception trapping and so on; just do a compile and start fixing errors. When you get a clean compile, start over from the beginning and look for blunders (that's what a mistake is called in engineering: a blunder).
