My buffered writer is writing random garbage into my txt file. I use int nodes = Integer.valueOf(NODES_TEXT_FIELD.getText()); to store the value of a TextField that should only accept ints.
this is my writer:
private static void writeOnFile(BufferedWriter writer, int nodes) {
    try {
        System.out.println(nodes);
        System.out.println("Last check before write");
        writer.write(nodes);
        System.out.println(nodes);
    } catch (IOException e) {
        JOptionPane.showMessageDialog(null, "Failed to write data on file");
        e.printStackTrace();
    }
}
My output:
2
Last check before write
2
and in the text file I found: '?' (which changes to other garbage depending on what number you input).
Does anybody have any idea of what might be wrong? I've been stuck on this for five hours now.
That happens because write(int) writes a character. From the Javadoc:
Writes a single character. The character to be written is contained in the 16 low-order bits of the given integer value; the 16 high-order bits are ignored.
Parameters:
c - int specifying a character to be written
You can use Writer#write(String):
writer.write(String.valueOf(nodes));
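As a minimal sketch of the difference (the file name here is made up), write(int) treats the argument as a character code, while write(String) writes the formatted text:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// Demonstrates why write(int) produces "garbage": the int is interpreted
// as a character code, not formatted as a number.
public class WriteIntDemo {
    public static void main(String[] args) throws IOException {
        try (BufferedWriter writer = new BufferedWriter(new FileWriter("demo.txt"))) {
            writer.write(50);                 // writes the char U+0032, i.e. '2'
            writer.write(2);                  // writes the unprintable control char U+0002
            writer.write(String.valueOf(2));  // writes the text "2"
        }
    }
}
```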
Try this:
Writer wr = new FileWriter("thefileToWriteTo.txt");
wr.write(String.valueOf(nodes));
wr.close();
You can always extract the writing into a loop if that is what you are doing with your nodes, or add some kind of escaping. It would be more helpful if you explained what you are actually trying to achieve by writing this to a file, as we might be able to advise you on that.
I have to write a program which can split and merge files with various extensions. While splitting and merging it should use multiple threads. My code can do only a half of the task - if I don't use multithreading, it splits the file perfectly. If I do use multithreading, it splits the file, but saves only the first part several times.
What should I fix to make it work?
A method of Splitter.class
public void splitFile(CustomFile customFile, int dataSize) {
    for (int i = 1; i <= partsNumber; i++) {
        FileSplitterThread thread = new FileSplitterThread(customFile, i, dataSize);
        thread.start();
    }
}
Run method of my thread:
@Override
public void run() {
    try {
        fileInputStream = new FileInputStream(initialFile.getData());
        byte[] b = new byte[dataSize];
        String fileName = initialFile.getName() + "_part_" + index + "." + initialFile.getExtension();
        fileOutputStream = new FileOutputStream(fileName);
        int i = fileInputStream.read(b);
        fileOutputStream.write(b, 0, i);
        fileOutputStream.close();
        fileOutputStream = null;
    } catch (IOException e) {
        e.printStackTrace();
    }
}
The reason is that you cannot achieve multi-threaded file splitting with just an InputStream. Since every thread reads the file from the beginning, each one gets the same bytes.
For a simple file splitting mechanism, the following could be the general steps:
Get the size of the file (data size)
Chunk it into offsets for each thread to read. For example, with 2 threads and 1000 bytes of data, the offsets are 0 and 500, with a read length of 500: the first thread reads positions 0 to 499, and the second starts at 500 and reads up to 999.
Get two InputStreams and position them using Channel (here is a good post, Java how to read part of file from specified position of bytes?)
Encapsulate the above info: InputStream, offset, length to read, output file name etc. and provide it to each of the threads
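The steps above can be sketched roughly as follows. This is not the asker's FileSplitterThread; it is a hypothetical standalone version that uses FileChannel positioned reads so each thread sees only its own chunk (the part-file naming and the chunk math are assumptions, and it assumes parts does not exceed the file size):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Each thread opens its own channel and reads at its own offset,
// so no two threads get the same bytes.
public class SplitSketch {
    public static void split(Path source, int parts) throws IOException, InterruptedException {
        long size = Files.size(source);
        long chunk = (size + parts - 1) / parts;   // ceiling division
        Thread[] threads = new Thread[parts];
        for (int i = 0; i < parts; i++) {
            final long offset = i * chunk;
            final long length = Math.min(chunk, size - offset);
            final Path target = Paths.get(source + "_part_" + (i + 1));
            threads[i] = new Thread(() -> {
                try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
                     FileChannel out = FileChannel.open(target,
                             StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                    ByteBuffer buf = ByteBuffer.allocate((int) length);
                    in.read(buf, offset);          // positioned read, independent of other threads
                    buf.flip();
                    out.write(buf);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();         // wait so callers see finished parts
    }
}
```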
I have written an event listener that is supposed to read any messages of a specific type from a receiver.
Here is my event listener:
class SerialPortReader implements SerialPortEventListener {
    public void serialEvent(SerialPortEvent event) {
        if (event.isRXCHAR() && event.getEventValue() > 0) {
            try {
                byte[] buffer = SP.readBytes(6);
                String type = new String(buffer);
                if (type.equals("$GPGGA")) {
                    String line = "";
                    line = line + type;
                    buffer = SP.readBytes(66);
                    String values = new String(buffer);
                    line = line + values;
                    writer.write(line + "\n");
                }
            } catch (SerialPortException ex) {
                System.out.println(ex);
            } catch (IOException ex) {
                System.out.println(ex);
            }
        }
    }
}
Right now, after I check that the message is of the correct type, I just read 80 bytes of data, which is roughly the size of the message.
However, sometimes the message is not full and is therefore shorter than usual.
This is the message structure:
As you can see, the message ends with <CR><LF>.
I would like to modify my method so it reads one byte at a time and stops reading once it hits the end of the message. How can I catch the <CR><LF> inside the byte array?
Any help would be appreciated. Thank you.
It depends on the definition of "messages": are they fixed-length or delimited by a certain sequence? In your current code, you're reading a "header" that is 6 bytes long and a "message" that is 66 bytes long. However, it appears that you actually don't know the length a priori, and instead the message is newline-terminated. A couple of pointers:
You're reading bytes from the stream, then turning them into a String by using the String(byte[]) ctor. The documentation states that this uses the default charset for your platform, which may be UTF-8, Latin-1 or whatever regional default. If you are communicating with a device over a serial port, this is probably not what you want since the device is likely to have a single, specific charset for messages (maybe ASCII?). Investigate that point and use either this String ctor or, if you want to be notified when the input contains undecodable garbage, the CharsetDecoder class.
If the messages are text-based and newline-delimited, you should definitely use a BufferedReader to read from the stream. Assuming that your SP serial port is an InputStream, you could create a BufferedReader over it, and then call the readLine() method on the reader. The reader will keep requesting bytes from the stream until it sees a newline, then return the whole line to you as a String. Reader objects also encapsulate the Charset I was talking about before.
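A minimal sketch of that reader-based approach, assuming the serial port can be exposed as an InputStream (a byte array with sample NMEA data stands in for the port here, and US-ASCII is an assumption since NMEA sentences are plain ASCII):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Collects every $GPGGA sentence from the stream, however long each
// sentence is, by letting BufferedReader find the line terminators.
public class LineReaderSketch {
    public static List<String> readGpggaLines(InputStream in) throws IOException {
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.US_ASCII));
        List<String> result = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {   // readLine() stops at CR, LF, or CRLF
            if (line.startsWith("$GPGGA")) {
                result.add(line);                      // complete sentence, whatever its length
            }
        }
        return result;
    }
}
```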
I have managed to solve my problem.
pretty easy solution:
String tmp = SP.readString();
String[] msgs = tmp.split("\r\n");
After reading the string from the serial port, I just split it by the end-of-line sequence.
I have a block of hexadecimal data which I want to write to a file as-is.
Block
000100050000470040002E000000000099009900500000000000490067008D0000000000890081007600000000004C002F0027000000000200050000470040002E0000000000000D00590065006C006C006F00770020006F006C0069007600650000000099009900500000000000000D004F006C006900760065002000790065006C006C006F007700000000490067008D0000000000000D00440069007300740061006E007400200062006C00750065000000008900810076000000000000110050006500610072006C0020006D006F00750073006500200067007200650079000000004C002F00270000000000000F004D00610068006F00670061006E0079002000620072006F0077006E0000
The block above represents an Adobe .aco file (which stores a color palette).
A similar file opened in a hex editor shows:
I tried to write the given block using
Code
try {
    File ACO = new File(f.getAbsolutePath(), "NameRandom.aco");
    ACO.createNewFile();
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(ACO)));
    try {
        writer.write(<-- Above Block -->);
    } finally {
        writer.close();
    }
} catch (IOException e) {
    e.printStackTrace();
}
But the above code shows differently in a hex editor (as in the following image).
I want to write the given block as-is, in its current state, to the file.
You are getting different hex codes because you are writing the string in some text encoding. If you wish to write such byte data, first convert the data from a string to a byte[], and then write the bytes.
To convert see: Convert a string representation of a hex dump to a byte array using Java?
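A minimal sketch of that conversion (the shortened hex string and output file name are placeholders; use the full block from the question):

```java
import java.io.FileOutputStream;
import java.io.IOException;

// Decodes each pair of hex digits into one byte, then writes the raw
// bytes with a FileOutputStream, so no charset is involved.
public class HexToFile {
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        String block = "0001000500004700";   // first bytes of the palette, for illustration
        try (FileOutputStream fos = new FileOutputStream("NameRandom.aco")) {
            fos.write(hexToBytes(block));    // raw bytes, written exactly as given
        }
    }
}
```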
I am obliged to open an ObjectOutputStream, then write an object, and finally close the stream. I do that multiple times using the following code:
// try-with-resources is very practical
try (FileOutputStream fos = new FileOutputStream("G.txt");
     ObjectOutputStream oos = new ObjectOutputStream(fos)) {
oos.writeObject(v);
} catch (IOException e) {
e.printStackTrace();
}
But what I want is to append the serialized objects to "G.txt". How do I do that, please? I can't figure it out.
You can put the FileOutputStream into append mode for the second and subsequent writes, but you must also use an AppendingObjectOutputStream for the second and subsequent writes. You will find this class all over the Internet. It overrides writeStreamHeader() to do nothing. Otherwise you will get a StreamCorruptedException: invalid type code AC on reading, when you get to the join.
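A sketch of the pattern described above (the class and helper names are placeholders; the subclass follows the common convention of overriding writeStreamHeader() to do nothing):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// For the second and subsequent writes, suppress the stream header so the
// file remains readable as a single object stream.
class AppendingObjectOutputStream extends ObjectOutputStream {
    AppendingObjectOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        // intentionally empty: the header was already written by the
        // first, ordinary ObjectOutputStream
    }
}

public class AppendDemo {
    public static void writeObject(Object v, File file) throws IOException {
        boolean append = file.exists() && file.length() > 0;
        try (FileOutputStream fos = new FileOutputStream(file, append);
             ObjectOutputStream oos = append
                     ? new AppendingObjectOutputStream(fos)
                     : new ObjectOutputStream(fos)) {
            oos.writeObject(v);
        }
    }
}
```

A single ObjectInputStream can then read all the appended objects back in sequence.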
You could use FileOutputStream(String name, boolean append) constructor.
I'm using the twitter4j package for an information retrieval class and have collected some tweets. However, for the next part of the assignment, I am to use Lucene to index on the tweets. In order to do this, my thought was to save the tweets as JSON Strings to a file and then reread them when needed. However, I'm running into an error.
When the file is written, I can see the entire JSON object just fine. The total object is quite large (2500 characters). However, when reading back from the file, I get an Unterminated string at xxxx error. I am using the TwitterObjectFactory methods to both write and read the string. Here is some sample code:
Writing:
public void onStatus(Status status) {
    try {
        String jsonString = TwitterObjectFactory.getRawJSON(status);
        output.write(jsonString + "\n");
        numTweets++;
        if (numTweets > 10) {
            synchronized (lock) {
                lock.notify();
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Reading:
Scanner input = new Scanner(file);
while (input.hasNext()) {
    Status status = TwitterObjectFactory.createStatus(input.nextLine());
    System.out.println(status.getUser().getScreenName());
}
This works only some of the time. If I run the program multiple times and get many tweets, the program almost always crashes after 2-3 tweets have been read from the file, always with the same error. If you'd like to replicate the code, you can follow this example. I've added a synchronized block in order to close the stream after 10 tweets, but it's not necessary to replicate the error.
Can someone explain what is happening? My guess is that there's something wrong with the way I'm encoding the JSON into the file. I'm using BufferedWriter wrapping an OutputStreamWriter in order to output in UTF-8 format.
Edit: I do close the stream. Here's the bottom snippet of the code:
twitterStream.addListener(listener);
twitterStream.sample("en");
try {
    synchronized (lock) {
        lock.wait();
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
twitterStream.clearListeners();
twitterStream.cleanUp();
twitterStream.shutdown();
output.close();
You probably need to flush your output before you notify the reader. Otherwise, parts of your String will stay in the buffer.
public void onStatus(Status status) {
    try {
        String jsonString = TwitterObjectFactory.getRawJSON(status);
        output.write(jsonString + "\n");
        output.flush();
        numTweets++;
        if (numTweets > 10) {
            synchronized (lock) {
                lock.notify();
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I don't see the code where you properly close the BufferedWriter. If you don't close it manually before the first program ends, data might remain in the internal buffer and never get written to the file.
You can also try to open the file in a text editor and look at the contents. Tools like http://codebeautify.org/jsonviewer or http://jsonlint.com/ allow you to validate/beautify the contents to see errors.
Lastly, try new BufferedReader(new InputStreamReader(new FileInputStream(file), "UTF-8")). Maybe non-ASCII characters in the input are confusing Scanner.
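A minimal sketch of the symmetric UTF-8 write/read the answers above suggest (the file name and helper names are made up); the explicit charset on both sides keeps multi-byte characters in tweets from being misdecoded, and try-with-resources flushes and closes the writer so nothing stays buffered:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

// Writes and reads one line with an explicit UTF-8 charset on both sides.
public class Utf8LinesDemo {
    public static void writeLine(File file, String line) throws IOException {
        try (BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                new FileOutputStream(file), StandardCharsets.UTF_8))) {
            out.write(line);
            out.newLine();
        }   // close() flushes the buffer, so the full line reaches disk
    }

    public static String readFirstLine(File file) throws IOException {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream(file), StandardCharsets.UTF_8))) {
            return in.readLine();
        }
    }
}
```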