I have Tomcat running on a Linux server.
My webapp creates text files that must be imported by another external system, which accepts DOS/Windows-formatted files.
FileWriterWithEncoding writer = new FileWriterWithEncoding(file, "UTF-8", true);
PrintWriter printer = new PrintWriter(writer);
How can I create such DOS formatted files with Java on a Linux server?
Thank you.
Make sure that the line endings you write are "\r\n"; this is the Windows convention (a carriage return character followed by a line feed character).
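For example, a minimal sketch building on the FileWriterWithEncoding/PrintWriter pair from the question (the file name and the lines written are illustrative):
// A sketch, assuming the Commons IO FileWriterWithEncoding from the question;
// "export.txt" and the line contents are illustrative.
File file = new File("export.txt");
PrintWriter printer = new PrintWriter(new FileWriterWithEncoding(file, "UTF-8", true));
printer.print("first line");
printer.print("\r\n");      // DOS/Windows line ending: carriage return + line feed
printer.print("second line");
printer.print("\r\n");
printer.close();            // flush and release the file
Note that println() would use the platform's line.separator, which is "\n" on Linux, so writing "\r\n" explicitly is what forces the DOS format.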
With a Java program, I am trying to read a file that contains the name of a file on my Linux filesystem. It points to a file that was generated on a Windows OS and has accents in its name.
Example of this kind of file "input.csv" :
MYFILE_tést.doc;1
The java program parses the file in question and verifies if that file exists. But for those lines containing accents, file.exists() in Java always returns false.
The file "input.csv" is generated in Windows and encoded in ISO-8859-1
Linux locales are configured like this :
LANG=en_US.ISO-8859-1
LC_CTYPE="en_US.ISO-8859-1"
LC_NUMERIC="en_US.ISO-8859-1"
LC_TIME="en_US.ISO-8859-1"
LC_COLLATE="en_US.ISO-8859-1"
LC_MONETARY="en_US.ISO-8859-1"
LC_MESSAGES="en_US.ISO-8859-1"
LC_PAPER="en_US.ISO-8859-1"
LC_NAME="en_US.ISO-8859-1"
LC_ADDRESS="en_US.ISO-8859-1"
LC_TELEPHONE="en_US.ISO-8859-1"
LC_MEASUREMENT="en_US.ISO-8859-1"
LC_IDENTIFICATION="en_US.ISO-8859-1"
LC_ALL=en_US.ISO-8859-1
When reading the CSV file in Java, I'm forcing the encoding:
csvFile = new BufferedReader(new InputStreamReader(new FileInputStream(FILE_CSV), "ISO-8859-1"));
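The parsing and existence check is roughly the following (a simplified sketch; the variable names are illustrative):
// Simplified sketch of the loop described above; names are illustrative.
String line;
while ((line = csvFile.readLine()) != null) {
    String fileName = line.split(";")[0];                     // e.g. "MYFILE_tést.doc"
    File f = new File(fileName);
    System.out.println(fileName + " exists: " + f.exists());  // false for the accented names
}
csvFile.close();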
I tried switching to UTF-8 (OS locales + file encoding) and playing with the -Dfile.encoding=ISO-8859-1 JVM parameter, but the problem remains.
The problem doesn't occur if I hardcode the filename with the accents in the source code instead of reading it from the CSV file.
Any ideas on how to fix this?
Thank you for your help
I have created a Java GUI to generate .bat files.
When I write a bat containing a string like "L’uomo più forte", Notepad++ shows this: "L?uomo pi— forte"
Here is the code:
FileOutputStream fos = new FileOutputStream(bat);
Writer w = new BufferedWriter(new OutputStreamWriter(fos, "Cp850"));
String stringa = "L’uomo più forte"
w.write(stringa);
w.write("\n");
w.write("pause");
w.write("\n");
w.flush();
w.close();
I had to use Cp850 for DOS use. With the default charset, the bat file gives an error.
Solutions?
Instead of using "Cp850":
Writer w = new BufferedWriter(new OutputStreamWriter(fos, "Cp850"));
Try using "UTF-8":
Writer w = new BufferedWriter(new OutputStreamWriter(fos, "UTF-8"));
Don't forget to place a semicolon (;) at the end of your stringa declaration, and wrap the code in a try/catch block to handle the possible FileNotFoundException, UnsupportedEncodingException, and IOException.
Also, for Notepad, to get a new line you need to supply the \r character as well:
w.write("\r\n");
You may do the following:
Open a cmd.exe command prompt session.
Execute the command echo L’uomo più forte>text.txt.
Open the text.txt file and copy the string from there.
Paste it into your code.
For example, I created the text.txt file, renamed it to test.bat, and appended an echo command. This is my test.bat file:
@echo off
echo L'uomo pi— forte
... and this is the output:
L'uomo più forte
If some character is not displayed correctly, then the code page used does not contain such a character.
Note: I suggest you use the standard Windows Notepad. Notepad++ may cause strange output in these cases.
I read lines from a .txt file into a String list. I show the text in a JTextPane. The encoding is fine when running from Eclipse or NetBeans, however if I create a jar, the encoding is not correct. The encoding of the file is UTF-8. Is there a way to solve this problem?
Your problem is probably that you're opening a reader using the platform encoding.
You should manually specify the encoding whenever you convert between bytes and characters. If you know that the appropriate encoding is UTF-8 you can open a file thus:
FileInputStream inputFile = new FileInputStream(myFile);
try {
    Reader reader = new InputStreamReader(inputFile, "UTF-8");
    // Maybe buffer the reader and do something with it.
} finally {
    inputFile.close();
}
Libraries like Guava can make this whole process easier.
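As an alternative to wiring up the streams by hand, on Java 7+ the java.nio API can read every line with an explicit charset in a single call (the path is illustrative):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Read the whole file with an explicit charset instead of the platform default.
List<String> lines = Files.readAllLines(Paths.get("myFile.txt"), StandardCharsets.UTF_8);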
Have you tried to run your jar as
java -Dfile.encoding=utf-8 -jar xxx.jar
Given:
try {
    FileWriter fw = new FileWriter(someFileName);
    BufferedWriter bw = new BufferedWriter(fw);
    bw.write("Hello Java");
} catch ...
} finally {
    bw.close();
}
It works perfectly on Windows, but not on Unix.
Remark: the created file on Unix has full 777 permissions!
What should I do to get it working on Unix?
Thanks,
Roxana
Try doing a
bw.flush();
before closing the file (inside the try block).
Maybe the information is still in the buffer, so it doesn't get reflected in the file contents.
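For instance (a sketch keeping the someFileName variable from the question):
BufferedWriter bw = null;
try {
    bw = new BufferedWriter(new FileWriter(someFileName));
    bw.write("Hello Java");
    bw.flush();              // push the buffered content out to the file
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (bw != null) {
        try {
            bw.close();      // close() also flushes, when it is actually reached
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}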
You should give us some more code, especially the section where someFileName is specified. Since Java treats the file separator differently across platforms, your problem could be that you're creating/opening the file on Windows but not on Unix, and your catch block is handling the error silently, but you didn't provide its contents.
Take a look at the file.separator system property:
"file.separator" --> Character that separates components of a file path. This is "/" on UNIX and "\" on Windows.
I aim to make an index of all files with a particular extension and store it in index.txt. I am getting all the right results, but the files do not end up on separate lines. This is a snapshot of my code:
OutputStream f0 = new FileOutputStream("index.txt");
String ind[] = f.list(only); // logic for getting only relevant files
for (int i = 0; i < ind.length; i++)
{
    File n = new File(path + "/" + ind[i]);
    System.out.println(ind[i] + " is a " + exten + " file");
    ind[i] += "\n"; // doesn't work
    f0.write(ind[i].getBytes()); // here I am writing the filenames
}
Is it due to the getBytes() method overlooking the "\n"? Please tell me what to do. I want to insert a new line each time through the for loop.
One major edit: I get the desired result when I open the file with Notepad++ or WordPad, but when I open it with Notepad everything is on the same line. Please explain this too!
Try writing:
System.getProperty("line.separator")
instead of \n
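In the loop from the question, that would be (a sketch):
// Append the platform's line separator instead of a hard-coded "\n".
f0.write((ind[i] + System.getProperty("line.separator")).getBytes());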
Instead of using a FileOutputStream, I'd use a PrintWriter.
PrintWriter out = new PrintWriter("index.txt");
String ind[] = f.list(only); // logic for getting only relevant files
for (int i = 0; i < ind.length; i++)
{
    File n = new File(path + "/" + ind[i]);
    System.out.println(ind[i] + " is a " + exten + " file");
    out.println(ind[i]); // println appends the line separator for you
}
out.close(); // close (and flush) the writer, or the output may never reach the file
Is there any reason you are working at such a low level of I/O in Java? First of all you should be using Writers instead of OutputStreams.
And then if you use PrintWriter you can do away with the getBytes piece:
PrintWriter f0 = new PrintWriter(new FileWriter("index.txt"));
And then later...
f0.print(ind[i]);
And finally to your question, outside the loop simply
f0.println();
There is a missing assumption here.
If you assume that your text file should have MS Windows line separators (meant for Windows platforms), then you should use \r\n.
If you assume that your text file should have Unix-like line separators (meant for GNU/Linux, AIX, Xenix, Mac OS X, FreeBSD, etc.), then you should use \n.
If you assume that your text file should have Mac oldschool line separators (meant for Mac OS up to version 9, Apple II family, OS-9, etc.), then you should use \r.
If you assume that your text file should have line separators of the kind of the platform your program is run from, then you should use System.getProperty("line.separator") (or print the new line with a .println()).
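For instance, a small sketch that looks the separator up once and reuses it (the output file and the lines written are illustrative):
// "\r\n" on Windows, "\n" on Unix-like systems, depending on where this runs.
String newline = System.getProperty("line.separator");
try (Writer writer = new FileWriter("out.txt")) {     // "out.txt" is illustrative
    writer.write("first line" + newline);
    writer.write("second line" + newline);
}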
Try this on your FileWriter instance:
writer.write("\r\n");