java.io.IOException: No such file or directory - java

I am encountering an issue when saving/creating a file using Java.
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[na:1.7.0_79]
My application runs on Linux, but the target directory is a Windows mount (the place where I try to store the file).
The exception is thrown every time I try to create a file whose name contains Chinese characters.
Could this be caused by an encoding difference between Linux and Windows?
When I run and store on the same OS (app on Linux storing to Linux, and likewise on Windows), it runs smoothly.
Any help is much appreciated.
The code I use to create the file:
File imgPath = new File(fullpath.toString());
if (!imgPath.exists()) {
    FileUtils.forceMkdir(imgPath);
    imgPath.setWritable(true, false);
}
fullpath.append(File.separator).append(fileName);
outputStream = new FileOutputStream(new File(fullpath.toString()));
Thanks a lot.

Note: I'm a fairly new user and can't comment directly yet (only on my questions and answers so far), so I'm posting this as an answer.
Windows uses UTF-16 while Linux uses UTF-8 (assuming you haven't installed anything extra to change the defaults). UTF-8 and UTF-16 support the same range of characters; if I remember correctly, the difference has to do with storage (UTF-8 code units start at 8 bits, UTF-16 at 16). Either way, they're stored and read a little differently. An InputStreamReader converts characters from their external representation in the specified encoding to the internal representation. This Stack Overflow post (Difference between UTF-8 and UTF-16?) covers exactly how it's done in bytes: they're the same for the basics, but different for others, like Chinese characters. I would suggest looking for solutions along that line (I have to get to class!). I could be entirely wrong, but this is probably a good starting place. Good luck.
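If it helps to test the encoding hypothesis, here is a minimal sketch (the Chinese file name is a made-up example) that prints the encodings the JVM is actually using on each machine, and attempts the create through java.nio, which usually reports a more specific error than java.io.File:

```java
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Path;

public class EncodingCheck {
    public static void main(String[] args) throws Exception {
        // file.encoding drives text I/O defaults; sun.jnu.encoding (on
        // OpenJDK) drives how file *names* cross the native-API boundary.
        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));
        System.out.println("defaultCharset() = " + Charset.defaultCharset());

        // Try creating a file with a Chinese name somewhere writable:
        Path dir = Files.createTempDirectory("enc-test");
        Path file = dir.resolve("测试.txt");   // hypothetical filename
        Files.createFile(file);
        System.out.println("created: " + file);
    }
}
```

Running this on both the Linux box and against the Windows mount should show whether the two JVMs disagree about the name encoding.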

Related

How to send UTF-8 command line data from PHP to Java for correct encoding

I am trying to pass a UTF-8 string as a command line parameter from PHP to a Java program.
When I view the string in the PHP debugger, it shows correctly: Présentation
Yet when I look at the arg[0] data in the Java debugger (and the returned value passed back to the PHP program) I see: Pr??sentation
I have tried the Java code below and neither ISO_8859_1 nor UTF_8 return the proper results.
I've looked here on Stack Overflow (Translate UTF-8 character encoding function from PHP to Java) as well as other sites and still cannot make sense of what I am doing wrong.
Everything seems to work fine in PHP, yet Java is doing something to the data right from the start, which suggests it perhaps needs additional processing before or after I call the code below.
This is my first go at dealing with international characters. Any help is greatly appreciated. Thank you!
Edit: I am debugging on Windows remotely - the PHP and Java are being run on an Ubuntu system. But since both the PHP code and the Java code it calls reside on the Linux-based system, there should not be any issue with Windows command-line Java and UTF-8. I had read here on Stack Overflow that this was an issue for some in the recent past.
byte[] test_str_1 = args[0].getBytes(StandardCharsets.ISO_8859_1);
System.out.println(test_str_1);
byte[] test_str_2 = args[0].getBytes(StandardCharsets.UTF_8);
System.out.println(test_str_2);
The problem has been solved using the solution provided here:
Unicode to PHP exec
Everyone's help got me on the right track. It was indeed a locale issue, but not at the OS level. Instead it was with PHP's locale.
Another user had a similar issue, and it was fixed by adding the following code to the PHP script before executing the command line that calls the Java program:
$locale = 'en_US.utf-8';
setlocale(LC_ALL, $locale);
putenv('LC_ALL='.$locale);
So now, in the Java code, the args[0] param is displayed correctly, and the processed text, stored in a file and then sent back to the PHP script, is received properly. It took a bit of looking up byte values, corresponding UTF-8 encodings, and the like before I could see that PHP was translating what was a correct string just before exec() into a different string during the exec() call. During this call the UTF-8 bytes 0xC3 0xA9 for "é" (Unicode U+00E9) were turned into 0x3F 0x3F (two ASCII question mark characters).
During my searching here on Stack Overflow I saw a warning not to use literals (e.g. "Présentation"), and once I backtracked the data to the caller it became evident that the issue involved the actual call to exec().
Hopefully others new to Unicode processing can benefit from this information.
Thanks for everyone's input which pointed me in the right direction.
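For reference on the Java side: the System.out.println(byte[]) calls in the question print the array reference (something like [B@6d06d69c), not the bytes. A small sketch that dumps the actual UTF-8 bytes instead (the helper name is mine; a correctly received "é" shows up as c3 a9, a mangled one as 3f):

```java
import java.nio.charset.StandardCharsets;

public class ArgBytes {
    // Hex-dump the UTF-8 bytes of a string (helper name is mine).
    static String utf8Hex(String s) {
        StringBuilder sb = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            sb.append(String.format("%02x ", b & 0xff));
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // In the real program this would be args[0]; hard-coded so the sketch runs.
        String s = args.length > 0 ? args[0] : "Présentation";
        System.out.println(utf8Hex(s));
    }
}
```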

Java UTF8 encoding issues with Storm

Pretty desperate for help after 2 days trying to debug this issue.
I have some text that contains unicode characters, for example, the word:
korte støvler
If I run code that writes this word to a file on one of the problem machines, it works correctly. However, when I write the file exactly the same way in a storm bolt, it does not encode correctly, and the ø character is replaced with question marks.
In the storm_env.ini file I have set
STORM_JAR_JVM_OPTS:-Dfile.encoding=UTF-8
I also set the encoding as UTF-8 in the code, and in mvn when it is packaged.
I have run tests on the boxes to check JVM default encodings, and they are all UTF-8.
I have tried 3 different methods of writing the file and all cause the same issue, so it is definitely not that.
This issue was fixed by simply building another machine on EC2. It had exactly the same software versions and configuration as the boxes with issues.
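For anyone hitting the same thing: one way to take the default encoding out of the equation entirely, wherever the bolt runs, is to pass the charset explicitly on every write and read. A minimal sketch (file name and content are made up):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitCharsetWrite {
    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("storm-test", ".txt");
        String word = "korte støvler";

        // Passing the charset explicitly makes the output identical on every
        // box, whatever -Dfile.encoding or the platform default happens to be.
        Files.write(out, word.getBytes(StandardCharsets.UTF_8));

        String back = new String(Files.readAllBytes(out), StandardCharsets.UTF_8);
        System.out.println(back);   // korte støvler, with the ø intact
    }
}
```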

Encoding issue on filename with Java 7 on OSX with jnlp/webstart

I have this problem that has been dropped on me, and it has been a couple of days of unsuccessful searches and workaround attempts.
I have an internal Java Swing program, distributed via JNLP/Web Start to OSX and Windows computers, that, among other things, downloads some files from WebDAV.
Recently, on a test machine with OSX 10.8 and Java 7, filenames and directory names with accented characters started having those replaced by question marks.
No problem on OSX with versions of Java before 7.
example :
XXXYYY_è_ABCD/
becomes
XXXYYY_?_ABCD/
Using java.text.Normalizer (NFD, NFC, NFKD, NFKC) on the original string, the result is different but still wrong:
XXXYYY_e?_ABCD/
or
XXXYYY_e_ABCD/
I know, from correspondence between [andrew.brygin at oracle.com] and [mik3hall at gmail.com], that
Yes, file.encoding is set based on the locale that the jvm is running
on, and if you run your java vm in xxxx.UTF-8 locale, the
file.encoding should be UTF-8, set to MacRoman will be problematic.
So I believe Oracle/OpenJDK7 behaves correctly. That said, as Andrew
Thompson pointed out, if all previous Apple JDK releases use MacRoman
as the file.encoding for english/UTF-8 locale, there is a
"compatibility" concern here, it might worth putting something in the
release note to give Oracle/OpenJDK MacOS user a heads up.
original mail
From Joni Salonen's blog (java-and-file-names-with-invalid-characters) I know that:
You probably know that Java uses a “default character encoding” to
convert binary data to Strings. To read or write text using another
encoding you can use an InputStreamReader or OutputStreamWriter. But
for data-to-text conversions deep in the API you have no choice but to
change the default encoding.
and
What about file.encoding?
The file.encoding system property can also be used to set the default
character encoding that Java uses for I/O. Unfortunately it seems to
have no effect on how file names are decoded into Strings.
Executing locale from inside the JNLP invariably prints:
LANG=
LC_COLLATE="C"
LC_CTYPE="C"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
The most similar problem on Stack Overflow with a solution is this:
encoding-issues-on-java-7-file-names-in-os-x
but the solution is wrapping the execution of the java program in a script with
#!/bin/bash
export LC_CTYPE="UTF-8" # Try other options if this doesn't work
exec java your.program.Here
but I don't think this option is available to me because of Web Start, and I haven't found any way to set the LC_CTYPE environment variable from within the program.
Any solutions or workarounds?
P.S. :
If we run the program directly from shell, it writes the file/directory correctly even on OSX 10+Java 7.
The problem appears only with the combination of JNLP+OSX+Java7
I take it it's acceptable to have a maximal ASCII representation of the file name, which works in virtually any encoding.
First, you want to use NFKD specifically, so that maximum information is retained in the ASCII form. For example, "2⁵" becomes "25" rather than just "2", and the ligature "ﬁ" becomes "fi" rather than being dropped entirely, once the non-ASCII and non-control characters are filtered out.
String str = "XXXYYY_è_ABCD/";
str = Normalizer.normalize(str, Normalizer.Form.NFKD);
str = str.replaceAll( "[^\\x20-\\x7E]", "");
//The file name will be XXXYYY_e_ABCD no matter what system encoding
You would then always pass filenames through this filter to get their filesystem name. All you lose is some uniqueness, i.e. asdé.txt maps to the same name as asde.txt, and in this scheme the two cannot be differentiated.
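Wrapped up as a self-contained sketch (class and method names are mine):

```java
import java.text.Normalizer;

public class AsciiFileNames {
    // NFKD-decompose, then strip everything outside printable ASCII.
    static String toAscii(String name) {
        String decomposed = Normalizer.normalize(name, Normalizer.Form.NFKD);
        return decomposed.replaceAll("[^\\x20-\\x7E]", "");
    }

    public static void main(String[] args) {
        System.out.println(toAscii("XXXYYY_è_ABCD"));   // XXXYYY_e_ABCD
        System.out.println(toAscii("asdé.txt"));        // asde.txt
    }
}
```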
EDIT: After experimenting with OS X some more I realized my answer was totally wrong, so I'm redoing it.
If your JVM supports -Dfile.encoding=UTF-8 on the JVM command line, that might fix the issue. I believe that is a standard property but I'm not certain about that.
HFS Plus, like other POSIX-compliant file systems, stores filenames as bytes. But unlike Linux's ext3 filesystem, it forces filenames to be valid decomposed UTF-8. This can be seen here with the Python interpreter on my OS X system, starting in an empty directory.
$ python
Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53)
>>> import os
>>> os.mkdir('\xc3\xa8')
>>> os.mkdir('e\xcc\x80')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 17] File exists: 'e\xcc\x80'
>>> os.mkdir('\x8f')
>>> os.listdir('.')
['%8F', 'e\xcc\x80']
>>> ^D
$ ls
%8F è
This proves that the directory name on your filesystem cannot be Mac-Roman encoded (i.e. with byte value 8F where the è is seen), as long as it's an HFS Plus filesystem. But of course, the JVM is not assured of an HFS Plus filesystem, and SMB and NFS do not have the same encoding guarantees, so the JVM should not assume this scheme.
Therefore, you have to convince the JVM to interpret file and directory names with UTF-8 encoding, in order to read the names as java.lang.String objects correctly.
Shot in the dark: file.encoding does not influence the way file names are created, just how the content gets written into the file. See this post: http://jonisalonen.com/2012/java-and-file-names-with-invalid-characters/
Here is a short entry from Apple: http://developer.apple.com/library/mac/#qa/qa1173/_index.html
Comparing this to http://docs.oracle.com/javase/tutorial/i18n/text/normalizerapi.html I would assume you want to use
normalized_string = Normalizer.normalize(target_chars, Normalizer.Form.NFD);
to normalize the file names before you pass them to the File constructor. Does this help?
I don't think there is a real solution to this problem, right now.
In the meantime I came to the conclusion that the "C" environment variables printed from inside the program come from the Java Web Start sandbox, and (by design, apparently) you can't influence them from the JNLP.
The accepted (by the company) workaround/compromise was to launch the JNLP using javaws from a bash script.
Apparently, launching the JNLP from the browser or from Finder creates a new sandbox environment with LANG unset (so it falls back to "C", which is equivalent to ASCII).
Launching the JNLP from the command line instead picks up the right LANG from the system default, inheriting it from the shell.
This at least preserves the auto-updating feature of the JNLP and its dependencies.
Anyway, we sent a bug report to Oracle, but personally I'm not expecting it to be resolved anytime soon, if ever.
It's a bug in the old-school java.io.File API, maybe just on the Mac? Anyway, the newer java.nio API works much better. I have several files containing Unicode characters and content that failed to load using java.io.File and related classes. After converting all my code to use java.nio.Path, EVERYTHING started working. And I replaced org.apache.commons.io.FileUtils (which has the same problem) with java.nio.file.Files...
...and be sure to read and write the content of file using an appropriate charset, for example:
Files.readAllLines(myPath, StandardCharsets.UTF_8)
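A minimal sketch of that migration, under the assumption that you can pass an explicit charset at every call site (the file name here is made up):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class NioUnicodeFiles {
    // Write a line to a file whose name contains an accented character,
    // then read it back -- both with an explicit charset.
    static String roundTrip(String line) throws IOException {
        Path dir = Files.createTempDirectory("nio-test");
        Path file = dir.resolve("XXXYYY_è_ABCD.txt");   // hypothetical name
        Files.write(file, Arrays.asList(line), StandardCharsets.UTF_8);
        return Files.readAllLines(file, StandardCharsets.UTF_8).get(0);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("korte støvler"));   // prints the line intact
    }
}
```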

Writing new line character in Java that works across different OS

I have a Java program which generates a text file on a UNIX server. This text file is then sent to a customer's machine through other systems, where it is supposed to be opened in Notepad on Windows. As we are well aware, Notepad will not recognize a bare LF as a new line, since Windows uses CR LF.
I am using
System.getProperty("line.separator");
Is there another way of achieving platform independence, so that a file generated on UNIX can be displayed in Windows Notepad with proper formatting, for example by changing some UNIX property for the newline?
Notepad is a prerequisite, hence changing that is not possible.
Since Java 7: System.lineSeparator().
Java APIs keep changing with time. Writing a newline character in Java that works across different OSes can be achieved with the System.lineSeparator() API, which has been available since Java 7. Check this:
System.lineSeparator()
No. You need to pick a single file format, Windows or UNIX. Given that your customer requires notepad, you can probably pick Windows format, which means you just need "\r\n" characters. The fact that UNIX generates the file can be considered irrelevant.
Mac OS X supports CR, LF and CRLF
Linux uses LF, but files using CRLF and CR render too.
Windows uses CRLF
The answer: just always use CRLF; then you have support on all platforms.
CR = '\r', LF = '\n'
So just make strings of the format: "mytexthereblahblahblah\r\n"
System.getProperty("line.separator");
This is wrong. You don't care what the system's line separator is. You need to write out a file in the format that file is supposed to be in. It doesn't matter what system generates the file.
A file is a stream of bytes. You need to write out the stream of bytes that the file format says you should be writing, and that is independent of what line separate any system uses.
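A sketch of that approach: hard-code the separator the target format (Notepad) requires and ignore line.separator entirely. The file name and content below are made up.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CrlfWriter {
    static final String CRLF = "\r\n";   // what Windows Notepad expects

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("report", ".txt");
        // Write the separator the *file format* requires, rather than
        // line.separator (which would be "\n" on the UNIX server).
        try (BufferedWriter w = Files.newBufferedWriter(out, StandardCharsets.UTF_8)) {
            w.write("line one" + CRLF);
            w.write("line two" + CRLF);
        }
        // Each line is 8 chars plus 2 separator bytes:
        System.out.println(Files.readAllBytes(out).length);   // 20
    }
}
```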
Have you checked whether the UNIX server can tolerate Windows-style newlines? Since the requirement is editing in Notepad, you will need to put Windows-style newlines in the file by writing "\r\n" explicitly (e.g. System.out.print("\r\n")).
If the UNIX server can't handle Windows line separators, then your best option is probably to post-process the file after editing, converting the newlines with dos2unix on the UNIX server.
The Notepad requirement means that this isn't really a platform independence requirement - it's a requirement for the Unix server to understand Windows newlines.
There is no direct way to get such an interoperable simple text file. Hence you will have to either move to a structured format like DOC, Excel, or PDF, or simply fix the format to either Windows or UNIX and tell the end users to use the same.
This answer is an aggregate of the above discussion.
Thanks a lot, community, for helping. If any other solution to the problem is found, please post it for others :)

Java application failing on special characters

An application I am working on reads information from files to populate a database. Some of the characters in the files are non-English, for example accented French characters.
The application works fine on Windows, but on our Solaris machine it fails to recognise the special characters and throws an exception. For example, when it encounters the accented e in "Gérer" it says:
Encountered: "\u0161" (353), after : "\'G\u00c3\u00a9rer les mod\u00c3"
(an exception which is thrown from our application)
I suspect that in order to stop this from happening I need to change the file.encoding property of the JVM. I tried to do this via System.setProperty() but it has not stopped the error from occurring.
Are there any suggestions for what I could do? I was thinking about setting the basic locale of the Solaris platform in /etc/default/init to UTF-8. Does anyone think this might help?
Any thoughts are much appreciated.
That looks like a file that was converted by native2ascii using the wrong parameters. To demonstrate, create a file with the contents
Gérer les modÚ
and save it as "a.txt" with the encoding UTF-8. Then run this command:
native2ascii -encoding windows-1252 a.txt b.txt
Open the new file and you should see this:
G\u00c3\u00a9rer les mod\u00c3\u0161
Now reverse the process, but specify ISO-8859-1 this time:
native2ascii -reverse -encoding ISO-8859-1 b.txt c.txt
Read the new file as UTF-8 and you should see this:
Gérer les modÀ\u0161
It recovers the "é" okay, but chokes on the "Ú", like your app did.
I don't know what all is going wrong in your app, but I'm pretty sure incorrect use of native2ascii is part of it. And that was probably the result of letting the app use the system default encoding. You should always specify the encoding when you save text, whether it's to a file or a database or what--never let it default. And if you don't have a good reason to choose something else, use UTF-8.
Try to use
java -Dfile.encoding=UTF-8 ...
when starting the application in both systems.
Another way to solve the problem is to change the encoding of both systems to UTF-8, but I prefer the first option (less intrusive on the system).
EDIT:
Check this answer on Stack Overflow; it might also help:
Changing the default encoding for String(byte[])
Instead of setting the system-wide character encoding, it might be easier and more robust, to specify the character encoding when reading and writing specific text data. How is your application reading the files? All the Java I/O package readers and writers support passing in a character encoding name to be used when reading/writing text to/from bytes. If you don't specify one, it will then use the platform default encoding, as you are likely experiencing.
Some databases are surprisingly limited in the text encodings they can accept. If your Java application reads the files as text in the proper encoding, then it can output it to the database however it needs it. If your database doesn't support any encoding whose character repertoire includes the non-ASCII characters you have, then you may need to encode your non-English text first, for example into UTF-8 bytes, and then Base64-encode those bytes as ASCII text.
PS: Never use String.getBytes() with no character encoding argument for exactly the reasons you are seeing.
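To make that PS concrete, a small sketch showing why the charset argument matters ("é" is two bytes in UTF-8 but one in Latin-1):

```java
import java.nio.charset.StandardCharsets;

public class GetBytesDemo {
    public static void main(String[] args) {
        String s = "Gérer";

        // Explicit charsets give predictable, platform-independent bytes:
        byte[] utf8   = s.getBytes(StandardCharsets.UTF_8);      // "é" -> 0xC3 0xA9
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1); // "é" -> 0xE9

        System.out.println(utf8.length);    // 6
        System.out.println(latin1.length);  // 5

        // s.getBytes() with no argument uses the platform default encoding,
        // which is exactly what differs between Windows and Solaris here.
    }
}
```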
I managed to get past this error by running the command
export LC_ALL='en_GB.UTF-8'
This command set the locale for the shell that I was in. This set all of the LC_ environment variables to the Unicode file encoding.
Many thanks for all of your suggestions.
You can also set the encoding at the command line, like so java -Dfile.encoding=utf-8.
I think we'll need more information to be able to help you with your problem:
What exception are you getting exactly, and which method are you calling when it occurs?
What is the encoding of the input file? UTF8? UTF16/Unicode? ISO8859-1?
It'll also be helpful if you could provide us with relevant code snippets.
Also, a few things I want to point out:
The problem isn't occurring at the 'é' but later on.
It sounds like the character encoding may be hard coded in your application somewhere.
Also, you may want to verify that operating system packages to support UTF-8 (SUNWeulux, SUNWeuluf etc) are installed.
Java uses the operating system's default encoding when reading and writing files. One should never rely on that; it's always good practice to specify the encoding explicitly.
In Java you can use the following for reading and writing:
Reading:
BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(inputPath),"UTF-8"));
Writing:
PrintWriter pw = new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(outputPath), "UTF-8")));
