Edit and save other jar while running - java

Currently I am trying to update an old project.
The problem is that in one of my sources (BungeeCord) they have changed two fields (see the enum constant PROTOCOL below) from public final to just final. To make the project work again I need to access these two fields.
Because of this I try to "inject" into the project. This works, so the modifiers change at runtime, but I am currently not able to save the change back to the jar file. But this is necessary.
The process of saving works perfectly for USERCONNECTION (see the enum below), where I edit a class modifier.
If you need any more information, please let me know.
When the "injection" (enum: PROTOCOL) is done and I check the modifiers of these fields, I see that they have changed.
But when I restart the system and check the field modifiers again before the "injection", it is as if there had been no changes.
public static int inject(InjectionType type) {
try{
System.out.println("Starting injection.");
System.out.println(type.getInfo());
ClassPool cp = ClassPool.getDefault();
CtClass clazz = cp.getCtClass(type.getClazz().getName());
switch (type) {
case USERCONNECTION:
int modifier = UserConnection.class.getModifiers();
if (!Modifier.isFinal(modifier) && Modifier.isPublic(modifier)) {
return -1;
}
clazz.setModifiers(Modifier.PUBLIC);
break;
case PROTOCOL:
CtField field = clazz.getField("TO_CLIENT");
field.setModifiers(Modifier.PUBLIC | Modifier.FINAL);
field = clazz.getField("TO_SERVER");
field.setModifiers(Modifier.PUBLIC | Modifier.FINAL);
break;
default:
return -1; //no data
}
ByteArrayOutputStream bout;
DataOutputStream out = new DataOutputStream(bout = new ByteArrayOutputStream());
clazz.getClassFile().write(out);
InputStream[] streams = { new ByteArrayInputStream(bout.toByteArray()) };
File bungee_file = new File(BungeeCord.class.getProtectionDomain().getCodeSource().getLocation().toURI().getPath());
updateZipFile(bungee_file, type, streams);
return 1;
}catch (Exception e){
e.printStackTrace();
}
return 0;
}
private static void updateZipFile(File zipFile, InjectionType type, InputStream[] ins) throws IOException {
File tempFile = File.createTempFile(zipFile.getName(), null);
if (!tempFile.delete()) {
System.out.println("Warn: Cant delete temp file.");
}
if (tempFile.exists()) {
System.out.println("Warn: Temp target file alredy exist!");
}
if (!zipFile.exists()) {
throw new RuntimeException("Could not rename the file " + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath() + " (Src. not found!)");
}
int renameOk = zipFile.renameTo(tempFile) ? 1 : 0;
if (renameOk == 0) {
tempFile = new File(zipFile.toString() + ".copy");
com.google.common.io.Files.copy(zipFile, tempFile);
renameOk = 2;
if (!zipFile.delete()) {
System.out.println("Warn: Can't delete src file.");
renameOk = -1;
}
}
if (renameOk == 0) {
throw new RuntimeException("Could not rename the file " + zipFile.getAbsolutePath() + " to " + tempFile.getAbsolutePath() + " (Directory read only? (Temp:[R:" + (tempFile.canRead() ? 1 : 0) + ";W:" + (tempFile.canWrite() ? 1 : 0) + ",D:" + (tempFile.canExecute() ? 1 : 0) + "],Src:[R:" + (zipFile.canRead() ? 1 : 0) + ";W:" + (zipFile.canWrite() ? 1 : 0) + ",D:" + (zipFile.canExecute() ? 1 : 0) + "]))");
}
if (renameOk != 1) {
System.out.println("Warn: Cant create temp file. Use .copy file");
}
byte[] buf = new byte[Configuration.getLoadingBufferSize()];
System.out.println("Buffer size: " + buf.length);
ZipInputStream zin = new ZipInputStream(new FileInputStream(tempFile));
ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile));
ZipEntry entry = zin.getNextEntry();
while (entry != null) {
String path_name = entry.getName().replaceAll("/", "\\.");
boolean notReplace = true;
for (String f : type.getNames()) {
if (f.equals(path_name)) {
notReplace = false;
break;
}
}
if (notReplace) {
out.putNextEntry(new ZipEntry(entry.getName()));
int len;
while ((len = zin.read(buf)) > 0) {
out.write(buf, 0, len);
}
}
entry = zin.getNextEntry();
}
zin.close();
for (int i = 0; i < type.getNames().length; i++) {
InputStream in = ins[i];
int index = type.getNames()[i].lastIndexOf('.');
out.putNextEntry(new ZipEntry(type.getNames()[i].substring(0, index).replaceAll("\\.", "/") + type.getNames()[i].substring(index)));
int len;
while ((len = in.read(buf)) > 0) {
out.write(buf, 0, len);
}
out.closeEntry();
in.close();
}
out.close();
tempFile.delete();
if (renameOk == -1) {
System.exit(-1);
}
}
}
@Getter
public enum InjectionType {
USERCONNECTION(UserConnection.class, new String[] {"net.md_5.bungee.UserConnection.class"}, "Set modifiers for class UserConnection.class to \"public\""),
PROTOCOL(Protocol.class, new String[] {"net.md_5.bungee.protocol.Protocol"}, "Set modifiers for class Protocol.class to \"public\"");
private Class<?> clazz;
private String[] names;
private String info;
InjectionType (Class<?> clazz, String[] names, String info) {
this.clazz = clazz;
this.names = names;
this.info = info;
}
}

When the "injection" (enum: protocol) is done and I check the modifier type of these fileds I see that there have been some changes. But when I restart the system and check the filed modifiers again before the "injection" they are as there were no changes.
What you're trying to do is permanently modify a field's access in a jar file using Java reflection. This cannot work, as reflection modifies things at runtime only:
Reflection is an API which is used to examine or modify the behavior of methods, classes, interfaces at runtime.
Excerpt taken from this page.
What you need to do is physically edit the jar itself if you want the changes to be permanent. I know you said that you are not able to do that, but as far as I know that is the only possible way. The file itself has to be physically changed if you want the changes to stick after the application has terminated and be applied before the program has started.
Read the official documentation about Java reflection here.
However, I don't really understand why it is important that the changes persist after you've restarted the system. The reason you need to change the access is so you can access and perhaps manipulate the class in some way during runtime. What you are doing is correct; one of the more important aspects of reflection is to manipulate data without actually having to modify the physical files themselves and end up using custom distributions.
EDIT: Read this question, its comments and the accepted answer. They pretty much say the same thing: you can't edit a jar file that is currently being used by the JVM; it's locked in a read-only state.
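If the patch is applied from a separate process before BungeeCord is ever started (so the jar is not locked by a running JVM), the class can be rewritten with Javassist and the resulting bytes written back into the jar. This is only a minimal sketch under that assumption; the jar path is a placeholder and the actual entry replacement can reuse the updateZipFile logic shown above:
// Run in its own JVM, before the target jar is loaded by anything.
// (Checked exceptions omitted for brevity.)
ClassPool cp = ClassPool.getDefault();
cp.insertClassPath("/path/to/BungeeCord.jar"); // placeholder path

CtClass clazz = cp.get("net.md_5.bungee.protocol.Protocol");
CtField toClient = clazz.getDeclaredField("TO_CLIENT");
toClient.setModifiers(Modifier.PUBLIC | Modifier.FINAL);
CtField toServer = clazz.getDeclaredField("TO_SERVER");
toServer.setModifiers(Modifier.PUBLIC | Modifier.FINAL);

byte[] patched = clazz.toBytecode(); // the new Protocol.class bytes
// Replace the net/md_5/bungee/protocol/Protocol.class entry in the jar
// with these bytes, e.g. via the updateZipFile method from the question.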

Related

Glpk java and .mod file

I've got a .mod file and I can run it in Java (using NetBeans).
The file gets its data from another .dat file, because the guy who was developing it used GUSEK. Now we need to implement it in Java, but I don't know how to put data into the set K in the .mod file.
The way doesn't matter; it can be through database queries or file reading.
I don't know anything about mathematical programming, I just need to add values to the already-made GLPK model.
Here's the .mod function:
# OPRE
set K;
param mc {k in K};
param phi {k in K};
param cman {k in K};
param ni {k in K};
param cesp;
param mf;
var x {k in K} binary;
minimize custo: sum {k in K} (mc[k]*phi[k]*(1-x[k]) + cman[k]*phi[k]*x[k]);
s.t. recursos: sum {k in K} (cman[k]*phi[k]*x[k]) - cesp <= 0;
s.t. ocorrencias: sum {k in K} (ni[k] + (1-x[k])*phi[k]) - mf <= 0;
end;
And here's the java code:
package br.com.genera.service.otimi;
import org.gnu.glpk.*;
public class Gmpl implements GlpkCallbackListener, GlpkTerminalListener {
private boolean hookUsed = false;
public static void main(String[] arg) {
String[] nomeArquivo = new String[2];
nomeArquivo[0] = "C:\\PodaEquipamento.mod";
System.out.println(nomeArquivo[0]);
GLPK.glp_java_set_numeric_locale("C");
System.out.println(nomeArquivo[0]);
new Gmpl().solve(nomeArquivo);
}
public void solve(String[] arg) {
glp_prob lp = null;
glp_tran tran;
glp_iocp iocp;
String fname;
int skip = 0;
int ret;
// listen to callbacks
GlpkCallback.addListener(this);
// listen to terminal output
GlpkTerminal.addListener(this);
fname = arg[0];
lp = GLPK.glp_create_prob();
System.out.println("Problem created");
tran = GLPK.glp_mpl_alloc_wksp();
ret = GLPK.glp_mpl_read_model(tran, fname, skip);
if (ret != 0) {
GLPK.glp_mpl_free_wksp(tran);
GLPK.glp_delete_prob(lp);
throw new RuntimeException("Model file not found: " + fname);
}
// generate model
GLPK.glp_mpl_generate(tran, null);
// build model
GLPK.glp_mpl_build_prob(tran, lp);
// set solver parameters
iocp = new glp_iocp();
GLPK.glp_init_iocp(iocp);
iocp.setPresolve(GLPKConstants.GLP_ON);
// do not listen to output anymore
GlpkTerminal.removeListener(this);
// solve model
ret = GLPK.glp_intopt(lp, iocp);
// postsolve model
if (ret == 0) {
GLPK.glp_mpl_postsolve(tran, lp, GLPKConstants.GLP_MIP);
}
// free memory
GLPK.glp_mpl_free_wksp(tran);
GLPK.glp_delete_prob(lp);
// do not listen for callbacks anymore
GlpkCallback.removeListener(this);
// check that the hook function has been used for terminal output.
if (!hookUsed) {
System.out.println("Error: The terminal output hook was not used.");
System.exit(1);
}
}
@Override
public boolean output(String str) {
hookUsed = true;
System.out.print(str);
return false;
}
@Override
public void callback(glp_tree tree) {
int reason = GLPK.glp_ios_reason(tree);
if (reason == GLPKConstants.GLP_IBINGO) {
System.out.println("Better solution found");
}
}
}
And I'm getting this in the console:
Reading model section from C:\PodaEquipamento.mod...
33 lines were read
Generating custo...
C:\PodaEquipamento.mod:24: no value for K
glp_mpl_build_prob: invalid call sequence
Hope someone can help, thanks.
The best way would be to read the data file the same way you read the model file:
ret = GLPK.glp_mpl_read_data(tran, fname_data);
if (ret != 0) {
GLPK.glp_mpl_free_wksp(tran);
GLPK.glp_delete_prob(lp);
throw new RuntimeException("Data file not found: " + fname_data);
}
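For context, here is a sketch of where that call fits into the solve() sequence from the question, assuming the path of the .dat file is passed as a second command-line argument (fname_data and arg[1] are assumptions, not part of the original code):
String fname_data = arg[1]; // assumed: path to the .dat file
ret = GLPK.glp_mpl_read_model(tran, fname, skip);
if (ret != 0) {
    GLPK.glp_mpl_free_wksp(tran);
    GLPK.glp_delete_prob(lp);
    throw new RuntimeException("Model file not found: " + fname);
}
// Read the data section before generating the model,
// so that the set K and the params get their values.
ret = GLPK.glp_mpl_read_data(tran, fname_data);
if (ret != 0) {
    GLPK.glp_mpl_free_wksp(tran);
    GLPK.glp_delete_prob(lp);
    throw new RuntimeException("Data file not found: " + fname_data);
}
GLPK.glp_mpl_generate(tran, null);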
I resolved it by just copying the data block from the .dat file into the .mod file.
Anyway, thanks puhgee.

How to know size of folder using java program? [duplicate]

How can I retrieve size of folder or file in Java?
java.io.File file = new java.io.File("myfile.txt");
file.length();
This returns the length of the file in bytes or 0 if the file does not exist. There is no built-in way to get the size of a folder, you are going to have to walk the directory tree recursively (using the listFiles() method of a file object that represents a directory) and accumulate the directory size for yourself:
public static long folderSize(File directory) {
long length = 0;
for (File file : directory.listFiles()) {
if (file.isFile())
length += file.length();
else
length += folderSize(file);
}
return length;
}
WARNING: This method is not sufficiently robust for production use. directory.listFiles() may return null and cause a NullPointerException. Also, it doesn't consider symlinks and possibly has other failure modes. Consider one of the more robust approaches below instead.
Using the Java 7 NIO API, calculating the folder size can be done a lot more quickly.
Here is a ready-to-run example that is robust and won't throw an exception. It will log directories it can't enter or had trouble traversing. Symlinks are ignored, and concurrent modification of the directory won't cause more trouble than necessary.
/**
* Attempts to calculate the size of a file or directory.
*
* <p>
* Since the operation is non-atomic, the returned value may be inaccurate.
* However, this method is quick and does its best.
*/
public static long size(Path path) {
final AtomicLong size = new AtomicLong(0);
try {
Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
size.addAndGet(attrs.size());
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult visitFileFailed(Path file, IOException exc) {
System.out.println("skipped: " + file + " (" + exc + ")");
// Skip folders that can't be traversed
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
if (exc != null)
System.out.println("had trouble traversing: " + dir + " (" + exc + ")");
// Ignore errors traversing a folder
return FileVisitResult.CONTINUE;
}
});
} catch (IOException e) {
throw new AssertionError("walkFileTree will not throw IOException if the FileVisitor does not");
}
return size.get();
}
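A possible usage of the method above (the path is just an illustrative placeholder):
long bytes = size(Paths.get("/var/log")); // total size of all regular files underneath
System.out.println(bytes + " bytes");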
You need FileUtils#sizeOfDirectory(File) from commons-io.
Note that you will need to manually check whether the file is a directory as the method throws an exception if a non-directory is passed to it.
WARNING: This method (as of commons-io 2.4) has a bug and may throw IllegalArgumentException if the directory is concurrently modified.
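A hedged wrapper around that call, assuming you simply want 0 for anything that is not a directory (the method name is made up for illustration):
// Guard against non-directories before delegating to org.apache.commons.io.FileUtils.
public static long sizeOfDirectorySafe(File dir) {
    if (dir == null || !dir.isDirectory()) {
        return 0L; // or throw, depending on your needs
    }
    return FileUtils.sizeOfDirectory(dir); // may still be affected by concurrent modification
}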
In Java 8:
long size = Files.walk(path).mapToLong( p -> p.toFile().length() ).sum();
It would be nicer to use Files::size in the map step but it throws a checked exception.
UPDATE:
You should also be aware that this can throw an exception if some of the files/folders are not accessible. See this question and another solution using Guava.
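If you would rather skip unreadable entries than fail the whole walk, a sketch along these lines should work (the helper name is made up; anything that cannot be read is counted as size 0):
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public static long sizeIgnoringErrors(Path path) {
    try (Stream<Path> stream = Files.walk(path)) {
        return stream
                .filter(Files::isRegularFile) // directories have no well-defined length
                .mapToLong(p -> {
                    try {
                        return Files.size(p);
                    } catch (IOException e) {
                        return 0L; // unreadable file: count it as 0
                    }
                })
                .sum();
    } catch (IOException | UncheckedIOException e) {
        return 0L; // unreadable root or failure during the walk
    }
}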
public static long getFolderSize(File dir) {
long size = 0;
for (File file : dir.listFiles()) {
if (file.isFile()) {
System.out.println(file.getName() + " " + file.length());
size += file.length();
}
else
size += getFolderSize(file);
}
return size;
}
For Java 8 this is one right way to do it:
Files.walk(new File("D:/temp").toPath())
.map(f -> f.toFile())
.filter(f -> f.isFile())
.mapToLong(f -> f.length()).sum()
It is important to filter out all directories, because the length method isn't guaranteed to be 0 for directories.
At least this code delivers the same size information as Windows Explorer itself does.
Here's the best way to get a general File's size (works for directory and non-directory):
public static long getSize(File file) {
long size;
if (file.isDirectory()) {
size = 0;
for (File child : file.listFiles()) {
size += getSize(child);
}
} else {
size = file.length();
}
return size;
}
Edit: Note that this is probably going to be a time-consuming operation. Don't run it on the UI thread.
Also, here (taken from https://stackoverflow.com/a/5599842/1696171) is a nice way to get a user-readable String from the long returned:
public static String getReadableSize(long size) {
if(size <= 0) return "0";
final String[] units = new String[] { "B", "KB", "MB", "GB", "TB" };
int digitGroups = (int) (Math.log10(size)/Math.log10(1024));
return new DecimalFormat("#,##0.#").format(size/Math.pow(1024, digitGroups))
+ " " + units[digitGroups];
}
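For example, combined with the getSize method above (the path is a placeholder):
System.out.println(getReadableSize(getSize(new File("/var/log"))));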
File.length() (Javadoc).
Note that this doesn't work for directories, or at least is not guaranteed to work.
For a directory, what do you want? If it's the total size of all files underneath it, you can recursively walk children using File.list() and File.isDirectory() and sum their sizes.
The File object has a length method:
File f = new File("your/file/name");
f.length();
If you want to use Java 8 NIO API, the following program will print the size, in bytes, of the directory it is located in.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class PathSize {
public static void main(String[] args) {
Path path = Paths.get(".");
long size = calculateSize(path);
System.out.println(size);
}
/**
* Returns the size, in bytes, of the specified <tt>path</tt>. If the given
* path is a regular file, trivially its size is returned. Else the path is
* a directory and its contents are recursively explored, returning the
* total sum of all files within the directory.
* <p>
* If an I/O exception occurs, it is suppressed within this method and
* <tt>0</tt> is returned as the size of the specified <tt>path</tt>.
*
* @param path path whose size is to be returned
* @return size of the specified path
*/
public static long calculateSize(Path path) {
try {
if (Files.isRegularFile(path)) {
return Files.size(path);
}
return Files.list(path).mapToLong(PathSize::calculateSize).sum();
} catch (IOException e) {
return 0L;
}
}
}
The calculateSize method is universal for Path objects, so it also works for files.
Note that if a file or directory is inaccessible, the returned size for that path will be 0.
Works for Android and Java.
Works for both folders and files.
Checks for null pointers wherever needed.
Ignores symbolic links (a.k.a. shortcuts).
Production ready!
Source code:
public long fileSize(File root) {
if(root == null){
return 0;
}
if(root.isFile()){
return root.length();
}
try {
if(isSymlink(root)){
return 0;
}
} catch (IOException e) {
e.printStackTrace();
return 0;
}
long length = 0;
File[] files = root.listFiles();
if(files == null){
return 0;
}
for (File file : files) {
length += fileSize(file);
}
return length;
}
private static boolean isSymlink(File file) throws IOException {
File canon;
if (file.getParent() == null) {
canon = file;
} else {
File canonDir = file.getParentFile().getCanonicalFile();
canon = new File(canonDir, file.getName());
}
return !canon.getCanonicalFile().equals(canon.getAbsoluteFile());
}
I've tested du -c <folderpath> and it is 2x faster than nio.Files or recursion:
private static long getFolderSize(File folder){
if (folder != null && folder.exists() && folder.canRead()){
try {
Process p = new ProcessBuilder("du","-c",folder.getAbsolutePath()).start();
BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
String total = "";
for (String line; null != (line = r.readLine());)
total = line;
r.close();
p.waitFor();
if (total.length() > 0 && total.endsWith("total"))
return Long.parseLong(total.split("\\s+")[0]) * 1024;
} catch (Exception ex) {
ex.printStackTrace();
}
}
return -1;
}
For Windows, using java.io, this recursive function is useful.
public static long folderSize(File directory) {
long length = 0;
if (directory.isFile())
length += directory.length();
else{
for (File file : directory.listFiles()) {
if (file.isFile())
length += file.length();
else
length += folderSize(file);
}
}
return length;
}
This is tested and working properly on my end.
private static long getFolderSize(Path folder) {
try {
return Files.walk(folder)
.filter(p -> p.toFile().isFile())
.mapToLong(p -> p.toFile().length())
.sum();
} catch (IOException e) {
e.printStackTrace();
return 0L;
    }
}
public long folderSize(String directory)
{
    File curDir = new File(directory);
    long total = 0;
    for (File f : curDir.listFiles())
    {
        if (f.isDirectory())
        {
            long length = 0;
            for (File child : f.listFiles())
            {
                length = length + child.length();
            }
            System.out.println("Directory: " + f.getName() + " " + length + " bytes");
            total = total + length;
        }
        else
        {
            System.out.println("File: " + f.getName() + " " + f.length() + " bytes");
            total = total + f.length();
        }
    }
    return total;
}
After a lot of research and looking into the different solutions proposed here on Stack Overflow, I finally decided to write my own solution. My purpose is to have a no-throw mechanism, because I don't want to crash if the API is unable to fetch the folder size. This method is not suitable for multi-threaded scenarios.
First of all I want to check for valid directories while traversing down the file system tree.
private static boolean isValidDir(File dir){
if (dir != null && dir.exists() && dir.isDirectory()){
return true;
}else{
return false;
}
}
Second, I do not want my recursive call to descend into symlinks (soft links) and include their size in the total aggregate.
public static boolean isSymlink(File file) throws IOException {
File canon;
if (file.getParent() == null) {
canon = file;
} else {
canon = new File(file.getParentFile().getCanonicalFile(),
file.getName());
}
return !canon.getCanonicalFile().equals(canon.getAbsoluteFile());
}
Finally, my recursion-based implementation to fetch the size of the specified directory. Notice the null check for dir.listFiles(); according to the Javadoc, this method can return null.
public static long getDirSize(File dir){
if (!isValidDir(dir))
return 0L;
File[] files = dir.listFiles();
//Guard for null pointer exception on files
if (files == null){
return 0L;
}else{
long size = 0L;
for(File file : files){
if (file.isFile()){
size += file.length();
}else{
try{
if (!isSymlink(file)) size += getDirSize(file);
}catch (IOException ioe){
//digest exception
}
}
}
return size;
}
}
As icing on the cake, here is an API to get the total size of a list of files (which might be all of the files and folders under a root).
public static long getDirSize(List<File> files){
long size = 0L;
for(File file : files){
if (file.isDirectory()){
size += getDirSize(file);
} else {
size += file.length();
}
}
return size;
}
On Linux, if you want to list directories sorted by size: du -hs * | sort -h
You can use Apache Commons IO to find the folder size easily.
If you are on maven, please add the following dependency in your pom.xml file.
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
If not a fan of Maven, download the following jar and add it to the class path.
https://repo1.maven.org/maven2/commons-io/commons-io/2.6/commons-io-2.6.jar
public long getFolderSize() {
File folder = new File("src/test/resources");
long size = FileUtils.sizeOfDirectory(folder);
return size; // in bytes
}
To get file size via Commons IO,
File file = new File("ADD YOUR PATH TO FILE");
long fileSize = FileUtils.sizeOf(file);
System.out.println(fileSize); // bytes
It is also achievable via Google Guava
For Maven, add the following:
<!-- https://mvnrepository.com/artifact/com.google.guava/guava -->
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>28.1-jre</version>
</dependency>
If not using Maven, add the following to class path
https://repo1.maven.org/maven2/com/google/guava/guava/28.1-jre/guava-28.1-jre.jar
public long getFolderSizeViaGuava() {
File folder = new File("src/test/resources");
Iterable<Path> files = MoreFiles.fileTraverser()
.breadthFirst(folder.toPath());
long size = StreamSupport.stream(files.spliterator(), false)
.filter(p -> p.toFile().isFile())
.mapToLong(p -> p.toFile().length()).sum();
return size;
}
To get file size,
File file = new File("PATH TO YOUR FILE");
long s = file.length();
System.out.println(s);
fun getSize(context: Context, uri: Uri?): Float? {
var fileSize: String? = null
val cursor: Cursor? = context.contentResolver
.query(uri!!, null, null, null, null, null)
try {
if (cursor != null && cursor.moveToFirst()) {
// get file size
val sizeIndex: Int = cursor.getColumnIndex(OpenableColumns.SIZE)
if (!cursor.isNull(sizeIndex)) {
fileSize = cursor.getString(sizeIndex)
}
}
} finally {
cursor?.close()
}
return fileSize?.toFloat()?.div(1024 * 1024)
}

Get files from Jar which is on the repository without downloading the whole Jar from Java

I would like to access a jar file in the repository, search inside it for certain files, retrieve those files and store them on my hard disk. I don't want to download the whole jar and then search through it.
So let's assume I have the address of the jar. Can someone provide me with the code for the rest of the problem?
public void searchInsideJar(final String jarUrl, final String artifactId,
final String artifactVersion) {
InputStream is = null;
OutputStream outStream = null;
JarInputStream jis = null;
int i = 1;
try {
String strDirectory = "C:/Users/ilijab/" + artifactId +artifactVersion;
// Create one directory
boolean success = (new File(strDirectory)).mkdir();
if (success) {
System.out.println("Directory: " + strDirectory + " created");
}
is = new URL(jarUrl).openStream();
jis = new JarInputStream(is);
while (true) {
JarEntry ent = jis.getNextJarEntry();
if (ent == null) {
break;
}
if (ent.isDirectory()) {
continue;
}
if (ent.getName().contains("someFile")) {
outStream = new BufferedOutputStream(new FileOutputStream(
strDirectory + "\\" + "someFile" + i));
while(ent.)
System.out.println("**************************************************************");
System.out.println(i);
i++;
}
}
} catch (Exception ex) {
}
}
So, in the code above, how can I save the file I found (the last if) into the directory?
Assuming that by "repository", you mean a Maven repository, then i'm afraid this can't be done. Maven repositories let you download artifacts, like jar files, but won't look inside them for you.

How to deal with corrupted files that were created but IOException occured?

Could you please suggest how to deal with these situations? I understand that in the second example it would be very rare for this to happen on Unix, wouldn't it, if the access rights are alright? Also, in that case the file wouldn't even be created. I don't understand why the IOException is there; either the file is created or it is not, so why do we have to bother with an IOException?
But in the first example there will be a corrupted zombie file. Now if you tell the user to upload it again, the same thing may happen. And if you can't do that, and the InputStream has no marker, you lose your data? I really don't like how this is done in Java; I hope the new IO in Java 7 is better.
Is it usual to delete it?
public void inputStreamToFile(InputStream in, File file) throws SystemException {
OutputStream out;
try {
out = new FileOutputStream(file);
} catch (FileNotFoundException e) {
throw new SystemException("Temporary file created : " + file.getAbsolutePath() + " but not found to be populated", e);
}
boolean fileCorrupted = false;
int read = 0;
byte[] bytes = new byte[1024];
try {
while ((read = in.read(bytes)) != -1) {
out.write(bytes, 0, read);
}
} catch (IOException e) {
fileCorrupted = true;
logger.fatal("IO went wrong for file : " + file.getAbsolutePath(), e);
} finally {
IOUtils.closeQuietly(in);
IOUtils.closeQuietly(out);
if(fileCorrupted) {
???
}
}
}
public File createTempFile(String fileId, String ext, String root) throws SystemException {
String fileName = fileId + "." + ext;
File dir = new File(root);
if (!dir.exists()) {
if (!dir.mkdirs())
throw new SystemException("Directory " + dir.getAbsolutePath() + " already exists most probably");
}
File file = new File(dir, fileName);
boolean fileCreated = false;
boolean fileCorrupted = false;
try {
fileCreated = file.createNewFile();
} catch (IOException e) {
fileCorrupted = true;
logger.error("Temp file " + file.getAbsolutePath() + " creation fail", e);
} finally {
if (fileCreated)
return file;
else if (!fileCreated && !fileCorrupted)
throw new SystemException("File " + file.getAbsolutePath() + " already exists most probably");
else if (!fileCreated && fileCorrupted) {
}
}
}
I really don't like how this is done in Java, I hope the new IO in Java 7 is better
I'm not sure how Java is different than any other programming language/environment in the way you are using it:
a client sends some data to your over the wire
as you read it, you write it to a local file
Regardless of the language/tools/environment, it's possible for the connection to be interrupted or lost, for the client to go away, for the disk to die, or for any other error to occur. I/O errors can occur in any and all environments.
What you can do in this situation is highly dependent on the circumstances and the error that occurred. For example, is the data structured in some way that would let you ask the user to resume uploading from, say, record 1000? There is no single solution that fits all cases here.
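As one hedged option for the ??? branch in inputStreamToFile above, assuming the partial file has no value to you, you can simply remove it so no corrupted zombie file is left behind; reporting the failure to the caller is then done outside the finally block:
if (fileCorrupted) {
    // Delete the partial file; signal the failure to the caller afterwards.
    if (!file.delete()) {
        logger.error("Could not delete partial file " + file.getAbsolutePath());
    }
}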

Problem with FTPClient class in java

I'm using org.apache.commons.net.ftp.FTPClient and seeing behavior that is, well... perplexing.
The method beneath intends to go through an FTPFile list, read them in and then do something with the contents. That's all working. What is not (really) working is that the FTPClient object does the following...
1) Properly retrieves and stores the FIRST file in the list
2) List item evaluates to NULL for x successive iterations of the loop (x varies on successive attempts)
3) manages to retrieve exactly 1 more file in the list
4) reports that it is null for exactly 1 more file in the list
5) hangs indefinitely, reporting no further activity.
public static String mergeXMLFiles(List<FTPFile> files, String rootElementNodeName, FTPClient ftp){
String ret = null;
String fileAsString = null;
//InputStream inStream;
int c;
if(files == null || rootElementNodeName == null)
return null;
try {
System.out.println("GETTING " + files.size() + " files");
for (FTPFile file : files) {
fileAsString = "";
InputStream inStream = ftp.retrieveFileStream(file.getName());
if(inStream == null){
System.out.println("FtpUtil.mergeXMLFiles() couldn't initialize inStream for file:" + file.getName());
continue;//THIS IS THE PART THAT I SEE FOR files [1 - arbitrary number (usually around 20)] and then 1 more time for [x + 2] after [x + 1] passes successfully.
}
while((c = inStream.read()) != -1){
fileAsString += Character.valueOf((char)c);
}
inStream.close();
System.out.println("FILE:" + file.getName() + "\n" + fileAsString);
}
} catch (Exception e) {
System.out.println("FtpUtil.mergeXMLFiles() failed:" + e);
}
return ret;
}
Has anyone seen anything like this? I'm new to FTPClient; am I doing something wrong with it?
According to the API for FTPClient.retrieveFileStream(), the method returns null when it cannot open the data connection, in which case you should check the reply code (e.g. getReplyCode(), getReplyString(), getReplyStrings()) to see why it failed. Also, you are supposed to finalize file transfers by calling completePendingCommand() and verifying that the transfer was indeed successful.
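Applied to the loop in the question, that would look roughly like this (a sketch only, keeping the existing variable names):
InputStream inStream = ftp.retrieveFileStream(file.getName());
if (inStream == null) {
    // Data connection could not be opened; the reply explains why.
    System.out.println("retrieveFileStream failed: " + ftp.getReplyString());
    continue;
}
// ... read the stream as before ...
inStream.close();
// Finalize the transfer and verify it succeeded before requesting the next file.
if (!ftp.completePendingCommand()) {
    System.out.println("Transfer failed: " + ftp.getReplyString());
}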
It works OK when I add the following after the "retrieve" command:
int response = client.getReply();
if (response != FTPReply.CLOSING_DATA_CONNECTION){
//TODO
}
