I'm having a problem with a section of my code. The Java program takes some predetermined variables and uses UNIX's "sed" command to replace the Strings "AAA" and "BBB" in a pre-written shell script. I have three methods to do this: one that replaces the Strings in the file using "sed" and writes the output to a different file; one that removes the original file with the "rm" command; and one that renames the output file back to the name of the original file using "mv". There are three copies of the shell script in three different directories, and each one should be updated with its own specific variables.
The replacement should occur for all three shell script files, but it only happens for two. On the third shell script it looks as if the process never completed, because that file's size is 0 bytes. Which file fails is completely random; it is not the same file on every run.
I'm not sure why this error is occurring. Does anyone have any possible solutions? Here is the code:
public void modifyShellScript(String firstDammifParam, String secondDammifParam, int thirdDammifParam, int fourthDammifParam, String outfileDirectoryPath) throws IOException{
    String thirdDammifParamString = Integer.toString(thirdDammifParam);
    String fourthDammifParamString = Integer.toString(fourthDammifParam);
    String[] cmdArray3 = {"/bin/tcsh","-c", "sed -e 's/AAA/"+firstDammifParam+"/' -e 's/BBB/"+secondDammifParam+"/' -e 's/C/"+thirdDammifParamString+"/' -e 's/D/"+fourthDammifParamString+"/' "+outfileDirectoryPath+"runDammifScript.sh > "+outfileDirectoryPath+"runDammifScript.sh2"};
    Process p = Runtime.getRuntime().exec(cmdArray3);
}
public void removeOriginalShellScript(String outfileDirectoryPath) throws IOException{
String[] removeCmdArray = {"/bin/tcsh", "-c", "rm "+outfileDirectoryPath+"runDammifScript.sh"};
Process p1;
p1 = Runtime.getRuntime().exec(removeCmdArray);
}
public void reconvertOutputScript(String outfileDirectoryPath) throws IOException{
String[] reconvertCmdArray = {"/bin/tcsh","-c","mv "+outfileDirectoryPath+"runDammifScript.sh2 "+outfileDirectoryPath+"runDammifScript.sh"};
Process reconvert;
reconvert = Runtime.getRuntime().exec(reconvertCmdArray);
}
If you haven't already, take a look at the article "When Runtime.exec() won't". One or more of your Processes might be hanging because you aren't consuming the output and error streams. In particular, look at the StreamGobbler in that article's examples.
It could also be the case that you're forgetting to include a trailing slash in outfileDirectoryPath. Read the Process' error stream to see what's going wrong:
InputStream err = p.getErrorStream();
// read the stream and print its contents to the console, or whatever
Keep in mind that you'll want to read the streams in separate threads.
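A minimal stream-draining runnable along the lines of the article's StreamGobbler might look like this (the class name, the logging choice, and the echo command in main are illustrative, not from the article):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class StreamDrainer implements Runnable {
    private final InputStream stream;

    public StreamDrainer(InputStream stream) {
        this.stream = stream;
    }

    @Override
    public void run() {
        // Read the stream to exhaustion so the child process can't block on a full pipe.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.err.println(line); // log it, or discard it if you don't care
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = Runtime.getRuntime().exec(new String[] {"echo", "hello"});
        new Thread(new StreamDrainer(p.getErrorStream())).start();
        new Thread(new StreamDrainer(p.getInputStream())).start();
        int exitCode = p.waitFor(); // safe to wait: both streams are being drained
        System.out.println("exit=" + exitCode);
    }
}
```

Calling waitFor() after starting the drainer threads also tells you whether sed actually finished before you rm and mv the files.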
That said, I would personally just do all of this natively in Java instead of relying on external, platform-specific dependencies.
For substring replacement, read the file to a String, then use String.replace and/or String.replaceAll.
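As a sketch of that approach (the file name and placeholder values are assumed from the question), the whole sed/rm/mv dance collapses into one pure-Java method:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReplaceInFile {
    // Replaces the AAA/BBB placeholders in scriptPath in place.
    // No external processes, so no race between sed, rm, and mv.
    public static void replacePlaceholders(Path scriptPath, String first, String second)
            throws IOException {
        String content = new String(Files.readAllBytes(scriptPath), StandardCharsets.UTF_8);
        content = content.replace("AAA", first).replace("BBB", second);
        Files.write(scriptPath, content.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("runDammifScript", ".sh");
        Files.write(tmp, "run AAA with BBB".getBytes(StandardCharsets.UTF_8));
        replacePlaceholders(tmp, "alpha", "beta");
        System.out.println(new String(Files.readAllBytes(tmp), StandardCharsets.UTF_8));
        // prints: run alpha with beta
        Files.delete(tmp);
    }
}
```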
You can replace removeOriginalShellScript's body with a call to File.delete:
public void removeOriginalShellScript(String outfileDirectoryPath) throws IOException{
File f = new File(outfileDirectoryPath, "runDammifScript.sh");
f.delete();
}
You can replace reconvertOutputScript's body with a call to Files.move:
public void reconvertOutputScript(String outfileDirectoryPath) throws IOException{
    Path src = Paths.get(outfileDirectoryPath, "runDammifScript.sh2");
    Path dst = Paths.get(outfileDirectoryPath, "runDammifScript.sh");
    Files.move(src, dst);
}
Or just replace both removeOriginalShellScript and reconvertOutputScript with a single call to Files.move, specifying the REPLACE_EXISTING option:
Path src = Paths.get(outfileDirectoryPath, "runDammifScript.sh2");
Path dst = Paths.get(outfileDirectoryPath, "runDammifScript.sh");
Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
Related
I am automating a particular process where in one of the steps I need to copy a .gitignore file from one directory to another.
I am using Apache's FileUtils class to achieve this; however, it is not able to recognise this particular file (although it is present in the folder). The code works for other files.
Here is my code:
public void copyFile(String destinationPath, String file) throws IOException {
ClassPathResource classPathResourceAPIUtils = new ClassPathResource(file);
String fileName = file.substring(file.lastIndexOf("/"));
InputStream inputStreamapiUtils = classPathResourceAPIUtils.getInputStream();
BufferedReader readUtils = new BufferedReader(new InputStreamReader(inputStreamapiUtils));
List<String> utilsLines = readUtils.lines().collect(Collectors.toList());
FileUtils.writeLines(new File(destinationPath+fileName), utilsLines, false);
}
Why are you rewriting the files? Just use:
Files.move(from, to, StandardCopyOption.REPLACE_EXISTING)
or
Files.copy(from, to, StandardCopyOption.REPLACE_EXISTING)
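For example, a self-contained sketch of the copy (temporary directories stand in here for your real source and destination folders):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyDotfile {
    // Copies a single file into destDir byte-for-byte, overwriting any existing copy.
    // No line-by-line reading, so the content is never decoded or re-encoded.
    static Path copyInto(Path sourceFile, Path destDir) throws IOException {
        Path target = destDir.resolve(sourceFile.getFileName());
        return Files.copy(sourceFile, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path srcDir = Files.createTempDirectory("src");
        Path dstDir = Files.createTempDirectory("dst");
        Path gitignore = srcDir.resolve(".gitignore");
        Files.write(gitignore, "*.class\n".getBytes(StandardCharsets.UTF_8));
        Path copied = copyInto(gitignore, dstDir);
        System.out.println(Files.size(copied)); // prints: 8
    }
}
```

Note that a dotfile like .gitignore has no "extension", which trips up some utilities; Files.copy treats the name as opaque bytes and doesn't care.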
I have the code below, where I read a file from a particular directory, process it, and, once processed, move the file to an archive directory. This is working fine. I receive a new file every day, and I use a Control-M scheduler job to run this process.
Now, on the next run, I read the new file from that particular directory again and check it against the file in the archive directory; only if the content is different should the file be processed, otherwise nothing should happen. There is a shell script written to do this job, and we don't see any log for this process.
Now I want to produce a log message in my Java code: if the file in the particular directory and the file in the archive directory are identical, log something like 'files are identical'. But I don't know exactly how to do this. I don't want to write the logic to process or move the file; I just need to check whether the files are equal and, if so,
produce a log message. The files I receive are not very big; the maximum size is around 10MB.
Below is my code:
for(Path inputFile : pathsToProcess) {
// read in the file:
readFile(inputFile.toAbsolutePath().toString());
// move the file away into the archive:
Path archiveDir = Paths.get(applicationContext.getEnvironment().getProperty(".archive.dir"));
Files.move(inputFile, archiveDir.resolve(inputFile.getFileName()),StandardCopyOption.REPLACE_EXISTING);
}
return true;
}
private void readFile(String inputFile) throws IOException, FileNotFoundException {
log.info("Import " + inputFile);
try (InputStream is = new FileInputStream(inputFile);
Reader underlyingReader = inputFile.endsWith("gz")
? new InputStreamReader(new GZIPInputStream(is), DEFAULT_CHARSET)
: new InputStreamReader(is, DEFAULT_CHARSET);
BufferedReader reader = new BufferedReader(underlyingReader)) {
if (isPxFile(inputFile)) {
Importer.processField(reader, tablenameFromFilename(inputFile));
} else {
Importer.processFile(reader, tablenameFromFilename(inputFile));
}
}
log.info("Import Complete");
}
}
Based on the limited information about the file sizes and performance needs, something like this can be done. It may not be 100% optimized, but it is just an example. You may also have to add some exception handling in the main method, since the new method can throw an IOException:
import org.apache.commons.io.FileUtils; // Add this import statement at the top
// Moved this statement outside the for loop, as it seems there is no need to fetch the archive directory path multiple times.
Path archiveDir = Paths.get(applicationContext.getEnvironment().getProperty("betl..archive.dir"));
for(Path inputFile : pathsToProcess) {
// Added this code
if (checkIfFileMatches(inputFile, archiveDir)) {
// Add the logger here.
}
//Added the else condition, so that if the files do not match, only then you read, process in DB and move the file over to the archive.
else {
// read in the file:
readFile(inputFile.toAbsolutePath().toString());
Files.move(inputFile, archiveDir.resolve(inputFile.getFileName()),StandardCopyOption.REPLACE_EXISTING);
}
}
//Added this method to check if the source file and the target file contents are same.
// This will need an import of the FileUtils class. You may change the approach to use any other utility file, or read the data byte by byte and compare. If the files are very large, probably better to use Buffered file reader.
private boolean checkIfFileMatches(Path sourceFilePath, Path targetDirectoryPath) throws IOException {
if (sourceFilePath != null) { // may not need this check
File sourceFile = sourceFilePath.toFile();
String fileName = sourceFile.getName();
File targetFile = targetDirectoryPath.resolve(fileName).toFile();
if (targetFile.exists()) {
return FileUtils.contentEquals(sourceFile, targetFile);
}
}
return false;
}
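Commons IO's contentEquals works fine here. As an aside, if you can rely on Java 12 or later, java.nio's Files.mismatch does the same content check with no extra dependency; a minimal sketch (method and file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileCompare {
    // True when both files exist and their contents are byte-for-byte identical.
    // Files.mismatch returns -1 for identical files, else the first differing offset.
    static boolean sameContent(Path a, Path b) throws IOException {
        return Files.exists(a) && Files.exists(b) && Files.mismatch(a, b) == -1L;
    }

    public static void main(String[] args) throws IOException {
        Path p1 = Files.createTempFile("a", ".txt");
        Path p2 = Files.createTempFile("b", ".txt");
        Files.writeString(p1, "same");
        Files.writeString(p2, "same");
        System.out.println(sameContent(p1, p2)); // prints: true
    }
}
```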
I'm trying to write a loop that goes through a folder's files, takes only the files with the .wav extension, and calls an exe with two parameters to convert the audio. Without the loop it works, because the command is a String[] variable and I just have to put my parameters in it. But when I tried to make it all dynamic, nothing happens. I even tried the normal static version with the parameters in two separate Strings, added those Strings to the String[] that contains the command, and it just doesn't work. This is the code (with the loop):
File dir = new File("moved");
File[] dirlist = dir.listFiles();
for(File f3 : dirlist)
{
if(f3.getName().endsWith(".wav"))
{
String firstnam = f3.getName();
String secondnam = firstnam.replaceFirst("\\.wav", "_converted.wav");
String[] command = {"cmd", "/c", "AdpcmEncode.exe", firstnam, secondnam};
Runtime rt = Runtime.getRuntime();
Process process = rt.exec(command, null, dir);
}
}
What I need most is to know how to pass these dynamic parameters into the command. If possible, I'd also like to know how to change the names through the audio conversions (the input and output can't be the same).
Well, after some research, I finally moved my code into a class separate from the main JFrame (that fixed the dynamic parameters, for some reason I don't understand; I should mention the code previously lived in another JFrame class). Because the code throws an IOException, things were a little complicated, but I realized it was as simple as this:
try{
ConvertAllClass.converter(null);
}catch(IOException e){
//e.printStackTrace();
}
Maybe the null is not necessary, but everything works. This is the "test" function (I just wanted to know whether it converts at least one audio file):
public class ConvertAllClass {
public static void converter(String args[]) throws IOException {
File f = new File("C:\\convertion_kit\\bin");
String firstnam = "mx_game_over.wav";
String secondnam = "mx_done.wav";
ProcessBuilder pb = new ProcessBuilder("cmd", "/c","start","AdpcmEncode.exe", firstnam, secondnam );
pb.directory(f);
Process process = pb.start();
}
}
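Putting the loop from the question back together with ProcessBuilder, a sketch might look like this (AdpcmEncode.exe and the cmd /c invocation are taken from the question and are Windows-specific; waitFor, inheritIO, and the helper method are my additions):

```java
import java.io.File;
import java.io.IOException;

public class ConvertAll {
    // Derives the output name: foo.wav -> foo_converted.wav
    static String convertedName(String wavName) {
        return wavName.substring(0, wavName.length() - ".wav".length()) + "_converted.wav";
    }

    // Runs AdpcmEncode.exe once per .wav file in dir, waiting for each run to finish.
    static void convertAll(File dir) throws IOException, InterruptedException {
        File[] files = dir.listFiles();
        if (files == null) return; // dir does not exist or is not a directory
        for (File f : files) {
            if (!f.getName().endsWith(".wav")) continue;
            ProcessBuilder pb = new ProcessBuilder(
                    "cmd", "/c", "AdpcmEncode.exe", f.getName(), convertedName(f.getName()));
            pb.directory(dir);
            pb.inheritIO(); // let the encoder's output reach the console instead of blocking
            int exit = pb.start().waitFor(); // run conversions one at a time
            if (exit != 0) {
                System.err.println("Conversion failed for " + f.getName() + " (exit " + exit + ")");
            }
        }
    }
}
```

Waiting on each Process (and inheriting its I/O) is what keeps "nothing happens" from occurring silently: a failed conversion now shows its output and a nonzero exit code.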
I would like to merge the CSV files present in a folder into one file. Suppose there are 12,000 files in the folder and every file has 20,000 records. Could anyone please suggest a good solution, perhaps using multi-threading?
I have written the code below, but I think it is not suitable for large data:
public class Test {
public static void main(String[] args) throws IOException {
String path="Desktop//Files//";
List<Path> listofFile=getListOfFileInFolder(path);
List<Path> paths = listofFile;
List<String> mergedLines = getMergedLines(paths);
Path target = Paths.get(path+"temp.csv");
System.out.println(target);
Files.write(target, mergedLines, Charset.forName("UTF-8"));
}
public static List<Path> getListOfFileInFolder(String path){
List<Path> results = new ArrayList<Path>();
File[] files = new File(path).listFiles();
for (File file : files) {
if (file.isFile()) {
results.add(Paths.get(path+file.getName()));
}
}
return results;
}
private static List<String> getMergedLines(List<Path> paths) throws IOException {
List<String> mergedLines = new ArrayList<> ();
for (Path p : paths){
List<String> lines = Files.readAllLines(p, Charset.forName("UTF-8"));
if (!lines.isEmpty()) {
if (mergedLines.isEmpty()) {
mergedLines.add(lines.get(0)); //add header only once
}
mergedLines.addAll(lines.subList(1, lines.size()));
}
}
return mergedLines;
}
}
It is unlikely that multi-threading will improve performance in this case. Multi-threading can speed up batch-operations when CPU is the bottleneck, by utilising more than one core. But in your process, the bottleneck will be disk reads. A single CPU core will handle the merge as quickly as the filesystem can deliver the bytes.
The biggest concern with the number of files you're proposing is that the initial directory listFiles() will take some time, and the resulting File[12000] will consume a fair amount of memory.
Likewise, for 20,000-record files, slurping the whole thing into memory with readAllLines() is going to use a lot of memory, working the GC hard for no good reason.
And you're collecting the results into a list of Strings, which will have 240,000,000 entries by the time it holds 12,000 * 20,000 lines. For an 80-column file, that is tens of gigabytes of String data, plus object overheads.
Better to read in a small amount at a time, get it written to your output file, then dropped from memory as early as possible.
You could do this by going event-driven, and using java.nio.Files.walkFileTree():
try(OutputStream out = new FileOutputStream(outpath)) {
Files.walkFileTree(inputDirectoryPath, Collections.emptySet(), 1, new MergeToOutputStreamVisitor(out));
}
... where MergeToOutputStreamVisitor is something along the lines of:
public class MergeToOutputStreamVisitor extends SimpleFileVisitor<Path> {
    private final OutputStream outstream;

    public MergeToOutputStreamVisitor(OutputStream outstream) {
        this.outstream = outstream;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        FileUtils.copyFile(file.toFile(), outstream);
        return FileVisitResult.CONTINUE;
    }
}
I've used Apache Commons-IO's FileUtils.copyFile() to squirt each file's contents into the OutputStream. If you can't use Commons-IO, write your own version of this. If you need to do more, for example, skip the first line, you can also roll your own, using something like:
try(BufferedReader reader = new BufferedReader(new FileReader(file.toFile()))) {
    Writer writer = new OutputStreamWriter(outstream);
    reader.readLine(); // header - throw it away
    String line = reader.readLine();
    while(line != null) {
        writer.write(line);
        writer.write(System.lineSeparator());
        line = reader.readLine();
    }
    writer.flush(); // flush, but don't close: closing the writer would close outstream too
}
Using this approach, at any moment, your program has at most one File entry and one line of data in memory at a time (plus any buffering the library routines use -- this is good). It will stream the lines directly from input files to output file. I promise that you'll get nowhere near 100% usage of a single core, therefore multi-threading will not make it any faster.
I'm using PhantomJS to do headless testing of a website. Since the exe will be bundled inside the jar file I decided to read it and write it to a temporary file so that I can access it normally via absolute path.
Here's code for converting an InputStream into a String referring to the new temporary file:
public String getFilePath(InputStream inputStream, String fileName)
throws IOException
{
String fileContents = readFileToString(inputStream);
File file = createTemporaryFile(fileName);
String filePath = file.getAbsolutePath();
writeStringToFile(fileContents, filePath);
return file.getAbsolutePath();
}
private void writeStringToFile(String text, String filePath)
throws FileNotFoundException
{
PrintWriter fileWriter = new PrintWriter(filePath);
fileWriter.print(text);
fileWriter.close();
}
private File createTemporaryFile(String fileName)
{
String tempoaryFileDirectory = System.getProperty("java.io.tmpdir");
File temporaryFile = new File(tempoaryFileDirectory + File.separator
+ fileName);
return temporaryFile;
}
private String readFileToString(InputStream inputStream)
throws UnsupportedEncodingException, IOException
{
StringBuilder inputStringBuilder = new StringBuilder();
BufferedReader bufferedReader = new BufferedReader(
new InputStreamReader(inputStream, "UTF-8"));
String line;
while ((line = bufferedReader.readLine()) != null)
{
inputStringBuilder.append(line);
inputStringBuilder.append(System.lineSeparator());
}
String fileContents = inputStringBuilder.toString();
return fileContents;
}
This works but when I'm trying to launch PhantomJS it'll give me an ExecuteException:
SEVERE: org.apache.commons.exec.ExecuteException: Execution failed (Exit value: -559038737. Caused by java.io.IOException: Cannot run program "C:\Users\%USERPROFILE%\AppData\Local\Temp\phantomjs.exe" (in directory "."): CreateProcess error=216, the version of %1 is not compatible with this Windows version. Check the system information of your computer and talk to the distributor of this software)
If I don't try to read PhantomJS out of the jar hence using a relative path it works fine. The question is how I can read and execute PhantomJS from within a jar file or at least get the workaround with reading and writing a new (temporary) file to work.
You can't execute a JAR entry, because a JAR is a zip file and operating systems don't support running executables from inside a zip file. They could in principle, but it would boil down to "copy the exe out of the zip and then run it".
The exe is getting corrupted because you're storing it in a String. Strings aren't binary data, they're UTF-16, which is why you can't read straight from an InputStream into a String--encoding conversion is required. Your code is reading the exe as UTF-8, converting it to UTF-16, then writing it back out with the default character set. Even if the default character set happens to be UTF-8 on your machine, this will result in mangled data because an exe isn't valid UTF-8.
Try this on for size. Java 7 introduced NIO.2, which (among other things), has a lot of convenience methods for common file operations. Including putting an InputStream into a file! I'm also using the temp file API, which will prevent collisions if multiple instances of your app are run at the same time.
public String getFilePath(InputStream inputStream, String prefix, String suffix)
throws IOException
{
java.nio.file.Path p = java.nio.file.Files.createTempFile(prefix, suffix);
p.toFile().deleteOnExit();
java.nio.file.Files.copy(inputStream, p, java.nio.file.StandardCopyOption.REPLACE_EXISTING);
return p.toAbsolutePath().toString();
}
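To illustrate why this survives binary content where the String round-trip doesn't, here is the same method exercised end to end; a ByteArrayInputStream containing a few non-UTF-8 bytes stands in for the real getResourceAsStream("/phantomjs.exe") stream so the sketch is self-contained:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ResourceExtractor {
    // Same approach as above: stream the bytes straight into a temp file,
    // never through a String, so binary content survives intact.
    public static String getFilePath(InputStream inputStream, String prefix, String suffix)
            throws IOException {
        Path p = Files.createTempFile(prefix, suffix);
        p.toFile().deleteOnExit();
        Files.copy(inputStream, p, StandardCopyOption.REPLACE_EXISTING);
        return p.toAbsolutePath().toString();
    }

    public static void main(String[] args) throws IOException {
        // 0xFF is not valid UTF-8, so the old String-based code would mangle it.
        byte[] fakeBinary = {0x4D, 0x5A, 0x00, (byte) 0xFF};
        String path = getFilePath(new ByteArrayInputStream(fakeBinary), "demo", ".bin");
        byte[] roundTripped = Files.readAllBytes(Paths.get(path));
        System.out.println(roundTripped.length); // prints: 4
    }
}
```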