I have written an AWS Lambda handler as below:
package com.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.*;

public class TestDetailsHandler implements RequestStreamHandler {

    public void handleRequest(InputStream input, OutputStream output, Context context) {
        // Get Lambda logger
        LambdaLogger logger = context.getLogger();
        // Receive the input from InputStream, throw exception if any
        File starting = new File(System.getProperty("user.dir"));
        System.out.println("Source Location" + starting);
        File cityFile = new File(starting + "City.db");
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(cityFile);
            System.out.println("Total file size to read (in bytes) : " + fis.available());
            int content;
            while ((content = fis.read()) != -1) {
                // convert to char and display it
                System.out.print((char) content);
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (fis != null)
                    fis.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
It reads a file, City.db, which is available in the resources folder; I even tried keeping it everywhere (see below).
But it shows the following message on execution of this Lambda function:
START RequestId: 5216ea47-fc43-11e5-96d5-83c1dcdad75d Version: $LATEST
Source Location/
java.io.FileNotFoundException: /city.db (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at com.lambda.TestDetailsHandler.handleRequest(TestDetailsHandler.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at lambdainternal.EventHandlerLoader$StreamMethodRequestHandler.handleRequest(EventHandlerLoader.java:511)
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:972)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:231)
at lambdainternal.AWSLambda.<clinit>(AWSLambda.java:59)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at lambdainternal.LambdaRTEntry.main(LambdaRTEntry.java:93)
END RequestId: 5216ea47-fc43-11e5-96d5-83c1dcdad75d
REPORT RequestId: 5216ea47-fc43-11e5-96d5-83c1dcdad75d Duration: 58.02 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 50 MB
Contents of the pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lambda</groupId>
<artifactId>testdetails</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>test-handler</name>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.3</version>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
I have tried keeping the file in various places, but in the end it is not working. Could you please let me know what is wrong here?
However, in another project of mine, where I keep an xyz.properties file in the resources folder and read it from a PropertyManager class, it works fine. When I tested this code on my own system it also worked fine, but as an AWS Lambda function it doesn't.
I have made the following changes in my code and now it works perfectly. Mainly I changed these two lines:
ClassLoader classLoader = getClass().getClassLoader();
File cityFile = new File(classLoader.getResource("City.db").getFile());
package com.lambda;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.*;

public class TestDetailsHandler implements RequestStreamHandler {

    public void handleRequest(InputStream input, OutputStream output, Context context) {
        // Get Lambda logger
        LambdaLogger logger = context.getLogger();
        // Load City.db from the classpath instead of the working directory
        ClassLoader classLoader = getClass().getClassLoader();
        File cityFile = new File(classLoader.getResource("City.db").getFile());
        FileInputStream fis = null;
        try {
            fis = new FileInputStream(cityFile);
            System.out.println("Total file size to read (in bytes) : " + fis.available());
            int content;
            while ((content = fis.read()) != -1) {
                // convert to char and display it
                System.out.print((char) content);
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (fis != null)
                    fis.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }
}
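One caveat with the getResource(...).getFile() approach: once the function is packaged as a jar, the resource is no longer a plain file on disk, and a File-based read can fail again. A stream-based sketch avoids the problem by never converting to a File at all. The ResourceReader class name is illustrative, and the demo builds a throwaway classpath directory rather than a real Lambda deployment:

```java
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceReader {
    // Reads a classpath resource fully into a String (assumes UTF-8 text).
    static String readResource(ClassLoader cl, String name) throws IOException {
        try (InputStream in = cl.getResourceAsStream(name)) {
            if (in == null)
                throw new FileNotFoundException("resource not found: " + name);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1)
                out.write(buf, 0, n);
            return new String(out.toByteArray(), "UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo: build a throwaway classpath directory containing City.db
        Path dir = Files.createTempDirectory("cp");
        Files.write(dir.resolve("City.db"), "Berlin;Paris".getBytes("UTF-8"));
        ClassLoader cl = new URLClassLoader(new URL[]{ dir.toUri().toURL() });
        System.out.println(readResource(cl, "City.db"));  // prints Berlin;Paris
    }
}
```

This works identically whether the resource sits in an exploded directory or inside the deployed jar, which is why stream-based access is generally the safer default.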
This is how I did it. Let's say your project structure looks like this:
And you want to read the file config.properties, which is inside the project-dir/resources directory.
The code for reading the content of the file would be:
InputStream input = null;
try {
    Path path = Paths.get(PropertyUtility.class.getResource("/").toURI());
    // The path for the config file in the Lambda instance
    String resourceLoc = path + "/resources/config.properties";
    input = new FileInputStream(resourceLoc);
} catch (Exception e) {
    // Do whatever
}
If you are following this project structure and using this code, then it will work in AWS Lambda.
PropertyUtility is just a utility class that I have created to read the contents of the config file. The PropertyUtility class looks like this -
As you can see in the above code, the path of the config file is different in the local system and in Lambda Instance.
On your local machine, PropertyUtility.class.getResource("/") points to bin; that is why you have to call path.getParent() to point it to the project directory, which is HelloLambda in this example.
For the Lambda Instance, PropertyUtility.class.getResource("/") points directly to the project-directory.
If the file is located under the resources directory, then the following solution should work:
String fileName = "resources/config.json";
Path path = Paths.get(this.getClass().getResource("/").toURI());
Path resourceLocation = path.resolve(fileName);
try (InputStream configStream = Files.newInputStream(resourceLocation)) {
    // use your file stream as you need
}
The most important part here is "resources/config.json": it must not be "/resources/config.json", because in Lambda the file location is /var/task/resources/config.json (I checked).
Hope this helps anyone who still faces problems reading files in AWS Lambda.
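The leading-slash distinction follows from how Path.resolve works: an absolute argument replaces the base path entirely, while a relative one is appended under it. A quick sketch (POSIX-style paths assumed, mirroring Lambda's /var/task working directory):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ResolveDemo {
    public static void main(String[] args) {
        Path base = Paths.get("/var/task");
        // relative argument: appended under the base
        System.out.println(base.resolve("resources/config.json"));  // /var/task/resources/config.json
        // absolute argument: the base is discarded entirely
        System.out.println(base.resolve("/resources/config.json")); // /resources/config.json
    }
}
```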
If the file is located under the resources folder, you can use it directly in Lambda with something like the following code:
final BufferedReader br = new BufferedReader(new FileReader("/flows/cancellation/MessageArray.json"));
I wanted to read a JSON file; your use case may differ, but the code works.
Ideally, one should read from S3 as much as possible to allow dynamic reads; the reads are also pretty fast.
However, if your Java code is Maven-based, your classpath root corresponds to the src/main/resources location.
So you can read from the classpath, as you would in any web/core app, as shown below:
ClassLoader classLoader = YourClass.class.getClassLoader();
File cityFile = new File(classLoader.getResource("yourFile").getFile());
This has worked for me pretty well!
Previously I was working in AWS and I am new to Google Cloud. In AWS there was a way to upload directories/folders to a bucket. I have done a bit of research on uploading a directory/folder to a Google Cloud bucket but couldn't find anything. Can someone help me achieve this in Google Cloud using Java?
There is no built-in function in the Google Cloud Storage client library (or even in the API) to perform this automatically. You have to upload all your files recursively, managing the folder-tree traversal yourself.
With the gcloud CLI, you can use the command gsutil cp -r .... The -r stands for "recursive" and it performs exactly the same operation.
The Google Cloud Storage client library for Java has no built-in functionality to upload folders, but I crafted this Java code to upload folders to GCS. I used OpenJDK 8 on Debian.
App.java
package com.example.app;

import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) throws IOException {
        // directory that you want to upload
        String dir = "/home/user/repo";
        // get the name of the parent directory
        String[] path = dir.split("/");
        String folder = path[path.length - 1];
        // get files in the main directory
        File[] files = new File(dir).listFiles();
        // define your project ID & bucket name
        String bucket = "myawesomefolder";
        String projectId = "myawesomeprojectID";
        Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();

        System.out.println("Uploading folder: " + folder);
        uploadFolder(files, folder, bucket, storage);
    }

    static void uploadFolder(File[] files, String folder, String bucket, Storage storage) throws IOException {
        for (File file : files) {
            if (!file.isHidden()) {
                // if it is a directory, read the files within the subdirectory
                if (file.isDirectory()) {
                    String[] lpath = file.getAbsolutePath().split("/");
                    String lfolder = lpath[lpath.length - 1];
                    String xfolder = folder + "/" + lfolder;
                    uploadFolder(file.listFiles(), xfolder, bucket, storage); // recurse
                } else {
                    // add directory/subdirectory to the file name to create the file structure
                    BlobId blobId = BlobId.of(bucket, folder + "/" + file.getName());
                    // prepare object
                    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();
                    // upload object
                    storage.create(blobInfo, Files.readAllBytes(Paths.get(file.getAbsolutePath())));
                    System.out.println("Uploaded: gs://" + bucket + "/" + folder + "/" + file.getName());
                }
            }
        }
    }
}
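One caveat with the code above: splitting on "/" ties it to POSIX-style paths. A variant based on Path.relativize derives the same object names while staying separator-agnostic; the class and method names here are illustrative:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ObjectNameDemo {
    // Derives the GCS object name for a file below the upload root,
    // normalizing separators so it also works with Windows paths.
    static String objectName(Path root, Path file) {
        return root.getFileName().resolve(root.relativize(file)).toString().replace('\\', '/');
    }

    public static void main(String[] args) {
        Path root = Paths.get("/home/user/repo");
        Path file = Paths.get("/home/user/repo/src/App.java");
        System.out.println(objectName(root, file));  // repo/src/App.java
    }
}
```

Walking the tree with Files.walk and mapping each regular file through such a helper avoids the manual recursion and string surgery entirely.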
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.example</groupId>
<artifactId>testGcs</artifactId>
<version>1.0-SNAPSHOT</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>7</source>
<target>7</target>
</configuration>
</plugin>
</plugins>
</build>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-storage</artifactId>
<version>1.111.2</version>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-nio</artifactId>
<version>0.121.2</version>
</dependency>
</dependencies>
</project>
I have a project that has a resource file in test folder:
src/test/resources/myfolder/testfile.txt
I have:
@Test
public void test() {
    String args[] = { "myfolder/testfile.txt" };
    MyClass.load(args);
}
And this is MyClass.java method:
public void load(String filePath) {
    ClassLoader classloader = Thread.currentThread().getContextClassLoader();
    InputStream inputStream = classloader.getResourceAsStream(filePath);
    InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
    reader = new BufferedReader(streamReader);
    //...
}
If I launch the tests from Eclipse, all tests go well.
If I launch maven clean install, the test fails with a java.lang.NullPointerException at this line:
InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
What do I have to do?
Thanks
Your testfile.txt resource is in the right place. This should work unless you have custom Maven resource filtering rules e.g. to exclude .txt files. Check what's in the target/test-classes after the failed build.
You could try using the absolute resource path /myfolder/testfile.txt instead and stop using the context class loader:
String path = "/myfolder/testfile.txt";
InputStream inputStream = MyClass.class.getResourceAsStream(path);
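One detail worth knowing when switching between loaders: ClassLoader resource names never take a leading slash; only Class.getResourceAsStream interprets one, as an anchor to the classpath root. A small sketch that fakes a classpath directory to show the difference (class name illustrative):

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourcePathDemo {
    public static void main(String[] args) throws Exception {
        // Build a throwaway classpath directory containing myfolder/testfile.txt
        Path dir = Files.createTempDirectory("cp");
        Files.createDirectories(dir.resolve("myfolder"));
        Files.write(dir.resolve("myfolder").resolve("testfile.txt"), "hello".getBytes("UTF-8"));
        try (URLClassLoader cl = new URLClassLoader(new URL[]{ dir.toUri().toURL() }, null)) {
            // ClassLoader-style names are always relative to the classpath root:
            InputStream ok = cl.getResourceAsStream("myfolder/testfile.txt");
            InputStream bad = cl.getResourceAsStream("/myfolder/testfile.txt");
            System.out.println("without slash: " + (ok != null));  // true
            System.out.println("with slash: " + (bad != null));    // false
        }
    }
}
```

So a NullPointerException from getResourceAsStream is often just a name-resolution mismatch, not a missing file.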
You can try adding the line below inside the build tag in pom.xml.
<directory>src/test/resources</directory>
I have created the same code and it is working for me. Please find the code below; it might help you.
My pom file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.radhey</groupId>
<artifactId>junitTest</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<java-version>1.8</java-version>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.0</version>
<configuration>
<source>${java-version}</source>
<target>${java-version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Main class
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class TestJunit {
    public void load(String filePath) {
        ClassLoader classloader = Thread.currentThread().getContextClassLoader();
        InputStream inputStream = classloader.getResourceAsStream(filePath);
        InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
        BufferedReader reader = new BufferedReader(streamReader);
        String strCurrentLine;
        try {
            while ((strCurrentLine = reader.readLine()) != null) {
                System.out.println(strCurrentLine);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
and Test class
public class Test {
    @org.junit.Test
    public void test() {
        String args[] = { "test/testfile.txt" };
        TestJunit test2 = new TestJunit();
        test2.load(args[0]);
    }
}
I have also added this code to git
https://github.com/itthought/junitTest
I have a maven project with several modules, i.e.
<module>backend</module> <!-- provides annotations -->
<module>annotationProcessor</module> <!-- processes ann., generates files -->
<module>mainprog</module> <!-- uses annotations/files -->
backend provides an annotation class MyAnnotation for annotating classes.
mainprog contains Mainprog.java which defines a class with a @MyAnnotation annotation. At runtime this class tries to load a file via getResourceAsStream("Mainprog.properties") (which does not exist yet).
The annotationProcessor has a class MyAnnotationProcessor which maven executes and finds my annotations.
The processor should create the file Mainprog.properties from information gathered by the annotation processor.
I cannot manage to put the properties file in a place where it is found when executing/testing Mainprog.
Where should I generate the file to, staying within a Maven workflow?
How do I tell Maven this file is used in tests or at runtime? Eventually
it has to be packaged in the jar.
Mainprog
package demo;

@MyAnnotation
public class Mainprog {
}
Use the properties file
Currently I do it in the testing class, but later this will be in the class itself.
package demo;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import org.junit.Test;
public class MainprogTest {
    Class testclass = Mainprog.class;

    @Test
    public void testPropertiesFile() throws IOException {
        String fn = testclass.getCanonicalName().replace('.', '/') + ".properties";
        System.err.println("loading: '" + fn + "'");
        InputStream in = getClass().getResourceAsStream(fn);
        Properties prop = new Properties();
        prop.load(in);
        in.close();
    }
}
This currently runs as such:
loading: 'demo/Mainprog.properties'
Tests in error:
testPropertiesFile(demo.MainprogTest)
with a NullPointerException, because the stream returns null, i.e. does not exist.
Even though the file is there (but is it in the right place?):
towi#havaloc:~/git/project/mainprog$ find . -name Mainprog.properties
./src/java/demo/Mainprog.properties
./target/classes/demo/Mainprog.properties
Processor
package demo;

import com.github.javaparser.*;
import com.github.javaparser.ast.*;
import javax.annotation.processing.*;
import javax.lang.model.element.*;
import java.util.Set;

@SupportedAnnotationTypes({"demo.MyAnnotation"})
public class MyAnnotationProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> elements, RoundEnvironment env) {
        for (TypeElement te : elements) {
            for (Element e : env.getElementsAnnotatedWith(te)) {
                processAnnotation(e);
            }
        }
        return true;
    }

    private void processAnnotation(Element elem) {
        final TypeElement classElem = (TypeElement) elem;
        ...
        final String prefix = System.getProperty("user.dir").endsWith("/" + "mainprog") ? "." : "mainprog";
        final String className = classElem.getQualifiedName().toString();
        String fileName = prefix + "/src/java/" + className.replace('.', '/') + ".java";
        FileInputStream in = new FileInputStream(fileName);
        final CompilationUnit cu = JavaParser.parse(in);
        final CallGraph graph = ...
        generateInfoProperties(classElem, fileName, graph);
    }

    private void generateInfoProperties(TypeElement classElem, String inFilename, CallGraph graph) throws IOException {
        final File outFile = new File(inFilename
                .replace("/src/java/", "/src/java/") // <<< WHERE TO ???
                .replace(".java", ".properties"));
        outFile.getParentFile().mkdirs();
        try (PrintWriter writer = new PrintWriter(outFile, "UTF-8")) {
            final Properties ps = new Properties();
            graph.storeAsProperties(ps);
            ps.store(writer, inFilename);
            writer.close();
        }
    }
}
As you can see, there is a lot of guesswork and "heuristics" going on when handling directory names. All that System.getProperty("user.dir") and replace("/src/java/", "/src/java/") is probably wrong, but what is better?
maven
In Maven I have 4 poms, of course
pom.xml
backend/pom.xml
annotationProcessor/pom.xml
mainprog/pom.xml
Only one of them seems to me to contain anything of note, namely the execution of the annotation processor in mainprog/pom.xml:
<project>
....
<dependencies>
<dependency>
<groupId>project</groupId>
<artifactId>backend</artifactId>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>project</groupId>
<artifactId>annotationProcessor</artifactId>
<scope>compile</scope>
</dependency>
</dependencies>
<build>
<finalName>mainprog</finalName>
<sourceDirectory>src/java</sourceDirectory>
<resources>
<resource>
<directory>${basedir}/src/conf</directory>
<targetPath>META-INF</targetPath>
</resource>
<resource>
<directory>${basedir}/web</directory>
</resource>
<resource>
<directory>${basedir}/src/java</directory>
<includes>
<include>**/*.xml</include>
<include>**/*.properties</include>
<include>**/*.wsdl</include>
<include>**/*.xsd</include>
</includes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<annotationProcessors>
<annotationProcessor>demo.MyAnnotationProcessor
</annotationProcessor>
</annotationProcessors>
</configuration>
</plugin>
...
</plugins>
</build>
</project>
I thought that generating the file into /src/java/ and then having <resource><directory>${basedir}/src/java with <include>**/*.properties would be enough, but apparently it is not. Why is that?
Use the provided Filer, which can be obtained using processingEnv.getFiler(). If you create a source file using it, the compiler will compile it on the next round and you won't need to worry about configuring Maven to compile generated source files.
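To make the Filer approach concrete, here is a self-contained sketch (class names and the answer=42 payload are illustrative) that runs javac programmatically with a tiny processor which emits demo/Mainprog.properties into CLASS_OUTPUT, which in a Maven build is target/classes, so the file gets packaged into the jar automatically:

```java
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;
import javax.tools.*;
import java.io.*;
import java.nio.file.*;
import java.util.Collections;
import java.util.Set;

public class FilerDemo {
    // Minimal processor that writes demo/Mainprog.properties via the Filer.
    @SupportedAnnotationTypes("*")
    static class PropsProcessor extends AbstractProcessor {
        private boolean done = false;

        @Override
        public SourceVersion getSupportedSourceVersion() {
            return SourceVersion.latestSupported();
        }

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment env) {
            if (done || env.processingOver()) return false;
            done = true;  // a resource may only be created once
            try {
                FileObject fo = processingEnv.getFiler()
                        .createResource(StandardLocation.CLASS_OUTPUT, "demo", "Mainprog.properties");
                try (Writer w = fo.openWriter()) {
                    w.write("answer=42\n");
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Compile a dummy demo.Mainprog with the processor attached.
        Path src = Files.createTempDirectory("src");
        Path out = Files.createTempDirectory("classes");
        Files.createDirectories(src.resolve("demo"));
        Files.write(src.resolve("demo/Mainprog.java"),
                "package demo; public class Mainprog {}".getBytes("UTF-8"));

        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        StandardJavaFileManager fm = javac.getStandardFileManager(null, null, null);
        fm.setLocation(StandardLocation.CLASS_OUTPUT, Collections.singletonList(out.toFile()));
        JavaCompiler.CompilationTask task = javac.getTask(null, fm, null, null, null,
                fm.getJavaFileObjects(src.resolve("demo/Mainprog.java").toFile()));
        task.setProcessors(Collections.singletonList(new PropsProcessor()));

        System.out.println("compiled: " + task.call());
        // The resource lands next to the compiled classes, package path included.
        System.out.println(Files.readAllLines(out.resolve("demo/Mainprog.properties")).get(0));
    }
}
```

With the Filer there is no directory guesswork at all; the compiler decides where CLASS_OUTPUT lives, and getResourceAsStream finds the file at runtime because it sits on the classpath next to the .class files.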
I've just been playing around with the Java 7 WatchService for monitoring a file for change.
Here's a little bit of code I knocked up:
WatchService watcher = FileSystems.getDefault().newWatchService();
Path path = Paths.get("c:\\testing");
path.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

while (true) {
    WatchKey key = watcher.take();
    for (WatchEvent event : key.pollEvents()) {
        System.out.println(event.kind() + ":" + event.context());
    }
    boolean valid = key.reset();
    if (!valid) {
        break;
    }
}
This seems to be working, and I get notifications when a file 'changethis.txt' gets modified.
However, in addition to being notified when a file changes, is there any way of being notified of the location within the file where the modification occurred?
I've had a look through the Java docs but I can't seem to find anything.
Is this possible using the WatchService, or would something custom have to be implemented?
Thanks
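For the common append-only case (log files), one workaround is to remember the file length at the last event and read only the delta yourself: the WatchService tells you that the file changed, and the stored offset tells you where. A minimal sketch (class name illustrative, assumes append-only writes):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendTracker {
    private long position = 0;

    // Returns only the bytes appended since the last call (assumes append-only writes).
    String readNew(Path file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(position);
            byte[] buf = new byte[(int) (raf.length() - position)];
            raf.readFully(buf);
            position = raf.length();
            return new String(buf, "UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("log", ".txt");
        AppendTracker tracker = new AppendTracker();
        Files.write(f, "first\n".getBytes("UTF-8"), StandardOpenOption.APPEND);
        System.out.print(tracker.readNew(f));  // first
        Files.write(f, "second\n".getBytes("UTF-8"), StandardOpenOption.APPEND);
        System.out.print(tracker.readNew(f));  // second
    }
}
```

For edits in the middle of a file this offset trick is not enough; you would need to keep a copy of the previous content and diff against it, which is what the answers below do.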
For what it is worth, I have hacked up a little proof of concept which is able to:
detect added, modified and deleted files in a watched directory,
display unified diffs for each change (also full diffs when files were added/deleted),
keep track of successive changes by keeping a shadow copy of the source directory,
work in a user-defined rhythm (default is 5 seconds) so as not to print too many small diffs in a short period of time, but rather somewhat bigger ones once in a while.
There are several limitations which would be impediments in production environments:
In order to not complicate the sample code more than necessary, subdirectories are copied at the beginning when the shadow directory is created (because I have recycled an existing method to create a deep directory copy), but ignored during runtime. Only files right below the watched directory are being monitored so as to avoid recursion.
Your requirement not to use external libraries is not met because I really wanted to avoid re-inventing the wheel for unified diff creation.
This solution's biggest advantage (it is able to detect changes anywhere in a text file, not only at the end of the file like tail -f) is also its biggest disadvantage: whenever a file changes it must be fully shadow-copied, because otherwise the program cannot detect the subsequent change. So I would not recommend this solution for very big files.
How to build:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>de.scrum-master.tools</groupId>
<artifactId>SO_WatchServiceChangeLocationInFile</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>com.googlecode.java-diff-utils</groupId>
<artifactId>diffutils</artifactId>
<version>1.3.0</version>
</dependency>
</dependencies>
</project>
Source code (sorry, a bit lengthy):
package de.scrum_master.app;
import difflib.DiffUtils;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.LinkedList;
import java.util.List;
import static java.nio.file.StandardWatchEventKinds.*;
public class FileChangeWatcher {
public static final String DEFAULT_WATCH_DIR = "watch-dir";
public static final String DEFAULT_SHADOW_DIR = "shadow-dir";
public static final int DEFAULT_WATCH_INTERVAL = 5;
private Path watchDir;
private Path shadowDir;
private int watchInterval;
private WatchService watchService;
public FileChangeWatcher(Path watchDir, Path shadowDir, int watchInterval) throws IOException {
this.watchDir = watchDir;
this.shadowDir = shadowDir;
this.watchInterval = watchInterval;
watchService = FileSystems.getDefault().newWatchService();
}
public void run() throws InterruptedException, IOException {
prepareShadowDir();
watchDir.register(watchService, ENTRY_CREATE, ENTRY_MODIFY, ENTRY_DELETE);
while (true) {
WatchKey watchKey = watchService.take();
for (WatchEvent<?> event : watchKey.pollEvents()) {
Path oldFile = shadowDir.resolve((Path) event.context());
Path newFile = watchDir.resolve((Path) event.context());
List<String> oldContent;
List<String> newContent;
WatchEvent.Kind<?> eventType = event.kind();
if (!(Files.isDirectory(newFile) || Files.isDirectory(oldFile))) {
if (eventType == ENTRY_CREATE) {
if (!Files.isDirectory(newFile))
Files.createFile(oldFile);
} else if (eventType == ENTRY_MODIFY) {
Thread.sleep(200);
oldContent = fileToLines(oldFile);
newContent = fileToLines(newFile);
printUnifiedDiff(newFile, oldFile, oldContent, newContent);
try {
Files.copy(newFile, oldFile, StandardCopyOption.REPLACE_EXISTING);
} catch (Exception e) {
e.printStackTrace();
}
} else if (eventType == ENTRY_DELETE) {
try {
oldContent = fileToLines(oldFile);
newContent = new LinkedList<>();
printUnifiedDiff(newFile, oldFile, oldContent, newContent);
Files.deleteIfExists(oldFile);
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
watchKey.reset();
Thread.sleep(1000 * watchInterval);
}
}
private void prepareShadowDir() throws IOException {
recursiveDeleteDir(shadowDir);
Runtime.getRuntime().addShutdownHook(
new Thread() {
@Override
public void run() {
try {
System.out.println("Cleaning up shadow directory " + shadowDir);
recursiveDeleteDir(shadowDir);
} catch (IOException e) {
e.printStackTrace();
}
}
}
);
recursiveCopyDir(watchDir, shadowDir);
}
public static void recursiveDeleteDir(Path directory) throws IOException {
if (!directory.toFile().exists())
return;
Files.walkFileTree(directory, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
Files.delete(file);
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
Files.delete(dir);
return FileVisitResult.CONTINUE;
}
});
}
public static void recursiveCopyDir(final Path sourceDir, final Path targetDir) throws IOException {
Files.walkFileTree(sourceDir, new SimpleFileVisitor<Path>() {
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
Files.copy(file, Paths.get(file.toString().replace(sourceDir.toString(), targetDir.toString())));
return FileVisitResult.CONTINUE;
}
@Override
public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
Files.createDirectories(Paths.get(dir.toString().replace(sourceDir.toString(), targetDir.toString())));
return FileVisitResult.CONTINUE;
}
});
}
private static List<String> fileToLines(Path path) throws IOException {
List<String> lines = new LinkedList<>();
String line;
try (BufferedReader reader = new BufferedReader(new FileReader(path.toFile()))) {
while ((line = reader.readLine()) != null)
lines.add(line);
}
catch (Exception e) {}
return lines;
}
private static void printUnifiedDiff(Path oldPath, Path newPath, List<String> oldContent, List<String> newContent) {
List<String> diffLines = DiffUtils.generateUnifiedDiff(
newPath.toString(),
oldPath.toString(),
oldContent,
DiffUtils.diff(oldContent, newContent),
3
);
System.out.println();
for (String diffLine : diffLines)
System.out.println(diffLine);
}
public static void main(String[] args) throws IOException, InterruptedException {
String watchDirName = args.length > 0 ? args[0] : DEFAULT_WATCH_DIR;
String shadowDirName = args.length > 1 ? args[1] : DEFAULT_SHADOW_DIR;
int watchInterval = args.length > 2 ? Integer.parseInt(args[2]) : DEFAULT_WATCH_INTERVAL;
new FileChangeWatcher(Paths.get(watchDirName), Paths.get(shadowDirName), watchInterval).run();
}
}
I recommend to use the default settings (e.g. use a source directory named "watch-dir") and play around with it for a while, watching the console output as you create and edit some text files in an editor. It helps understand the software's inner mechanics. If something goes wrong, e.g. within one 5 second rhythm a file is created but also quickly deleted again, there is nothing to copy or diff, so the program will just print a stack trace to System.err.
Okay, here is another answer as a variation of my previous one for changes at any file position (diff). Now the somewhat simpler case is files only being appended (tail).
How to build:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>de.scrum-master.tools</groupId>
<artifactId>SO_WatchServiceChangeLocationInFile</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<build>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<!-- Use snapshot because of the UTF-8 problem in https://issues.apache.org/jira/browse/IO-354 -->
<version>2.5-SNAPSHOT</version>
</dependency>
</dependencies>
<repositories>
<repository>
<id>apache.snapshots</id>
<url>http://repository.apache.org/snapshots/</url>
</repository>
</repositories>
</project>
As you can see, we use Apache Commons IO here. (Why a snapshot version? Follow the link in the XML comment if you are interested.)
Source code:
package de.scrum_master.app;
import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;
public class FileTailWatcher {
public static final String DEFAULT_WATCH_DIR = "watch-dir";
public static final int DEFAULT_WATCH_INTERVAL = 5;
private Path watchDir;
private int watchInterval;
private WatchService watchService;
public FileTailWatcher(Path watchDir, int watchInterval) throws IOException {
if (!Files.isDirectory(watchDir))
throw new IllegalArgumentException("Path '" + watchDir + "' is not a directory");
this.watchDir = watchDir;
this.watchInterval = watchInterval;
watchService = FileSystems.getDefault().newWatchService();
}
public static class MyTailerListener extends TailerListenerAdapter {
public void handle(String line) {
System.out.println(line);
}
}
public void run() throws InterruptedException, IOException {
try (DirectoryStream<Path> dirEntries = Files.newDirectoryStream(watchDir)) {
for (Path file : dirEntries)
createTailer(file);
}
watchDir.register(watchService, ENTRY_CREATE);
while (true) {
WatchKey watchKey = watchService.take();
for (WatchEvent<?> event : watchKey.pollEvents())
createTailer(watchDir.resolve((Path) event.context()));
watchKey.reset();
Thread.sleep(1000 * watchInterval);
}
}
private Tailer createTailer(Path path) {
if (Files.isDirectory(path))
return null;
System.out.println("Creating tailer: " + path);
return Tailer.create(
path.toFile(), // File to be monitored
Charset.defaultCharset(), // Character set (available since Commons IO 2.5)
new MyTailerListener(), // What should happen for new tail events?
1000, // Delay between checks in ms
true, // Tail from end of file, not from beginning
true, // Close & reopen files in between reads,
// otherwise file is locked on Windows and cannot be deleted
4096 // Read buffer size
);
}
public static void main(String[] args) throws IOException, InterruptedException {
String watchDirName = args.length > 0 ? args[0] : DEFAULT_WATCH_DIR;
int watchInterval = args.length > 1 ? Integer.parseInt(args[1]) : DEFAULT_WATCH_INTERVAL;
new FileTailWatcher(Paths.get(watchDirName), watchInterval).run();
}
}
Now try appending to existing files and/or creating new ones. Everything will be printed to standard output. In a production environment you would maybe display multiple windows or tabs, one for each log file. Whatever...
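To drive the watcher from a second process, just keep appending lines to a file inside the watched directory. A minimal sketch (the directory and file names are only examples, not part of the watcher above):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LogAppender {
    // Append one line to the given file, creating the file on first use
    static void appendLine(Path file, String line) throws IOException {
        Files.write(
                file,
                (line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : "watch-dir");
        Files.createDirectories(dir);
        Path log = dir.resolve("example.log");
        appendLine(log, "first line");
        appendLine(log, "second line");
    }
}
```

Run the watcher in one terminal and this appender in another; each appended line should show up on the watcher's standard output within one polling delay.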
@Simon: I hope this one suits your situation better than the more general case and is worth a bounty. :-)
I am using JDBC to connect to an Oracle 10g database. Building the connection in Eclipse/Java works fine. However, when I move the code to a Lotus 8.5.2 agent, I end up with the following error:
java.lang.ArrayIndexOutOfBoundsException: Array index out of range: -1
at oracle.jdbc.driver.T4CTTIoauthenticate.setSessionFields(T4CTTIoauthenticate.java:1019)
at oracle.jdbc.driver.T4CTTIoauthenticate.<init>(T4CTTIoauthenticate.java:186)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:354)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:454)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:802)
at oracle.jdbc.pool.OracleDataSource.getPhysicalConnection(OracleDataSource.java:298)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:222)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:166)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
This is the code used to connect:
Class.forName("oracle.jdbc.OracleDriver");
Connection conn = DriverManager.getConnection(
"jdbc:oracle:thin:@xx.xx.xx.xx:1521:xx", "xx", "xx");
I have tried to solve this in different ways:
- use the Lotus JVM in eclipse
- use different jdbc jars in eclipse
- use different ways to build the connection in Lotus
- use different jdbc jars jars in lotus
Finally I moved the ojdbc14.jar file to the Lotus\Notes\jvm\lib\ext directory, and it works fine now.
This solution works, but obviously I would prefer to distribute this jar along with the NSF. Is there a way I can make this happen?
As suggested by leyrer, I tried adding the following line to the "/jvm/lib/security/java.policy" file:
permission java.security.AllPermission;
This still results in the same error message.
For now I will stick with placing the ojdbc5.jar in the /ext directory.
If you are using the ojdbc jar unpacked, be sure you are not excluding the oracle/sql/converter_xcharset/*.glb files. I was getting this same error because my executable jar, built with Maven, did not include these files. The block below explicitly includes them:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>unpack-dependencies</id>
<phase>prepare-package</phase>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<configuration>
<excludeTypes>pom</excludeTypes>
<includes>**/*.class,**/*.glb</includes>
<outputDirectory>${project.build.directory}/classes</outputDirectory>
<overWriteReleases>false</overWriteReleases>
<overWriteSnapshots>true</overWriteSnapshots>
</configuration>
</execution>
</executions>
</plugin>
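Whether the .glb files actually ended up in your artifact is easy to verify, either with `jar tf your.jar` or programmatically. A small sketch (the jar path passed in is an assumption; point it at your own build output):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class GlbChecker {
    // Return the names of all *.glb entries contained in the given jar
    static List<String> listGlbEntries(String jarPath) throws IOException {
        List<String> result = new ArrayList<>();
        try (JarFile jar = new JarFile(jarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.getName().endsWith(".glb")) {
                    result.add(entry.getName());
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        for (String name : listGlbEntries(args[0])) {
            System.out.println(name);
        }
    }
}
```

If the list comes back empty for a jar that bundles the Oracle driver classes, the exclusion described above is likely your problem.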
I would guess that the JVM's Security Manager is not allowing access to the network because the security policy does not allow this action.
See Flying Saucer in Lotus Notes for more details.
This issue arose a long time ago, but I had to investigate it for a customer this month, and all I found were false leads and incomplete analyses, no solution. So, for everyone who runs into it, I will share my findings, the root cause of the issue, and your options for getting it solved. I have tested this with version 11.2.0.4 of the driver (ojdbc6.jar). Also note: if your database uses UTF-8 encoding, it seems to work with just the java.policy adjustments; in my case it was a database with Windows-1252 encoding.
First of all, the Oracle JDBC driver needs some security adjustments. It is better to set these explicitly rather than via permission java.security.AllPermission;. Use these permissions, taken from the Oracle JDBC driver download page (ojdbc.policy file):
permission java.util.PropertyPermission "user.name", "read";
permission java.util.PropertyPermission "oracle.jdbc.*", "read";
permission java.util.PropertyPermission "oracle.net.wallet_location", "read";
permission java.util.PropertyPermission "oracle.net.tns_admin", "read";
permission javax.management.MBeanServerPermission "createMBeanServer";
permission javax.management.MBeanPermission "oracle.jdbc.driver.OracleDiagnosabilityMBean#[com.oracle.jdbc:type=diagnosability,*]", "registerMBean";
permission javax.management.MBeanTrustPermission "register";
Once these settings are in place, you will still run into the java.lang.ArrayIndexOutOfBoundsException: Array index out of range: -1 issue. The root cause is that the class loader of Java agents (lotus.domino.AgentLoader) does not implement getResource(String name), so that method always returns null to its callers. The Oracle JDBC driver needs the .glb files from the oracle/sql/converter_xcharset folder inside the jar to work properly, and it loads them via the getResource method mentioned above, so this cannot work. The result is the ArrayIndexOutOfBoundsException.
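The failure mode can be reproduced with any class loader whose getResource always returns null, which is how the agent loader behaves according to the analysis above. This is just an illustration, not Domino code:

```java
import java.net.URL;

public class BrokenLoaderDemo {
    // Mimics a loader that never answers resource lookups,
    // like the agent class loader described above
    static class BrokenLoader extends ClassLoader {
        BrokenLoader(ClassLoader parent) {
            super(parent);
        }

        @Override
        public URL getResource(String name) {
            return null; // resource is never found, regardless of what the parent knows
        }
    }

    public static void main(String[] args) {
        String res = "java/lang/Object.class"; // a resource that certainly exists
        URL viaSystem = ClassLoader.getSystemClassLoader().getResource(res);
        URL viaBroken = new BrokenLoader(ClassLoader.getSystemClassLoader()).getResource(res);
        System.out.println("system loader: " + viaSystem);
        System.out.println("broken loader: " + viaBroken); // always null
    }
}
```

Library code that trusts getResource to find a bundled file (as the Oracle driver does for its .glb tables) then fails much later, with an error that gives no hint about the class loader.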
So the only solutions are either to load the driver from the file system (so that a default JVM class loader is used) or to change the class-loading process as follows.
First, create a custom class loader:
import lotus.domino.AgentLoader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
public class CustomLoader extends ClassLoader {
private final AgentLoader loader;
public CustomLoader(AgentLoader agentLoader, ClassLoader parent) {
super(parent);
loader = agentLoader;
}
@Override
public URL getResource(String name) {
InputStream is = loader.getResourceAsStream(name);
if (is == null) {
return super.getResource(name);
}
try {
is.close();
} catch (IOException e) {
e.printStackTrace();
return null;
}
try {
URL url = new URL("dominoinmemory", "", -1, name, new DominoInMemoryStreamHandler(name));
System.out.println(url);
return url;
} catch (MalformedURLException e) {
e.printStackTrace();
return null;
}
}
private class DominoInMemoryStreamHandler extends URLStreamHandler {
private String resName;
byte[] content = null;
public DominoInMemoryStreamHandler(String resName) {
this.resName = resName;
}
@Override
protected URLConnection openConnection(final URL u) throws IOException {
if (!u.getProtocol().equals("dominoinmemory"))
throw new IOException("Cannot handle protocol: " + u.getProtocol());
InputStream is = loader.getResourceAsStream(resName);
content = toByteArray(is);
return new URLConnection(u) {
@Override
public int getContentLength() {
if (content != null) {
return content.length;
} else {
return super.getContentLength();
}
}
@Override
public void connect() throws IOException {
if (content != null) {
connected = true;
} else {
throw new IOException("The resource '" + resName + "' was not found");
}
}
@Override
public InputStream getInputStream() throws IOException {
return new ByteArrayInputStream(content);
}
};
}
}
public static byte[] toByteArray(InputStream input) throws IOException {
ByteArrayOutputStream output = new ByteArrayOutputStream();
byte[] buffer = new byte[4096];
int n;
while (-1 != (n = input.read(buffer))) {
output.write(buffer, 0, n);
}
return output.toByteArray();
}
}
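The toByteArray helper mirrors Commons IO's IOUtils.toByteArray (drain a stream into memory). A quick standalone round trip, with the same loop copied into a throwaway class so it can run outside the agent:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ToByteArrayDemo {
    // Same drain-the-stream loop as in CustomLoader.toByteArray above
    static byte[] toByteArray(InputStream input) throws IOException {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int n;
        while (-1 != (n = input.read(buffer))) {
            output.write(buffer, 0, n);
        }
        return output.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "hello glb".getBytes();
        byte[] copy = toByteArray(new ByteArrayInputStream(original));
        System.out.println(Arrays.equals(original, copy)); // prints "true"
    }
}
```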
Then, in the Domino agent, before any other operation, change the parent class loader of the AgentLoader via reflection:
public void NotesMain() {
try {
AgentLoader agentLoader = (AgentLoader) getClass().getClassLoader();
Field f1 = agentLoader.getClass().getSuperclass().getDeclaredField("parent");
f1.setAccessible(true);
ClassLoader parent = (ClassLoader) f1.get(agentLoader);
f1.set(agentLoader, new CustomLoader(agentLoader, parent));
...
Attention:
Use this at your own risk!
This code requires two additional entries in the policy file:
permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
permission java.net.NetPermission "specifyStreamHandler";