File lock exception when using GraphHopper in a Java program

I'm using GraphHopper in the following way:
GraphHopper hopper = new GraphHopper().forServer();
hopper.setCHEnable(false);
hopper.setGraphHopperLocation(GraphHoperMasterFile);
hopper.setOSMFile(OSMFile);
hopper.setEncodingManager(new EncodingManager("car,bike"));
hopper.importOrLoad();
GHRequest req = new GHRequest().addPoint(new GHPoint (latFrom, lonFrom)).addPoint(new GHPoint(latTo, lonTo))
.setVehicle("car")
.setWeighting("fastest")
.setAlgorithm(AlgorithmOptions.ASTAR_BI);
req.getHints().put("pass_through", true);
GHResponse res = hopper.route(req);
I obtained the GraphHoperMasterFile by downloading the zip from https://github.com/graphhopper/graphhopper/blob/0.5/docs/core/routing.md.
I obtained the .osm file from http://download.geofabrik.de/europe/great-britain/england/greater-london.html.
I also added the Maven dependency from http://mvnrepository.com/artifact/com.graphhopper/graphhopper-web/0.5.0. I get the sense that it's wrong to have the Maven dependency and also reference the graphHopperLocation, but I'm not sure.
When I run this code I sometimes (not every time) get the following errors:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: To avoid reading partial data we need to obtain the read lock but it failed.
Caused by: java.lang.RuntimeException: To avoid reading partial data we need to obtain the read lock but it failed.
Caused by: java.nio.channels.OverlappingFileLockException
When it works I get the following:
2016-01-28 08:48:14,551 [pool-1-thread-8] INFO com.graphhopper.GraphHopper - version 0.5.0|2015-08-12T12:33:51+0000 (4,12,3,2,2,1)
2016-01-28 08:48:14,551 [pool-1-thread-8] INFO com.graphhopper.GraphHopper - graph car,bike|RAM_STORE|2D|NoExt|4,12,3,2,2, details:edges:387 339(12MB), nodes:291 068(4MB), name:(2MB), geo:960 828(4MB), bounds:-0.5177850019436703,0.33744369456418666,51.28324388600686,51.69833101402963
I can see where the error is thrown here: https://github.com/graphhopper/graphhopper/blob/master/core/src/main/java/com/graphhopper/GraphHopper.java
How can I stop this error from happening?
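The OverlappingFileLockException typically means that two GraphHopper instances are opening the same graph folder inside the same JVM, which can happen if the setup code above (including importOrLoad()) runs per request or per pool thread. A minimal sketch, assuming that is the cause here, of building the graph once and sharing the instance (routing after importOrLoad() can be called from multiple threads); the folder and OSM paths are placeholders for the ones used above:

public class SharedHopper {
    // Adjust these to the paths used in the question (GraphHoperMasterFile / greater-london .osm file).
    private static final String GRAPH_FOLDER = "graph-cache";
    private static final String OSM_FILE = "greater-london-latest.osm";

    // Built exactly once per JVM, so the graph folder is locked only once.
    private static final GraphHopper HOPPER = create();

    private static GraphHopper create() {
        GraphHopper hopper = new GraphHopper().forServer();
        hopper.setCHEnable(false);
        hopper.setGraphHopperLocation(GRAPH_FOLDER);
        hopper.setOSMFile(OSM_FILE);
        hopper.setEncodingManager(new EncodingManager("car,bike"));
        hopper.importOrLoad();
        return hopper;
    }

    public static GHResponse route(GHRequest req) {
        return HOPPER.route(req); // safe to call from worker threads once the import has finished
    }
}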

Related

java: Error initializing PKCS11 provider, getting IOException C_GetFunctionList == NULL

I'm trying to write a Java application for digitally signing documents using a bit4Id miniLector token.
I'm in a Linux development environment.
The token is correctly installed and I can also sign my documents with the app downloaded from the manufacturer, but I have to write a new application for other purposes. The driver used is located at
/usr/lib/x86_64-linux-gnu/engines-1.1/pkcs11.so
I'm stuck with this error:
/usr/lib/jvm/jdk1.8.0_111/bin/java ...
Exception in thread "main" java.security.ProviderException: Initialization failed
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:376)
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:103)
at com.itextpdf.samples.signatures.chapter02.C2_01_SignHelloWorld.main
(C2_01_SignHelloWorld.java:83)
Caused by: java.io.IOException: ERROR: C_GetFunctionList == NULL
at sun.security.pkcs11.wrapper.PKCS11.connect(Native Method)
at sun.security.pkcs11.wrapper.PKCS11.<init>(PKCS11.java:138)
at sun.security.pkcs11.wrapper.PKCS11.getInstance(PKCS11.java:151)
at sun.security.pkcs11.SunPKCS11.<init>(SunPKCS11.java:313)
... 2 more
The provider is listed in $JAVA_HOME/jre/lib/security/java.security file as:
security.provider.10=sun.security.pkcs11.SunPKCS11
The code that behaves this way is:
String configFile = "/opt/bar/cfg/pkcs11.cfg";
Provider provider = new sun.security.pkcs11.SunPKCS11(configFile); <-- line 83
The needed libraries are all imported by my IDE and I have no compile/link errors.
I didn't find this exact type of error in hours of googling.
If you need any further information let me know; any kind of help is very much appreciated, thanks.
For visual clarity, I have added all the information missing from the original question below.
Updates
Content of the pkcs11.cfg file:
$ cat /opt/bar/cfg/pkcs11.cfg
name="bit4id miniLector-EVO"
library=/usr/lib/x86_64-linux-gnu/engines-1.1/pkcs11.so
Ok, I got it.
The problem is the driver.
Replacing
/usr/lib/x86_64-linux-gnu/engines-1.1/pkcs11.so
with
/opt/Firma4NG/System/Firma4NG_Linux/Firma4/drivers/mu-x64/libbit4xpki.so
which is one of the manufacturer's drivers, I can now go further and, for example, dump all the info about the card:
Information for provider SunPKCS11-bit4id miniLector-EVO
Library info:
cryptokiVersion: 2.20
manufacturerID: bit4id srl
flags: 0
libraryDescription: bit4id PKCS#11
libraryVersion: 1.02
...
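For completeness, a minimal sketch (not the manufacturer's exact code) of initializing the provider with the working driver and inspecting the token; the PIN is a placeholder and the keystore part is only needed if you want to list the certificates on the card:

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;
import java.util.Enumeration;

public class TokenInfo {
    public static void main(String[] args) throws Exception {
        // pkcs11.cfg now points at the manufacturer's libbit4xpki.so
        Provider provider = new sun.security.pkcs11.SunPKCS11("/opt/bar/cfg/pkcs11.cfg");
        Security.addProvider(provider);
        System.out.println(provider.getName() + ": " + provider.getInfo());

        // Optional: open the token as a PKCS11 keystore and list the certificate aliases
        KeyStore ks = KeyStore.getInstance("PKCS11", provider);
        ks.load(null, "0000".toCharArray()); // placeholder PIN
        Enumeration<String> aliases = ks.aliases();
        while (aliases.hasMoreElements()) {
            System.out.println(aliases.nextElement());
        }
    }
}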
This question can be closed.

Failed to delete the state directory in IDE for Kafka Stream Application

I am developing a simple Kafka Streams application which extracts messages from a topic and puts them into another topic after transformation. I am using IntelliJ for my development.
When I debug/run this application, it works perfectly if my IDE and the Kafka server are sitting on the SAME machine
(i.e. with the BOOTSTRAP_SERVERS_CONFIG = localhost:9092 and
SCHEMA_REGISTRY_URL_CONFIG = localhost:8081)
However, when I try to use another machine to do the development
(i.e. with the BOOTSTRAP_SERVERS_CONFIG = XXX.XXX.XXX:9092 and
SCHEMA_REGISTRY_URL_CONFIG = XXX.XXX.XXX:8081 where XXX.XXX.XXX is the
IP address of my Kafka server),
the debug process runs without problems the 1st time. However, when I run it a 2nd time after resetting the offset, I receive the following error:
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
java.nio.file.DirectoryNotEmptyException: \tmp\kafka-streams\my_application_id\0_0
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException:
If I change my_application_id to my_application_id2 and run it, it works again the 1st time, but I receive the error again if I run it a 2nd time.
I have the following code as the last statement in my application:
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
Any advice on how to solve this problem?
UPDATE:
I have reviewed the state directory created on my development machine (Windows platform), and if I delete this directory manually before running a 2nd time, no error is found. I have tried to run my IDE as Administrator because I thought this could be something about the permissions on the folder. However, this doesn't help.
Full stack trace for reference:
INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser:109)
INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser:110)
INFO stream-thread [main] Deleting state directory 0_0 for task 0_0 as user calling cleanup. (org.apache.kafka.streams.processor.internals.StateDirectory:281)
Disconnected from the target VM, address: '127.0.0.1:16552', transport: 'socket'
Exception in thread "main" org.apache.kafka.streams.errors.StreamsException: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:231)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
Caused by: java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
ERROR stream-thread [main] Failed to delete the state directory. (org.apache.kafka.streams.processor.internals.StateDirectory:297)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
java.nio.file.DirectoryNotEmptyException: C:\workspace\bennychan\kafka-streams\my_application_001\0_0
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:266)
... 3 more
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:651)
at org.apache.kafka.common.utils.Utils$1.postVisitDirectory(Utils.java:634)
at java.nio.file.Files.walkFileTree(Files.java:2688)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at org.apache.kafka.common.utils.Utils.delete(Utils.java:634)
at org.apache.kafka.streams.processor.internals.StateDirectory.cleanRemovedTasks(StateDirectory.java:287)
at org.apache.kafka.streams.processor.internals.StateDirectory.clean(StateDirectory.java:228)
at org.apache.kafka.streams.KafkaStreams.cleanUp(KafkaStreams.java:931)
at com.macroviewhk.financialreport.simpleStream.start(simpleStream.java:60)
at com.macroviewhk.financialreport.simpleStream.main(simpleStream.java:45)
UPDATE 2 :
After another detailed check, the line below is the one throwing the IOException:
Files.walkFileTree(file.toPath(), new SimpleFileVisitor<Path>() {
This line is located in org.apache.kafka.common.utils.Utils.class in kafka-clients-1.1.0.jar.
Maybe this is a problem with the Windows system (sorry, I am not an experienced Java programmer).
For googlers...
I'm currently using this Scala code to help Windows users handle deletion of the state store.
if (System.getProperty("os.name").toLowerCase.contains("windows")) {
  logger.info("WINDOWS OS MODE - Cleanup state store.")
  try {
    FileUtils.deleteDirectory(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
    FileUtils.forceMkdir(new File("/tmp/kafka-streams/" + config.getProperty("application.id")))
  } catch {
    case e: Exception => logger.error(e.toString)
  }
} else {
  streams.cleanUp()
}
I agree with @ideano1 that it seems to be related to https://issues.apache.org/jira/browse/KAFKA-6647 -- what you can try is to explicitly call KafkaStreams#cleanUp() between tests. It's unclear why there are issues on Windows. At the moment, all testing happens on Linux.
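A minimal sketch of that suggestion in Java; topology and props stand for whatever the application already builds, and cleanUp() is only valid while the instance is not running:

KafkaStreams streams = new KafkaStreams(topology, props);
streams.cleanUp(); // wipes this application's local state directory; call before start() or after close()
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));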
This is what we've implemented that works on Windows. This is written in Kotlin.
Version used: kafka-streams-test-utils:2.3.0.
The key is to catch the exception. The tests will pass as long as you catch the exception raised by testDriver.close(), even if you don't delete the directory. However, cleaning up the directory makes your unit tests independent and repeatable.
val directory = "test"
#BeforeEach
fun setup(){
//other code omitted for setting the props
props.setProperty(StreamsConfig.STATE_DIR_CONFIG,directory)
}
#AfterEach
fun tearDown(){
try{
testDriver.close()
}catch(exception: Exception){
FileUtils.deleteDirectory(File(directory)) //there is a bug on Windows that does not delete the state directory properly. In order for the test to pass, the directory must be deleted manually
}
}
For tests (and not only tests, if you can afford it), one could use an in-memory store for each KTable created (directly or indirectly, e.g. by aggregations); this avoids the creation of any on-disk store directory, so the error no longer occurs.
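A hedged sketch of that approach in Java, using the Stores/Materialized API available in the 2.x Streams versions mentioned above; the topic name, store name, and serdes are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

StreamsBuilder builder = new StreamsBuilder();
// Materialize the aggregation result in an in-memory store instead of RocksDB,
// so no on-disk store is created for it under the state directory.
KTable<String, Long> counts = builder
        .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
        .groupByKey()
        .count(Materialized.<String, Long>as(Stores.inMemoryKeyValueStore("counts-store"))
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long()));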

"heros.solver.CountingThreadPoolExecutor - Worker thread execution failed: null Exceptions" when running nightly build soot-trunk

I was converting an apk file to jimple files and then immediately converting them back to a .dex file. But I cannot complete the second step successfully.
Command line used:
java -Xmx4g -jar soot-trunk.jar soot.Main -p cg.spark verbose:true,on-fly-cg:true -w -allow-phantom-refs -force-android-jar /home/xia/Downloads/android-platforms-master/android-17 -src-prec apk -f jimple -process-dir com.halfbrick.fruitninjafree.apk
At first, it keeps throwing this exception:
java.lang.RuntimeException: Error parsing class com.google.android.gms.plus.model.moments.MomentBuffer [22,40] expecting: quoted name, identifier
at soot.JimpleClassSource.resolve(JimpleClassSource.java:61)
at soot.SootResolver.bringToHierarchy(SootResolver.java:239)
at soot.SootResolver.bringToSignatures(SootResolver.java:266)
at soot.SootResolver.bringToBodies(SootResolver.java:304)
at soot.SootResolver.processResolveWorklist(SootResolver.java:163)
at soot.SootResolver.resolveClass(SootResolver.java:131)
at soot.Scene.loadClass(Scene.java:725)
at soot.Scene.loadClassAndSupport(Scene.java:710)
at soot.Scene.loadNecessaryClasses(Scene.java:1448)
at soot.Main.run(Main.java:243)
at soot.Main.main(Main.java:147)
Caused by: soot.jimple.parser.parser.ParserException: [22,40] expecting: quoted name, identifier
at soot.jimple.parser.parser.Parser.parse(Parser.java:1454)
at soot.jimple.parser.JimpleAST.<init>(JimpleAST.java:57)
at soot.JimpleClassSource.resolve(JimpleClassSource.java:42)
... 10 more
I found that there seem to be some syntax mistakes in the converted jimple files. For example, there are some classes named like this:
com.google.android.gms.internal.'if'
java.lang.'annotation'.Annotation
Then I fixed the mistakes manually (by removing the single quotes and replacing 'if' with another variable name such as iff).
After I fixed the above mistakes, it shows another exception:
Warning: android.content.OperationApplicationException is a phantom class!
Warning: android.database.MatrixCursor$RowBuilder is a phantom class!
Warning: android.content.ContentProviderResult is a phantom class!
[Call Graph] For information on where the call graph may be incomplete, use the verbose option to the cg phase.
[Thread-1] ERROR heros.solver.CountingThreadPoolExecutor - Worker thread execution failed: null
[Thread-8] ERROR heros.solver.CountingThreadPoolExecutor - Worker thread execution failed: null
[Thread-7] ERROR heros.solver.CountingThreadPoolExecutor - Worker thread execution failed: null
java.lang.NullPointerException
at soot.toolkits.graph.UnitGraph.<init>(UnitGraph.java:76)
at soot.toolkits.graph.ExceptionalUnitGraph.<init>(ExceptionalUnitGraph.java:158)
at soot.jimple.toolkits.scalar.UnreachableCodeEliminator.internalTransform(UnreachableCodeEliminator.java:79)
at soot.BodyTransformer.transform(BodyTransformer.java:51)
at soot.Transform.apply(Transform.java:105)
at soot.JimpleBodyPack.applyPhaseOptions(JimpleBodyPack.java:61)
at soot.JimpleBodyPack.internalApply(JimpleBodyPack.java:95)
at soot.Pack.apply(Pack.java:125)
at soot.jimple.JimpleMethodSource.getBody(JimpleMethodSource.java:49)
at soot.SootMethod.getBodyFromMethodSource(SootMethod.java:91)
at soot.SootMethod.retrieveActiveBody(SootMethod.java:322)
at soot.PackManager$3.run(PackManager.java:1223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[Thread-6] ERROR heros.solver.CountingThreadPoolExecutor - Worker thread execution failed: null
java.lang.NullPointerException
at soot.toolkits.graph.UnitGraph.<init>(UnitGraph.java:76)
at soot.toolkits.graph.ExceptionalUnitGraph.<init>(ExceptionalUnitGraph.java:158)
at soot.jimple.toolkits.scalar.UnreachableCodeEliminator.internalTransform(UnreachableCodeEliminator.java:79)
at soot.BodyTransformer.transform(BodyTransformer.java:51)
at soot.Transform.apply(Transform.java:105)
at soot.JimpleBodyPack.applyPhaseOptions(JimpleBodyPack.java:61)
...
At first I thought it was a bug in the tool, but the answer to another question says that the bug has been fixed. Still, I get this problem. Does anyone know where the problem is? Thank you very much.

org.elasticsearch.common.jackson.core.JsonGenerationException: No current event to copy

We are using Apache Flume for indexing data into Elasticsearch.
Recently we have been facing this exception.
org.elasticsearch.common.jackson.core.JsonGenerationException: No current event to copy
at org.elasticsearch.common.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1487)
at org.elasticsearch.common.jackson.core.JsonGenerator.copyCurrentStructure(JsonGenerator.java:1390)
at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.copyCurrentStructure(JsonXContentGenerator.java:332)
at org.elasticsearch.common.xcontent.XContentBuilder.copyCurrentStructure(XContentBuilder.java:1105)
at org.apache.flume.sink.elasticsearch.ContentBuilderUtil.addComplexField(ContentBuilderUtil.java:63)
at org.apache.flume.sink.elasticsearch.ContentBuilderUtil.appendField(ContentBuilderUtil.java:47)
at org.jai.flume.sinks.elasticsearch.serializer.ElasticSearchJsonBodyEventSerializer.appendHeaders(ElasticSearchJsonBodyEventSerializer.java:48)
at org.jai.flume.sinks.elasticsearch.serializer.ElasticSearchJsonBodyEventSerializer.getContentBuilder(ElasticSearchJsonBodyEventSerializer.java:35)
at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.addEvent(ElasticSearchTransportClient.java:164)
at org.css.cssElasticsearchSink.process(cssElasticsearchSink.java:121)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
2015-10-15 12:32:36,402 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to commit transaction. Transaction rolled back.
at org.css.cssElasticsearchSink.process(cssElasticsearchSink.java:166)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.jackson.core.JsonGenerationException: No current event to copy
at org.elasticsearch.common.jackson.core.JsonGenerator._reportError(JsonGenerator.java:1487)
at org.elasticsearch.common.jackson.core.JsonGenerator.copyCurrentStructure(JsonGenerator.java:1390)
at org.elasticsearch.common.xcontent.json.JsonXContentGenerator.copyCurrentStructure(JsonXContentGenerator.java:332)
at org.elasticsearch.common.xcontent.XContentBuilder.copyCurrentStructure(XContentBuilder.java:1105)
at org.apache.flume.sink.elasticsearch.ContentBuilderUtil.addComplexField(ContentBuilderUtil.java:63)
at org.apache.flume.sink.elasticsearch.ContentBuilderUtil.appendField(ContentBuilderUtil.java:47)
at org.jai.flume.sinks.elasticsearch.serializer.ElasticSearchJsonBodyEventSerializer.appendHeaders(ElasticSearchJsonBodyEventSerializer.java:48)
at org.jai.flume.sinks.elasticsearch.serializer.ElasticSearchJsonBodyEventSerializer.getContentBuilder(ElasticSearchJsonBodyEventSerializer.java:35)
at org.apache.flume.sink.elasticsearch.client.ElasticSearchTransportClient.addEvent(ElasticSearchTransportClient.java:164)
at org.css.cssElasticsearchSink.process(cssElasticsearchSink.java:121)
What is the cause of this exception? Is it a malformed JSON object?
Flume version: 1.5.0-cdh5.3.2
Elasticsearch version: 1.5.2
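I don't know this serializer's internals, but copyCurrentStructure fails with "No current event to copy" when the JSON parser it reads from has no current token, which typically happens when the event body (or a header value) is not well-formed JSON. A hedged sketch, assuming the body is expected to carry JSON, of pre-validating it with Jackson before the event reaches the sink:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flume.Event;

private static final ObjectMapper MAPPER = new ObjectMapper();

static boolean isValidJson(Event event) {
    String body = new String(event.getBody(), StandardCharsets.UTF_8);
    try {
        MAPPER.readTree(body); // throws on malformed JSON
        return true;
    } catch (IOException e) {
        return false; // e.g. log and divert the event instead of indexing it
    }
}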

Embedded Firebird Database and Hibernate

I am trying to use the Firebird embedded DB together with Hibernate, but I get the following error when trying to create the database:
Caused by: org.firebirdsql.jdbc.FBSQLException: GDS Exception. 335544344. I/O error during "CreateFile (open)" operation for file "D:\DB\FIREBIRD.FDB"
Error while trying to open file
null
at org.firebirdsql.jdbc.FBDataSource.getConnection(FBDataSource.java:123)
at org.firebirdsql.jdbc.AbstractDriver.connect(AbstractDriver.java:126)
at org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl.getConnection(DriverManagerConnectionProviderImpl.java:204)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:292)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
... 32 more
Caused by: org.firebirdsql.gds.GDSException: I/O error during "CreateFile (open)" operation for file "D:\DB\FIREBIRD.FDB"
Error while trying to open file
null
at org.firebirdsql.gds.impl.jni.JniGDSImpl.native_isc_attach_database(Native Method)
at org.firebirdsql.gds.impl.jni.BaseGDSImpl.iscAttachDatabase(BaseGDSImpl.java:158)
at org.firebirdsql.jca.FBManagedConnection.<init>(FBManagedConnection.java:105)
at org.firebirdsql.jca.FBManagedConnectionFactory.createManagedConnection(FBManagedConnectionFactory.java:490)
at org.firebirdsql.jca.FBStandAloneConnectionManager.allocateConnection(FBStandAloneConnectionManager.java:69)
at org.firebirdsql.jdbc.FBDataSource.getConnection(FBDataSource.java:120)
... 36 more
What I have done so far:
I've set the Hibernate configuration:
Driver = "org.firebirdsql.jdbc.FBDriver",
Dialect = "org.hibernate.dialect.FirebirdDialect",
Url = "jdbc:firebirdsql:embedded:D:\DB\FIREBIRD.FDB",
I have added the jaybird-full jar to my classpath.
I have added jaybird22.dll, fbembed.dll (the whole folder) to my path.
The DLLs seem to be loaded, since if I delete them I get an exception telling me that jaybird22.dll cannot be found.
Any idea what could be wrong?
It seems the step I was missing was creating the database file manually:
FBManager manager = new FBManager(GDSType.getType("EMBEDDED"));
manager.start();
manager.createDatabase(myDbFile, username, password);
manager.stop();
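Put together, a hedged sketch of doing that one-time creation before Hibernate connects, reusing the FBManager calls from the snippet above (credentials are placeholders; the checked exceptions thrown by start/createDatabase/stop are left to the caller):

File dbFile = new File("D:\\DB\\FIREBIRD.FDB");
if (!dbFile.exists()) {
    // Create the embedded database file once, then let Hibernate connect to it.
    FBManager manager = new FBManager(GDSType.getType("EMBEDDED"));
    manager.start();
    manager.createDatabase(dbFile.getAbsolutePath(), "SYSDBA", "masterkey"); // placeholder credentials
    manager.stop();
}
// ...then build the Hibernate SessionFactory / open sessions as usual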
