JAVAPNS 2.2 push succeeds but notification is not received on device - java

I am trying to send a push notification from Java using JAVAPNS 2.2. This is my code:
import java.util.*;
import org.apache.log4j.BasicConfigurator;
import org.json.JSONException;
import javapns.*;
import javapns.communication.exceptions.CommunicationException;
import javapns.communication.exceptions.KeystoreException;
import javapns.notification.PushNotificationPayload;
import javapns.notification.PushedNotification;
import javapns.notification.ResponsePacket;
public class Test {
/**
* @param args
*/
public static void main(String[] args) {
BasicConfigurator.configure();
try {
PushNotificationPayload payload = PushNotificationPayload.complex();
payload.addAlert("Hello World");
payload.addBadge(1);
payload.addSound("default");
payload.addCustomDictionary("id", "1");
System.out.println(payload.toString());
List<PushedNotification> NOTIFICATIONS = Push.payload(payload, "D:\\keystore1.p12", "123456", true, "AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D");
for (PushedNotification NOTIFICATION : NOTIFICATIONS) {
if (NOTIFICATION.isSuccessful()) {
/* APPLE ACCEPTED THE NOTIFICATION AND SHOULD DELIVER IT */
System.out.println("PUSH NOTIFICATION SENT SUCCESSFULLY TO: " +
NOTIFICATION.getDevice().getToken());
/* STILL NEED TO QUERY THE FEEDBACK SERVICE REGULARLY */
}
else {
String INVALIDTOKEN = NOTIFICATION.getDevice().getToken();
/* ADD CODE HERE TO REMOVE INVALIDTOKEN FROM YOUR DATABASE */
/* FIND OUT MORE ABOUT WHAT THE PROBLEM WAS */
Exception THEPROBLEM = NOTIFICATION.getException();
THEPROBLEM.printStackTrace();
/* IF THE PROBLEM WAS AN ERROR-RESPONSE PACKET RETURNED BY APPLE, GET IT */
ResponsePacket THEERRORRESPONSE = NOTIFICATION.getResponse();
if (THEERRORRESPONSE != null) {
System.out.println(THEERRORRESPONSE.getMessage());
}
}
}
} catch (CommunicationException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (KeystoreException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
The library reports the notification as sent successfully, but I never receive a push on my device. I've tried several of my devices' tokens and even regenerated the .p12 certificate, but still no luck.
Log
0 [main] DEBUG javapns.notification.Payload - Adding alert [Hello World]
1 [main] DEBUG javapns.notification.Payload - Adding badge [1]
1 [main] DEBUG javapns.notification.Payload - Adding sound [default]
1 [main] DEBUG javapns.notification.Payload - Adding custom Dictionary [id] = [1]
{"id":"1","aps":{"sound":"default","alert":"Hello World","badge":1}}
292 [main] DEBUG javapns.communication.ConnectionToAppleServer - Creating SSLSocketFactory
341 [main] DEBUG javapns.communication.ConnectionToAppleServer - Creating SSLSocket to gateway.push.apple.com:2195
972 [main] DEBUG javapns.notification.PushNotificationManager - Initialized Connection to
Host: [gateway.push.apple.com] Port: [2195]: 1923e91b[SSL_NULL_WITH_NULL_NULL: Socket[addr=gateway.push.apple.com/17.149.35.173,port=2195,localport=65225]]
974 [main] DEBUG javapns.notification.PushNotificationManager - Building Raw message from deviceToken and payload
974 [main] DEBUG javapns.notification.PushNotificationManager - Built raw message ID 1 of total length 113
974 [main] DEBUG javapns.notification.PushNotificationManager - Attempting to send notification: {"id":"1","aps":{"sound":"default","alert":"Hello World","badge":1}}
974 [main] DEBUG javapns.notification.PushNotificationManager - to device: AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D
1743 [main] DEBUG javapns.notification.PushNotificationManager - Flushing
1743 [main] DEBUG javapns.notification.PushNotificationManager - At this point, the entire 113-bytes message has been streamed out successfully through the SSL connection
1743 [main] DEBUG javapns.notification.PushNotificationManager - Notification sent on first attempt
1743 [main] DEBUG javapns.notification.PushNotificationManager - Reading responses
1744 [main] DEBUG javapns.notification.PushNotificationManager - Closing connection
PUSH NOTIFICATION SENT SUCCESSFULLY TO: AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D
Does anyone have any idea?

This line:
List<PushedNotification> NOTIFICATIONS = Push.payload(payload, "D:\\keystore1.p12", "123456", true, "AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D");
should be changed to this (the fourth argument selects the production gateway; pass false to use the sandbox/development gateway that matches a development provisioning profile):
List<PushedNotification> NOTIFICATIONS = Push.payload(payload, "D:\\keystore1.p12", "123456", false, "AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D");
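If you prefer not to hard-code the gateway, here is a minimal sketch of keeping that flag configurable (the "apns.production" system property name is just an assumption for illustration, not part of JavaPNS):
// Minimal sketch: choose sandbox vs. production at runtime instead of hard-coding it.
// "apns.production" is a hypothetical property; run with -Dapns.production=true for production.
boolean production = Boolean.getBoolean("apns.production");
List<PushedNotification> notifications = Push.payload(
        payload, "D:\\keystore1.p12", "123456", production,
        "AA67F2F18586D2C83398F9E5E5BE8BA1CD3FF80257CC74BACF2938CE144BA71D");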

Related

Working OPC UA Simulator example using OPCFoundation/UA-Java project

Does anyone have a working simulator example for OPC UA? I am using the OPC UA project https://github.com/OPCFoundation/UA-Java .
I have tried all the servers listed on this GitHub page, but none of them work for me: https://github.com/node-opcua/node-opcua/wiki/publicly-available-OPCUA-servers.
I am using the org.opcfoundation.ua.examples.SampleClient utility to check the connection and read sample values, but I am not able to. If anyone has a working example for this, please share it along with the code. Once this works, I need to configure these settings in Apache NiFi to build a data pipeline.
code:
public class SampleClient {
public static final Locale ENGLISH = Locale.ENGLISH;
public static final Locale ENGLISH_FINLAND = new Locale("en", "FI");
public static final Locale ENGLISH_US = new Locale("en", "US");
public static final Locale FINNISH = new Locale("fi");
public static final Locale FINNISH_FINLAND = new Locale("fi", "FI");
public static final Locale GERMAN = Locale.GERMAN;
public static final Locale GERMAN_GERMANY = new Locale("de", "DE");
public static void main(String[] args)
throws Exception {
// if (args.length==0) {
// System.out.println("Usage: SampleClient [server uri]");
// return;
// }
//String url = /*args[0]*/"opc.tcp://uademo.prosysopc.com:53530/OPCUA/SimulationServer";
//String url = /*args[0]*/"opc.tcp://uademo.prosysopc.com:53530";
//String url = /*args[0]*/"opc.tcp://opcua.demo-this.com:51210/UA/SampleServer";
String url="opc.tcp://mfactorengineering.com:4840";
//String url="opc.tcp://commsvr.com:51234/UA/CAS_UA_Server";
//String url="opc.tcp://uademo.prosysopc.com:53530/OPCUA/SimulationServer";
//String url="opc.tcp://opcua.demo-this.com:51210/UA/SampleServer";
//String url="opc.tcp://opcua.demo-this.com:51211/UA/SampleServer";
//String url="opc.tcp://opcua.demo-this.com:51212/UA/SampleServer";
//String url="opc.tcp://demo.ascolab.com:4841";
//String url="opc.tcp://alamscada.dynu.com:4096";
System.out.print("SampleClient: Connecting to "+url+" .. ");
////////////// CLIENT //////////////
// Create Client
Application myApplication = new Application();
Client myClient = new Client(myApplication);
myApplication.addLocale( ENGLISH );
myApplication.setApplicationName( new LocalizedText("Java Sample Client", Locale.ENGLISH) );
myApplication.setProductUri( "urn:JavaSampleClient" );
CertificateUtils.setKeySize(1024); // default = 1024
KeyPair pair = ExampleKeys.getCert("SampleClient");
myApplication.addApplicationInstanceCertificate( pair );
// The HTTPS SecurityPolicies are defined separate from the endpoint securities
myApplication.getHttpsSettings().setHttpsSecurityPolicies(HttpsSecurityPolicy.ALL);
// Peer verifier
myApplication.getHttpsSettings().setHostnameVerifier( SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER );
myApplication.getHttpsSettings().setCertificateValidator( CertificateValidator.ALLOW_ALL );
// The certificate to use for HTTPS
KeyPair myHttpsCertificate = ExampleKeys.getHttpsCert("SampleClient");
myApplication.getHttpsSettings().setKeyPair( myHttpsCertificate );
// Connect to the given uri
SessionChannel mySession = myClient.createSessionChannel(url);
// mySession.activate("username", "123");
mySession.activate();
//////////////////////////////////////
///////////// EXECUTE //////////////
// Browse Root
BrowseDescription browse = new BrowseDescription();
browse.setNodeId( Identifiers.RootFolder );
browse.setBrowseDirection( BrowseDirection.Forward );
browse.setIncludeSubtypes( true );
browse.setNodeClassMask( NodeClass.Object, NodeClass.Variable );
browse.setResultMask( BrowseResultMask.All );
BrowseResponse res3 = mySession.Browse( null, null, null, browse );
System.out.println(res3);
// Read Namespace Array
ReadResponse res5 = mySession.Read(
null,
null,
TimestampsToReturn.Neither,
new ReadValueId(Identifiers.Server_NamespaceArray, Attributes.Value, null, null )
);
String[] namespaceArray = (String[]) res5.getResults()[0].getValue().getValue();
System.out.println(Arrays.toString(namespaceArray));
// Read a variable
ReadResponse res4 = mySession.Read(
null,
500.0,
TimestampsToReturn.Source,
new ReadValueId(new NodeId(6, 1710), Attributes.Value, null, null )
);
System.out.println(res4);
res4 = mySession.Read(
null,
500.0,
TimestampsToReturn.Source,
new ReadValueId(new NodeId(6, 1710), Attributes.DataType, null, null )
);
System.out.println(res4);
///////////// SHUTDOWN /////////////
mySession.close();
mySession.closeAsync();
//////////////////////////////////////
}
}
exception:
SampleClient: Connecting to opc.tcp://mfactorengineering.com:4840 .. 2017-07-20 11:24:34,909 [main] INFO CryptoUtil - SecurityProvider initialized from org.bouncycastle.jce.provider.BouncyCastleProvider
2017-07-20 11:24:34,909 [main] INFO CryptoUtil - Using SecurityProvider BC
2017-07-20 11:24:35,549 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connecting
2017-07-20 11:24:36,142 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connected
2017-07-20 11:24:36,753 [main] INFO SecureChannelTcp - 1804305022 Closed
2017-07-20 11:24:36,768 [TcpConnection/Read] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Closed (expected)
2017-07-20 11:24:36,768 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Closed
2017-07-20 11:24:36,768 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connecting
2017-07-20 11:24:37,408 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connect failed
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:223)
at org.opcfoundation.ua.utils.bytebuffer.InputStreamReadable._get(InputStreamReadable.java:53)
at org.opcfoundation.ua.utils.bytebuffer.InputStreamReadable.getInt(InputStreamReadable.java:144)
at org.opcfoundation.ua.transport.tcp.io.TcpConnection.open(TcpConnection.java:500)
at org.opcfoundation.ua.transport.tcp.io.SecureChannelTcp.open(SecureChannelTcp.java:565)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:641)
at org.opcfoundation.ua.application.Client.createSecureChannel(Client.java:555)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:370)
at org.opcfoundation.ua.application.Client.createSessionChannel(Client.java:345)
at org.opcfoundation.ua.examples.SampleClient.main(SampleClient.java:120)
2017-07-20 11:24:37,464 [main] WARN SecureChannelTcp - Connection failed: Bad_CommunicationError (code=0x80050000, description="2147811328, Connection reset")
2017-07-20 11:24:37,464 [main] WARN SecureChannelTcp - Bad_CommunicationError: Retrying
2017-07-20 11:24:37,464 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connecting
2017-07-20 11:24:38,056 [main] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Connected
2017-07-20 11:24:38,368 [TcpConnection/Read] WARN TcpConnection - mfactorengineering.com/184.173.118.46:4840 Error
org.opcfoundation.ua.common.ServiceResultException: Bad_SecurityChecksFailed (code=0x80130000, description="An error occurred verifying security")
at org.opcfoundation.ua.transport.tcp.io.TcpConnection$ReadThread.run(TcpConnection.java:782)
2017-07-20 11:24:38,368 [TcpConnection/Read] INFO TcpConnection - mfactorengineering.com/184.173.118.46:4840 Closed
Exception in thread "main" org.opcfoundation.ua.common.ServiceResultException: Bad_SecurityChecksFailed (code=0x80130000, description="An error occurred verifying security")
at org.opcfoundation.ua.transport.tcp.io.TcpConnection$ReadThread.run(TcpConnection.java:782)
The example is written such that it always selects the most secure connection, which in turn requires that the server accepts the Application Instance Certificate of the client application before it allows the connection. Bad_SecurityChecksFailed is the standard error code from the server when it does not accept the client connection.
Since you cannot control these publicly available servers to make them trust your client application, your only alternative is to try to connect without security, if the server allows that.
For this, you need to change the code so that it picks an insecure endpoint.
Replace
SessionChannel mySession = myClient.createSessionChannel(url);
with
EndpointDescription[] endpoints = myClient.discoverEndpoints(url);
// Filter out all but opc.tcp protocol endpoints
endpoints = selectByProtocol(endpoints, "opc.tcp");
// Filter out all but unsecured (MessageSecurityMode.None) endpoints
endpoints = selectByMessageSecurityMode(endpoints, MessageSecurityMode.None);
// Choose one endpoint
if (endpoints.length == 0)
throw new Exception("The server does not support insecure connections");
EndpointDescription endpoint = endpoints[0];
//////////////////////////////////////
SessionChannel mySession = myClient.createSessionChannel(endpoint);
(following the lines of ClientExample1)
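If the two helper methods are not already in scope, I believe they are static utilities from the same library (org.opcfoundation.ua.utils.EndpointUtil); double-check the package in your version, but the imports would look roughly like this:
import org.opcfoundation.ua.core.EndpointDescription;
import org.opcfoundation.ua.core.MessageSecurityMode;
import static org.opcfoundation.ua.utils.EndpointUtil.selectByMessageSecurityMode;
import static org.opcfoundation.ua.utils.EndpointUtil.selectByProtocol;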

java.io.IOException: ensureRemaining: Only 0 bytes remaining, trying to read 1

I'm having some problems with custom classes in Giraph. I made a VertexInputFormat and an OutputFormat, but I always get the following error:
java.io.IOException: ensureRemaining: Only * bytes remaining, trying to read *
with different values where the "*" are placed.
This was tested on a single-node cluster.
The problem happens when a VertexIterator does next() and there aren't any vertices left. The iterator is invoked from a flush method, but I don't understand, basically, why the next() method is failing. Here are some logs and classes...
My log is the following:
15/09/08 00:52:21 INFO bsp.BspService: BspService: Connecting to ZooKeeper with job giraph_yarn_application_1441683854213_0001, 1 on localhost:22181
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:host.name=localhost
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.class.path=.:${CLASSPATH}:./**/
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/l$
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:os.version=3.13.0-62-generic
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:user.name=hduser
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop/tmp/nm-local-dir/usercache/hduser/appcache/application_1441683854213_0001/container_1441683854213_0001_01_000003
15/09/08 00:52:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:22181 sessionTimeout=60000 watcher=org.apache.giraph.worker.BspServiceWorker#4256d3a0
15/09/08 00:52:21 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:22181. Will not attempt to authenticate using SASL (unknown error)
15/09/08 00:52:21 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:22181, initiating session
15/09/08 00:52:21 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:22181, sessionid = 0x14fab0de0bb0002, negotiated timeout = 40000
15/09/08 00:52:21 INFO bsp.BspService: process: Asynchronous connection complete.
15/09/08 00:52:21 INFO netty.NettyServer: NettyServer: Using execution group with 8 threads for requestFrameDecoder.
15/09/08 00:52:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/09/08 00:52:21 INFO netty.NettyServer: start: Started server communication server: localhost/127.0.0.1:30001 with up to 16 threads on bind attempt 0 with sendBufferSize = 32768 receiveBufferSize = 524288
15/09/08 00:52:21 INFO netty.NettyClient: NettyClient: Using execution handler with 8 threads after request-encoder.
15/09/08 00:52:21 INFO graph.GraphTaskManager: setup: Registering health of this worker...
15/09/08 00:52:21 INFO yarn.GiraphYarnTask: [STATUS: task-1] WORKER_ONLY starting...
15/09/08 00:52:22 INFO bsp.BspService: getJobState: Job state already exists (/_hadoopBsp/giraph_yarn_application_1441683854213_0001/_masterJobState)
15/09/08 00:52:22 INFO bsp.BspService: getApplicationAttempt: Node /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_applicationAttemptsDir already exists!
15/09/08 00:52:22 INFO bsp.BspService: getApplicationAttempt: Node /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_applicationAttemptsDir already exists!
15/09/08 00:52:22 INFO worker.BspServiceWorker: registerHealth: Created my health node for attempt=0, superstep=-1 with /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_applicationAttemptsDir/0/_superstepD$
15/09/08 00:52:22 INFO netty.NettyServer: start: Using Netty without authentication.
15/09/08 00:52:22 INFO bsp.BspService: process: partitionAssignmentsReadyChanged (partitions are assigned)
15/09/08 00:52:22 INFO worker.BspServiceWorker: startSuperstep: Master(hostname=localhost, MRtaskID=0, port=30000)
15/09/08 00:52:22 INFO worker.BspServiceWorker: startSuperstep: Ready for computation on superstep -1 since worker selection and vertex range assignments are done in /_hadoopBsp/giraph_yarn_application_1441683854$
15/09/08 00:52:22 INFO yarn.GiraphYarnTask: [STATUS: task-1] startSuperstep: WORKER_ONLY - Attempt=0, Superstep=-1
15/09/08 00:52:22 INFO netty.NettyClient: Using Netty without authentication.
15/09/08 00:52:22 INFO netty.NettyClient: Using Netty without authentication.
15/09/08 00:52:22 INFO netty.NettyClient: connectAllAddresses: Successfully added 2 connections, (2 total connected) 0 failed, 0 failures total.
15/09/08 00:52:22 INFO netty.NettyServer: start: Using Netty without authentication.
15/09/08 00:52:22 INFO handler.RequestDecoder: decode: Server window metrics MBytes/sec received = 0, MBytesReceived = 0.0001, ave received req MBytes = 0.0001, secs waited = 1.44168435E9
15/09/08 00:52:22 INFO worker.BspServiceWorker: loadInputSplits: Using 1 thread(s), originally 1 threads(s) for 1 total splits.
15/09/08 00:52:22 INFO worker.InputSplitsHandler: reserveInputSplit: Reserved input split path /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_vertexInputSplitDir/0, overall roughly 0.0% input splits rese$
15/09/08 00:52:22 INFO worker.InputSplitsCallable: getInputSplit: Reserved /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_vertexInputSplitDir/0 from ZooKeeper and got input split 'hdfs://hdnode01:54310/u$
15/09/08 00:52:22 INFO worker.InputSplitsCallable: loadFromInputSplit: Finished loading /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_vertexInputSplitDir/0 (v=6, e=10)
15/09/08 00:52:22 INFO worker.InputSplitsCallable: call: Loaded 1 input splits in 0.16241108 secs, (v=6, e=10) 36.94329 vertices/sec, 61.572155 edges/sec
15/09/08 00:52:22 ERROR utils.LogStacktraceCallable: Execution of callable failed
java.lang.IllegalStateException: next: IOException
at org.apache.giraph.utils.VertexIterator.next(VertexIterator.java:101)
at org.apache.giraph.partition.BasicPartition.addPartitionVertices(BasicPartition.java:99)
at org.apache.giraph.comm.requests.SendWorkerVerticesRequest.doRequest(SendWorkerVerticesRequest.java:115)
at org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.doRequest(NettyWorkerClientRequestProcessor.java:466)
at org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.flush(NettyWorkerClientRequestProcessor.java:412)
at org.apache.giraph.worker.InputSplitsCallable.call(InputSplitsCallable.java:241)
at org.apache.giraph.worker.InputSplitsCallable.call(InputSplitsCallable.java:60)
at org.apache.giraph.utils.LogStacktraceCallable.call(LogStacktraceCallable.java:51)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: ensureRemaining: Only 0 bytes remaining, trying to read 1
at org.apache.giraph.utils.UnsafeReads.ensureRemaining(UnsafeReads.java:77)
at org.apache.giraph.utils.UnsafeArrayReads.readByte(UnsafeArrayReads.java:123)
at org.apache.giraph.utils.UnsafeReads.readLine(UnsafeReads.java:100)
at pruebas.TextAndDoubleComplexWritable.readFields(TextAndDoubleComplexWritable.java:37)
at org.apache.giraph.utils.WritableUtils.reinitializeVertexFromDataInput(WritableUtils.java:540)
at org.apache.giraph.utils.VertexIterator.next(VertexIterator.java:98)
... 11 more
15/09/08 00:52:22 ERROR worker.BspServiceWorker: unregisterHealth: Got failure, unregistering health on /_hadoopBsp/giraph_yarn_application_1441683854213_0001/_applicationAttemptsDir/0/_superstepDir/-1/_workerHea$
15/09/08 00:52:22 ERROR yarn.GiraphYarnTask: GiraphYarnTask threw a top-level exception, failing task
java.lang.RuntimeException: run: Caught an unrecoverable exception waitFor: ExecutionException occurred while waiting for org.apache.giraph.utils.ProgressableUtils$FutureWaitable#4bbf48f0
at org.apache.giraph.yarn.GiraphYarnTask.run(GiraphYarnTask.java:104)
at org.apache.giraph.yarn.GiraphYarnTask.main(GiraphYarnTask.java:183)
Caused by: java.lang.IllegalStateException: waitFor: ExecutionException occurred while waiting for org.apache.giraph.utils.ProgressableUtils$FutureWaitable#4bbf48f0
at org.apache.giraph.utils.ProgressableUtils.waitFor(ProgressableUtils.java:193)
at org.apache.giraph.utils.ProgressableUtils.waitForever(ProgressableUtils.java:151)
at org.apache.giraph.utils.ProgressableUtils.waitForever(ProgressableUtils.java:136)
at org.apache.giraph.utils.ProgressableUtils.getFutureResult(ProgressableUtils.java:99)
at org.apache.giraph.utils.ProgressableUtils.getResultsWithNCallables(ProgressableUtils.java:233)
at org.apache.giraph.worker.BspServiceWorker.loadInputSplits(BspServiceWorker.java:316)
at org.apache.giraph.worker.BspServiceWorker.loadVertices(BspServiceWorker.java:409)
at org.apache.giraph.worker.BspServiceWorker.setup(BspServiceWorker.java:629)
at org.apache.giraph.graph.GraphTaskManager.execute(GraphTaskManager.java:284)
at org.apache.giraph.yarn.GiraphYarnTask.run(GiraphYarnTask.java:92)
... 1 more
Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: next: IOException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:202)
at org.apache.giraph.utils.ProgressableUtils$FutureWaitable.waitFor(ProgressableUtils.java:312)
at org.apache.giraph.utils.ProgressableUtils.waitFor(ProgressableUtils.java:185)
... 10 more
Caused by: java.lang.IllegalStateException: next: IOException
at org.apache.giraph.utils.VertexIterator.next(VertexIterator.java:101)
at org.apache.giraph.partition.BasicPartition.addPartitionVertices(BasicPartition.java:99)
at org.apache.giraph.comm.requests.SendWorkerVerticesRequest.doRequest(SendWorkerVerticesRequest.java:115)
at org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.doRequest(NettyWorkerClientRequestProcessor.java:466)
at org.apache.giraph.comm.netty.NettyWorkerClientRequestProcessor.flush(NettyWorkerClientRequestProcessor.java:412)
at org.apache.giraph.worker.InputSplitsCallable.call(InputSplitsCallable.java:241)
at org.apache.giraph.worker.InputSplitsCallable.call(InputSplitsCallable.java:60)
at org.apache.giraph.utils.LogStacktraceCallable.call(LogStacktraceCallable.java:51)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: ensureRemaining: Only 0 bytes remaining, trying to read 1
at org.apache.giraph.utils.UnsafeReads.ensureRemaining(UnsafeReads.java:77)
at org.apache.giraph.utils.UnsafeArrayReads.readByte(UnsafeArrayReads.java:123)
at org.apache.giraph.utils.UnsafeReads.readLine(UnsafeReads.java:100)
at pruebas.TextAndDoubleComplexWritable.readFields(TextAndDoubleComplexWritable.java:37)
at org.apache.giraph.utils.WritableUtils.reinitializeVertexFromDataInput(WritableUtils.java:540)
at org.apache.giraph.utils.VertexIterator.next(VertexIterator.java:98)
... 11 more
My Input format:
package pruebas;
import org.apache.giraph.edge.Edge;
import org.apache.giraph.edge.EdgeFactory;
import org.apache.giraph.io.formats.AdjacencyListTextVertexInputFormat;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
/**
* @author hduser
*
*/
public class IdTextWithComplexValueInputFormat
extends
AdjacencyListTextVertexInputFormat<Text, TextAndDoubleComplexWritable, DoubleWritable> {
@Override
public AdjacencyListTextVertexReader createVertexReader(InputSplit split,
TaskAttemptContext context) {
return new TextComplexValueDoubleAdjacencyListVertexReader();
}
protected class TextComplexValueDoubleAdjacencyListVertexReader extends
AdjacencyListTextVertexReader {
/**
* Constructor with
* {@link AdjacencyListTextVertexInputFormat.LineSanitizer}.
*
* @param lineSanitizer
* the sanitizer to use for reading
*/
public TextComplexValueDoubleAdjacencyListVertexReader() {
super();
}
@Override
public Text decodeId(String s) {
return new Text(s);
}
@Override
public TextAndDoubleComplexWritable decodeValue(String s) {
TextAndDoubleComplexWritable valorComplejo = new TextAndDoubleComplexWritable();
valorComplejo.setVertexData(Double.valueOf(s));
valorComplejo.setIds_vertices_anteriores("");
return valorComplejo;
}
@Override
public Edge<Text, DoubleWritable> decodeEdge(String s1, String s2) {
return EdgeFactory.create(new Text(s1),
new DoubleWritable(Double.valueOf(s2)));
}
}
}
TextAndDoubleComplexWritable:
package pruebas;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
public class TextAndDoubleComplexWritable implements Writable {
private String idsVerticesAnteriores;
private double vertexData;
public TextAndDoubleComplexWritable() {
super();
this.idsVerticesAnteriores = "";
}
public TextAndDoubleComplexWritable(double vertexData) {
super();
this.vertexData = vertexData;
}
public TextAndDoubleComplexWritable(String ids_vertices_anteriores,
double vertexData) {
super();
this.idsVerticesAnteriores = ids_vertices_anteriores;
this.vertexData = vertexData;
}
public void write(DataOutput out) throws IOException {
out.writeUTF(idsVerticesAnteriores);
}
public void readFields(DataInput in) throws IOException {
idsVerticesAnteriores = in.readLine();
}
public String getIds_vertices_anteriores() {
return idsVerticesAnteriores;
}
public void setIds_vertices_anteriores(String ids_vertices_anteriores) {
this.idsVerticesAnteriores = ids_vertices_anteriores;
}
public double getVertexData() {
return vertexData;
}
public void setVertexData(double vertexData) {
this.vertexData = vertexData;
}
}
My input file:
Portada 0.0 Sugerencias 1.0
Sugerencias 3.0 Portada 1.0
and I execute it with this command:
$HADOOP_HOME/bin/yarn jar $GIRAPH_HOME/giraph-examples/target/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar org.apache.giraph.GiraphRunner lectura_de_grafo.BusquedaDeCaminosNavegacionalesWikiquote -vif pruebas.IdTextWithComplexValueInputFormat -vip /user/hduser/input/wiki-graph-chiquito.txt -op /user/hduser/output/caminosNavegacionales -w 2 -yh 250
Any help would be appreciated!
UPDATE:
My input file was wrong. Giraph (or my example of it) doesn't handle outgoing edges to unlisted vertices very well.
But the problem still happens. I updated the file data in my original question.
UPDATE 2:
The OutputFormat isn't used, and the computation algorithm is never executed either. I removed both to help clarify the question.
Update 3, 19/11/2015:
The problem wasn't in the input format; the input format worked well and read the data entirely.
The problem was in the class TextAndDoubleComplexWritable. I added it to my original question for a better explanation of the final solution (I added an answer too).
The root cause of the exception is in org.apache.giraph.utils.UnsafeReads.ensureRemaining. Notice this is called by the Giraph utils.
The exception means that the reader insists it needs that much more input from the input stream, but the input stream doesn't have that much left (i.e. it hit EOF).
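To make the failure mode concrete, here is a minimal sketch (my own illustration, not the poster's classes) of an asymmetric Writable: write() emits fewer bytes than readFields() tries to consume, so deserializing the next vertex runs past the end of the buffer and triggers exactly this ensureRemaining error.
// Deliberately broken example: serialization and deserialization do not mirror each other.
class AsymmetricWritable implements org.apache.hadoop.io.Writable {
    private double value;
    private String label = "";

    @Override
    public void write(java.io.DataOutput out) throws java.io.IOException {
        out.writeUTF(label);      // only the label is written (a handful of bytes)
    }

    @Override
    public void readFields(java.io.DataInput in) throws java.io.IOException {
        value = in.readDouble();  // tries to read 8 bytes that were never written
        label = in.readUTF();     // -> "ensureRemaining: Only N bytes remaining, trying to read M"
    }
}
The fix is simply to make the two methods mirror each other, as the accepted answer below shows.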
Just a shot in the dark, but have you tried checking whether next() is returning null as it gets to the end of the input?
Like
if(method == null){
    // It's null, stop reading
}
else{
    // Continue
}
The problem was in the class TextAndDoubleComplexWritable. I wasn't aware of the importance of the readFields and write methods when implementing the Writable interface. They are crucial because they are the methods that let us send and receive information in Giraph. I was writing only an (empty) string when serializing, when I should have written and read both values of my vertex with these methods. I updated both methods in the following way:
public void write(DataOutput out) throws IOException {
    out.writeDouble(this.vertexData);
    // "hola" is just placeholder test data written when the field is non-empty
    out.writeUTF(!this.idsVerticesAnteriores.isEmpty() ? "hola"
            : this.idsVerticesAnteriores);
}
public void readFields(DataInput in) throws IOException {
    this.vertexData = in.readDouble();
    this.idsVerticesAnteriores = in.readUTF();
    // idsVerticesAnteriores = in.readLine();  // old, asymmetric version
}
and this is working, finally!!

Sqlite java and sqlite4java

I am trying to build an app as a learning experience. I am getting a NullPointerException, but it happens the second time the code is called.
My code calls this as part of its startup:
// Sanity checks
sanity = new SanityChecks();
logger.info("Checking sanity...");
try {
sanity.doBasicChecks();
logger.info("It all looks sane.");
}
catch( final Exception ex){
logger.error("Something is wrong, we are not sane. Aborting...", ex);
stop();
}
Other classes are:
package nz.co.great_ape.tempusFugit;
import com.almworks.sqlite4java.SQLite;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
/**
* Checks to see if all is ok for standard usage of tempusFugit.
*
* Checks for valid paths, existing database etc, if not assumes that
* this is the first time tempusFugit has been run for this user and
* tries to set up the basic files and data.
*
* @author Rob Brown-Bayliss
* Created on 2/11/14
*/
public class SanityChecks {
public static final Logger logger = LoggerFactory.getLogger(AppConsts.LOGGER_NAME);
public SanityChecks() {
// todo
// todo
SQLite.setLibraryPath(AppConsts.HOME_DIR); // TODO: Why is this required? can we not place the native libs somewhere else or in the jar?
// todo
// todo
}
/**
* Performs basic checks to see if it is ok to run the app. If not it will attempt to
* set up for first time use.
*
* @return true if all is ok
*/
public boolean doBasicChecks() {
logger.info("Starting basic checks...");
if (!HomeDirExists()) {
return false;
}
if (!DatabaseExists()) {
logger.info("Trying to create a new database.");
DatabaseUpgrade dug = new DatabaseUpgrade();
if (!dug.upGrade()) {
return false;
}
logger.info("Created a new database.");
// At this point a usable database should exist and it should be current
}
if (!DatabaseVersionCorrect()) {
// // If the database is old we will upgrade it to the current version.
// DatabaseUpgrade dug = new DatabaseUpgrade();
// if (!dug.upGrade()) {
// return false;
// }
logger.info("is this it?.");
}
// At this point all basic checks have passed and we are good to go...
logger.info("Basic checks are complete. All is good in the universe...");
return true;
}
/**
* Checks if the app home directory exists, if not it tries to create it.
*
* @return true if exists or was able to create the directory
*/
private boolean HomeDirExists() {
if (!Files.isDirectory(Paths.get(AppConsts.HOME_DIR))) {
try {
Files.createDirectory(Paths.get(AppConsts.HOME_DIR));
}
catch (IOException io) {
logger.error("The directory " + AppConsts.HOME_DIR + " does not exist and could not be created. This is not the best but we can survive.", io);
return false;
}
}
logger.info("The directory " + AppConsts.HOME_DIR + " exists. This is good.");
return true;
}
/**
* Checks if the SQLite database exists, if not it tries to create it.
* or was able to create the database
*
* @return true if the database exists
*/
private boolean DatabaseExists() {
if (Files.notExists(Paths.get(AppConsts.TF_DATABASE))) {
logger.error("The database " + AppConsts.TF_DATABASE + " does not exist. This is bad.");
return false;
}
logger.info("The database " + AppConsts.TF_DATABASE + " exists. This is good.");
return true;
}
/**
* Checks if the SQLite database is correct for this version.
*
* @return true if correct version
*/
private boolean DatabaseVersionCorrect() {
Integer expectedVersion = AppConsts.TF_DATABASE_VERSION;
logger.info("Checking the database version is correct. Looking for version "+ expectedVersion + "." );
DatabaseUpgrade dug = new DatabaseUpgrade();
logger.info("Checking the database version is correct. Looking for version "+ expectedVersion + "." );
Integer currentVersion = dug.getVersion();
if (currentVersion < expectedVersion) {
logger.info("Expected version " + expectedVersion + ", but database is version " + currentVersion + ". This is bad.");
return false;
}
logger.info("The database version is correct. This is good.");
return true;
}
}
And:
package nz.co.great_ape.tempusFugit;
import com.almworks.sqlite4java.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
/**
* Setup the database for current version tempusFugit. This will upgrade an
* existing database or create a new empty database if required.
*
* @author Rob Brown-Bayliss
* Created on 4/11/14
*/
public class DatabaseUpgrade {
public static final Logger logger = LoggerFactory.getLogger(AppConsts.LOGGER_NAME);
private SQLiteConnection dbConn = null;
private SQLiteQueue sQueue = null;
private int currentVersion;
public DatabaseUpgrade() {
}
/**
* Attempts to upgrade the existing database to the current version, or if
* there is no existing database then create one suitable for the current app version.
*
* @return true if there is a current and usable database
*/
public boolean upGrade() {
logger.info(" Started an upgrade on the database, if the database does not exist it will be created at the current version, ");
logger.info(" if it exists but is an older version it will be upgraded to the current version.");
if (openDatabase()) {
currentVersion = getVersion();
if (currentVersion == AppConsts.FAIL) {
logger.info("Database version is unknown. The file will be deleted and a new database created. We can survive this.");
// TODO: Ask user if we should delete the old one or make a backup?
closeDatabase();
deleteDatabase();
openDatabase();
}
if (currentVersion != AppConsts.TF_DATABASE_VERSION) {
logger.info("Current Database version is " + currentVersion);
// TODO: Backup current database.
if (currentVersion < 1) {
if (!makeVersionOne()) {
logger.error("Unable to upgrade the database. This is VERY bad.");
return false;
}
}
currentVersion = 1;
}
closeDatabase(); // good practice
}
logger.info("The database is the current version. This is good.");
return true;
}
/**
* Turns a blank SQLite database into a tempusFugit version 1 database by
* adding the required tables and data.
*/
private boolean makeVersionOne() {
logger.info("Attempting to upgrade to version 1.");
String CT_SQL = "CREATE TABLE IF NOT EXISTS dbVersion (id INTEGER PRIMARY KEY AUTOINCREMENT, version INTEGER NOT NULL UNIQUE, ";
CT_SQL = CT_SQL + "upgraded INTEGER(4) NOT NULL DEFAULT (strftime('%s','now'))); ";
String IN_SQL = "INSERT INTO dbVersion(version) values (1); ";
try {
execSQL("BEGIN TRANSACTION; ");
execSQL(CT_SQL); // create the table
execSQL(IN_SQL); // insert the record
execSQL("COMMIT; ");
}
catch (Exception ex) {
logger.error("Attempted upgrade of " + AppConsts.TF_DATABASE + " to version 1 has failed. This is VERY bad.", ex);
return false;
}
logger.info("The database has been upgraded to version 1. This is good.");
return true;
}
private Integer execSQL(String SQL) {
return sQueue.execute(new SQLiteJob<Integer>() {
protected Integer job(SQLiteConnection con) throws SQLiteException {
SQLiteStatement st = null;
try {
st = con.prepare(SQL);
st.step();
return AppConsts.SUCCESS;
}
catch (Exception ex) {
logger.error("Tried to execute SQL: " + SQL, ex);
return AppConsts.FAIL;
}
finally {
if (st != null) {
st.dispose();
}
}
}
}).complete();
}
/**
* Gets the current database version
*
* @return version as an int
*/
public int getVersion() {
return sQueue.execute(new SQLiteJob<Integer>() {
protected Integer job(SQLiteConnection con) throws SQLiteException {
SQLiteStatement st = null;
try {
st = con.prepare("SELECT version, upgraded FROM dbVersion ORDER BY upgraded DESC LIMIT 1;");
st.step();
return st.columnInt(0);
}
catch (Exception ex) {
logger.error("The database version of " + AppConsts.TF_DATABASE + " is unknown. This is bad.", ex);
return AppConsts.FAIL;
}
finally {
if (st != null) {
st.dispose();
}
}
}
}).complete();
}
/**
* Opens an existing SQLite database or creates a new blank database
* if none exists.
*
* @return false if there is a problem. // TODO: Is it possible to have a problem?
*/
private boolean openDatabase() {
try {
dbConn = new SQLiteConnection(new File(AppConsts.TF_DATABASE));
dbConn.open(true);
sQueue = new SQLiteQueue(new File(AppConsts.TF_DATABASE));
sQueue.start();
return true;
}
catch (Exception ex) {
logger.error("The database " + AppConsts.TF_DATABASE + " could not be opened or created. This is VERY bad.", ex);
return false;
}
}
/**
* Closes an open database.
*
* @return false if there is a problem. // TODO: Is it possible to have a problem?
*/
private boolean closeDatabase() {
try {
if (dbConn != null) {
dbConn.dispose();
}
return true;
}
catch (Exception ex) {
logger.error("The database " + AppConsts.TF_DATABASE + " could not be closed.", ex);
return false;
}
}
/**
* Deletes an existing database.
*
* @return false if there is a problem. // TODO: Is it possible to have a problem?
*/
private boolean deleteDatabase() {
try {
Files.delete(Paths.get(AppConsts.TF_DATABASE));
logger.info("The database " + AppConsts.TF_DATABASE + " has been deleted.");
return true;
}
catch (Exception ex) {
logger.error("The database " + AppConsts.TF_DATABASE + " could not be deleted.", ex);
return false;
}
}
}
Plus the constants:
package nz.co.great_ape.tempusFugit;
/**
* Constants used by tempusFugit
*
* @author Rob Brown-Bayliss
* Created on 31/10/14
*/
public class AppConsts {
// Application
public static final String APP_NAME = "Tempus Fugit";
public static final String VERSION_NUMBER = "0.0.1";
// Debug Mode On-Off
public static final boolean DEBUG_MODE = true;
// Data files and paths
public static final String FS = System.getProperty("file.separator");
public static final String HOME_DIR = System.getProperty("user.home") + FS + ".tempusFugit"; // This is the tempusFugit home, not the users home.
public static final String USER_NAME = System.getProperty("user.name");
public static final String LOGGER_NAME = "nz.co.great_ape.tempusFugit";
public static final String LOG_FILE = HOME_DIR + FS + "tempusFugit.log";
// Database
public static final String TF_DATABASE = HOME_DIR + FS + "tfData.sql";
public static final int TF_DATABASE_VERSION = 1; // This is the current version of the database
// Error codes
public static final int UNKNOWN = -1;
public static final int FAIL = 0;
public static final int SUCCESS = 1;
}
What I don't know is why the call if (!DatabaseVersionCorrect()) crashes with a null pointer.
Can anyone help here?
This is the stack trace
/usr/lib/jvm/java-8-oracle/bin/java -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:49764,suspend=y,server=n -javaagent:/home/rob/Projects/IntelliJ/plugins/Groovy/lib/agent/gragent.jar -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/java-8-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-8-oracle/jre/lib/management-agent.jar:/usr/lib/jvm/java-8-oracle/jre/lib/deploy.jar:/usr/lib/jvm/java-8-oracle/jre/lib/javaws.jar:/usr/lib/jvm/java-8-oracle/jre/lib/plugin.jar:/usr/lib/jvm/java-8-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jfxswt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-8-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunjce_provider.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/jfxrt.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/nashorn.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-8-oracle/jre/lib/ext/cldrdata.jar:/home/rob/Projects/tempusFugit/build/classes/main:/home/rob/Projects/tempusFugit/build/resources/main:/home/rob/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.7/2b8019b6249bb05d81d3a3094e468753e2b21311/slf4j-api-1.7.7.jar:/home/rob/.gradle/caches/modules-2/files-2.1/com.almworks.sqlite4java/sqlite4java/1.0.392/d6234e08ff4e1607ff5321da2579571f05ff778d/sqlite4java-1.0.392.jar:/home/rob/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-classic/1.1.2/b316e9737eea25e9ddd6d88eaeee76878045c6b2/logback-classic-1.1.2.jar:/home/rob/.gradle/caches/modules-2/files-2.1/ch.qos.logback/logback-core/1.1.2/2d23694879c2c12f125dac5076bdfd5d771cc4cb/logback-core-1.1.2.jar:/home/rob/Projects/IntelliJ/lib/idea_rt.jar nz.co.great_ape.tempusFugit.MainApp
Connected to the target VM, address: '127.0.0.1:49764', transport: 'socket'
10:43:43.371 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Time flies...
10:43:43.379 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - https://www.youtube.com/watch?v=ESto79osxOY
10:43:43.379 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Tempus Fugit Version: 0.0.1
10:43:43.379 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - javafx.runtime.version: 8.0.25-b17
10:43:43.380 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - logged on as: rob
10:43:43.383 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Checking sanity...
10:43:43.383 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Starting basic checks...
10:43:43.393 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - The directory /home/rob/.tempusFugit exists. This is good.
10:43:43.394 [JavaFX Application Thread] ERROR nz.co.great_ape.tempusFugit - The database /home/rob/.tempusFugit/tfData.sql does not exist. This is bad.
10:43:43.394 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Trying to create a new database.
10:43:43.397 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Started an upgrade on the database, if the database does not exist it will be created at the current version,
10:43:43.397 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - if it exists but is an older version it will be upgraded to the current version.
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[1]: instantiated [/home/rob/.tempusFugit/tfData.sql]
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] Internal: loaded sqlite4java-linux-amd64-1.0.392 from /home/rob/.tempusFugit/libsqlite4java-linux-amd64-1.0.392.so
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] Internal: loaded sqlite 3.8.7, wrapper 1.3
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[1]: opened
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[2]: instantiated [/home/rob/.tempusFugit/tfData.sql]
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[2]: opened
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[1]: connection closed
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[3]: instantiated [/home/rob/.tempusFugit/tfData.sql]
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[3]: opened
10:43:43.507 [SQLiteQueue[tfData.sql]] ERROR nz.co.great_ape.tempusFugit - The database version of /home/rob/.tempusFugit/tfData.sql is unknown. This is bad.
com.almworks.sqlite4java.SQLiteException: [1] DB[2] prepare() SELECT version, upgraded FROM dbVersion ORDER BY upgraded DESC LIMIT 1; [no such table: dbVersion]
at com.almworks.sqlite4java.SQLiteConnection.throwResult(SQLiteConnection.java:1436) ~[sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteConnection.prepare(SQLiteConnection.java:580) ~[sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteConnection.prepare(SQLiteConnection.java:635) ~[sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteConnection.prepare(SQLiteConnection.java:622) ~[sqlite4java-1.0.392.jar:392]
at nz.co.great_ape.tempusFugit.DatabaseUpgrade$2.job(DatabaseUpgrade.java:121) [main/:na]
at nz.co.great_ape.tempusFugit.DatabaseUpgrade$2.job(DatabaseUpgrade.java:117) [main/:na]
at com.almworks.sqlite4java.SQLiteJob.execute(SQLiteJob.java:372) [sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteQueue.executeJob(SQLiteQueue.java:534) [sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteQueue.queueFunction(SQLiteQueue.java:667) [sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteQueue.runQueue(SQLiteQueue.java:623) [sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteQueue.access$000(SQLiteQueue.java:77) [sqlite4java-1.0.392.jar:392]
at com.almworks.sqlite4java.SQLiteQueue$1.run(SQLiteQueue.java:205) [sqlite4java-1.0.392.jar:392]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
10:43:43.508 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Database version is unknown. The file will be deleted and a new database created. We can survive this.
10:43:43.509 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - The database /home/rob/.tempusFugit/tfData.sql has been deleted.
10:43:43.510 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Current Database version is 0
10:43:43.510 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Attempting to upgrade to version 1.
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[4]: instantiated [/home/rob/.tempusFugit/tfData.sql]
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[4]: opened
Nov 30, 2014 10:43:43 AM com.almworks.sqlite4java.Internal log
INFO: [sqlite] DB[3]: connection closed
10:43:43.640 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - The database has been upgraded to version 1. This is good.
10:43:43.640 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - The database is the current version. This is good.
10:43:43.640 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Created a new database.
10:43:43.640 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Checking the database version is correct. Looking for version 1.
10:43:43.640 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Checking the database version is correct. Looking for version 1.
10:43:43.641 [JavaFX Application Thread] ERROR nz.co.great_ape.tempusFugit - Something is wrong, we are not sane. Aborting...
java.lang.NullPointerException: null
at nz.co.great_ape.tempusFugit.DatabaseUpgrade.getVersion(DatabaseUpgrade.java:117) ~[main/:na]
at nz.co.great_ape.tempusFugit.SanityChecks.DatabaseVersionCorrect(SanityChecks.java:111) ~[main/:na]
at nz.co.great_ape.tempusFugit.SanityChecks.doBasicChecks(SanityChecks.java:54) ~[main/:na]
at nz.co.great_ape.tempusFugit.MainApp.start(MainApp.java:78) ~[main/:na]
at com.sun.javafx.application.LauncherImpl.lambda$launchApplication1$153(LauncherImpl.java:821) [jfxrt.jar:na]
at com.sun.javafx.application.LauncherImpl$$Lambda$56/1015064561.run(Unknown Source) [jfxrt.jar:na]
at com.sun.javafx.application.PlatformImpl.lambda$runAndWait$166(PlatformImpl.java:323) [jfxrt.jar:na]
at com.sun.javafx.application.PlatformImpl$$Lambda$50/591723622.run(Unknown Source) [jfxrt.jar:na]
at com.sun.javafx.application.PlatformImpl.lambda$null$164(PlatformImpl.java:292) [jfxrt.jar:na]
at com.sun.javafx.application.PlatformImpl$$Lambda$52/1657335803.run(Unknown Source) [jfxrt.jar:na]
at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_25]
at com.sun.javafx.application.PlatformImpl.lambda$runLater$165(PlatformImpl.java:291) [jfxrt.jar:na]
at com.sun.javafx.application.PlatformImpl$$Lambda$51/1166726978.run(Unknown Source) [jfxrt.jar:na]
at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:95) [jfxrt.jar:na]
at com.sun.glass.ui.gtk.GtkApplication._runLoop(Native Method) [jfxrt.jar:na]
at com.sun.glass.ui.gtk.GtkApplication.lambda$null$45(GtkApplication.java:126) [jfxrt.jar:na]
at com.sun.glass.ui.gtk.GtkApplication$$Lambda$42/1167116739.run(Unknown Source) [jfxrt.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
10:43:43.641 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Game Over...
10:43:44.563 [JavaFX Application Thread] INFO nz.co.great_ape.tempusFugit - Game Over...
The problem is the fact that you call
DatabaseUpgrade dug = new DatabaseUpgrade();
logger.info(...);
Integer currentVersion = dug.getVersion();
but your dbConn and sQueue in DatabaseUpgrade are still null, since you never called the private openDatabase() method, which initializes those variables. So when you call getVersion(), sQueue.execute(...) blows up because you cannot call a method on a null object.
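A sketch of one way to guard against that, reusing the poster's own helpers (an untested suggestion, not the final code): open the database lazily inside getVersion() instead of assuming upGrade() has already run.
public int getVersion() {
    // openDatabase() initializes dbConn and sQueue; without it they are still null
    if (sQueue == null && !openDatabase()) {
        return AppConsts.FAIL;   // could not open the database, report "unknown version"
    }
    // ... the existing SQLiteJob that SELECTs from dbVersion stays unchanged ...
}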

FTP Client keeps getting denied permission to upload by Server

I am trying to write an FTP client and server that will allow me to send a file from the client to the server via anonymous FTP. However, I keep getting 550 Permission Denied. I am able to do other things like download a file from the server or get a list of the files in the directory, but whenever I try to do an upload it says 550 Permission Denied. The result is the same whether I log in or use anonymous FTP.
I don't see any problems with my code, but I have tried running it on different networks and computers with the same result. Is there a problem with the code that I don't see, or do I have to do something with the router/firewall?
I am writing both the client and server in Java and running Windows. The libraries I am using are Apache Commons FTP Client and Apache FTP Server.
Here is the client. The commented-out code is for downloading a file and for listing the files in the server's directory.
import org.apache.commons.net.ftp.*;
import java.io.*;
import java.net.*;
public class Client
{
public Client()
{
// Do nothing
}
public void transferFile(String ipAddress)
{
// For uploading
FileInputStream file = null;
// For downloading
// FileOutputStream file = null;
try
{
InetAddress address = InetAddress.getByName(ipAddress);
FTPClient ftpClient = new FTPClient();
ftpClient.connect(address, 5000);
ftpClient.login("anonymous", "");
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);
ftpClient.enterLocalPassiveMode();
// For uploading
file = new FileInputStream(new File("C:/SoundFiles/Send/Test1.txt"));
// For downloading
// file = new FileOutputStream(new File("C:/SoundFiles/Receive/Test2.txt"));
// For uploading
boolean success = ftpClient.storeFile("/Receive/Test2.txt", file);
// For downloading
// boolean success = ftpClient.retrieveFile("/Send/Test1.txt", file);
// For listing the files on the server's directory
/*
FTPFile[] directoryFiles = ftpClient.listFiles();
System.out.println("There are " + directoryFiles.length + " files");
for(int i = 0; i < directoryFiles.length; i++)
{
System.out.println(i + ": " + directoryFiles[i].getName());
}
*/
if(success)
System.out.println("Success.");
else
System.out.println("Fail.");
ftpClient.logout();
ftpClient.disconnect();
}
catch(IOException e)
{
e.printStackTrace();
}
finally
{
try
{
if(file != null)
file.close();
}
catch(IOException e)
{
e.printStackTrace();
}
}
}
public static void main(String[] args)
{
Client client = new Client();
client.transferFile("Enter your IP address here");
}
}
Here is the server.
import org.apache.ftpserver.FtpServer;
import org.apache.ftpserver.FtpServerFactory;
import org.apache.ftpserver.ConnectionConfigFactory;
import org.apache.ftpserver.usermanager.impl.BaseUser;
import org.apache.ftpserver.listener.ListenerFactory;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.UserManager;
public class Server
{
public Server()
{
// Do nothing
}
public void startServer()
{
FtpServer server = null;
try
{
FtpServerFactory ftpServerFactory = new FtpServerFactory();
ListenerFactory listenerFactory = new ListenerFactory();
listenerFactory.setPort(5000);
ftpServerFactory.addListener("default", listenerFactory.createListener());
ConnectionConfigFactory configFactory = new ConnectionConfigFactory();
configFactory.setAnonymousLoginEnabled(true);
ftpServerFactory.setConnectionConfig(configFactory.createConnectionConfig());
BaseUser user = new BaseUser();
user.setName("anonymous");
user.setPassword("");
user.setHomeDirectory("C:/SoundFiles");
UserManager userManager = ftpServerFactory.getUserManager();
userManager.save(user);
server = ftpServerFactory.createServer();
server.start();
}
catch (FtpException e)
{
e.printStackTrace();
}
// Stop the server after 10 seconds
try
{
Thread.sleep(10000);
}
catch(InterruptedException e)
{
e.printStackTrace();
}
if(server != null)
server.stop();
}
public static void main(String[] args)
{
Server server = new Server();
server.startServer();
}
}
This is the output I keep getting from the server.
[main] INFO org.apache.ftpserver.impl.DefaultFtpServer - FTP server started
[NioProcessor-3] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - CREATED
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - OPENED
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 220 Service ready for new user.
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: USER anonymous
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 331 Guest login okay, send your complete e-mail address as password.
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: PASS
[pool-3-thread-1] INFO org.apache.ftpserver.command.impl.PASS - Anonymous login success - null
[pool-3-thread-2] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 230 User logged in, proceed.
[pool-3-thread-2] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: TYPE I
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 200 Command TYPE okay.
[pool-3-thread-2] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: PASV
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 227 Entering Passive Mode
[pool-3-thread-2] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: STOR /Receive/Test2.txt
[pool-3-thread-2] WARN org.apache.ftpserver.impl.PassivePorts - Releasing unreserved passive port: 2133
[pool-3-thread-2] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 550 /Receive/Test2.txt: Permission denied.
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - RECEIVED: QUIT
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - SENT: 221 Goodbye.
[pool-3-thread-1] INFO org.apache.ftpserver.listener.nio.FtpLoggingFilter - CLOSED
Any suggestions on how I can get this to work are much appreciated.
I ended up finding out what was missing. I just needed to add the following lines of code to the Server class.
List<Authority> authorities = new ArrayList<Authority>();
authorities.add(new WritePermission());
user.setAuthorities(authorities);
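For reference, those lines have to run before userManager.save(user) so the write permission is stored with the anonymous user, and (if I remember the packages correctly) the required imports are:
import java.util.ArrayList;
import java.util.List;
import org.apache.ftpserver.ftplet.Authority;
import org.apache.ftpserver.usermanager.impl.WritePermission;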

Error when executing MavenCli in a loop (maven-embedder)?

What is the problem when I execute the Maven command in a loop? The goal is to update the version in the pom.xml of a list of bundles. On the first iteration Maven executes correctly (it updates pom.xml), but it fails for every item after that.
for (String bundlePath: bundlesToUpdate)
{
MavenCli cli = new MavenCli();
String[] arguments = {
"-Dtycho.mode=maven",
"org.eclipse.tycho:tycho-versions-plugin:set-version",
"-DgenerateBackupPoms=false",
"-DnewVersion=" + version};
int result = cli.doMain(arguments, bundlePath, System.out, System.err);
}
Same error with the code:
MavenCli cli = new MavenCli();
for (String bundlePath: bundlesToUpdate)
{
String[] arguments = {
"-Dtycho.mode=maven",
"org.eclipse.tycho:tycho-versions-plugin:set-version",
"-DgenerateBackupPoms=false",
"-DnewVersion=" + version};
int result = cli.doMain(arguments, bundlePath, System.out, System.err);
}`
The first time, it works:
[main] INFO org.eclipse.tycho.versions.manipulation.PomManipulator - pom.xml//project/version: 2.2.6-SNAPSHOT => 2.2.7-SNAPSHOT
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - ------------------------------------------------------------------------
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - Reactor Summary:
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger -
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - XXXX project ....................... SUCCESS [ 0.216 s]
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - com.sungard.valdi.bus.fixbroker.client.bnp ........ SKIPPED
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - XXX project Feature ...................... SKIPPED
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - ------------------------------------------------------------------------
[main] INFO org.apache.maven.cli.event.ExecutionEventLogger - BUILD SUCCESS
After that, the errors are:
[main] ERROR org.apache.maven.cli.MavenCli - Error executing Maven.
[main] ERROR org.apache.maven.cli.MavenCli - java.util.NoSuchElementException
role: org.apache.maven.eventspy.internal.EventSpyDispatcher
roleHint:
[main] ERROR org.apache.maven.cli.MavenCli - Caused by: null
[main] ERROR org.apache.maven.cli.MavenCli - Error executing Maven.
[main] ERROR org.apache.maven.cli.MavenCli - java.util.NoSuchElementException
role: org.apache.maven.eventspy.internal.EventSpyDispatcher
roleHint:
The solution I found is to use the Maven Invoker instead, and it works fine for the same functionality:
public class MavenInvoker {
public static void main(String[] args) throws IOException, NoHeadException, GitAPIException
{
MavenInvoker toto = new MavenInvoker();
toto.updateVersionMavenInvoker("2.2.8-SNAPSHOT", "TECHNICAL\\WEB" );
}
private InvocationRequest request = new DefaultInvocationRequest();
private DefaultInvoker invoker = new DefaultInvoker();
public InvocationResult updateVersionMavenInvoker(String newVersion, String folderPath)
{
InvocationResult result = null;
request.setPomFile( new File(folderPath+"\\pom.xml" ) );
String version = "-DnewVersion="+newVersion;
request.setGoals( Arrays.asList("-Dtycho.mode=maven",
"org.eclipse.tycho:tycho-versions-plugin:set-version",
"-DgenerateBackupPoms=false",
version) );
try {
result = invoker.execute( request );
} catch (MavenInvocationException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return result;
}
}
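For completeness, the invoker classes used above come from the maven-invoker shared component (as far as I know, the org.apache.maven.shared.invoker package), plus the usual JDK imports:
import java.io.File;
import java.util.Arrays;
import org.apache.maven.shared.invoker.DefaultInvocationRequest;
import org.apache.maven.shared.invoker.DefaultInvoker;
import org.apache.maven.shared.invoker.InvocationRequest;
import org.apache.maven.shared.invoker.InvocationResult;
import org.apache.maven.shared.invoker.MavenInvocationException;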
This works for me inside a custom maven plugin (using Maven 3.5.0):
ClassRealm classRealm = (ClassRealm) Thread.currentThread().getContextClassLoader();
MavenCli cli = new MavenCli(classRealm.getWorld());
cli.doMain( ... );
The plexus Launcher sets the context class loader to its ClassRealm, which has access to the "global" ClassWorld.
Not sure how stable that solution is, but so far looking good.
Used imports:
import org.codehaus.plexus.classworlds.ClassWorld;
import org.codehaus.plexus.classworlds.realm.ClassRealm;
import org.apache.maven.cli.MavenCli;
See the email thread for a more detailed explanation: https://dev.eclipse.org/mhonarc/lists/sisu-users/msg00063.html
It seems the correct way is to give MavenCli a ClassWorld instance on construction so it can maintain proper state across multiple calls.
Example:
final ClassWorld classWorld = new ClassWorld("plexus.core", getClass().getClassLoader());
MavenCli cli = new MavenCli(classWorld);
String[] arguments = {
"-Dtycho.mode=maven",
"org.eclipse.tycho:tycho-versions-plugin:set-version",
"-DgenerateBackupPoms=false",
"-DnewVersion=" + version};
int result = cli.doMain(arguments, bundlePath, System.out, System.err);
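Applied to the original loop, that would look roughly like this (a sketch, assuming a single ClassWorld can be shared across iterations as the thread above suggests, and that the surrounding method is non-static so getClass() is available):
final ClassWorld classWorld = new ClassWorld("plexus.core", getClass().getClassLoader());
for (String bundlePath : bundlesToUpdate) {
    // a fresh MavenCli per bundle, all backed by the same ClassWorld
    MavenCli cli = new MavenCli(classWorld);
    String[] arguments = {
            "-Dtycho.mode=maven",
            "org.eclipse.tycho:tycho-versions-plugin:set-version",
            "-DgenerateBackupPoms=false",
            "-DnewVersion=" + version};
    int result = cli.doMain(arguments, bundlePath, System.out, System.err);
}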
