Wait for server response in Jena custom PropertyFunction - java

A bit of background: PropertyFunction is an interface in the Jena API that allows performing custom operations using SPARQL syntax. Example:
select ?result { ?result f:myPropertyFunction 'someObject' . }
So I made a class Launch that implements this interface and extends a class Client. Within the body of the exec method of my Launch class I establish a connection to a server and, while sending information is no problem, waiting for the server to respond is. Whenever I try to wait() for server response I get the following exception: java.lang.IllegalMonitorStateException.
Here is the body of my exec method for reference:
QueryIterator it = null;
try {
    this.connect();              // works well
    this.send(algorithmAndArgs); // works well
    this.wait();                 // exception is thrown here
    @SuppressWarnings("unused")
    ResultSet rs = ResultSetFactory.create(it, Arrays.asList(resultIdentifiers));
} catch (Exception e) {
    e.printStackTrace();
}
return it;
Anyone know what the problem may be? Thank you for your answer.
EDIT 1: One thing that I forgot to mention is that the Client class has a method called onObjectReceived(Object o, Socket s) that is triggered each time something is received from the server. I tried using an isDone flag with a while loop in the exec method, setting it to true once an object is received, but it did not work.

I solved my own problem. The IllegalMonitorStateException is thrown because Object.wait() may only be called by a thread that holds the object's monitor, i.e. from inside a synchronized block; instead of going that route, I switched to a CountDownLatch. I created an attribute private final CountDownLatch objectWasReceivedLatch = new CountDownLatch(1); in the exec method I call boolean objectWasReceived = objectWasReceivedLatch.await(60, TimeUnit.SECONDS) when I want to wait for a response, and in the onObjectReceived method I call objectWasReceivedLatch.countDown().
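For anyone hitting the same issue, here is a minimal sketch of how those pieces fit together. The Launch/Client names and the onObjectReceived(Object o, Socket s) callback come from the question above; everything else (field and method names) is illustrative:

import java.net.Socket;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Launch extends Client /* implements the Jena PropertyFunction as described above */ {

    // One-shot latch: exec() blocks on it, the receive callback releases it.
    private final CountDownLatch objectWasReceivedLatch = new CountDownLatch(1);

    // Called from exec() right after send(); returns false if no response arrived in time.
    protected boolean waitForServerResponse() throws InterruptedException {
        return objectWasReceivedLatch.await(60, TimeUnit.SECONDS);
    }

    // Triggered by Client whenever something is received from the server.
    public void onObjectReceived(Object o, Socket s) {
        // store the received object somewhere for exec() to read, then release the waiting thread
        objectWasReceivedLatch.countDown();
    }
}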

Related

Is there a way to have Jedis automatically use a connection pool for command methods?

I came across a class in a project I'm working on that looks like
public class RedisClient {
    Logger logger = LoggerFactory.getLogger(RedisClient.class);
    JedisPool pool;

    public RedisClient(String redisHost, int redisPort, String redisPassword) {
        JedisPoolConfig poolConfig = buildPoolConfig();
        try {
            pool = new JedisPool(poolConfig, redisHost, redisPort, 10000, redisPassword, true);
        } catch (Exception e) {
            logger.error("There's been an error while Jedis attempted to retrieve a thread from the pool", e);
        }
    }

    public void set(String key, String value) {
        try (Jedis jedis = pool.getResource()) {
            jedis.set(key, value);
        }
    }

    // ... a few more command methods wrapped with try (Jedis jedis = pool.getResource()),
    // like get, expire, etc.
}
While this isn't a bad approach by any means (a pretty standard adapter pattern to my eyes), I'm wondering whether Jedis takes care of this automatically (coming from Python, I was hoping it might hide this detail a la redis), or whether there is another way to configure an existing client to "use a connection pool for each command". I see that jedis.Jedis has a setDataSource method that accepts a JedisPool, but I'm having trouble determining what that actually does and whether it helps answer my question.
JedisPool is implemented on top of commons-pool2. By the design of commons-pool2, you borrow an object and must return it after use. Jedis folds that returning step into its close() method, so users can rely on Java's try-with-resources feature; the setDataSource method is used in the borrowing process, just as close() is used in the returning process.
Last but not least, you should not call setDataSource yourself unless you are implementing your own version of JedisPool.
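In other words, every command has to borrow a Jedis instance from the pool and return it, and Jedis does not hide that step per command. If repeating try-with-resources in every method feels noisy, one option is to centralize the borrow/return in a single helper. A minimal sketch, where the execute helper and this RedisClient shape are my own illustration rather than anything Jedis provides:

import java.util.function.Function;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class RedisClient {
    private final JedisPool pool;

    public RedisClient(JedisPool pool) {
        this.pool = pool;
    }

    // Borrow a connection, run the command, and return the connection via close().
    private <T> T execute(Function<Jedis, T> command) {
        try (Jedis jedis = pool.getResource()) {
            return command.apply(jedis);
        }
    }

    public String get(String key) {
        return execute(jedis -> jedis.get(key));
    }

    public String set(String key, String value) {
        return execute(jedis -> jedis.set(key, value));
    }
}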

Extending existing hybris cronjob to send custom result through email

I have an AbstractJobPerformable, which is an import job. The job itself works great, but sometimes it fails.
I save the error entries into a List, but I don't know how to extend the job itself to send me that list through email.
Firstly, AbstractJobPerformable is not an import job. It is an abstract class that you extend to write cronjob logic.
To send an email from the perform() method of the cronjob, put all your code in one big try {} block and send the email from the catch {} (or finally {}) block:
@Override
public PerformResult perform(CSVImportCronJobModel csvImportCronJobModel) {
    try {
        // your import logic here
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    } catch (Exception e) {
        emailService.sendEmail(csvImportCronJobModel.getLogs());
        return new PerformResult(CronJobResult.FAILURE, CronJobStatus.FINISHED);
    }
}

Inconvenient Robot framework test case using websocket and jms

I'm having trouble rewriting Java test cases in Robot Framework.
In order to do this, I need to create new Java keywords, but the way the tests are implemented doesn't make it easy!
This is an example of a script that I need to rewrite in RF:
try {
    ServerSocket server = Utils.startSocketServer();
    while (true) {
        Socket socket = server.accept();
        ObjectInputStream ois = new ObjectInputStream(socket.getInputStream());
        RequestX request = (RequestX) ois.readObject();
        if (request.getSource().equals(Strings.INFO)) {
            /** do something **/
        } else if (request.getSource().equals(Strings.X)) {
            /** do something **/
        } else {
            /** do something **/
        }
        /** break on condition **/
    }
    Utils.closeSocketServer(server);
} catch (Exception e) {
    /** do something **/
}
Any suggestions on how I can turn this into an RF test case?
Making the whole script into a single keyword is not an option, because somewhere in that loop, in the "do something" parts, I also need to call keywords.
The main idea is to fragment this script into functions so that I can use them as Java keywords in RF, but I still can't figure it out!
So, I did further research and this is what I came up with:
Split this code into functions so that I can call and use them as keywords in Robot Framework.
So the code became like this:
public static String SendTask(String taskFile) {
    ServerSocket server = null;
    try {
        server = startSocketServer();
        if (taskFile != null) {
            Utils.sendJMSWakeUp();
            while (true) {
                Socket socket = server.accept();
                ObjectInputStream ois = getInputStream(socket);
                RequestX request = (RequestX) ois.readObject();
                if (getSource(request, Strings.INFO)) {
                    /** log info **/
                }
                /** if the current jms queue is Scheduler then send the task */
                else if (getSource(request, Strings.SCHEDULER)) {
                    /** send task **/
                    break;
                }
            }
        } else {
            assertion(false, "Illegal Argument Value null");
        }
    } catch (Exception e) {
        /** log errors **/
    } finally {
        /** close socket server & return a task id **/
    }
}
The same goes for every JMS queue that I am listening to:
public static String getTaskAck(String taskId);
public static String getTaskresult(String taskId);
This did work in my case for synchronous task execution, but it is very inconvenient for asynchronous task execution: each time I have to wait for a response inside a keyword, so the next keyword may fail because the response it was supposed to read has already been sent!
I could look into the Process BuiltIn library or the RobotFramework-Async library for parallel keyword execution, but it would be harder to handle many asynchronous JMS messages that way.
After further investigation, I think I will look into robotframework-jmsLibrary. Some development work is needed, like adding ActiveMQ support.
This way, I can send and consume many asynchronous messages via ActiveMQ and then process every message via robotframework-jmsLibrary.
Example :
RF-jmsLibrary <==> synchronous <==> activeMq <==> asynchronous <==> system

SchemaCompiler bind() returns null

I'm writing a class to run xjc from Java. My code goes as follows:
SchemaCompiler sc = XJC.createSchemaCompiler();
URL url = new URL("file://E:\\JAXB\\books.xsd");
sc.parseSchema(new InputSource(url.toExternalForm()));
S2JJAXBModel model = sc.bind();
JCodeModel cm = model.generateCode(null, null);
cm.build(new FileCodeWriter(new File("E:\\JAXBTest")));
I get model as null when I run this.
Can anyone please help me or provide a link where I can learn more about this?
If you look at the SchemaCompiler API documentation for the bind() method, it says:
bind() returns null if the compilation fails. The errors should have been delivered to the registered error handler in such a case.
So, you need to register an error listener using SchemaCompiler.setErrorListener() with something like this:
sc.setErrorListener(new ErrorListener() {
    public void error(SAXParseException exception)      { exception.printStackTrace(); }
    public void fatalError(SAXParseException exception) { exception.printStackTrace(); }
    public void warning(SAXParseException exception)    { exception.printStackTrace(); }
    public void info(SAXParseException exception)       { /* informational only */ }
});
And hopefully you will get more information on what is going wrong.

Does this program introduce a parallel execution?

Here is a simple server application using Bonjour and written in Java. The main part of the code is given here:
public class ServiceAnnouncer implements IServiceAnnouncer, RegisterListener {
    private DNSSDRegistration serviceRecord;
    private boolean registered;

    public boolean isRegistered() {
        return registered;
    }

    public void registerService() {
        try {
            serviceRecord = DNSSD.register(0, 0, null, "_killerapp._tcp", null, null, 1234, null, this);
        } catch (DNSSDException e) {
            // error handling here
        }
    }

    public void unregisterService() {
        serviceRecord.stop();
        registered = false;
    }

    public void serviceRegistered(DNSSDRegistration registration, int flags, String serviceName, String regType, String domain) {
        registered = true;
    }

    public void operationFailed(DNSSDService registration, int error) {
        // do error handling here if you want to.
    }
}
I understand it in the following way. We can try to register a service by calling the registerService method, which in turn calls DNSSD.register. DNSSD.register tries to register the service and, in the general case, ends with one of two results: the service was successfully registered, or the registration failed. In either case DNSSD.register calls the corresponding method (serviceRegistered or operationFailed) on the object passed to it as the last argument, and the programmer decides what to put into serviceRegistered and operationFailed. That much is clear.
But should I try to register the service again from operationFailed? I am afraid that this way my application would retry the registration too frequently. Should I put some sleep or pause into operationFailed? On the other hand, it seems to me that while the application is unable to register a service it would also be unable to do anything else (for example, take care of the GUI). Or maybe DNSSD.register introduces some kind of parallelism? I mean, if it starts a new thread, then by retrying the registration from operationFailed I could generate a huge number of threads. Can that happen? If so, is it a problem, and how can I resolve it?
Yes, callbacks from the DNSSD APIs can come asynchronously from another thread. This excerpt from the O'Reilly book on ZeroConf networking gives some useful information.
I'm not sure retrying the registration from your operationFailed callback is a good idea. At least without some understanding of why the registration failed, is simply retrying it with the same parameters going to make sense?
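If you do decide to retry, one option is to schedule the retry with a delay rather than calling registerService again directly from the callback, so you neither block the callback thread nor spin in a tight registration loop. A rough sketch, assuming the ServiceAnnouncer class from the question (the subclass, delay, and logging here are illustrative, not part of the DNSSD API):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
// DNSSD types (DNSSDService etc.) imported as in the snippet above

public class RetryingServiceAnnouncer extends ServiceAnnouncer {
    // Single-threaded scheduler, so retries are serialized and never pile up in parallel.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    @Override
    public void operationFailed(DNSSDService registration, int error) {
        // Look at the error code first; retrying with identical parameters may simply fail again.
        System.err.println("DNSSD registration failed with error " + error);
        // Retry after a delay instead of immediately, to avoid a tight retry loop.
        scheduler.schedule(this::registerService, 30, TimeUnit.SECONDS);
    }
}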
