ZeroMQ - jzmq .recvZeroCopy() fails to get any message while .recv() works - java

So I am writing my own code against the jzmq Git master branch and ZeroMQ 3.2.3.
After installation I tried the following simple PUB/SUB program, where a publisher and a subscriber talk within a single process. Since the test runs under Windows, I used TCP.
public class ZMQReadynessTest {

    private ZMQ.Context context;

    @Before
    public void setUp() {
        context = ZMQ.context(1);
    }

    @Test
    public void testSimpleMessage() {
        String topic = "tcp://127.0.0.1:31216";
        final AtomicInteger counter = new AtomicInteger();
        // _____________________________________ create a simple subscriber
        final ZMQ.Socket subscribeSocket = context.socket(ZMQ.SUB);
        subscribeSocket.connect(topic);
        subscribeSocket.subscribe("TestTopic".getBytes());
        Thread subThread = new Thread() {
            @Override
            public void run() {
                while (true) {
                    String value = null;
                    // This would result in trouble /\/\/\/\/\/\/\/\/\
                    {
                        ByteBuffer buffer = ByteBuffer.allocateDirect(100);
                        if (subscribeSocket.recvZeroCopy(buffer,
                                                         buffer.remaining(),
                                                         ZMQ.DONTWAIT) > 0) {
                            buffer.flip();
                            value = buffer.asCharBuffer().toString();
                            System.out.println(buffer.asCharBuffer().toString());
                        }
                    }
                    // This works perfectly + + + + + + + + + + + + +
                    /*
                    {
                        byte[] bytes = subscribeSocket.recv(ZMQ.DONTWAIT);
                        if (bytes == null || bytes.length == 0) {
                            continue;
                        }
                        value = new String(bytes);
                    }
                    */
                    if (value != null && value.length() > 0) {
                        counter.incrementAndGet();
                        System.out.println(value);
                        break;
                    }
                }
            }
        };
        subThread.start();
        // _____________________________ create a simple publisher
        ZMQ.Socket publishSocket = context.socket(ZMQ.PUB);
        publishSocket.bind("tcp://*:31216");
        try {
            Thread.sleep(3000); // + wait 3 sec to make sure its ready
        } catch (InterruptedException e) {
            e.printStackTrace();
            fail();
        }
        // publish a sample message
        try {
            publishSocket.send("TestTopic".getBytes(), ZMQ.SNDMORE);
            publishSocket.send("This is test string".getBytes(), 0);
            subThread.join(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
            fail();
        }
        assertTrue(counter.get() > 0);
        System.out.println(counter.get());
    }
}
Now, as you can see, if I use the plain .recv(ZMQ.DONTWAIT) method in the subscriber, it works perfectly. However, if I use the direct byte buffer I get nothing back, and I get the following exception, seemingly on program exit:
Exception in thread "Thread-0" org.zeromq.ZMQException: Resource temporarily unavailable(0xb)
at org.zeromq.ZMQ$Socket.recvZeroCopy(Native Method)
at ZMQReadynessTest$1.run(ZMQReadynessTest.java:48)
I also tried a plain (non-direct) ByteBuffer, which doesn't throw the exception above but also returns me nothing.
Does anybody know how to resolve the above?
I don't want to create byte[] objects all over the place, as I am building a high-performance system. If this cannot be resolved, I might simply use Unsafe instead. But I really want to work in the "supposed" way.
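For what it's worth, here is how I would decode the buffer without allocating a byte[] per message, just a sketch using plain java.nio, nothing jzmq-specific. (I also realize asCharBuffer() above is not the right way to decode ASCII bytes, since it reads them as 16-bit chars, but that is separate from the empty-read problem.)
// Sketch only (plain java.nio, not jzmq-specific): decode the received bytes
// with a reusable CharsetDecoder instead of asCharBuffer(), assuming the
// buffer's position reflects the bytes written by the receive call.
private final CharsetDecoder decoder = StandardCharsets.US_ASCII.newDecoder();

private String decode(ByteBuffer buffer) throws CharacterCodingException {
    buffer.flip();                              // switch from writing to reading
    String value = decoder.decode(buffer).toString();
    buffer.clear();                             // ready for the next receive
    return value;
}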
Thanks in advance.
Alex

Related

An Efficient concurrent data structure to wait for a computed value (or timeout)

I'm hoping some concurrency experts can advise as I'm not looking to rewrite something that likely exists.
Picture the problem: I have a web connection that comes calling looking for its unique computed result (with a key that it provides in order to retrieve the result). However, the result may not have been computed YET, so I would like the connection to wait (block) for UP TO n seconds before giving up and telling the caller I don't (yet) have the result (the time to compute a value is non-deterministic). Something like:
String getValue(String key)
{
    String value = [MISSING_PIECE_OF_PUZZLE].getValueOrTimeout(key, 10, TimeUnit.SECONDS);
    if (value == null)
        return "Not computed within 10 Seconds";
    else
        return "Value was computed and was " + value;
}
and then have another thread (the computation threads) doing the calculations, something like:
public void writeValues()
{
    ....
    [MISSING_PIECE_OF_PUZZLE].put(key, computedValue);
}
In this scenario, there are a number of threads working in the background to compute the values that will ultimately be picked up by web connections. The web connections have NO control or authority over what is computed or when the computations execute - as I've said, this is done in a pool in the background, but these threads can publish when a computation has completed (how they do so is the gist of this question). The published message may or may not be consumed, depending on whether any subscriber is interested in that computed value.
As these are web connections that will block, I could potentially have 1000s of concurrent connections waiting (subscribing) for their specific computed value, so the solution needs to be very light on blocking resources. The closest I've come to is this SO question, which I will explore further, but I wanted to check I'm not missing something blindingly obvious before writing this myself.
I think you should use a Future: it gives you the ability to compute data in a separate thread and block for the requested time period while waiting for an answer. Notice how it throws an exception if more than 3 seconds have passed:
public class MyClass {

    // Simulates heavy work that takes 10 seconds
    private static int getValueOrTimeout() throws InterruptedException {
        TimeUnit.SECONDS.sleep(10);
        return 123;
    }

    public static void main(String... args) throws InterruptedException, ExecutionException {
        Callable<Integer> task = () -> {
            Integer val = null;
            try {
                val = getValueOrTimeout();
            } catch (InterruptedException e) {
                throw new IllegalStateException("task interrupted", e);
            }
            return val;
        };

        ExecutorService executor = Executors.newFixedThreadPool(1);
        Future<Integer> future = executor.submit(task);
        System.out.println("future done? " + future.isDone());
        try {
            Integer result = future.get(3, TimeUnit.SECONDS);
            System.out.print("Value was computed and was : " + result);
        } catch (TimeoutException ex) {
            System.out.println("Not computed within 10 Seconds");
        }
    }
}
After looking at the changes in your question, I want to suggest a different approach using a BlockingQueue. In that case the producer logic is completely separated from the consumer, so you could do something like this:
public class MyClass {

    private static BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
    private static Map<String, String> dataComputed = new ConcurrentHashMap<>();

    public static void writeValues(String key) {
        Random r = new Random();
        try {
            // Simulate working for long time
            TimeUnit.SECONDS.sleep(r.nextInt(11));
            String value = "Hello there fdfsd" + Math.random();
            queue.offer(value);
            dataComputed.putIfAbsent(key, value);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    private static String getValueOrTimeout(String key) throws InterruptedException {
        String result = dataComputed.get(key);
        if (result == null) {
            result = queue.poll(10, TimeUnit.SECONDS);
        }
        return result;
    }

    public static void main(String... args) throws InterruptedException, ExecutionException {
        String key = "TheKey";
        Thread producer = new Thread(() -> {
            writeValues(key);
        });
        Thread consumer = new Thread(() -> {
            try {
                String message = getValueOrTimeout(key);
                if (message == null) {
                    System.out.println("No message in 10 seconds");
                } else {
                    System.out.println("The message:" + message);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });

        consumer.start();
        producer.start();
    }
}
With that said, I have to agree with @earned that making the client thread wait is not a good approach. Instead I would suggest using a WebSocket, which gives you the ability to push data to the client when it is ready. You can find lots of WebSocket tutorials; here is one, for example: ws tutorial
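To make the push idea concrete, here is a minimal sketch of a JSR 356 (javax.websocket) server endpoint. The class name, the /results/{key} path, and the sessions map are made up for illustration; error handling and session cleanup are left out.
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/results/{key}")
public class ResultEndpoint {

    // key -> session of the client currently waiting for that key's result
    private static final ConcurrentMap<String, Session> sessions = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("key") String key) {
        sessions.put(key, session);
    }

    @OnClose
    public void onClose(Session session, @PathParam("key") String key) {
        sessions.remove(key, session);
    }

    // called by a computation thread once the value for the key is ready
    public static void publish(String key, String value) throws IOException {
        Session session = sessions.remove(key);
        if (session != null && session.isOpen()) {
            session.getBasicRemote().sendText(value);
        }
    }
}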

Java socket time out does not work

I have a class which is responsible for listening to two other machines that run exactly the same classes, so it's a network of three computers running the same code. The connection is there and I can see them passing data to each other. Everything up to that point works OK.
Things get tricky when I take out one of the machines and observe how the other two behave. As expected behaviour, when one of the machines stops working for some reason, the other two should continue; and if two of them stop, the remaining one should go on.
I tried to implement this mechanism below. However, when I take out one of the machines, the program keeps waiting, so it does not switch to "two-way comparison mode".
public void listen() {
    try {
        logger.info("Creating listener sockets");
        while (isRunning) {
            final byte[] buf = new byte[bufferSize];
            final DatagramPacket packetOne = new DatagramPacket(buf, buf.length);
            final DatagramPacket packetTwo = new DatagramPacket(buf, buf.length);
            MediatorMessageMsg mediatorMessageOne = null;
            MediatorMessageMsg mediatorMessageTwo = null;
            try {
                socketReceiverOne.receive(packetOne);
                ByteArrayInputStream firstInput = new ByteArrayInputStream(buf);
                mediatorMessageOne = MediatorMessageMsg.parseDelimitedFrom(firstInput);

                socketReceiverTwo.receive(packetTwo);
                ByteArrayInputStream secondInput = new ByteArrayInputStream(buf);
                mediatorMessageTwo = MediatorMessageMsg.parseDelimitedFrom(secondInput);
                logger.trace("Received packets");
            } catch (final SocketTimeoutException e) {
                logger.trace(e.getMessage());
                continue;
            } catch (final SocketException e) {
                logger.warn(e);
                logger.warn("Ignore the error and go on.");
                continue;
            } catch (final IOException e) {
                logger.error("Incoming communication stopped!");
                logger.error(e);
                stop();
            }
            // if two mediators sent the data, it's OK
            if (packetOne.getLength() > 0 && packetTwo.getLength() > 0) {
                handlePackets(mediatorMessageOne, mediatorMessageTwo);
                logger.info("Number of active mediators: 2. Comparison style: 1v1v1");
            }
            // if only one sent the data, compare it with our own
            else if (packetOne.getLength() > 0 || packetTwo.getLength() > 0) {
                // whichever sent the data, compare its data with our own
                logger.info("Number of active mediators: 1. Comparison style: 1v1");
                if (packetOne.getLength() > 0) {
                    handlePackets(mediatorMessageOne);
                } else {
                    handlePackets(mediatorMessageTwo);
                }
            }
            // if no data is sent, then pass our own directly
            else {
                logger.info("Number of active mediators: 0. Comparison style: No Comparison");
                // our datamodel to retrieve data on our own
                DataModel modelOwn = DataModel.getInstance();
                MediatorMessageMsg newMessage = MediatorMessageMsg.newBuilder()
                        .setHeading(modelOwn.getHeading())
                        .setSpeed(modelOwn.getSpeed())
                        .setSender(getId())
                        .build();
                // publish(topicName, newMessage);
            }
            Thread.sleep(1);
        }
        socketReceiverOne.close();
        socketReceiverTwo.close();
        logger.info("stopped");
    } catch (final IllegalArgumentException e) {
        logger.error("Illegal argument received: " + e);
    } catch (final Exception e) {
        logger.error("Unexpected error occurred: " + e);
    } finally {
        if (socketReceiverOne instanceof DatagramSocket && socketReceiverTwo instanceof DatagramSocket) {
            if (!socketReceiverOne.isClosed() || !socketReceiverTwo.isClosed()) {
                socketReceiverOne.close();
                socketReceiverTwo.close();
            }
        }
    }
}
To save you time, let me share my opinion on the matter. I suspect the problem is in this part:
socketReceiverOne.receive(packetOne);
ByteArrayInputStream firstInput = new ByteArrayInputStream(buf);
mediatorMessageOne = MediatorMessageMsg.parseDelimitedFrom(firstInput);
socketReceiverTwo.receive(packetTwo);
ByteArrayInputStream secondInput = new ByteArrayInputStream(buf);
mediatorMessageTwo = MediatorMessageMsg.parseDelimitedFrom(secondInput);
To me it seems like the program expects a packet, and when it cannot receive one it keeps waiting. Although I have a timeout exception handler, I cannot get this to work.
private int socketTimeout = 1000 * 2;// 2sec
socketReceiverOne.setSoTimeout(socketTimeout);
socketReceiverTwo.setSoTimeout(socketTimeout);
Any thoughts?
Okay, I found where I was mistaken: I needed more ports (separate ones for in and out). Once I incorporated those ports, the problem did not occur again.
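For anyone hitting the same symptom, here is a minimal sketch of the idea: give each peer its own receive socket (and port) with its own timeout, so a silent peer only times out its own receive() instead of stalling the whole loop. The port numbers below are placeholders, not the ones from the real setup.
DatagramSocket socketReceiverOne = new DatagramSocket(31001);   // placeholder port
DatagramSocket socketReceiverTwo = new DatagramSocket(31002);   // placeholder port
socketReceiverOne.setSoTimeout(2000);   // 2 sec, as in the question
socketReceiverTwo.setSoTimeout(2000);

byte[] bufOne = new byte[1024];
DatagramPacket packetOne = new DatagramPacket(bufOne, bufOne.length);
boolean receivedFromOne = true;
try {
    socketReceiverOne.receive(packetOne);   // blocks at most 2 sec
} catch (SocketTimeoutException e) {
    receivedFromOne = false;                // peer one is silent; fall back to 1v1 comparison
}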

concurrent modification on arraylist

There are a lot of concurrent mod exception questions, but I'm unable to find an answer that has helped me resolve my issue. If you find an answer that does, please supply a link instead of just down voting.
So I originally got a concurrent mod error when attempting to search through an arraylist and remove elements. For a while, I had it resolved by creating a second arraylist, adding the discovered elements to it, then using removeAll() outside the for loop. This seemed to work, but as I used the for loop to import data from multiple files I started getting concurrent modification exceptions again, but intermittently for some reason. Any help would be greatly appreciated.
Here's the specific method having the problem (as well as the other methods it calls...):
public static void removeData(ServiceRequest r) {
    readData();
    ArrayList<ServiceRequest> targets = new ArrayList<ServiceRequest>();
    for (ServiceRequest s : serviceQueue) {
        // ConcurrentModificationException triggered on previous line
        if (s.getClient().getSms() == r.getClient().getSms() &&
                s.getTech().getName().equals(r.getTech().getName()) &&
                s.getDate().equals(r.getDate())) {
            JOptionPane.showMessageDialog(null, s.getClient().getSms() + "'s Service Request with "
                    + s.getTech().getName() + " on " + s.getDate().toString() + " has been removed!");
            targets.add(s);
            System.out.print("targetted");
        }
    }
    if (targets.isEmpty()) {
        System.out.print("*");
    } else {
        System.out.print("removed");
        serviceQueue.removeAll(targets);
        writeData();
    }
}

public static void addData(ServiceRequest r) {
    readData();
    removeData(r);
    if (r.getClient().getStatus().equals("MEMBER") || r.getClient().getStatus().equals("ALISTER")) {
        serviceQueue.add(r);
    } else if (r.getClient().getStatus().equals("BANNED") || r.getClient().getStatus().equals("UNKNOWN")) {
        JOptionPane.showMessageDialog(null, "New Request failed: " + r.getClient().getSms() + " is "
                + r.getClient().getStatus() + "!", "ERROR: " + r.getClient().getSms(), JOptionPane.WARNING_MESSAGE);
    } else {
        int response = JOptionPane.showConfirmDialog(null, r.getClient().getSms() + " is "
                + r.getClient().getStatus() + "...", "Manually Overide?", JOptionPane.OK_CANCEL_OPTION);
        if (response == JOptionPane.OK_OPTION) {
            serviceQueue.add(r);
        }
    }
    writeData();
}

public static void readData() {
    try {
        Boolean complete = false;
        FileReader reader = new FileReader(f);
        ObjectInputStream in = xstream.createObjectInputStream(reader);
        serviceQueue.clear();
        while (complete != true) {
            ServiceRequest test = (ServiceRequest) in.readObject();
            if (test != null && test.getDate().isAfter(LocalDate.now().minusDays(180))) {
                serviceQueue.add(test);
            } else {
                complete = true;
            }
        }
        in.close();
    } catch (IOException | ClassNotFoundException e) {
        e.printStackTrace();
    }
}

public static void writeData() {
    if (serviceQueue.isEmpty()) {
        serviceQueue.add(new ServiceRequest());
    }
    try {
        FileWriter writer = new FileWriter(f);
        ObjectOutputStream out = xstream.createObjectOutputStream(writer);
        for (ServiceRequest r : serviceQueue) {
            out.writeObject(r);
        }
        out.writeObject(null);
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
EDIT
The changes cause the ConcurrentModificationException to trigger every time rather than intermittently, which I guess means the removal code is better, but the error now triggers at it.remove():
public static void removeData(ServiceRequest r) {
    readData();
    for (Iterator<ServiceRequest> it = serviceQueue.iterator(); it.hasNext();) {
        ServiceRequest s = it.next();
        if (s.getClient().getSms() == r.getClient().getSms() &&
                s.getTech().getName().equals(r.getTech().getName()) &&
                s.getDate().equals(r.getDate())) {
            JOptionPane.showMessageDialog(null, s.getClient().getSms() + "'s Service Request with "
                    + s.getTech().getName() + " on " + s.getDate().toString() + " has been removed!");
            it.remove(); // Triggers here (line 195)
            System.out.print("targetted");
        }
    }
    writeData();
}
Exception in thread "AWT-EventQueue-0" java.util.ConcurrentModificationException
    at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:901)
    at java.util.ArrayList$Itr.next(ArrayList.java:851)
    at data.ServiceRequest.removeData(ServiceRequest.java:195)
    at data.ServiceRequest.addData(ServiceRequest.java:209) <...>
EDIT
After some more searching, I've switched the for loop to:
Iterator<ServiceRequest> it = serviceQueue.iterator();
while(it.hasNext()) {
and it's back to triggering intermittently. By that I mean the first time I attempt to import data (the removeData method is triggered from the addData method) it throws the ConcurrentModificationException, but on the next try it pushes past the failure and moves on to another file. I know there are a lot of these ConcurrentModificationException questions, but I'm not finding anything that helps in my situation, so links to other answers are more than welcome...
This is not how to do it. To remove elements while iterating over a List, you use an iterator, like this:
List<ServiceRequest> targets = new ArrayList<ServiceRequest>();
for (Iterator<ServiceRequest> it = targets.iterator(); it.hasNext();) {
    ServiceRequest currentServReq = it.next();
    if (someCondition) {
        it.remove();
    }
}
And you will not get ConcurrentModificationException this way if you only have one thread.
If there are multiple threads involved in your code, you may still get ConcurrentModificationException. One way to address this is to wrap your collection (serviceQueue) with Collections.synchronizedCollection(...); the result is a synchronized collection, although you still have to synchronize on it while iterating. But your code may become very slow.
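A minimal sketch of that approach, assuming serviceQueue can be swapped for a wrapped list; matches(...) stands in for the SMS/tech/date comparison from the question. The wrapper's Javadoc requires holding its lock while iterating, otherwise ConcurrentModificationException is still possible.
List<ServiceRequest> serviceQueue = Collections.synchronizedList(new ArrayList<ServiceRequest>());

// iteration must be manually synchronized on the wrapper itself
synchronized (serviceQueue) {
    for (Iterator<ServiceRequest> it = serviceQueue.iterator(); it.hasNext();) {
        if (matches(it.next())) {   // matches(...) is a placeholder predicate
            it.remove();
        }
    }
}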

ZeroMQ + Java performance on pub/sub

Googled without luck; my problem is as in the title. I get only around 50K messages per second for messages under 64 bytes, with just a simple pub/sub test sending a constant string through. The same box shows quite stable performance regardless of whether I use TCP:// or INPROC://, or run under Windows or Ubuntu.
The hardware and software configuration is:
i5-4670 3.4G x 4 Core
16GB 1666Mhz RAM
Windows 7 64bit
JDK 1.8.05
ZeroMQ 3.2.3
jzmq 2.2.2
The sample program is below: I create 2 threads, one doing pub and one doing sub, deliver the message 1000 times, and wait until the subscriber receives them all. The run time is consistently around 30 ms, which is only about 1/10 of the performance I assume ZeroMQ should deliver. The code shows the same performance running stand-alone in the IDE and running as a JUnit test.
Could anyone give me a hint on this?
EDIT1: It seems that if I increase the message count to 5000 I get a core dump from the JVM. That looks to me like a hint at the real problem, though I don't think I have anything wrong in the threading model. What could be the culprit?
public class ZMQReadynessTest {

    private ZMQ.Context context;

    @Before
    public void setUp() {
        ZMQLoader.initialize();
        context = ZMQ.context(2);
    }

    @Test
    public void testSimpleMessage() {
        final int totalCount = 1000;
        String topic = "tcp://127.0.0.1:31216";
        final CountDownLatch startLatch = new CountDownLatch(1);
        final CountDownLatch latch = new CountDownLatch(totalCount);

        // create a simple subscriber
        final ZMQ.Socket subscribeSocket = context.socket(ZMQ.SUB);
        subscribeSocket.connect(topic);
        subscribeSocket.subscribe("TestTopic".getBytes());
        Thread subThread = new Thread() {
            @Override
            public void run() {
                try {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                        fail();
                    }
                    startLatch.countDown();
                    ByteBuffer buffer = ByteBuffer.allocateDirect(100);
                    while (latch.getCount() > 0) {
                        // get the message
                        int count = 0;
                        if ((count = subscribeSocket.recvZeroCopy(buffer, buffer.remaining(), 0)) > 0) {
                            // another receive for content
                            count = subscribeSocket.recvZeroCopy(buffer, buffer.remaining(), 0);
                            buffer.flip();
                            byte[] b = new byte[count];
                            buffer.get(b);
                            assertEquals("This is test string", new String(b));
                            latch.countDown();
                            buffer.clear();
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                    System.out.println(latch.getCount());
                } finally {
                    subscribeSocket.close();
                }
            }
        };

        // create a simple publisher - wait 3 sec to make sure its ready
        ZMQ.Socket publishSocket = context.socket(ZMQ.PUB);
        publishSocket.bind("tcp://*:31216");
        try {
            subThread.start();
            try {
                startLatch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
                fail();
            }
            // publish a sample message
            long before = System.currentTimeMillis(), after;
            try {
                for (int i = 0; i < totalCount; i++) {
                    publishSocket.send("NotTestTopic".getBytes(), ZMQ.SNDMORE | ZMQ.DONTWAIT);
                    publishSocket.send("Not received".getBytes(), 0);
                    publishSocket.send("TestTopic".getBytes(), ZMQ.SNDMORE | ZMQ.DONTWAIT);
                    publishSocket.send("This is test string".getBytes(), ZMQ.DONTWAIT);
                }
                latch.await(10000, TimeUnit.MILLISECONDS);
            } catch (InterruptedException e) {
                e.printStackTrace();
                fail();
            } finally {
                after = System.currentTimeMillis();
                publishSocket.close();
            }
            assertEquals(latch.getCount(), 0);
            System.out.println(String.valueOf(totalCount) + " messages took " + (after - before) + " ms.");
        } finally {
            publishSocket.close();
            subscribeSocket.close();
        }
    }
}

java code to wait for parallel code to finish

I have server code that processes an image.
Now there are n requests trying to execute the code at once, which results in an OutOfMemory error, or the server hangs and goes into a not-responding state.
To stop all the requests from executing the code at once, I limit it to one execution at a time using the method below, where I have a variable:
if the variable is 10, then wait for it to come back to 0
if it is at 0, then set it to 10 and execute the code
run the code and finally set it back to 0
The code here -
static newa.Counter cn;

public int getCounta() {
    return cn.getCount();
}

public void setCounta(int i) {
    cn = new newa.Counter();
    cn.setCount(i);
}
In the function I am doing this:
public BufferedImage getScaledImage(byte[] imageBytes)
{
    int i = 0;
    Boolean b = false;
    BufferedImage scaledImage = null;
    newa.NewClass1 sc = new newa.NewClass1();
    try {
        sc.getCounta();
    } catch (NullPointerException ne) {
        sc.setCounta(0);
    }
    i = sc.getCounta();
    if (i == 0) {
        sc.setCounta(10);
        b = true;
    } else {
        while (b == false) {
            try {
                Thread.sleep(2000);
                i = sc.getCounta();
                if (i == 0) {
                    sc.setCounta(10);
                    b = true;
                    System.out.println("Out of Loop");
                }
            } catch (InterruptedException ex) {
                System.out.println("getScaledImage Thread exception: " + ex);
            }
        }
    }
    ..... execute further code
    try { } catch { } finally { sc.setCounta(0); }
}
Is there any way I can simplify this using the Runnable interface or something similar? I don't know how to do multi-threading.
Forget about the counter and use a synchronized method. Change your method header to this:
public synchronized BufferedImage getScaledImage(byte[] imageBytes)
This lets all the threads entering the method wait until no other thread is executing the method.
If you want a small number of threads (rather than just one) doing the processing, you can use the Executor framework to get a thread pool of 10 threads. This ensures that at most 10 threads are processing requests at any one time.
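A minimal sketch of that idea, assuming the scaling work itself can be wrapped in a task. The names scalingPool, submitScaling, and doScaling are mine, not from the question; doScaling(...) stands for the existing image-scaling code without the counter logic.
// Sketch only: a fixed pool of 10 worker threads bounds how many images are
// scaled at the same time; each caller blocks on Future.get() until its image is done.
private static final ExecutorService scalingPool = Executors.newFixedThreadPool(10);

public Future<BufferedImage> submitScaling(final byte[] imageBytes) {
    return scalingPool.submit(new Callable<BufferedImage>() {
        @Override
        public BufferedImage call() throws Exception {
            return doScaling(imageBytes);   // the actual scaling work, without the counter logic
        }
    });
}

// caller side:
// BufferedImage scaled = submitScaling(imageBytes).get();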
