Redisson + Redis Sentinel: how to handle failover and write into Redis? - java

I have just edited my previous question and am providing more details (hopefully someone will be able to help).
I have a Redis setup with 1 master and 2 slaves. All 3 nodes are managed by Sentinel. The failover works fine, and when the new master is elected, I can write to the new master (from the command line).
Now I am trying to write a small Java program using Redisson, which ideally should write records into Redis and be able to handle the failover (which it should do, as far as I have understood). This is my code so far.
import org.redisson.Redisson;
import org.redisson.RedissonNode;
import org.redisson.api.*;
import org.redisson.api.annotation.RInject;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;
import org.redisson.config.RedissonNodeConfig;
import org.redisson.config.SubscriptionMode;

import java.util.Collection;
import java.util.Collections;
import java.util.UUID;

public class RedissonTest {

    public static class RunnableTask implements Runnable {

        @RInject
        RedissonClient client;

        @Override
        public void run() {
            System.out.println("I am in ..");
            RMap<String, String> map = client.getMap("completeNewMap");
            System.out.println("is thread interrupted?? " + Thread.currentThread().isInterrupted());
            NodesGroup ngroup = client.getNodesGroup();
            Collection<Node> nodes = ngroup.getNodes();
            for (Node node : nodes) {
                System.out.println("Node ip " + node.getAddr().toString() + " type: " + node.getType().toString());
            }
            for (int i = 0; i < 10000; i++) {
                String key = "bg_key_" + String.valueOf(i);
                String value = String.valueOf(UUID.randomUUID());
                String oldVal = map.get(key);
                map.put(key, value);
                RBucket<String> bck = client.getBucket(key);
                bck.set(value);
                System.out.println("I am going to replace the old value " + oldVal + " with new value " + value + " at key " + key);
            }
            System.out.println("I am outta here!!");
        }
    }

    public static void main(String[] args) {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("redis-cluster")
              .addSentinelAddress("192.168.56.101:26379")
              .addSentinelAddress("192.168.56.102:26379")
              .addSentinelAddress("192.168.56.103:26379")
              .setPingTimeout(100)
              .setTimeout(60000)
              .setRetryAttempts(25)
              .setReconnectionTimeout(45000)
              .setRetryInterval(1500)
              .setReadMode(ReadMode.SLAVE)
              .setConnectTimeout(20000)
              .setSubscriptionMode(SubscriptionMode.MASTER);
        RedissonClient client = Redisson.create(config);

        RedissonNodeConfig nodeConfig = new RedissonNodeConfig(config);
        nodeConfig.setExecutorServiceWorkers(Collections.singletonMap("myExecutor6", 1));
        RedissonNode node = RedissonNode.create(nodeConfig);
        node.start();
        System.out.println("Node address " + node.getRemoteAddress().toString());

        RExecutorService e = client.getExecutorService("myExecutor6");
        e.execute(new RunnableTask());
        e.shutdown();
        if (e.isShutdown()) {
            e.delete();
        }
        client.shutdown();
        node.shutdown();
        System.out.println("Hello World!");
    }
}
Running the code, a couple of things happen that I don't understand:
Why does Redisson recognise my 3 hosts as Redis slaves?
Why are the key/value pairs I created not stored in Redis?
The idea is that once I am able to write into Redis, I will start testing the failover by killing the master, expecting the program to handle it and continue writing to the new master without losing a message (it would be nice to be able to cache the messages while the failover occurs).
What happens with this simple program is that I can write into Redis, but when I kill the master, the execution just hangs for a time that seems close to the setTimeout value and then exits without completing the task.
Any suggestion?

You should set the retryAttempts parameter large enough that Redisson survives the failover period.
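To make that concrete: with the settings from the question, the worst-case retry budget is roughly retryAttempts × retryInterval, and it should exceed the time Sentinel needs to detect the failure and promote a new master. A minimal sketch (the 30-second failover window here is an assumed example; measure your own deployment):

```java
public class RetryBudget {
    public static void main(String[] args) {
        // Values from the question's config; the failover window is an assumption.
        int retryAttempts = 25;
        int retryIntervalMs = 1500;
        int assumedFailoverMs = 30000;

        long retryBudgetMs = (long) retryAttempts * retryIntervalMs;
        System.out.println("retry budget: " + retryBudgetMs + " ms");                      // 37500 ms
        System.out.println("survives assumed failover: " + (retryBudgetMs >= assumedFailoverMs)); // true
    }
}
```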

Related

Using a Commonj Work Manager to send Asynchronous HTTP calls

I switched from making sequential HTTP calls to 4 REST services to making 4 simultaneous calls using a commonj4 work manager task executor. I'm using WebLogic 12c. This new code works in my development environment, but in our test environment under load conditions, and occasionally while not under load, the results map is not populated with all of the results. The logging suggests that each work item did receive its results back, though.
Could this be a problem with the ConcurrentHashMap? In this example from IBM, they use their own version of Work and there's a getData() method, although it doesn't look like that method really exists in their class definition. I had followed a different example that just used the Work class but didn't demonstrate how to get the data out of those threads into the main thread. Should I be using execute() instead of schedule()? The API doesn't appear to be well documented. The stuckthreadtimeout is sufficiently high.
component.processInbound() actually contains the code for the HTTP call, but I don't think the problem is there, because I can switch back to the synchronous version of the class below and not have any issues.
http://publib.boulder.ibm.com/infocenter/wsdoc400/v6r0/index.jsp?topic=/com.ibm.websphere.iseries.doc/info/ae/asyncbns/concepts/casb_workmgr.html
My code:
public class WorkManagerAsyncLinkedComponentRouter implements
        MessageDispatcher<Object, Object> {
    private List<Component<Object, Object>> components;
    protected ConcurrentHashMap<String, Object> workItemsResultsMap;
    protected ConcurrentHashMap<String, Exception> componentExceptionsInThreads;
    ...
    // components is populated at this point with one component for each REST call to be made.
    public Object route(final Object message) throws RouterException {
        ...
        try {
            workItemsResultsMap = new ConcurrentHashMap<String, Object>();
            componentExceptionsInThreads = new ConcurrentHashMap<String, Exception>();
            final String parentThreadID = Thread.currentThread().getName();
            List<WorkItem> producerWorkItems = new ArrayList<WorkItem>();
            for (final Component<Object, Object> component : this.components) {
                producerWorkItems.add(workManagerTaskExecutor.schedule(new Work() {
                    public void run() {
                        //ExecuteThread th = (ExecuteThread) Thread.currentThread();
                        //th.setName(component.getName());
                        LOG.info("Child thread " + Thread.currentThread().getName() + " Parent thread: " + parentThreadID + " Executing work item for: " + component.getName());
                        try {
                            Object returnObj = component.processInbound(message);
                            if (returnObj == null)
                                LOG.info("Object returned to work item is null, not adding to producer components results map, for this producer: "
                                        + component.getName());
                            else {
                                LOG.info("Added producer component thread result for: "
                                        + component.getName());
                                workItemsResultsMap.put(component.getName(), returnObj);
                            }
                            LOG.info("Finished executing work item for: " + component.getName());
                        } catch (Exception e) {
                            componentExceptionsInThreads.put(component.getName(), e);
                        }
                    }
                    ...
                }));
            } // end loop over producer components
            // Block until all items are done
            workManagerTaskExecutor.waitForAll(producerWorkItems, stuckThreadTimeout);
            LOG.info("Finished waiting for all producer component threads.");
            if (componentExceptionsInThreads != null
                    && componentExceptionsInThreads.size() > 0) {
                ...
            }
            List<Object> resultsList = new ArrayList<Object>(workItemsResultsMap.values());
            if (resultsList.size() == 0)
                throw new RouterException(
                        "The producer thread results are all empty. The threads were likely not created. In testing this was observed when either 1) the system was almost out of memory (perhaps there is not enough memory to create a new thread for each producer, for this REST request), or 2) timeouts were reached for all producers.");
            //** The problem is identified here. The results in the ConcurrentHashMap aren't the number expected.
            if (workItemsResultsMap.size() != this.components.size()) {
                StringBuilder sb = new StringBuilder();
                for (String str : workItemsResultsMap.keySet()) {
                    sb.append(str + " ");
                }
                throw new RouterException(
                        "Did not receive results from all threads within the thread timeout period. Only retrieved:"
                                + sb.toString());
            }
            LOG.info("Returning " + String.valueOf(resultsList.size()) + " results.");
            LOG.debug("List of returned feeds: " + String.valueOf(resultsList));
            return resultsList;
        }
        ...
    }
}
I ended up cloning the DOM document used as a parameter. There must be some downstream code that has side effects on the parameter.
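A sketch of that cloning workaround (assuming the JDK's default DOM implementation, where Document.cloneNode(true) performs a deep copy): handing each work item its own copy means downstream mutations cannot corrupt the shared parameter.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomCloneDemo {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = doc.createElement("message");
        root.setTextContent("original");
        doc.appendChild(root);

        // Deep-copy the document before handing it to a worker thread, so
        // side effects downstream cannot touch the caller's copy.
        Document copy = (Document) doc.cloneNode(true);
        copy.getDocumentElement().setTextContent("mutated downstream");

        System.out.println(doc.getDocumentElement().getTextContent());  // original
        System.out.println(copy.getDocumentElement().getTextContent()); // mutated downstream
    }
}
```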

Strange behaviour with indexOf method

We have deployed our Java EE web application on JBoss 4.0.2 (we are using JDK 1.6.0_18).
In our code we iterate through a collection and concatenate the user IDs it contains (see the code below).
On the production server it behaves inconsistently. The user ID (2044157) is not found in the final string (see ServerLog line 3). If we restart the production JBoss server, it works perfectly and prints all users correctly in the final string log. But the problem reappears after heavy usage (after 5-6 hours). We are not able to replicate the issue in our QA environment.
When the problem happens, it looks like the indexOf method incorrectly reports that 2044157 is in the strUsers string (even though it is not), so execution goes into the else branch and prints it in the else-branch log (see ServerLog line 2). What could be the reason for such inconsistent behaviour?
Code:
public class UsersEJB implements SessionBean {

    private void processSelectionData() {
        int loopCnt = 0, loopCntPos = 0, loopCntNeg = 0;
        StringBuilder strUsers = new StringBuilder("");
        Collection userVoCollection = dc.retProjectOrgUsers(projectID, strDistributionCompID, porjectDataSource); // This is returning a List of 626 UserVO
        if (log.isDebugEnabled())
            log.debug("UserList Size=" + userVoCollection.size() + ",B4 strUsers=" + strUsers.toString());
        Iterator it = userVoCollection.iterator();
        while (it.hasNext()) {
            UserVO uVO = (UserVO) it.next();
            if (!(strUsers.toString().indexOf("," + uVO.userID.toString() + ",") > -1)) {
                strUsers.append(uVO.userID.toString()).append(",");
                loopCntPos++;
            } else {
                loopCntNeg++;
                if (log.isDebugEnabled())
                    log.debug("UserId=" + uVO.userID.toString() + ",strUsers=" + loopCnt + "=" + strUsers.toString());
            }
            loopCnt++;
        }
        if (log.isDebugEnabled())
            log.debug("COMPANIES_ID1 strUsers=" + strUsers.toString() + ",### loopCnt=" + loopCnt + ",loopCntPos=" + loopCntPos + ",loopCntNeg=" + loopCntNeg);
    }
}
ServerLog
UserList Size=626,B4 strUsers=,1732286,2066065,2096854,1952590,1731333,1732065,1734828,1852547,1732020,1733653,1731278,2079012,1733299,1765873,1733431,1960010,1828681,2047672,1731752,1733172,1784314,1989311,1734795,1732658,1731415,1785285,1785185,1738446,1733139,1732526,1733549,1731078,1804055,1732939,1663167,1732768,1732029,1732504,1989185,1882746,1785428,1731213,1732931,1731296,1733503,1753435,1731667,1936166,1747699,2099764,1482144,1747707,1732953,1771653,1731251,1989303,1755297,1731160,1901283,1782751,1733543,1882693,1733354,1974270,2044300,1732082,1907188,1731872,1955156,1732153,1733260,1731096,1604035,1731914,1731169,1732418,1731240,1989180,1731306,1733533,1882684,1821306,1731178,1731389,1733309,1733104,2078768,1989277,1732542,1733513,1733082,1732630,1733289,1733361,2077522,1733252,1732493,1978847,1733071,
UserId=2044157,strUsers=440=,1732286,2066065,2096854,1952590,1731333,1732065,1734828,1852547,1732020,1733653,1731278,2079012,1733299,1765873,1733431,1960010,1828681,2047672,1731752,1733172,1784314,1989311,1734795,1732658,1731415,1785285,1785185,1738446,1733139,1732526,1733549,1731078,1804055,1732939,1663167,1732768,1732029,1732504,1989185,1882746,1785428,1731213,1732931,1731296,1733503,1753435,1731667,1936166,1747699,2099764,1482144,1747707,1732953,1771653,1731251,1989303,1755297,1731160,1901283,1782751,1733543,1882693,1733354,1974270,2044300,1732082,1907188,1731872,1955156,1732153,1733260,1731096,1604035,1731914,1731169,1732418,1731240,1989180,1731306,1733533,1882684,1821306,1731178,1731389,1733309,1733104,2078768,1989277,1732542,1733513,1733082,1732630,1733289,1733361,2077522,1733252,1732493,1978847,1733071,1893797,2137701,2025815,1522850,2027582,1732833,1984513,2037965,1900381,1731514,2044357,2042751,1785407,2118267,2050509,2062445,1934909,1912411,1733673,1731956,1694916,1731951,2048024,1735552,2115155,1732777,2120796,2048007,1845970,1738356,1841988,2101099,2027667,2067876,1734628,1731739,1731893,2051612,1819645,1803654,2037906,1732047,1478544,2073677,2012435,2067977,2073669,1981390,1731124,15916,6766,1978916,1732750,1936298,1891936,1747650,1949484,2101161,1928883,1948164,2013726,1750718,1732164,1733700,1639298,1734968,1732007,1734723,1949403,2137692,1990151,1734617,2101130,1928888,2044163,1732042,1819543,2137672,1735463,1732716,1950975,2025826,1984507,2017645,1372949,1928719,1732684,1952358,1581015,2026878,1731622,1734036,2000528,1734611,2052691,1961286,2107121,1733335,1868846,2000469,1734771,1841953,2118224,2038924,1731609,1735396,2026033,1805573,2107214,1638397,1731502,1731581,2115171,2120903,1892076,2060862,2017603,2002514,1731351,1901274,1760679,1821298,1884485,1777244,1731204,1934917,2000497,1737101,2115043,2121909,2097818,1506144,1953947,1753401,1594875,2135263,1900276,1907168,1851867,1940057,1897000,1765857,2037953,1907085,2037911,2062548,1650062,1801180,
1953696,2119602,1605403,1804076,1669286,1844334,1542596,2048001,1938656,1757959,1529666,2070447,1565121,1907065,1944060,2097808,2077490,1843170,1957289,1690800,1823148,1788987,1912477,1738344,1845866,2047996,1962156,1483244,2071932,2127277,1912419,1756748,1999518,1908161,1722312,1548164,1584044,2047896,1856844,1762432,2073439,1861949,1530755,1989292,1852455,2027658,1738380,2067996,1981507,1998543,1958859,1620837,1852555,2012357,1895444,2050380,1789210,1932156,1898948,2046841,2098171,1625335,2138533,2046655,1785464,2105080,2024935,1852446,2073682,1478644,2103660,1751154,1863254,1478332,1849259,1593399,1895334,2075182,2134365,2136657,
COMPANIES_ID1 strUsers=,1732286,2066065,2096854,1952590,1731333,1732065,1734828,1852547,1732020,1733653,1731278,2079012,1733299,1765873,1733431,1960010,1828681,2047672,1731752,1733172,1784314,1989311,1734795,1732658,1731415,1785285,1785185,1738446,1733139,1732526,1733549,1731078,1804055,1732939,1663167,1732768,1732029,1732504,1989185,1882746,1785428,1731213,1732931,1731296,1733503,1753435,1731667,1936166,1747699,2099764,1482144,1747707,1732953,1771653,1731251,1989303,1755297,1731160,1901283,1782751,1733543,1882693,1733354,1974270,2044300,1732082,1907188,1731872,1955156,1732153,1733260,1731096,1604035,1731914,1731169,1732418,1731240,1989180,1731306,1733533,1882684,1821306,1731178,1731389,1733309,1733104,2078768,1989277,1732542,1733513,1733082,1732630,1733289,1733361,2077522,1733252,1732493,1978847,1733071,1893797,2137701,2025815,1522850,2027582,1732833,1984513,2037965,1900381,1731514,2044357,2042751,1785407,2118267,2050509,2062445,1934909,1912411,1733673,1731956,1694916,1731951,2048024,1735552,2115155,1732777,2120796,2048007,1845970,1738356,1841988,2101099,2027667,2067876,1734628,1731739,1731893,2051612,1819645,1803654,2037906,1732047,1478544,2073677,2012435,2067977,2073669,1981390,1731124,15916,6766,1978916,1732750,1936298,1891936,1747650,1949484,2101161,1928883,1948164,2013726,1750718,1732164,1733700,1639298,1734968,1732007,1734723,1949403,2137692,1990151,1734617,2101130,1928888,2044163,1732042,1819543,2137672,1735463,1732716,1950975,2025826,1984507,2017645,1372949,1928719,1732684,1952358,1581015,2026878,1731622,1734036,2000528,1734611,2052691,1961286,2107121,1733335,1868846,2000469,1734771,1841953,2118224,2038924,1731609,1735396,2026033,1805573,2107214,1638397,1731502,1731581,2115171,2120903,1892076,2060862,2017603,2002514,1731351,1901274,1760679,1821298,1884485,1777244,1731204,1934917,2000497,1737101,2115043,2121909,2097818,1506144,1953947,1753401,1594875,2135263,1900276,1907168,1851867,1940057,1897000,1765857,2037953,1907085,2037911,2062548,1650062,1801180,19536
96,2119602,1605403,1804076,1669286,1844334,1542596,2048001,1938656,1757959,1529666,2070447,1565121,1907065,1944060,2097808,2077490,1843170,1957289,1690800,1823148,1788987,1912477,1738344,1845866,2047996,1962156,1483244,2071932,2127277,1912419,1756748,1999518,1908161,1722312,1548164,1584044,2047896,1856844,1762432,2073439,1861949,1530755,1989292,1852455,2027658,1738380,2067996,1981507,1998543,1958859,1620837,1852555,2012357,1895444,2050380,1789210,1932156,1898948,2046841,2098171,1625335,2138533,2046655,1785464,2105080,2024935,1852446,2073682,1478644,2103660,1751154,1863254,1478332,1849259,1593399,1895334,2075182,2134365,2136657,2041203,2043944,2040358,2093521,1913544,2082455,2024959,2045812,1973980,1494485,1986446,1525605,2046849,1785194,1822210,2053401,1918823,2001794,1785258,2064339,1986338,1710198,1521244,1822292,1931276,2134370,2075073,2134300,2075068,1521210,2131493,1951008,1914649,1774999,1601557,1485584,2078975,1986330,1612190,2064410,2066054,1985760,1685075,1930273,2032161,1955161,,### loopCnt=626,loopCntPos=274,loopCntNeg=352
As it stands, your search won't detect a duplicate of the first user, as there won't be a preceding comma. However, I would suggest solving the problem in a far simpler manner, by using a set:
public class UsersEJB implements SessionBean {

    private void processSelectionData() {
        int loopCnt = 0, loopCntPos = 0, loopCntNeg = 0;
        Collection userVoCollection = dc.retProjectOrgUsers(projectID, strDistributionCompID, porjectDataSource); // This is returning a List of 626 UserVO
        Set<String> usersSet = new HashSet<String>(userVoCollection.size());
        if (log.isDebugEnabled())
            log.debug("UserList Size=" + userVoCollection.size());
        for (Object o : userVoCollection) {
            UserVO uVO = (UserVO) o;
            if (usersSet.add(uVO.userID.toString())) {
                loopCntPos++;
            } else {
                loopCntNeg++;
                if (log.isDebugEnabled())
                    log.debug("UserId=" + uVO.userID.toString() + ",usersSet=" + loopCnt + "=" + usersSet);
            }
            loopCnt++;
        }
        if (log.isDebugEnabled())
            log.debug("COMPANIES_ID1 usersSet=" + usersSet + ",### loopCnt=" + loopCnt + ",loopCntPos=" + loopCntPos + ",loopCntNeg=" + loopCntNeg);
    }
}
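The reason a Set works here is that HashSet.add returns false when the element is already present, which replaces the fragile comma-delimited indexOf check. A minimal standalone illustration:

```java
import java.util.HashSet;
import java.util.Set;

public class SetDedupDemo {
    public static void main(String[] args) {
        Set<String> seen = new HashSet<String>();
        System.out.println(seen.add("2044157")); // true: first occurrence is added
        System.out.println(seen.add("2044157")); // false: duplicate is rejected
        System.out.println(seen.size());         // 1
    }
}
```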

Neo4J create relationship hangs on remote, but node creation succeeds

My relationship creation hangs, yet the nodes underneath manage to persist to my remote client.
public class Baz {

    private static enum CustomRelationships implements RelationshipType {
        CATEGORY
    }

    public void foo() {
        RestGraphDatabase db = new RestGraphDatabase("http://remoteIp:7474/db/data", username, password);
        Transaction tx = db.beginTx();
        try {
            Node a = db.createNode();
            a.setProperty("foo", "foo"); // finishes
            Node b = db.createNode();
            b.setProperty("bar", "bar"); // finishes
            a.createRelationshipTo(b, CustomRelationships.CATEGORY); // hangs
            System.out.println("Finished relationship");
            tx.success();
        } finally {
            tx.finish();
        }
    }
}
And I cannot figure out why. There is no stack trace and the connection doesn't time out.
a.createRelationshipTo(b, DynamicRelationshipType.withName("CATEGORY"));
also hangs
This query executes correctly from the admin shell:
start first=node(19), second=node(20) Create first-[r:RELTYPE {
linkage : first.Baz + '<-->' + second.BazCat }]->second return r
Yet when run in this fashion:
ExecutionResult result = engine.execute("start first=node("
+ entityNode.getId() + "), second=node("
+ categoryNode.getId() + ") "
+ " Create first-[r:RELTYPE { linkage : first.Baz"
+ " + '<-->' + second.BazCat" + " }]->second return r");
Also hangs.
There are no real transactions over REST.
It is a bug in the Java-Rest-Binding that internal threads are not started as daemon threads. It doesn't actually hang; the program just never ends.
You can call System.exit(0) to end the program as a workaround.
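The behaviour is easy to reproduce in isolation (this sketch is illustrative, not the actual Java-Rest-Binding code): a non-daemon thread keeps the JVM alive after main returns, which looks exactly like a hang.

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(Long.MAX_VALUE); // simulates an internal thread that never finishes
                } catch (InterruptedException ignored) {
                }
            }
        });
        // Without this line the JVM would wait on the thread forever after main returns.
        // Marking it as a daemon (or calling System.exit(0)) lets the process terminate.
        worker.setDaemon(true);
        worker.start();
        System.out.println("main finished");
    }
}
```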

Retrieve multiple messages from SQS

I have multiple messages in SQS. The following code always returns only one message, even if there are dozens visible (not in flight). I thought setMaxNumberOfMessages would allow multiple messages to be consumed at once. Have I misunderstood this?
CreateQueueRequest createQueueRequest = new CreateQueueRequest().withQueueName(queueName);
String queueUrl = sqs.createQueue(createQueueRequest).getQueueUrl();
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(queueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);
List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (Message message : messages) {
    // i'm a message from SQS
}
I've also tried using withMaxNumberOfMessages without any such luck:
receiveMessageRequest.withMaxNumberOfMessages(10);
How do I know there are messages in the queue? More than 1?
Set<String> attrs = new HashSet<String>();
attrs.add("ApproximateNumberOfMessages");
CreateQueueRequest createQueueRequest = new CreateQueueRequest().withQueueName(queueName);
GetQueueAttributesRequest a = new GetQueueAttributesRequest().withQueueUrl(sqs.createQueue(createQueueRequest).getQueueUrl()).withAttributeNames(attrs);
Map<String,String> result = sqs.getQueueAttributes(a).getAttributes();
int num = Integer.parseInt(result.get("ApproximateNumberOfMessages"));
The above is always run first and gives me an int that is > 1.
Thanks for your input
AWS API Reference Guide: Query/QueryReceiveMessage
Due to the distributed nature of the queue, a weighted random set of machines is sampled on a ReceiveMessage call. That means only the messages on the sampled machines are returned. If the number of messages in the queue is small (less than 1000), it is likely you will get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response; in which case you should repeat the request.
and
MaxNumberOfMessages: Maximum number of messages to return. SQS never returns more messages than this value but might return fewer.
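In practice, this means a single ReceiveMessage call cannot be trusted to drain the queue; you repeat the call until you get an empty response. A toy model in plain Java (no AWS SDK; the random per-call batch size is a stand-in for SQS sampling only a subset of its servers) shows why the loop still receives everything:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DrainLoopDemo {
    // Stand-in for ReceiveMessage: returns up to max messages, often fewer,
    // mimicking SQS returning only the messages on the sampled servers.
    static List<String> receive(List<String> queue, int max, Random rnd) {
        List<String> batch = new ArrayList<String>();
        int n = Math.min(queue.size(), 1 + rnd.nextInt(max));
        for (int i = 0; i < n; i++) batch.add(queue.remove(0));
        return batch;
    }

    public static void main(String[] args) {
        List<String> queue = new ArrayList<String>();
        for (int i = 0; i < 25; i++) queue.add("msg-" + i);

        Random rnd = new Random(42);
        List<String> drained = new ArrayList<String>();
        // Repeat the request until an empty response, as the docs advise.
        while (true) {
            List<String> batch = receive(queue, 10, rnd);
            if (batch.isEmpty()) break;
            drained.addAll(batch);
        }
        System.out.println("drained " + drained.size() + " messages"); // drained 25 messages
    }
}
```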
There is a comprehensive explanation for this (arguably rather idiosyncratic) behaviour in the SQS reference documentation.
SQS stores copies of messages on multiple servers and receive message requests are made to these servers with one of two possible strategies,
Short Polling : The default behaviour, only a subset of the servers (based on a weighted random distribution) are queried.
Long Polling : Enabled by setting the WaitTimeSeconds attribute to a non-zero value, all of the servers are queried.
In practice, for my limited tests, I always seem to get one message with short polling just as you did.
I had the same problem. What is the Receive Message Wait Time for your queue set to? When mine was 0, it only returned 1 message even if there were 8 in the queue. When I increased the Receive Message Wait Time, I got all of them. Seems kind of buggy to me.
I was just trying the same thing, and with the help of the two attributes setMaxNumberOfMessages and setWaitTimeSeconds I was able to get 10 messages.
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(myQueueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);
receiveMessageRequest.setWaitTimeSeconds(20);
Snapshot of output:
Receiving messages from TestQueue.
Number of messages:10
Message
MessageId: 31a7c669-1f0c-4bf1-b18b-c7fa31f4e82d
...
receiveMessageRequest.withMaxNumberOfMessages(10);
Just to be clear, the more practical use of this would be to add to your constructor like this:
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(queueUrl).withMaxNumberOfMessages(10);
Otherwise, you might as well just do:
receiveMessageRequest.setMaxNumberOfMessages(10);
That being said, changing this won't help the original problem.
Thanks Caoilte!
I faced this issue too, and finally solved it by using long polling, following the configuration here:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-long-polling-for-queue.html
Unfortunately, to use long polling, you must create your queue as a FIFO one; I tried a standard queue with no luck. When receiving, you also need to set MaxNumberOfMessages. So my code looks like:
ReceiveMessageRequest receive_request = new ReceiveMessageRequest()
.withQueueUrl(QUEUE_URL)
.withWaitTimeSeconds(20)
.withMaxNumberOfMessages(10);
Although it's solved, it still feels too weird. AWS should definitely provide a neater API for this kind of basic receive operation.
From my point of view, AWS has many cool features but not good APIs; it's like those guys are rushing things out all the time.
For small task lists I use a FIFO queue, as in stackoverflow.com/a/55149351/13678017. For example, a modified AWS tutorial:
// Create a queue.
System.out.println("Creating a new Amazon SQS FIFO queue called " + "MyFifoQueue.fifo.\n");
final Map<String, String> attributes = new HashMap<>();
// A FIFO queue must have the FifoQueue attribute set to true.
attributes.put("FifoQueue", "true");
/*
* If the user doesn't provide a MessageDeduplicationId, generate a
* MessageDeduplicationId based on the content.
*/
attributes.put("ContentBasedDeduplication", "true");
// The FIFO queue name must end with the .fifo suffix.
final CreateQueueRequest createQueueRequest = new CreateQueueRequest("MyFifoQueue4.fifo")
.withAttributes(attributes);
final String myQueueUrl = sqs.createQueue(createQueueRequest).getQueueUrl();
// List all queues.
System.out.println("Listing all queues in your account.\n");
for (final String queueUrl : sqs.listQueues().getQueueUrls()) {
System.out.println(" QueueUrl: " + queueUrl);
}
System.out.println();
// Send a message.
System.out.println("Sending a message to MyQueue.\n");
for (int i = 0; i < 4; i++) {
    var request = new SendMessageRequest()
            .withQueueUrl(myQueueUrl)
            .withMessageBody("message " + i)
            .withMessageGroupId("userId1");
    sqs.sendMessage(request);
}
for (int i = 0; i < 6; i++) {
    var request = new SendMessageRequest()
            .withQueueUrl(myQueueUrl)
            .withMessageBody("message " + i)
            .withMessageGroupId("userId2");
    sqs.sendMessage(request);
}
// Receive messages.
System.out.println("Receiving messages from MyQueue.\n");
var receiveMessageRequest = new ReceiveMessageRequest(myQueueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);
receiveMessageRequest.setWaitTimeSeconds(20);
// what receive?
receiveMessageRequest.withMessageAttributeNames("userId2");
final List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (final Message message : messages) {
System.out.println("Message");
System.out.println(" MessageId: "
+ message.getMessageId());
System.out.println(" ReceiptHandle: "
+ message.getReceiptHandle());
System.out.println(" MD5OfBody: "
+ message.getMD5OfBody());
System.out.println(" Body: "
+ message.getBody());
for (final Entry<String, String> entry : message.getAttributes()
.entrySet()) {
System.out.println("Attribute");
System.out.println(" Name: " + entry
.getKey());
System.out.println(" Value: " + entry
.getValue());
}
}
Here's a workaround: you can call the receiveMessageFromSQS method asynchronously.
bulkReceiveFromSQS (queueUrl, totalMessages, asyncLimit, batchSize, visibilityTimeout, waitTime, callback) {
batchSize = Math.min(batchSize, 10);
let self = this,
noOfIterations = Math.ceil(totalMessages / batchSize);
async.timesLimit(noOfIterations, asyncLimit, function(n, next) {
self.receiveMessageFromSQS(queueUrl, batchSize, visibilityTimeout, waitTime,
function(err, result) {
if (err) {
return next(err);
}
return next(null, _.get(result, 'Messages'));
});
}, function (err, listOfMessages) {
if (err) {
return callback(err);
}
listOfMessages = _.flatten(listOfMessages).filter(Boolean);
return callback(null, listOfMessages);
});
}
It will return an array with the given number of messages.

JAVA MYSQL chat performance issue with 100 users

I'm trying to develop a client-server chat application using Java servlets, MySQL (InnoDB engine) and the Jetty server. I tested the connection code with 100 simulated users hitting the server at once using JMeter, but the average time for all of them to get connected was 40 seconds, with a minimum per-thread time of 2 seconds and a maximum of 80 seconds. My connection database table has two columns, connect(user, stranger), and my servlet code is shown below. I'm using the InnoDB engine for row-level locking, and I also use an explicit write lock (SELECT ... FOR UPDATE) inside the transaction. I loop the transaction if it rolls back due to deadlock, until it executes at least once. Once two users get connected, they update each other's stranger column with their randomly generated unique numbers.
I'm using c3p0 connection pooling with a minimum of 100 connections open, and Jetty with a minimum of 100 threads.
Please help me identify the bottlenecks, or the tools needed to find them.
import java.io.*;
import java.util.*;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.naming.*;
import javax.sql.*;

public class connect extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws java.io.IOException {
        String unumber = null;
        String snumber = null;
        String status = null;
        String uage = null, usex = null, ulocation = null, uaslmode = null;
        int counttransaction = 0;
        InitialContext contxt1 = null;
        DataSource ds1 = null;
        Connection conxn1 = null;
        PreparedStatement stmt1 = null;
        ResultSet rs1 = null;
        PreparedStatement stmt2 = null;
        InitialContext contxt3 = null;
        DataSource ds3 = null;
        Connection conxn3 = null;
        PreparedStatement stmt3 = null;
        ResultSet rs3 = null;
        PreparedStatement stmt4 = null;
        ResultSet rs4 = null;
        PreparedStatement stmt5 = null;
        boolean checktransaction = true;

        unumber = req.getParameter("number"); // GET THE USER's UNIQUE NUMBER
        try {
            contxt1 = new InitialContext();
            ds1 = (DataSource) contxt1.lookup("java:comp/env/jdbc/user");
            conxn1 = ds1.getConnection();
            stmt1 = conxn1.prepareStatement("SELECT * FROM profiles WHERE number=?"); // GETTING USER DATA FROM PROFILE
            stmt1.setString(1, unumber);
            rs1 = stmt1.executeQuery();
            if (rs1.next()) {
                res.getWriter().println("user found in PROFILE table.........");
                uage = rs1.getString("age");
                usex = rs1.getString("sex");
                ulocation = rs1.getString("location");
                uaslmode = rs1.getString("aslmode");
                stmt1.close();
                stmt1 = null;
                conxn1.close();
                conxn1 = null;
                contxt3 = new InitialContext();
                ds3 = (DataSource) contxt3.lookup("java:comp/env/jdbc/chat");
                conxn3 = ds3.getConnection();
                conxn3.setAutoCommit(false);
                while (checktransaction) {
                    // TRANSACTION STARTS HERE
                    try {
                        stmt2 = conxn3.prepareStatement("INSERT INTO " + ulocation + " (user,stranger) VALUES (?,'')"); // INSERTING RECORD INTO LOCAL CHAT TABLE
                        stmt2.setString(1, unumber);
                        stmt2.executeUpdate();
                        stmt2.close();
                        stmt2 = null;
                        res.getWriter().println("inserting row into LOCAL CHAT TABLE.........");
                        System.out.println("transaction starting........." + unumber);
                        stmt3 = conxn3.prepareStatement("SELECT user FROM " + ulocation + " WHERE (stranger='' && user!=?) LIMIT 1 FOR UPDATE");
                        stmt3.setString(1, unumber); // SEARCHING FOR STRANGER
                        rs3 = stmt3.executeQuery();
                        if (rs3.next()) { // stranger found
                            stmt4 = conxn3.prepareStatement("SELECT stranger FROM " + ulocation + " WHERE user=?");
                            stmt4.setString(1, unumber); // CHECKING FOR USER STATUS BEFORE CONNECTING TO STRANGER
                            rs4 = stmt4.executeQuery();
                            if (rs4.next()) {
                                status = rs4.getString("stranger");
                            }
                            stmt4.close();
                            stmt4 = null;
                            if (status.equals("")) { // user status is also null
                                snumber = rs3.getString("user");
                                stmt5 = conxn3.prepareStatement("UPDATE " + ulocation + " SET stranger=? WHERE user=?"); // CONNECTING USER AND STRANGER
                                stmt5.setString(1, snumber);
                                stmt5.setString(2, unumber);
                                stmt5.executeUpdate();
                                stmt5.setString(2, snumber);
                                stmt5.setString(1, unumber);
                                stmt5.executeUpdate();
                                stmt5.close();
                                stmt5 = null;
                            }
                        } // end of stranger found
                        stmt3.close();
                        stmt3 = null;
                        conxn3.commit(); // TRANSACTION ENDING
                        checktransaction = false;
                    } // END OF TRY INSIDE WHILE
                    catch (java.sql.SQLTransactionRollbackException e) {
                        System.out.println("transaction restarted......." + unumber);
                        counttransaction = counttransaction + 1;
                    }
                } // END OF WHILE LOOP
                conxn3.close();
                conxn3 = null;
            } // END OF USER FOUND IN PROFILE TABLE
        } // end of try
        catch (java.sql.SQLException sqlexe) {
            try { conxn3.rollback(); }
            catch (java.sql.SQLException exe) { conxn3 = null; }
            sqlexe.printStackTrace();
            res.getWriter().println("UNABLE TO GET CONNECTION FROM POOL!");
        }
        catch (javax.naming.NamingException namexe) {
            namexe.printStackTrace();
            res.getWriter().println("DATA SOURCE LOOK UP FAILED!");
        }
    }
}
How many users do you have? Can you load them all into memory first and do a memory lookup?
If you separate your DB layer from your presentation layer, this is something you can change without changing the servlet (as it shouldn't care where the data comes from).
If you use Java memory, it shouldn't take more than 20 ms per user.
Here is a test which creates one million profiles in memory, looks them up and creates chat entries, which are removed later. The average time per operation was 640 ns (nanoseconds, or billionths of a second).
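To illustrate the separation, the servlet could depend on a small store interface rather than issuing SQL itself. This is a minimal sketch, not code from the question: the `UserStore` name and its two methods are made up for illustration, and a JDBC-backed implementation could later replace the in-memory one without the servlet changing.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical abstraction the servlet would call instead of running SQL directly.
interface UserStore {
    String strangerOf(String user);             // who this user chats with, or null
    void connect(String user, String stranger); // pair two users
}

// In-memory implementation; a JDBC-backed UserStore could be swapped in later.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> pairs = new HashMap<String, String>();

    public synchronized String strangerOf(String user) {
        return pairs.get(user);
    }

    public synchronized void connect(String user, String stranger) {
        pairs.put(user, stranger);
        pairs.put(stranger, user);
    }
}

public class UserStoreDemo {
    public static void main(String[] args) {
        UserStore store = new InMemoryUserStore();
        store.connect("alice", "bob");
        System.out.println(store.strangerOf("alice")); // bob
        System.out.println(store.strangerOf("bob"));   // alice
    }
}
```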
import java.util.LinkedHashMap;
import java.util.Map;
public class Main {
    public static void main(String... args) {
        UserDB userDB = new UserDB();
        // add 1,000,000 users
        for (int i = 0; i < 1000000; i++)
            userDB.addUser(new Profile(i,
                    "user" + i,
                    (short) (18 + i % 90),
                    i % 2 == 0 ? Profile.Sex.Male : Profile.Sex.Female,
                    "here", "mode"));
        // look up pairs of users and add a chat session for each pair.
        long start = System.nanoTime();
        int operations = 0;
        for (int i = 0; i < userDB.profileCount(); i += 2) {
            Profile p0 = userDB.getProfileByNumber(i);
            operations++;
            Profile p1 = userDB.getProfileByNumber(i + 1);
            operations++;
            userDB.chatsTo(i, i + 1);
            operations++;
        }
        for (int i = 0; i < userDB.profileCount(); i += 2) {
            userDB.endChat(i);
            operations++;
        }
        long time = System.nanoTime() - start;
        System.out.printf("Average lookup and update time per operation was %d ns%n", time / operations);
    }
}
class UserDB {
    private final Map<Long, Profile> profileMap = new LinkedHashMap<Long, Profile>();
    private final Map<Long, Long> chatsWith = new LinkedHashMap<Long, Long>();

    public void addUser(Profile profile) {
        profileMap.put(profile.number, profile);
    }

    public Profile getProfileByNumber(long number) {
        return profileMap.get(number);
    }

    public void chatsTo(long number1, long number2) {
        chatsWith.put(number1, number2);
        chatsWith.put(number2, number1);
    }

    public void endChat(long number) {
        // remove this side's entry as well, so ended chats don't leak in the map
        Long other = chatsWith.remove(number);
        if (other == null) return;
        Long number2 = chatsWith.get(other);
        if (number2 != null && number2 == number)
            chatsWith.remove(other);
    }

    public int profileCount() {
        return profileMap.size();
    }
}
class Profile {
    final long number;
    final String name;
    final short age;
    final Sex sex;
    final String location;
    final String aslmode;

    Profile(long number, String name, short age, Sex sex, String location, String aslmode) {
        this.number = number;
        this.name = name;
        this.age = age;
        this.sex = sex;
        this.location = location;
        this.aslmode = aslmode;
    }

    enum Sex {Male, Female}
}
prints
Average lookup and update time per operation was 636 ns
If you need this to be faster, you could look at using Trove4j, which could be twice as fast in this case. Given this is likely to be fast enough already, I would try to keep things simple.
Have you considered caching reads and batching writes?
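Batching writes can be sketched with a simple buffer that collects updates and flushes them in bulk every N entries. This is an illustrative sketch, not production code: the `WriteBatcher` name is made up, and a real version would flush via JDBC's `PreparedStatement.addBatch()`/`executeBatch()` instead of just clearing the list.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative write-behind buffer: collects updates and flushes them in bulk.
public class WriteBatcher {
    private final int batchSize;
    private final List<String> pending = new ArrayList<String>();
    int flushes; // counts bulk flushes, for demonstration only

    public WriteBatcher(int batchSize) {
        this.batchSize = batchSize;
    }

    public synchronized void write(String update) {
        pending.add(update);
        if (pending.size() >= batchSize)
            flush();
    }

    public synchronized void flush() {
        if (pending.isEmpty()) return;
        // In a real application this would be one JDBC batch:
        //   for (String u : pending) stmt.addBatch(u);
        //   stmt.executeBatch();
        pending.clear();
        flushes++;
    }

    public static void main(String[] args) {
        WriteBatcher batcher = new WriteBatcher(100);
        for (int i = 0; i < 1000; i++)
            batcher.write("UPDATE chat SET stranger='x' WHERE user='" + i + "'");
        batcher.flush(); // flush any remainder
        System.out.println("flushes: " + batcher.flushes); // 10
    }
}
```

One round trip per 100 updates instead of one per update is where the saving comes from; the trade-off is that a crash can lose the unflushed tail of the buffer.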
I'm not sure how you can realistically expect anyone to determine where the bottle-necks are by merely looking at the source code.
To find the bottlenecks, you should run your app and the load test with a profiler attached, such as JVisualVM or YourKit or JProfiler. This will tell you exactly how much time is spent in each area of the code.
The only thing that anyone can really critique from looking at your code is the basic architecture:
Why are you looking up the DataSource on each doGet()?
Why are you using transactions for what appears to be unrelated database insertions and queries?
Is using a RDBMS to back a chat system really the best idea in the first place?
If your response times are that high, you need to properly index your DB tables. Based on the times you provided, I will assume this was not done. You need to speed up your reads and writes.
Look up execution plans and how to read them. An execution plan will show you if/when indexes are being used with your queries, and whether you are performing seeks or scans etc. on the tables. Using these, you can tweak your queries/indexes/tables to be more optimal.
As others have stated, an RDBMS won't be your best option in large-scale applications, but since you are just starting out it should be OK until you learn more.
Learn to properly set up those tables and you should see your deadlock counts and response times go down.
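As a hedged sketch of what that might look like for the stranger-matching query in the question: the real table name is held in the `ulocation` variable, so `chat_table` below is a placeholder, and the exact columns to index should be confirmed against an actual execution plan.

```sql
-- The servlet filters on stranger and user, so a composite index can help
-- the SELECT user ... WHERE (stranger='' AND user!=?) ... FOR UPDATE query:
CREATE INDEX idx_chat_stranger_user ON chat_table (stranger, user);

-- EXPLAIN shows whether the index is actually being used:
EXPLAIN SELECT user FROM chat_table
WHERE stranger = '' AND user != 'u1' LIMIT 1;
```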