Does anyone know how to do the TURN portion of Ice4j? I've managed to code it so that it works when the phone is on WiFi, but not when it's on the mobile network.
I'm sending the agent info via TCP and then building the connection manually instead of using a signalling process. The TCP connection already works fine, so I don't think it's a TCP issue. Maybe I'm building the agent wrong?
I know that you're supposed to use a TURN server if STUN doesn't work, and I provided a large list of public TURN servers, but I might be missing something. Maybe the packets aren't being sent out properly?
Error (mostly Failed to send ALLOCATE-REQUEST(0x3)):
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createMediaStream
INFO: Create media stream for data
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createComponent
INFO: Create component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent gatherCandidates
INFO: Gather candidates for component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.HostCandidateHarvester harvest
INFO: End candidate harvest within 160 ms, for org.ice4j.ice.harvest.HostCandidateHarvester, component: 1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.StunCandidateHarvest sendRequest
INFO: Failed to send ALLOCATE-REQUEST(0x3)[attrib.count=3 len=32 tranID=0x9909DC6648016A67FDD4B2D8] through /192.168.0.8:5001/udp to stun2.l.google.com:19302:5001/udp
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:c8ce:5a17:c339:cc40%4:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:380d:2a4c:b350:eea8%8:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient updateCheckListAndTimerStates
INFO: CheckList will failed in a few seconds if nosucceeded checks come
Sep 11, 2014 3:36:17 PM org.ice4j.ice.ConnectivityCheckClient$1 run
INFO: CheckList for stream data FAILED
Sep 11, 2014 3:36:17 PM org.ice4j.ice.Agent checkListStatesUpdated
INFO: ICE state is FAILED
Code (both the server and the client sides use similar code to this):
Agent agent = new Agent();
agent.setControlling(false);

StunCandidateHarvester stunHarv = new StunCandidateHarvester(
        new TransportAddress("sip-communicator.net", port, Transport.UDP));
StunCandidateHarvester stun6Harv = new StunCandidateHarvester(
        new TransportAddress("ipv6.sip-communicator.net", port, Transport.UDP));
agent.addCandidateHarvester(stunHarv);
agent.addCandidateHarvester(stun6Harv);

// Note: the entries below that already contain a port ("stun.l.google.com:19302" etc.)
// are passed as plain hostnames, which produces addresses like
// "stun2.l.google.com:19302:5001" (see the ALLOCATE-REQUEST failure in the log above).
String[] hostnames = new String[] {
        "130.79.90.150",
        "2001:660:4701:1001:230:5ff:fe1a:805f",
        "jitsi.org",
        "numb.viagenie.ca",
        "stun01.sipphone.com",
        "stun.ekiga.net",
        "stun.fwdnet.net",
        "stun.ideasip.com",
        "stun.iptel.org",
        "stun.rixtelecom.se",
        "stun.schlund.de",
        "stun.l.google.com:19302",
        "stun1.l.google.com:19302",
        "stun2.l.google.com:19302",
        "stun3.l.google.com:19302",
        "stun4.l.google.com:19302",
        "stunserver.org",
        "stun.softjoys.com",
        "stun.voiparound.com",
        "stun.voipbuster.com",
        "stun.voipstunt.com",
        "stun.voxgratia.org",
        "stun.xten.com" };

LongTermCredential longTermCredential = new LongTermCredential("guest", "anon");
for (String hostname : hostnames) {
    agent.addCandidateHarvester(new TurnCandidateHarvester(
            new TransportAddress(hostname, port, Transport.UDP), longTermCredential));
}
// Build a stream for the agent
IceMediaStream stream = agent.createMediaStream("data");
try {
    Component c = agent.createComponent(stream, Transport.UDP, port, port, port + 100);

    String response = "";
    List<LocalCandidate> localCandidates = c.getLocalCandidates();
    for (Candidate<?> can : localCandidates) {
        response += "||" + can.toString();
    }
    response = "Video||" + agent.getLocalUfrag() + "||" + agent.getLocalPassword()
            + "||" + c.getDefaultCandidate().toString() + response;
    System.out.println("Server >>> " + response);

    // client is the already-connected TCP signalling socket
    DataOutputStream outStream = new DataOutputStream(client.getOutputStream());
    outStream.write(response.getBytes("UTF-8"));
    outStream.flush();

    // info is the peer's message in the same "||"-separated format as above,
    // received over TCP and split on "||"
    List<IceMediaStream> streams = agent.getStreams();
    for (IceMediaStream localStream : streams) {
        List<Component> localComponents = localStream.getComponents();
        for (Component localComponent : localComponents) {
            for (int i = 3; i < info.length; i++) {
                String[] detail = info[i].split(" ");
                // 0: foundation
                // 1: component ID
                // 2: transport
                // 3: priority
                // 4: address (needed with the port to create a TransportAddress)
                // 5: port (needed with the address to create a TransportAddress)
                // 6: filler ("Type" is the next field)
                // 7: candidate type
                String[] foundation = detail[0].split(":"); // turn "Candidate:#" into "Candidate" and "#"; we use "#"
                localComponent.addRemoteCandidate(new RemoteCandidate(
                        new TransportAddress(detail[4], Integer.valueOf(detail[5]), Transport.UDP),
                        localComponent, CandidateType.HOST_CANDIDATE,
                        foundation[1], Long.valueOf(detail[3]), null));
            }
            String[] defaultDetail = info[3].split(" ");
            String[] defaultFoundation = defaultDetail[0].split(":");
            localComponent.setDefaultRemoteCandidate(new RemoteCandidate(
                    new TransportAddress(defaultDetail[4], Integer.valueOf(defaultDetail[5]), Transport.UDP),
                    localComponent, CandidateType.HOST_CANDIDATE,
                    defaultFoundation[1], Long.valueOf(defaultDetail[3]), null));
        }
        localStream.setRemoteUfrag(info[1]);
        localStream.setRemotePassword(info[2]);
    }

    agent.startConnectivityEstablishment();
    System.out.println("ICEServer <><><> Completed");
} catch (Exception e) {
    e.printStackTrace();
}
I realize now that your list of TURN servers seems to mostly consist of STUN servers (I'm not sure about the first two). If anything, they should be added as STUN servers:
agent.addCandidateHarvester(
        new StunCandidateHarvester(
                new TransportAddress(
                        InetAddress.getByName("stun.l.google.com"),
                        19302,
                        Transport.UDP)));
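If you actually need relaying, the TurnCandidateHarvester has to point at a server that really speaks TURN and accepts your credentials. A sketch with placeholder values (turn.example.com, user and secret are hypothetical, not a real server):
LongTermCredential cred = new LongTermCredential("user", "secret"); // placeholder credentials
agent.addCandidateHarvester(new TurnCandidateHarvester(
        // hypothetical host: must be a real TURN (RFC 5766) server you control or rent
        new TransportAddress("turn.example.com", 3478, Transport.UDP), cred));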
I'm trying to run a Hazelcast ReplicatedMap spread over multiple nodes for caching. The map will have about 60,000 entries, and loading/creating an entry is really expensive.
Entries occasionally become invalid and need to be updated or removed, and new entries have to be inserted. For this I thought of a scheduled service that updates the map on a regular basis.
To prevent concurrency problems and costly duplicate creation of entries, there should be only one reload service.
To test this I made a little test case:
public class HazelTest {

    private ReplicatedMap<Long, Bean> hzMap;
    private HazelcastInstance instance;

    public HazelTest() {
        instance = Hazelcast.newHazelcastInstance();
        hzMap = instance.getReplicatedMap("UniqueName");
        IScheduledExecutorService scheduler = instance.getScheduledExecutorService("ExecutorService");
        try {
            scheduler.scheduleAtFixedRate(new BeanReloader(), 5, 10, TimeUnit.SECONDS);
        } catch (DuplicateTaskException ex) {
            System.out.println("reloader already running");
        }
    }

    public static void main(String[] args) throws Exception {
        Random random = new Random(System.currentTimeMillis());
        HazelTest test = new HazelTest();
        System.out.println("Start ...");
        long i = 0;
        try {
            while (true) {
                i++;
                Bean bean = test.hzMap.get((long) random.nextInt(1000));
                if (bean != null) {
                    if (i % 100000 == 0) {
                        System.out.println("Bean: " + bean.toString());
                    }
                    bean.setName("NewName");
                }
            }
        } finally {
            test.close();
            System.out.println("End.");
        }
    }

    public void close() {
        instance.getPartitionService().forceLocalMemberToBeSafe(5, TimeUnit.SECONDS);
        if (instance.getPartitionService().isLocalMemberSafe()) {
            instance.shutdown();
        } else {
            System.out.println("Error!!!!!");
        }
    }
}
The reloader:
public class BeanReloader implements NamedTask, Runnable, HazelcastInstanceAware, Serializable {

    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void run() {
        System.out.println("Bean Reload ....");
        for (long i = 0; i < 200; i++) {
            Bean bean = new Bean(i, "Bean " + i);
            ReplicatedMap<Object, Object> map = hazelcastInstance.getReplicatedMap("UniqueName");
            map.put(i, bean);
        }
        System.out.println("Reload end.");
    }

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public String getName() {
        return "BeanReloader";
    }
}
The bean has only two fields for test purposes:
public class Bean implements Serializable {
    private long id;
    private String name;
    // getters and setters
}
Now when I run this in different terminals on my machine (or over the network; it makes no difference) the nodes show the desired behaviour: the service runs on only one node at a time, and when I start another node I get a DuplicateTaskException.
But when the node currently executing the task goes down, the service is not always moved to another node. There is about a 2/3 chance that the service is completely lost and not running on any of the remaining nodes.
Now my questions: is this behaviour normal? Do I have to check for the running service myself, and if so, via what API?
Or am I getting something wrong, and is there another way to achieve my goals? Is a Hazelcast replicated map the right approach in the first place?
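For the "check it myself" route, one candidate is the scheduler's own introspection API. A sketch against the Hazelcast 3.8 API (assuming IScheduledExecutorService.getAllScheduledFutures() and ScheduledTaskHandler.getTaskName(); a safety net, not a confirmed fix): each node periodically looks for the named task among all known futures and re-submits it when it is gone.
IScheduledExecutorService scheduler = instance.getScheduledExecutorService("ExecutorService");
boolean found = false;
for (List<IScheduledFuture<Object>> futures : scheduler.<Object>getAllScheduledFutures().values()) {
    for (IScheduledFuture<Object> future : futures) {
        if ("BeanReloader".equals(future.getHandler().getTaskName())) {
            found = true; // the task is still scheduled somewhere in the cluster
        }
    }
}
if (!found) {
    try {
        scheduler.scheduleAtFixedRate(new BeanReloader(), 5, 10, TimeUnit.SECONDS);
    } catch (DuplicateTaskException e) {
        // another node re-submitted it first; nothing to do
    }
}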
Edit: I edited the code above because the simplifications I had made for posting no longer reproduced the error.
There are no errors in the log when a node leaves and the service is not restored. The only logging is on one node:
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [X.X.X.X]:5702 [dev] [3.8] Connection[id=8, /X.X.X.X:5702->/X.X.X.X:54226, endpoint=[X.X.X.X]:5703, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Connection refused to address /X.X.X.X:5703]
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Connection refused to address /X.X.X.X:5703]
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Connection refused to address /X.X.X.X:5703]
Bean: Bean{id=893, name='NewName'}
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Connecting to /X.X.X.X:5703, timeout: 0, bind-any: true
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.InitConnectionTask
INFO: [X.X.X.X]:5702 [dev] [3.8] Could not connect to: /X.X.X.X:5703. Reason: SocketException[Connection refused to address /X.X.X.X:5703]
Mar 06, 2017 9:54:08 AM com.hazelcast.nio.tcp.TcpIpConnectionMonitor
WARNING: [X.X.X.X]:5702 [dev] [3.8] Removing connection to endpoint [X.X.X.X]:5703 Cause => java.net.SocketException {Connection refused to address /X.X.X.X:5703}, Error-Count: 5
Mar 06, 2017 9:54:08 AM com.hazelcast.internal.cluster.ClusterService
INFO: [X.X.X.X]:5702 [dev] [3.8] Removing Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
Mar 06, 2017 9:54:08 AM com.hazelcast.internal.cluster.ClusterService
INFO: [X.X.X.X]:5702 [dev] [3.8]
Members [3] {
    Member [X.X.X.X]:5702 - 0ec92bb9-6330-4a3d-90ac-0ed374fd266c this
    Member [X.X.X.X]:5701 - 0d1a832e-e1e8-4cad-8546-5a97c8c052c3
    Member [X.X.X.X]:5704 - bd18c7f9-892e-430a-b1ab-da740cc7a6c5
}
Mar 06, 2017 9:54:08 AM com.hazelcast.transaction.TransactionManagerService
INFO: [X.X.X.X]:5702 [dev] [3.8] Committing/rolling-back alive transactions of Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff, UUID: 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
Mar 06, 2017 9:54:08 AM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [X.X.X.X]:5702 [dev] [3.8] Re-partitioning cluster data... Migration queue size: 204
and on the others basically the same, except for the last line:
INFO: [X.X.X.X]:5701 [dev] [3.8] Committing/rolling-back alive transactions of Member [X.X.X.X]:5703 - 875ccc3a-dc10-4c21-815a-4b57ae41a6ff, UUID: 875ccc3a-dc10-4c21-815a-4b57ae41a6ff
I did a test run with debug logs on 4 nodes (hz1-4.log). Node 4 picked up the schedule. After one run I killed that node (log timestamp 15:28:50,955 in hz4.log). Here are the four logs from that run (note: due to the vast amount of logging I only pasted the logs from timestamp 15:28:50,955 on ...):
hz3.log
hz2.log
hz1.log
hz4.log
With that setup I can reliably reproduce the failure. I just start the 4 nodes with
mvn exec:java -Dexec.mainClass="de.tle.products.HazelTest" -Dnumber=X
where X is the number of the node (just a variable in the log4j config file). After all four nodes are running, I kill the one currently running the schedule, and then the schedule is not running on any of the remaining nodes.
I recently started programming with Esper, and I have a smart wearable that sends pedometer data to my laptop, which I then process with Esper. But suppose I have multiple smart wearables, each with a unique MAC address. I use time windows, and my question is: how can I change my rule file so that the rules only fire for events with the same MAC address, and take appropriate action based on that MAC address? My initialization and rule are:
Configuration cepConfig = new Configuration();
cepConfig.addEventType("Steps", Steps.class.getName());
// Set up the engine
EPServiceProvider cep = EPServiceProviderManager.getProvider("myCEPEngine", cepConfig);
EPRuntime cepRT = cep.getEPRuntime();
// Register an EPL statement
EPAdministrator cepSteps1 = cep.getEPAdministrator();
EPStatement cepStatementSteps1 = cepSteps1.createEPL("select * from "
        + "Steps().win:time(1 hour) "
        + "group by macAddress "
        + "having sum(max(steps)-min(steps)) < 100");
cepStatementSteps1.addListener(new rule1Listener());
My Steps class has the following fields:
double steps;
String stepsTimestamp;
String macAddress;
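For reference, Esper resolves the property names used in the EPL (steps, macAddress) through JavaBean getters, so the Steps class is presumably shaped roughly like this (a sketch; the constructor parameter order is assumed from the sendEvent calls below):
public class Steps {
    private final double steps;
    private final String stepsTimestamp;
    private final String macAddress;

    public Steps(double steps, String stepsTimestamp, String macAddress) {
        this.steps = steps;
        this.stepsTimestamp = stepsTimestamp;
        this.macAddress = macAddress;
    }

    public double getSteps() { return steps; }
    public String getStepsTimestamp() { return stepsTimestamp; }
    public String getMacAddress() { return macAddress; }
}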
And this is how I insert the events:
Steps steps0 = new Steps(0, new Date(timeStamp).toString(), "K5E45H778");
cepRT.sendEvent(steps0);
Steps steps00 = new Steps(0, new Date(timeStamp).toString(), "LD24ESF74");
cepRT.sendEvent(steps00);
Steps steps1 = new Steps(25, new Date(timeStamp).toString(), "K5E45H778");
cepRT.sendEvent(steps1);
Steps steps2 = new Steps(50, new Date(timeStamp).toString(), "LD24ESF74");
cepRT.sendEvent(steps2);
Steps steps3 = new Steps(55, new Date(timeStamp).toString(), "K5E45H778");
cepRT.sendEvent(steps3);
Steps steps4 = new Steps(105, new Date(timeStamp).toString(), "LD24ESF74");
cepRT.sendEvent(steps4);
Steps steps5 = new Steps(75, new Date(timeStamp).toString(), "K5E45H778");
cepRT.sendEvent(steps5);
Steps steps6 = new Steps(110, new Date(timeStamp).toString(), "K5E45H778");
cepRT.sendEvent(steps6);
This is my output:
Sending tick: Steps: 0.0 Timestamp: Mon Mar 14 18:13:23 CET 2016 Mac Address: K5E45H778
->Rule 1 fired: K5E45H778
Sending tick: Steps: 0.0 Timestamp: Mon Mar 14 18:18:23 CET 2016 Mac Address: LD24ESF7474
->Rule 1 fired: LD24ESF7474
Sending tick: Steps: 25.0 Timestamp: Mon Mar 14 18:23:23 CET 2016 Mac Address: K5E45H778
->Rule 1 fired: K5E45H778
Sending tick: Steps: 105.0 Timestamp: Mon Mar 14 18:28:23 CET 2016 Mac Address: LD24ESF7474
Sending tick: Steps: 55.0 Timestamp: Mon Mar 14 18:33:23 CET 2016 Mac Address: K5E45H778
->Rule 1 fired: K5E45H778
Sending tick: Steps: 75.0 Timestamp: Mon Mar 14 18:38:23 CET 2016 Mac Address: K5E45H778
Sending tick: Steps: 110.0 Timestamp: Mon Mar 14 18:43:23 CET 2016 Mac Address: K5E45H778
Why doesn't the rule fire for the next-to-last event of 75 steps?
The SQL-standard "group by" clause performs aggregation per group, so just adding "group by macAddress" should get it done.
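For instance, a minimal sketch against the Steps type above (assuming the goal is a per-device range check over the last hour; the statement then keeps one aggregation row per macAddress):
EPStatement perDevice = cepSteps1.createEPL(
        "select macAddress, max(steps) - min(steps) as delta "
      + "from Steps.win:time(1 hour) "
      + "group by macAddress "
      + "having max(steps) - min(steps) < 100");
perDevice.addListener(new rule1Listener());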
I'm developing a web service in Java EE 6 with only one method, called E1CreateAccount.
It happens that for each call I make to the method, the method executes multiple times (at least 2 times). The executions are consecutive: the next one starts when the previous one ends.
The code of the method is the following:
@WebMethod
public Object E1CreateAccount(@WebParam(name = "arg0") InboundAccountF56101IB newAccount) {
    Object ret = "KO";
    MyAction action = MyAction.CREATE;
    MyLogger.getLogger().info("Integration WebService called to Create new Account in E1, name " +
            newAccount.getCRMAccountName());
    E1AccountClientThread runner = new E1AccountClientThread();
    runner.initialize(newAccount, action);
    Thread thread = new Thread(runner);
    MyLogger.getLogger().info(thread.getId() + ": Created thread to " + action.name() +
            " Account. Account Name: " + newAccount.getCRMAccountName() + "...");
    try {
        MyLogger.getLogger().info(thread.getId() +
                ": Starting thread to call E1 Account web service, Action: " +
                action.name());
        thread.start();
        ret = "OK";
    } catch (Exception e) {
        String str = "Exception calling the thread delegate to call E1 Account web service (Action " +
                action.name() + "). Message: " + e.getMessage();
        MyLogger.getLogger().severe(str);
        ret = "KO " + str;
    }
    return ret;
}
It receives the call, logs the fact that the service has been called, and then creates a separate thread that does the task. It then always returns the string "OK".
The method works well; the only problem is the multiple execution, which is undesired for the software's purposes.
This is what I see from the logs:
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: Integration WebService called to Create new Account in E1, name THIS_IS_THE_COMPANY_NAME
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1026: Created thread to CREATE Account. Account Name: THIS_IS_THE_COMPANY_NAME...
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1026: Starting thread to call E1 Account web service, Action: CREATE
[...]
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: Integration WebService called to Create new Account in E1, name THIS_IS_THE_COMPANY_NAME
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1027: Created thread to CREATE Account. Account Name: THIS_IS_THE_COMPANY_NAME...
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1027: Starting thread to call E1 Account web service, Action: CREATE
[...]
(I substituted with [...] the logs related to what the threads actually do, because I think they are not relevant.)
The web service is deployed on WebLogic 12.1.3, and I used the JDeveloper 12.1.3 IDE.
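As an aside on the threading in the method above (a sketch, not a fix for the duplicate calls): the same fire-and-forget pattern is usually written against a shared executor rather than one raw Thread per request, which makes the thread count easy to bound and shut down. AccountTaskPool and the pool size 4 are hypothetical names/values:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AccountTaskPool {
    // Shared, bounded pool; the size 4 is an arbitrary placeholder.
    private static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public static void submit(Runnable runner) {
        POOL.submit(runner); // fire and forget, like thread.start() in the method above
    }
}
The web method would then call AccountTaskPool.submit(runner) instead of creating and starting a Thread itself.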
I am trying to crawl and parse dynamic website content using Selenium.
The website I am crawling loads its content on the page's scroll event, so I trigger the scroll event from Selenium until the end of the page is reached.
In the product phase I fetch each product's details in a loop, which also works fine, but when the iteration count reaches about 280 and above...
This is my code below:
private void init() throws IOException {
    FirefoxProfile profile = new FirefoxProfile(); // Create Firefox profile
    profile.setPreference("javascript.enabled", true); // Allow JavaScript in the browser
    WebDriver htmDriver = new FirefoxDriver(profile); // Add the profile to the FirefoxDriver
    htmDriver.get(urlTextField.getText()); // Connect to the URL from the URL text field
    htmDriver.manage().window().maximize(); // Maximize the browser window
    String count = htmDriver.findElement(By.cssSelector("#numbFound > #no-of-results-filter")).getText(); // Total product count for the category
    //System.out.println("Total Category Count : " + count);
    htmDriver.findElement(By.cssSelector(".list")).click(); // Click to view the products as a list
    htmDriver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); // Wait
    int lCount = Integer.parseInt(count);
    // Calculate the scroll length; divide by 5.0, since with integer division Math.ceil would be a no-op
    for (int i = 1; i <= Math.ceil(lCount / 5.0); i++) {
        // Generate arrow-down actions to trigger the scroll event
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        ((JavascriptExecutor) htmDriver).executeScript(
                "window.scrollBy(0,document.body.scrollHeight)", "");
        htmDriver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
    }
    // Product phase
    List<WebElement> rdata = htmDriver.findElements(By.className("product_list_view_cont")); // Each product row
    for (WebElement data : rdata) {
        String title = data.findElement(By.cssSelector(".product_list_view_heading")).getText(); // Product title
        System.out.println(title);
        // Check whether a product price is available
        boolean noPrice = data.findElements(By.cssSelector(".product_list_view_price_outer span")).isEmpty();
        if (!noPrice) {
            // Get the price of the product
            String price = data.findElement(By.cssSelector(".product_list_view_price_outer var[id^=selling-price-id-]")).getText().trim();
            System.out.println(price);
        } else {
            // If the price is not available, note it as missing
            System.out.println("No price");
        }
        String brand = data.findElement(By.cssSelector("ul.key-features li")).getText();
        System.out.println(brand);
        String brandUrl = data.findElement(By.cssSelector(".product_list_view_info_cont a")).getAttribute("href"); // Brand URL
        System.out.println(brandUrl);
        String status = data.findElement(By.cssSelector(".product_list_view_buy-outer .lfloat")).getText(); // Availability status
        System.out.println(status);
    }
}
Selenium throws an exception as follows:
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
for each iteration after some time...
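As an aside, the scrolling loop can be driven by an explicit wait instead of implicitlyWait (which only sets the element-lookup timeout; it does not pause the script). A sketch reusing the selectors above (the 10-second timeout is an arbitrary choice):
JavascriptExecutor js = (JavascriptExecutor) htmDriver;
WebDriverWait wait = new WebDriverWait(htmDriver, 10);
int seen = htmDriver.findElements(By.className("product_list_view_cont")).size();
while (true) {
    js.executeScript("window.scrollBy(0, document.body.scrollHeight);");
    final int before = seen;
    try {
        // wait until more product rows have been loaded than we had before
        wait.until(new ExpectedCondition<Boolean>() {
            @Override
            public Boolean apply(WebDriver d) {
                return d.findElements(By.className("product_list_view_cont")).size() > before;
            }
        });
    } catch (TimeoutException e) {
        break; // nothing new loaded within the timeout: assume the end of the page
    }
    seen = htmDriver.findElements(By.className("product_list_view_cont")).size();
}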
If I run the query through Quantum DB I get the desired result, but when the same query is used in the code, no error is thrown and I get no result.
Query:
final String RECORD_LIST_QUERY = "SELECT COUNT(*) FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = '?'";
final String FZEG_NR_QUERY="SELECT WEU.FZEG_NR, WEU.ZZAA_AUFT_NR FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = '?'";
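Note in passing: in JDBC, a ? inside single quotes is a literal question mark, not a bind marker, so a placeholder is written bare. The same queries with real bind parameters would look like this:
// bare ? so that queryForList/queryForInt can bind the order number
final String RECORD_LIST_QUERY = "SELECT COUNT(*) FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = ?";
final String FZEG_NR_QUERY = "SELECT WEU.FZEG_NR, WEU.ZZAA_AUFT_NR FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = ?";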
Method:
public OrderDetailsDto FetchOrderData(OrderDetailsDto orderDetailsDto) {
    System.out.println("DAO:through DTO: Order Nr before running query " + orderDetailsDto.getOrderNr());
    Object args = orderDetailsDto.getOrderNr();
    OrderDetailsDto orderDetailsDtoCopy = new OrderDetailsDto();
    System.out.println("Data source is " + getDataSource());
    try {
        List<Map<String, Object>> result = getJdbcTemplate().queryForList(IOrderDao.FZEG_NR_QUERY, args);
        if (result.listIterator().hasNext()) {
            System.out.println("DAO:through DTO: Order Nr before running query " + args);
            System.out.println("Result to check query is fired " + result.get(result.listIterator().nextIndex()));
        } else {
            System.out.println("DAO:through DTO: Order Nr before running query " + args);
            int cnt = getJdbcTemplate().queryForInt(IOrderDao.RECORD_LIST_QUERY, args);
            System.out.println("count of record is " + cnt);
        }
        for (Map<String, Object> record : result) {
            System.out.println("In Dao FOR Loop");
            orderDetailsDtoCopy.setOrderNr("" + record.get("ZZAA_AUFT_NR"));
            orderDetailsDtoCopy.setFZEGNr("" + record.get("FZEG_NR"));
        }
        System.out.println("In dao order number is set as " + orderDetailsDtoCopy.getOrderNr());
        System.out.println("In dao FZEG number is set as " + orderDetailsDtoCopy.getFZEGNr());
    } catch (DataAccessException e) {
        e.printStackTrace();
    }
    return orderDetailsDtoCopy;
}
Output:
Executing D:\Users\rpashank\Documents\GOorderInfo\dist\run388364612\GOorderInfo.jar using platform C:\Program Files\Java\jdk1.7.0_51\jre/bin/java
Jun 11, 2014 11:49:54 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@367dd4d9: startup date [Wed Jun 11 11:49:54 IST 2014]; root of context hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from file [D:\Users\rpashank\Documents\GOorderInfo\src\main\resources\spring-config.xml]
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@690d76d5: defining beans [dataSource,jdbcTemplate,transactionManager,transactionTemplate,orderService,orderDao]; root of factory hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName
INFO: Loaded JDBC driver: com.ibm.db2.jcc.DB2Driver
OrderController orderNr before calling query 0729100528
Jun 11, 2014 11:49:54 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@40da178f: startup date [Wed Jun 11 11:49:54 IST 2014]; root of context hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from file [D:\Users\rpashank\Documents\GOorderInfo\src\main\resources\spring-config.xml]
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@162292fd: defining beans [dataSource,jdbcTemplate,transactionManager,transactionTemplate,orderService,orderDao]; root of factory hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName
INFO: Loaded JDBC driver: com.ibm.db2.jcc.DB2Driver
DAO:through DTO: Order Nr before running query 0729100528
Data source is org.springframework.jdbc.datasource.DriverManagerDataSource@997f31b
DAO:through DTO: Order Nr before running query 0729100528
count of record is 0
In dao order number is set as null
In dao FZEG number is set as null
Order Controller ORDER Nr. through DTO after query null
OrderController FZEG through DTO after query null