Getting Null results for query on DB2 using Spring - java

If I run the query through QuantumDB I get the desired result, but when the same query is used in the code, no error is thrown and I get no result.
Query:
final String RECORD_LIST_QUERY = "SELECT COUNT(*) FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = '?'";
final String FZEG_NR_QUERY = "SELECT WEU.FZEG_NR, WEU.ZZAA_AUFT_NR FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = '?'";
Method:
public OrderDetailsDto FetchOrderData(OrderDetailsDto orderDetailsDto) {
    System.out.println("DAO:through DTO: Order Nr before running query " + orderDetailsDto.getOrderNr());
    Object args = orderDetailsDto.getOrderNr();
    OrderDetailsDto orderDetailsDtoCopy = new OrderDetailsDto();
    System.out.println("Data source is " + getDataSource());
    try {
        List<Map<String, Object>> result = getJdbcTemplate().queryForList(IOrderDao.FZEG_NR_QUERY, args);
        if (result.listIterator().hasNext()) {
            System.out.println("DAO:through DTO: Order Nr before running query " + args);
            System.out.println("Result to check query is fired " + result.get(result.listIterator().nextIndex()));
        } else {
            System.out.println("DAO:through DTO: Order Nr before running query " + args);
            int cnt = getJdbcTemplate().queryForInt(IOrderDao.RECORD_LIST_QUERY, args);
            System.out.println("count of record is " + cnt);
        }
        for (Map<String, Object> record : result) {
            System.out.println("In Dao FOR Loop");
            orderDetailsDtoCopy.setOrderNr("" + record.get("ZZAA_AUFT_NR"));
            orderDetailsDtoCopy.setFZEGNr("" + record.get("FZEG_NR"));
        }
        System.out.println("In dao order number is set as " + orderDetailsDtoCopy.getOrderNr());
        System.out.println("In dao FZEG number is set as " + orderDetailsDtoCopy.getFZEGNr());
    } catch (DataAccessException e) {
        e.printStackTrace();
    }
    return orderDetailsDtoCopy;
}
Output:
Executing D:\Users\rpashank\Documents\GOorderInfo\dist\run388364612\GOorderInfo.jar using platform C:\Program Files\Java\jdk1.7.0_51\jre/bin/java
Jun 11, 2014 11:49:54 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@367dd4d9: startup date [Wed Jun 11 11:49:54 IST 2014]; root of context hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from file [D:\Users\rpashank\Documents\GOorderInfo\src\main\resources\spring-config.xml]
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@690d76d5: defining beans [dataSource,jdbcTemplate,transactionManager,transactionTemplate,orderService,orderDao]; root of factory hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName
INFO: Loaded JDBC driver: com.ibm.db2.jcc.DB2Driver
OrderController orderNr before calling query 0729100528
Jun 11, 2014 11:49:54 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.FileSystemXmlApplicationContext@40da178f: startup date [Wed Jun 11 11:49:54 IST 2014]; root of context hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from file [D:\Users\rpashank\Documents\GOorderInfo\src\main\resources\spring-config.xml]
Jun 11, 2014 11:49:54 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@162292fd: defining beans [dataSource,jdbcTemplate,transactionManager,transactionTemplate,orderService,orderDao]; root of factory hierarchy
Jun 11, 2014 11:49:54 AM org.springframework.jdbc.datasource.DriverManagerDataSource setDriverClassName
INFO: Loaded JDBC driver: com.ibm.db2.jcc.DB2Driver
DAO:through DTO: Order Nr before running query 0729100528
Data source is org.springframework.jdbc.datasource.DriverManagerDataSource@997f31b
DAO:through DTO: Order Nr before running query 0729100528
count of record is 0
In dao order number is set as null
In dao FZEG number is set as null
Order Controller ORDER Nr. through DTO after query null
OrderController FZEG through DTO after query null
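Worth noting about the queries at the top: the bind placeholder is quoted ('?'), so DB2 compares ZZAA_AUFT_NR to the literal one-character string "?" instead of binding the order number, and no row can match; that would explain a count of 0 with no exception. A minimal sketch of the parameterized form, assuming the same constants and JdbcTemplate setup:

final String RECORD_LIST_QUERY = "SELECT COUNT(*) FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = ?";
final String FZEG_NR_QUERY = "SELECT WEU.FZEG_NR, WEU.ZZAA_AUFT_NR FROM A.abc AS WEU WHERE WEU.ZZAA_AUFT_NR = ?";

// The order number is now bound by JdbcTemplate rather than compared to a literal '?'.
List<Map<String, Object>> result = getJdbcTemplate()
        .queryForList(FZEG_NR_QUERY, new Object[] { orderDetailsDto.getOrderNr() });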

Related

Agent configuration for 'a1' has no configfilters

I get this message (Agent configuration for 'a1' has no configfilters) when I use Flume 1.9 to transfer data from Kafka to HDFS, but no other error or info is reported.
The source I use is a KafkaSource; the channel is a file channel and the sink is an HDFS sink.
The interceptor I use is self-defined, and I will show it below.
Agent configuration for 'a1' has no configfilters.
The logger output is below; unlike other questions about this message, nothing else is reported.
16 Aug 2022 11:45:27,600 WARN [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateConfigFilterSet:623) - Agent configuration for 'a1' has no configfilters.
16 Aug 2022 11:45:27,623 INFO [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:163) - Post-validation flume configuration contains configuration for agents: [a1]
16 Aug 2022 11:45:27,624 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:151) - Creating channels
16 Aug 2022 11:45:27,628 INFO [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:42) - Creating instance of channel c1 type file
16 Aug 2022 11:45:27,642 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205) - Created channel c1
16 Aug 2022 11:45:27,643 INFO [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:41) - Creating instance of source r1, type org.apache.flume.source.kafka.KafkaSource
16 Aug 2022 11:45:27,655 INFO [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:42) - Creating instance of sink: k1, type: hdfs
16 Aug 2022 11:45:27,786 INFO [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:120) - Channel c1 connected to [r1, k1]
16 Aug 2022 11:45:27,787 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:162) - Starting new configuration:{ sourceRunners:{r1=PollableSourceRunner: { source:org.apache.flume.source.kafka.KafkaSource{name:r1,state:IDLE} counterGroup:{ name:null counters:{} } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2aa62fb9 counterGroup:{ name:null counters:{} } }} channels:{c1=FileChannel c1 { dataDirs: [/opt/module/flume/data/ranqi/behavior2] }} }
16 Aug 2022 11:45:27,788 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:169) - Starting Channel c1
16 Aug 2022 11:45:27,790 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184) - Waiting for channel: c1 to start. Sleeping for 500 ms
16 Aug 2022 11:45:27,790 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:278) - Starting FileChannel c1 { dataDirs: [/opt/module/flume/data/ranqi/behavior2] }...
16 Aug 2022 11:45:27,833 INFO [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119) - Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
16 Aug 2022 11:45:27,833 INFO [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: CHANNEL, name: c1 started
16 Aug 2022 11:45:27,839 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.<init>:356) - Encryption is not enabled
16 Aug 2022 11:45:27,840 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:406) - Replay started
16 Aug 2022 11:45:27,845 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:418) - Found NextFileID 3, from [/opt/module/flume/data/ranqi/behavior2/log-3, /opt/module/flume/data/ranqi/behavior2/log-2]
16 Aug 2022 11:45:27,851 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:55) - Starting up with /opt/module/flume/checkpoint/ranqi/behavior2/checkpoint and /opt/module/flume/checkpoint/ranqi/behavior2/checkpoint.meta
16 Aug 2022 11:45:27,851 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:59) - Reading checkpoint metadata from /opt/module/flume/checkpoint/ranqi/behavior2/checkpoint.meta
16 Aug 2022 11:45:27,906 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FlumeEventQueue.<init>:115) - QueueSet population inserting 0 took 0
16 Aug 2022 11:45:27,908 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:457) - Last Checkpoint Mon Aug 15 17:11:08 CST 2022, queue depth = 0
16 Aug 2022 11:45:27,918 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.doReplay:542) - Replaying logs with v2 replay logic
16 Aug 2022 11:45:27,919 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:249) - Starting replay of [/opt/module/flume/data/ranqi/behavior2/log-2, /opt/module/flume/data/ranqi/behavior2/log-3]
16 Aug 2022 11:45:27,920 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262) - Replaying /opt/module/flume/data/ranqi/behavior2/log-2
16 Aug 2022 11:45:27,925 INFO [lifecycleSupervisor-1-0] (org.apache.flume.tools.DirectMemoryUtils.getDefaultDirectMemorySize:112) - Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
16 Aug 2022 11:45:27,926 INFO [lifecycleSupervisor-1-0] (org.apache.flume.tools.DirectMemoryUtils.allocate:48) - Direct Memory Allocation: Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 1908932608, Remaining = 1908932608
16 Aug 2022 11:45:27,982 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:660) - Checkpoint for file(/opt/module/flume/data/ranqi/behavior2/log-2) is: 1660554206424, which is beyond the requested checkpoint time: 1660555388025 and position 0
16 Aug 2022 11:45:27,982 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262) - Replaying /opt/module/flume/data/ranqi/behavior2/log-3
16 Aug 2022 11:45:27,983 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:658) - fast-forward to checkpoint position: 273662090
16 Aug 2022 11:45:27,983 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.next:683) - Encountered EOF at 273662090 in /opt/module/flume/data/ranqi/behavior2/log-3
16 Aug 2022 11:45:27,983 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:345) - read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
16 Aug 2022 11:45:27,984 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FlumeEventQueue.replayComplete:417) - Search Count = 0, Search Time = 0, Copy Count = 0, Copy Time = 0
16 Aug 2022 11:45:27,988 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:505) - Rolling /opt/module/flume/data/ranqi/behavior2
16 Aug 2022 11:45:27,988 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.roll:990) - Roll start /opt/module/flume/data/ranqi/behavior2
16 Aug 2022 11:45:27,989 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$Writer.<init>:220) - Opened /opt/module/flume/data/ranqi/behavior2/log-4
16 Aug 2022 11:45:27,996 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.roll:1006) - Roll end
16 Aug 2022 11:45:27,996 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:230) - Start checkpoint for /opt/module/flume/checkpoint/ranqi/behavior2/checkpoint, elements to sync = 0
16 Aug 2022 11:45:28,000 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:255) - Updating checkpoint metadata: logWriteOrderID: 1660621527859, queueSize: 0, queueHead: 557327
16 Aug 2022 11:45:28,008 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.writeCheckpoint:1065) - Updated checkpoint for file: /opt/module/flume/data/ranqi/behavior2/log-4 position: 0 logWriteOrderID: 1660621527859
16 Aug 2022 11:45:28,008 INFO [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:289) - Queue Size after replay: 0 [channel=c1]
16 Aug 2022 11:45:28,290 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:196) - Starting Sink k1
16 Aug 2022 11:45:28,291 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:207) - Starting Source r1
16 Aug 2022 11:45:28,292 INFO [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119) - Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
16 Aug 2022 11:45:28,292 INFO [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: SINK, name: k1 started
16 Aug 2022 11:45:28,292 INFO [lifecycleSupervisor-1-4] (org.apache.flume.source.kafka.KafkaSource.doStart:524) - Starting org.apache.flume.source.kafka.KafkaSource{name:r1,state:IDLE}...
The Flume agent is started with the shell script below.
#!/bin/bash
case $1 in
"start")
    echo " --------start flume-------"
    ssh hadoop104 "nohup /opt/module/flume/bin/flume-ng agent -n a1 -c /opt/module/flume/conf -f /opt/module/flume/job/ranqi/ranqi_kafka_to_hdfs_db.conf >/dev/null 2>&1 &"
    ;;
"stop")
    echo " --------stop flume-------"
    ssh hadoop104 "ps -ef | grep ranqi_kafka_to_hdfs_db.conf | grep -v grep | awk '{print \$2}' | xargs -n1 kill"
    ;;
esac
The Flume config is below.
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 5000
a1.sources.r1.batchDurationMillis = 2000
a1.sources.r1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092
a1.sources.r1.kafka.topics = copy_1015
a1.sources.r1.kafka.consumer.group.id = flume
a1.sources.r1.setTopicHeader = true
a1.sources.r1.topicHeader = topic
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.ranqi.ranqiTimestampInterceptor$Builder
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /opt/module/flume/checkpoint/ranqi/behavior2
a1.channels.c1.dataDirs = /opt/module/flume/data/ranqi/behavior2/
a1.channels.c1.maxFileSize = 2146435071
a1.channels.c1.capacity = 1123456
a1.channels.c1.keep-alive = 6
## sink1
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /origin_data/ranqi/db/%{topic}_inc/%Y-%m-%d
a1.sinks.k1.hdfs.filePrefix = db
a1.sinks.k1.hdfs.round = false
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.fileType = CompressedStream
a1.sinks.k1.hdfs.codeC = gzip
## Assemble the pipeline (bind source and sink to the channel)
a1.sources.r1.channels = c1
a1.sinks.k1.channel= c1
The ranqiTimestampInterceptor class I defined is below; its jar is placed in flume/lib.
package com.atguigu.flume.interceptor.ranqi;

import com.alibaba.fastjson.JSONObject;
import com.atguigu.flume.interceptor.db.TimestampInterceptor;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.nio.charset.StandardCharsets;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.Map;

public class ranqiTimestampInterceptor implements Interceptor {

    private final static Logger logger = LoggerFactory.getLogger(ranqiTimestampInterceptor.class);

    public static String dateToStamp(String s) throws ParseException {
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date date = simpleDateFormat.parse(s);
        return String.valueOf(date.getTime());
    }

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        byte[] body = event.getBody();
        Long createDate;
        String time = new String();
        String log = new String(body, StandardCharsets.UTF_8);
        JSONObject jsonObject = JSONObject.parseObject(log);
        // logger.info(log);
        logger.info(String.valueOf(jsonObject));
        JSONObject data = jsonObject.getObject("data", JSONObject.class);
        if (data.containsKey("createDate") && data.getLong("createDate") != null) {
            createDate = data.getLong("createDate");
            try {
                createDate = Long.valueOf(dateToStamp(String.valueOf(createDate)));
                time = String.valueOf(createDate);
            } catch (ParseException e) {
                e.printStackTrace();
            } finally {
                Long ts = jsonObject.getLong("ts");
                time = String.valueOf(ts);
            }
        } else {
            Long ts = jsonObject.getLong("ts");
            time = String.valueOf(ts);
        }
        System.out.println(time);
        logger.info(time);
        Map<String, String> headers = event.getHeaders();
        headers.put("timestamp", time);
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> list) {
        for (Event event : list) {
            intercept(event);
        }
        return list;
    }

    @Override
    public void close() {
    }

    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new TimestampInterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
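One thing stands out in the class as posted: the Builder returns new TimestampInterceptor() (the class imported from the db package) instead of new ranqiTimestampInterceptor(), so the interceptor defined above is never the one actually installed. Also, for what it's worth, the "has no configfilters" line is just a WARN that Flume 1.9 emits whenever no configfilters are declared; it is not itself a failure. A minimal fix for the Builder, assuming the intent is to install this class:

public static class Builder implements Interceptor.Builder {
    @Override
    public Interceptor build() {
        // Return this interceptor, not com.atguigu.flume.interceptor.db.TimestampInterceptor.
        return new ranqiTimestampInterceptor();
    }

    @Override
    public void configure(Context context) {
    }
}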

Unable to fetch data between two dates in Hibernate Criteria

I am, surprisingly, unable to fetch the sum of a field between two dates.
My method is:
public static double getTotalfeeIncome(Date fromDate, Date toDate) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    Transaction tx = null;
    double total = 0.0;
    try {
        tx = session.beginTransaction();
        Criteria cr = session.createCriteria(StudentFees.class);
        cr.add(Restrictions.eq("delFlg", "N"));
        cr.add(Restrictions.between("paymentDate", fromDate, toDate));
        log.debug(fromDate);
        log.debug(toDate);
        total = (Double) cr.setProjection(Projections.sum("paymentAmt")).uniqueResult();
        tx.commit();
    } catch (Exception asd) {
        log.debug(asd.getMessage());
        if (tx != null) {
            tx.rollback();
        }
    } finally {
        session.close();
    }
    return total;
}
When I try to get the value:
double feeIncome = Expense.getTotalfeeIncome(fdate, tdate);
it returns 0, but there is data between the two dates. There seems to be an error that is not getting printed in my catch section. My logger output is:
[edulogger] [11 Mar 2017 - 18:54:48] [DEBUG][dao.Expense][getTotalfeeIncome] - Sat Mar 11 00:00:00 EAT 2017
2017-03-11 18:54:48,334 DEBUG com.orig.edu.dao.Expense edulogger:182 - Sat Mar 11 00:00:00 EAT 2017
[edulogger] [11 Mar 2017 - 18:54:48] [DEBUG][dao.Expense][getTotalfeeIncome] - Sat Mar 11 00:00:00 EAT 2017
2017-03-11 18:54:48,338 DEBUG com.orig.edu.dao.Expense edulogger:183 - Sat Mar 11 00:00:00 EAT 2017
Hibernate: select sum(this_.payment_amt) as y0_ from edutek.student_fees this_ where this_.del_flg=? and this_.payment_date between ? and ?
[edulogger] [11 Mar 2017 - 18:54:48] [DEBUG][dao.Expense][getTotalfeeIncome] -
2017-03-11 18:54:48,455 DEBUG com.orig.edu.dao.Expense edulogger:187 -
What is it that I am doing wrong?
Use the following
cr.add(Restrictions.ge("paymentDate", fromDate));
cr.add(Restrictions.lt("paymentDate", toDate));
instead of
cr.add(Restrictions.between("paymentDate", fromDate, toDate));
UPDATE:
cr.add(Restrictions.between("DATE(paymentDate)", fromDate, toDate));
Resource Link:
https://stackoverflow.com/a/6122906/2293534
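A note on what the log output suggests: both logged dates are Sat Mar 11 00:00:00, so between() covers a single midnight instant and matches nothing; sum() then returns null, and unboxing that null into the double total likely throws a NullPointerException whose getMessage() is null, which would be the blank DEBUG line after the generated SQL. (The UPDATE above is also doubtful: the first argument to Restrictions.between must be a mapped property name, not a SQL expression like DATE(paymentDate).) A sketch that treats toDate as inclusive and guards the unboxing, using java.util.Calendar and the variable names from the original method:

// Query the half-open range [fromDate, toDate + 1 day) so the whole end day counts.
Calendar cal = Calendar.getInstance();
cal.setTime(toDate);
cal.add(Calendar.DAY_OF_MONTH, 1);
Date toDateExclusive = cal.getTime();

cr.add(Restrictions.eq("delFlg", "N"));
cr.add(Restrictions.ge("paymentDate", fromDate));
cr.add(Restrictions.lt("paymentDate", toDateExclusive));

// sum() returns null when no rows match; avoid unboxing null.
Double sum = (Double) cr.setProjection(Projections.sum("paymentAmt")).uniqueResult();
total = (sum != null) ? sum : 0.0;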

Web service method called once, but executed multiple times

I'm developing a web service in Java EE 6 with only one method, E1CreateAccount.
For each call I make to the method, the execution fires multiple times (at least two times). The executions are consecutive, so the next one starts when the previous one ends.
The code of the method is the following:
@WebMethod
public Object E1CreateAccount(@WebParam(name = "arg0") InboundAccountF56101IB newAccount) {
    Object ret = "KO";
    MyAction action = MyAction.CREATE;
    MyLogger.getLogger().info("Integration WebService called to Create new Account in E1, name " +
            newAccount.getCRMAccountName());
    E1AccountClientThread runner = new E1AccountClientThread();
    runner.initialize(newAccount, action);
    Thread thread = new Thread(runner);
    MyLogger.getLogger().info(thread.getId() + ": Created thread to " + action.name() +
            " Account. Account Name: " + newAccount.getCRMAccountName() + "...");
    try {
        MyLogger.getLogger().info(thread.getId() +
                ": Starting thread to call E1 Account web service, Action: " +
                action.name());
        thread.start();
        ret = "OK";
    } catch (Exception e) {
        String str = "Exception calling the thread delegate to call E1 Account web service (Action " +
                action.name() + ". Message: " + e.getMessage().toString();
        MyLogger.getLogger().severe(str);
        ret = "KO " + str;
    }
    return ret;
}
It receives the call, logs the fact that the service has been called, and then creates a separate thread which does the task. Then it always returns the string "OK".
The method works well; the only problem is the multiple execution, which is undesired for the software's purposes.
This is what I see from the logs:
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: Integration WebService called to Create new Account in E1, name THIS_IS_THE_COMPANY_NAME
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1026: Created thread to CREATE Account. Account Name: THIS_IS_THE_COMPANY_NAME...
Jul 06, 2015 3:17:57 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1026: Starting thread to call E1 Account web service, Action: CREATE
[...]
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: Integration WebService called to Create new Account in E1, name THIS_IS_THE_COMPANY_NAME
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1027: Created thread to CREATE Account. Account Name: THIS_IS_THE_COMPANY_NAME...
Jul 06, 2015 3:18:00 PM integration.services.e1.account.E1AccountService E1CreateAccount
INFO: 1027: Starting thread to call E1 Account web service, Action: CREATE
[...]
(I substituted the logs describing what the threads actually do with [...] because I think they are not relevant.)
The web service has been deployed on WebLogic 12.1.3, and I used the IDE JDeveloper 12.1.3.
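No accepted answer is recorded here, but the log pattern (the same account name arriving again three seconds later on a new thread id) is typical of a client or intermediary retrying the HTTP call after a timeout; since the service spawns its work and returns immediately, every redelivery creates a fresh worker. Independent of the root cause, making the operation idempotent keeps duplicates harmless. A minimal sketch, where keying on the account name is an assumption for illustration (uses java.util.Collections, java.util.Set, and java.util.concurrent.ConcurrentHashMap, so it runs on Java 7):

// Hypothetical duplicate-suppression guard inside the web service class.
private static final Set<String> inFlight =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

// Inside E1CreateAccount, before creating the worker thread:
if (!inFlight.add(newAccount.getCRMAccountName())) {
    MyLogger.getLogger().info("Duplicate request for " + newAccount.getCRMAccountName() + ", ignored.");
    return "OK";
}
// ... and the worker must call inFlight.remove(name) in a finally block when it finishes.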

Selenium - Address already in use: connect

I am trying to crawl and parse dynamic content of a website using Selenium.
The website I am crawling loads its content on the page's scroll event, so I trigger the scroll event through Selenium until the end of the page is reached.
In the product phase I fetch each product's details by loop iteration, and it also works fine... but when it reaches an iteration count of 280 and above....
This is my code below:
private void init() throws IOException {
    FirefoxProfile profile = new FirefoxProfile(); // Create Firefox profile
    profile.setPreference("javascript.enabled", true); // Allow javascript for the browser
    WebDriver htmDriver = new FirefoxDriver(profile); // Add profile to FirefoxDriver
    htmDriver.get(urlTextField.getText()); // Connect to the url from the URL text field
    htmDriver.manage().window().maximize(); // Maximize the browser window
    String count = htmDriver.findElement(By.cssSelector("#numbFound > #no-of-results-filter")).getText(); // Total product count for the category
    //System.out.println("Total Category Count : " + count);
    htmDriver.findElement(By.cssSelector(".list")).click(); // Click to view the products as a list
    htmDriver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS); // Wait
    int lCount = Integer.parseInt(count); // Calculate the scroll length
    for (int i = 1; i <= Math.ceil(lCount / 5); i++) {
        // Generate arrow-down actions, then scroll to the bottom of the page
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        htmDriver.findElement(By.id("products-main4")).sendKeys(Keys.ARROW_DOWN);
        ((JavascriptExecutor) htmDriver).executeScript("window.scrollBy(0,document.body.scrollHeight)", "");
        htmDriver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
    }
    // Product phase
    List<WebElement> rdata = htmDriver.findElements(By.className("product_list_view_cont")); // Selector for each product row
    for (WebElement data : rdata) {
        String title = data.findElement(By.cssSelector(".product_list_view_heading")).getText(); // Get product title
        System.out.println(title);
        // Check whether a product price is available
        boolean product_price = data.findElements(By.cssSelector(".product_list_view_price_outer span")).isEmpty();
        if (!product_price) {
            // Get the price of the product
            String price = data.findElement(By.cssSelector(".product_list_view_price_outer var[id^=selling-price-id-]")).getText().trim();
            System.out.println(price);
        } else {
            // Price not available
            System.out.println("No price");
        }
        String brand = data.findElement(By.cssSelector("ul.key-features li")).getText();
        System.out.println(brand);
        String brandUrl = data.findElement(By.cssSelector(".product_list_view_info_cont a")).getAttribute("href"); // Fetch brand url
        System.out.println(brandUrl);
        String status = data.findElement(By.cssSelector(".product_list_view_buy-outer .lfloat")).getText(); // Fetch availability status
        System.out.println(status);
    }
}
Selenium throws exceptions as follows:
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:10 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: I/O exception (java.net.BindException) caught when processing request to {}->http://localhost:7055: Address already in use: connect
Feb 18, 2015 10:00:12 AM org.apache.http.impl.execchain.RetryExec execute
INFO: Retrying request to {}->http://localhost:7055
for each iteration, after some time...
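Every findElement/getText call in the loop is a separate HTTP request from the language bindings to the driver endpoint at localhost:7055; at roughly five calls per product across hundreds of products, Windows can run out of ephemeral ports because closed sockets linger in TIME_WAIT, which matches the repeated java.net.BindException above. One way to cut the request volume sharply is to read all of a row's fields in a single JavascriptExecutor round-trip; a sketch, assuming the same markup and selectors as the code above:

// One HTTP round-trip per product row instead of several.
JavascriptExecutor js = (JavascriptExecutor) htmDriver;
for (WebElement data : htmDriver.findElements(By.className("product_list_view_cont"))) {
    String fields = (String) js.executeScript(
            "var r = arguments[0];" +
            "function t(sel) { var e = r.querySelector(sel); return e ? e.textContent.trim() : ''; }" +
            "return [t('.product_list_view_heading')," +
            "        t('.product_list_view_price_outer var')," +
            "        t('ul.key-features li')].join('||');",
            data);
    System.out.println(fields); // title||price||brand
}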

Ice4J: Ice State Failed on 4G network

Does anyone know how to do the TURN portion of Ice4j? I've managed to code it so that it works when the phone is on WiFi, but not when it's on the mobile network.
I'm sending agent info via TCP and then building the connection manually instead of using a signalling process. The TCP connection already works fine, so I don't think it's a TCP issue. Maybe I'm building the agent wrong?
I know that you're supposed to use a TURN server if STUN doesn't work, and I provided a large list of public TURN servers, but I might be missing something. Maybe the packets aren't being sent out properly?
Error (mostly Failed to send ALLOCATE-REQUEST(0x3)):
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createMediaStream
INFO: Create media stream for data
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent createComponent
INFO: Create component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.Agent gatherCandidates
INFO: Gather candidates for component data.1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.HostCandidateHarvester harvest
INFO: End candidate harvest within 160 ms, for org.ice4j.ice.harvest.HostCandidateHarvester, component: 1
Sep 11, 2014 3:36:09 PM org.ice4j.ice.harvest.StunCandidateHarvest sendRequest
INFO: Failed to send ALLOCATE-REQUEST(0x3)[attrib.count=3 len=32 tranID=0x9909DC6648016A67FDD4B2D8] through /192.168.0.8:5001/udp to stun2.l.google.com:19302:5001/udp
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:c8ce:5a17:c339:cc40%4:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /fe80:0:0:0:380d:2a4c:b350:eea8%8:5001/udp -> /fe80:0:0:0:14e8:f3ff:fef3:6a21:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient processTimeout
INFO: timeout for pair: /192.168.0.8:5001/udp -> /100.64.74.58:6001/udp (data.1), failing.
Sep 11, 2014 3:36:12 PM org.ice4j.ice.ConnectivityCheckClient updateCheckListAndTimerStates
INFO: CheckList will failed in a few seconds if no succeeded checks come
Sep 11, 2014 3:36:17 PM org.ice4j.ice.ConnectivityCheckClient$1 run
INFO: CheckList for stream data FAILED
Sep 11, 2014 3:36:17 PM org.ice4j.ice.Agent checkListStatesUpdated
INFO: ICE state is FAILED
Script (both the server and the client sides have code similar to this one):
Agent agent = new Agent();
agent.setControlling(false);
StunCandidateHarvester stunHarv = new StunCandidateHarvester(
        new TransportAddress("sip-communicator.net", port, Transport.UDP));
StunCandidateHarvester stun6Harv = new StunCandidateHarvester(
        new TransportAddress("ipv6.sip-communicator.net", port, Transport.UDP));
agent.addCandidateHarvester(stunHarv);
agent.addCandidateHarvester(stun6Harv);

String[] hostnames = new String[] {
        "130.79.90.150",
        "2001:660:4701:1001:230:5ff:fe1a:805f",
        "jitsi.org",
        "numb.viagenie.ca",
        "stun01.sipphone.com",
        "stun.ekiga.net",
        "stun.fwdnet.net",
        "stun.ideasip.com",
        "stun.iptel.org",
        "stun.rixtelecom.se",
        "stun.schlund.de",
        "stun.l.google.com:19302",
        "stun1.l.google.com:19302",
        "stun2.l.google.com:19302",
        "stun3.l.google.com:19302",
        "stun4.l.google.com:19302",
        "stunserver.org",
        "stun.softjoys.com",
        "stun.voiparound.com",
        "stun.voipbuster.com",
        "stun.voipstunt.com",
        "stun.voxgratia.org",
        "stun.xten.com" };
LongTermCredential longTermCredential = new LongTermCredential("guest", "anon");
for (String hostname : hostnames)
    agent.addCandidateHarvester(new TurnCandidateHarvester(
            new TransportAddress(hostname, port, Transport.UDP), longTermCredential));

// Build a stream for the agent
IceMediaStream stream = agent.createMediaStream("data");
try {
    Component c = agent.createComponent(stream, Transport.UDP, port, port, port + 100);
    String response = "";
    List<LocalCandidate> localCandidates = c.getLocalCandidates();
    for (Candidate<?> can : localCandidates) {
        response += "||" + can.toString();
    }
    response = "Video||" + agent.getLocalUfrag() + "||" + agent.getLocalPassword() + "||"
            + c.getDefaultCandidate().toString() + response;
    System.out.println("Server >>> " + response);
    DataOutputStream outStream = new DataOutputStream(client.getOutputStream());
    outStream.write(response.getBytes("UTF-8"));
    outStream.flush();
    List<IceMediaStream> streams = agent.getStreams();
    for (IceMediaStream localStream : streams) {
        List<Component> localComponents = localStream.getComponents();
        for (Component localComponent : localComponents) {
            for (int i = 3; i < info.length; i++) {
                // Fields in each candidate line: 0 = foundation ("Candidate:#"), 1 = component id,
                // 2 = transport, 3 = priority, 4 = address and 5 = port (together they form the
                // TransportAddress), 6 = "typ" marker, 7 = candidate type.
                String[] detail = info[i].split(" ");
                String[] foundation = detail[0].split(":"); // Turn "Candidate:#" into "Candidate" and "#"; we use "#"
                localComponent.addRemoteCandidate(new RemoteCandidate(
                        new TransportAddress(detail[4], Integer.valueOf(detail[5]), Transport.UDP),
                        c, CandidateType.HOST_CANDIDATE, foundation[1], Long.valueOf(detail[3]), null));
            }
            String[] defaultDetail = info[3].split(" ");
            String[] defaultFoundation = defaultDetail[0].split(":");
            localComponent.setDefaultRemoteCandidate(new RemoteCandidate(
                    new TransportAddress(defaultDetail[4], Integer.valueOf(defaultDetail[5]), Transport.UDP),
                    c, CandidateType.HOST_CANDIDATE, defaultFoundation[1], Long.valueOf(defaultDetail[3]), null));
        }
        localStream.setRemoteUfrag(info[1]);
        localStream.setRemotePassword(info[2]);
    }
    agent.startConnectivityEstablishment();
    System.out.println("ICEServer <><><> Completed");
I realize now that your list of TURN servers seems to consist mostly of actual STUN servers (I am not sure about the first two). They should be added as STUN servers, if anything:
agent.addCandidateHarvester(
        new StunCandidateHarvester(
                new TransportAddress(
                        InetAddress.getByName("stun.l.google.com"),
                        19302,
                        Transport.UDP)));
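For the relaying itself, ice4j still needs at least one reachable TURN server with valid long-term credentials; plain STUN hosts will not answer the ALLOCATE-REQUEST that is failing in the log, so on 4G, where host and server-reflexive candidates cannot connect, there is nothing left to try. A sketch of registering one, where turn.example.com and the credentials are placeholders for a server you run yourself (coturn, for example):

// Hypothetical TURN deployment; ice4j sends ALLOCATE requests here and can then
// harvest relayed candidates that still work when STUN-derived ones fail on 4G.
LongTermCredential turnCredential = new LongTermCredential("turnUser", "turnSecret");
agent.addCandidateHarvester(new TurnCandidateHarvester(
        new TransportAddress("turn.example.com", 3478, Transport.UDP), turnCredential));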
