Error when shutting down the Google Pub/Sub publisher - java

I started the Google Pub/Sub publisher using the code in [1]. After publishing is finished I shut the publisher down as shown in [2]. But when I run it, I get the error in [3] saying that the publisher was not shut down properly.
I'm using google-cloud-pubsub version 1.61.0.
Is there any way to handle this error?
[1]
public class PublisherExample {
// use the default project id
private static final String PROJECT_ID = ServiceOptions.getDefaultProjectId();
/** Publish messages to a topic.
* @param args topic name, number of messages
*/
public static void main(String... args) throws Exception {
// topic id, eg. "my-topic"
String topicId = args[0];
int messageCount = Integer.parseInt(args[1]);
ProjectTopicName topicName = ProjectTopicName.of(PROJECT_ID, topicId);
Publisher publisher = null;
List<ApiFuture<String>> futures = new ArrayList<>();
try {
// Create a publisher instance with default settings bound to the topic
publisher = Publisher.newBuilder(topicName).build();
for (int i = 0; i < messageCount; i++) {
String message = "message-" + i;
// convert message to bytes
ByteString data = ByteString.copyFromUtf8(message);
PubsubMessage pubsubMessage = PubsubMessage.newBuilder()
.setData(data)
.build();
// Schedule a message to be published. Messages are automatically batched.
ApiFuture<String> future = publisher.publish(pubsubMessage);
futures.add(future);
}
} finally {
// Wait on any pending requests
List<String> messageIds = ApiFutures.allAsList(futures).get();
for (String messageId : messageIds) {
System.out.println(messageId);
}
if (publisher != null) {
// When finished with the publisher, shutdown to free up resources.
publisher.shutdown();
publisher.awaitTermination(1, TimeUnit.MINUTES);
}
}
}
}
[2]
if (publisher != null) {
// When finished with the publisher, shutdown to free up resources.
publisher.shutdown();
publisher.awaitTermination(1, TimeUnit.MINUTES);
}
[3]
io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=9, target=pubsub.googleapis.com:443} was not shutdown properly!!! ~*~*~*
Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:419)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:254)
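No answer is included above; as a hedged sketch of the pattern the warning text points at (wait for every publish future inside the try block, then shut down in finally and block on awaitTermination before the JVM exits), using the same google-cloud-pubsub API as the question:
import com.google.api.core.ApiFuture;
import com.google.api.core.ApiFutures;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class PublisherShutdownSketch {
    public static void main(String... args) throws Exception {
        // "my-project" / "my-topic" are placeholder names for this sketch.
        ProjectTopicName topicName = ProjectTopicName.of("my-project", "my-topic");
        Publisher publisher = Publisher.newBuilder(topicName).build();
        List<ApiFuture<String>> futures = new ArrayList<>();
        try {
            for (int i = 0; i < 10; i++) {
                PubsubMessage msg = PubsubMessage.newBuilder()
                        .setData(ByteString.copyFromUtf8("message-" + i))
                        .build();
                futures.add(publisher.publish(msg));
            }
            // Block until every message has actually been sent before shutting down.
            ApiFutures.allAsList(futures).get();
        } finally {
            // Shut down and give the underlying gRPC channels time to terminate
            // before the JVM exits; exiting earlier can trigger the orphan-channel warning.
            publisher.shutdown();
            publisher.awaitTermination(5, TimeUnit.MINUTES);
        }
    }
}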

Related

Hazelcast Jet not allowing Tomcat to stop

I am using Hazelcast Jet for some aggregation and grouping, but after the application has been idle for some time, Tomcat will not stop when I try to shut it down and I have to restart my PC. Below is the error I am getting. Can anyone explain what the error means and how to shut Jet down gracefully?
Sending multicast datagram failed. Exception message saying the operation is not permitted
usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.
java.net.NoRouteToHostException: No route to host: Datagram send failed
at java.net.TwoStacksPlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at com.hazelcast.internal.cluster.impl.MulticastService.send(MulticastService.java:291)
at com.hazelcast.internal.cluster.impl.MulticastJoiner.searchForOtherClusters(MulticastJoiner.java:113)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.searchForOtherClusters(SplitBrainHandler.java:75)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.run(SplitBrainHandler.java:42)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
The code is quite large, but I have tried to show a sample; it may not work as-is since it is just a glimpse of the code:
class Abc {
// Creates the Jet instance
private final JetInstance jetInstance;
public Abc() {
JetConfig jetConfig = new JetConfig();
jetConfig.getHazelcastConfig().setProperty( "hazelcast.logging.type", "log4j" );
jetConfig.getInstanceConfig().setCooperativeThreadCount(5);
jetConfig.configureHazelcast(c -> {
c.getNetworkConfig().setReuseAddress(true);
c.setClusterName("DATA" + UUID.randomUUID().toString());
c.getNetworkConfig().setPort(9093);
c.getNetworkConfig().setPublicAddress("localhost");
c.getNetworkConfig().setPortAutoIncrement(true);
});
jetInstance = Jet.newJetInstance(jetConfig);
}
public Pipeline createPipeline() {
return Pipeline.create();
}
// To Add Job to pipeline
public void joinPipeToJet(Pipeline pl, String name) {
JobConfig j = new JobConfig();
//j.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
j.setName(name);
jetInstance.newJob(pl,j).join();
}
public void readJsonFile(final Map<String, Object> data) {
// Random Id for job so I can separate two jobs Imaps
String jobid = UUID.randomUUID().toString();
try {
Pipeline pl = createPipeline();
UUID idOne = UUID.randomUUID();
final IMap<Object, Object> abc = jetInstance.getMap(idOne.toString());
abc.putAll(data);
// Reading data from file and sending data to next
final BatchSource batchSource = Sources.map(abc);
pl.readFrom(batchSource)
.writeTo(Sinks.map(this.uid));
joinPipeToJet(pl, jobid);
abc.destroy();
} catch (Exception e) {
Job j1 = jetInstance.getJob(jobid);
if (j1 != null) {
j1.cancel();
}
} finally {
Job j1 = jetInstance.getJob(jobid);
if (j1 != null) {
j1.cancel();
}
}
}
// Process to manipulate data and return it, collecting the BatchStage output into a Map
public Map<String, Object> runProcess(final Pipeline pl) {
String jobid = UUID.randomUUID().toString();
UUID idOne = UUID.randomUUID();
BatchStage<Object> bd1 = null; // get data by calling a method (elided in the original)
bd1.writeTo(Sinks.list(idOne.toString()));
joinPipeToJet(pl, jobid);
IList<Object> abc = jetInstance.getList(idOne.toString());
List<Object> result = new ArrayList<>(abc);
final Map<String, Object> finalresult =new HashMap<String, Object>();
finalresult.put("datas", result.get(0));
abc.destroy();
return finalresult;
}
public static void main(String... args) {
Abc abc = new Abc();
Map<String, Object> p = new HashMap<String, Object>();
p.put("data", "Some Data"); // placeholder entry; the original putAll("Some Data") does not compile
abc.readJsonFile(p);
Pipeline pl = abc.createPipeline();
abc.runProcess(pl);
}
}
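There is no answer included above; as a hedged sketch (assuming the Hazelcast Jet 3.x/4.x API and a standard servlet container), one way to let Tomcat stop is to shut the Jet instance down when the web application is destroyed, for example from a ServletContextListener (a JVM shutdown hook would work similarly):
import com.hazelcast.jet.Jet;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Sketch only: shuts down every Jet instance in this JVM so its non-daemon
// threads no longer keep Tomcat from stopping.
@WebListener
public class JetShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // The Jet instance is created elsewhere (e.g. in class Abc above).
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        Jet.shutdownAll(); // stops all Jet/Hazelcast instances created in this JVM
    }
}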

Why does pulling messages return nothing in Google Pub/Sub?

I have a subscription VIEW_TOPIC with a pull strategy. Why can't I see any messages although there are 7 undelivered messages? I cannot figure out what I am missing. By the way, I'm running the subscriber on k8s on GCP, and I have also set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Subscriber configuration
private Subscriber buildSubscriber() {
try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
TopicName topicName = TopicName.of(projectId, topic);
ProjectSubscriptionName subscriptionName =
ProjectSubscriptionName.of(projectId, subscriptionId);
// Create a pull subscription with default acknowledgement deadline of 10 seconds.
// Messages not successfully acknowledged within 10 seconds will get resent by the server.
Subscription subscription =
subscriptionAdminClient.createSubscription(
subscriptionName, topicName, PushConfig.getDefaultInstance(), 10);
System.out.println("Created pull subscription: " + subscription.getName());
} catch (IOException e) {
LOGGER.error("Cannot create pull subscription");
} catch (AlreadyExistsException existsException) {
LOGGER.warn("Subscription already created");
}
ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(projectId, subscriptionId);
LOGGER.info("Subscribe topic: " + topic + " | SubscriptionId: " + subscriptionId);
// default is 4 * num of processor
ExecutorProvider executorProvider = InstantiatingExecutorProvider.newBuilder().build();
Subscriber.Builder subscriberBuilder = Subscriber.newBuilder(subscriptionName, new MessageReceiverImpl())
.setExecutorProvider(executorProvider);
// The subscriber will pause the message stream and stop receiving more messages from the
// server if any one of the conditions is met.
FlowControlSettings flowControlSettings =
FlowControlSettings.newBuilder()
.setMaxOutstandingElementCount(100)
// the maximum size of messages the subscriber
// receives before pausing the message stream.
// 10Mib
.setMaxOutstandingRequestBytes(10L * 1024L * 1024L)
.build();
subscriberBuilder.setFlowControlSettings(flowControlSettings);
Subscriber subscriber = subscriberBuilder.build();
subscriber.addListener(new ApiService.Listener() {
@Override
public void failed(ApiService.State from, Throwable failure) {
LOGGER.error(from, failure);
}
}, MoreExecutors.directExecutor());
return subscriber;
}
Subscriber
public void startSubscribeMessage() {
LOGGER.info("Begin subscribe topic " + topic);
this.subscriber.startAsync().awaitRunning();
LOGGER.info("Subscriber start successfully!!!");
}
public class MessageReceiverImpl implements MessageReceiver {
private static final Logger LOGGER = Logger.getLogger(MessageReceiverImpl.class);
private final LogSave logSave = MatchSave.getInstance();
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
ByteString data = message.getData();
// Get the schema encoding type.
String encoding = message.getAttributesMap().get("googclient_schemaencoding");
Req.LogReq logReqMsg = null;
try {
switch (encoding) {
case "BINARY":
logReqMsg = Req.LogReq.parseFrom(data);
break;
case "JSON":
Req.LogReq.Builder msgBuilder = Req.LogReq.newBuilder();
JsonFormat.parser().merge(data.toStringUtf8(), msgBuilder);
logReqMsg = msgBuilder.build();
break;
}
LOGGER.info((JsonFormat.printer().omittingInsignificantWhitespace().print(logReqMsg)));
logSave.addLogMsg(battleLogMsg);
} catch (InvalidProtocolBufferException e) {
e.printStackTrace();
}
consumer.ack();
}
}
Req.LogReq is a proto message. My dependencies:
// google cloud
implementation platform('com.google.cloud:libraries-bom:22.0.0')
implementation 'com.google.cloud:google-cloud-pubsub'
implementation group: 'com.google.protobuf', name: 'protobuf-java-util', version: '3.17.2'
And the call logSave.addLogMsg(battleLogMsg); adds the message to a CopyOnWriteArrayList.
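Not part of the original post, but one thing worth checking (a hedged sketch, not a confirmed diagnosis): if nothing keeps the process alive after startAsync().awaitRunning(), the streaming pull can stop as soon as the starting thread finishes. Blocking on the subscriber until it terminates also surfaces any failure reason in the logs:
// Sketch: keep the thread blocked on the subscriber and surface failures.
// Assumes the same Subscriber instance built by buildSubscriber() above.
public void startSubscribeMessage() {
    LOGGER.info("Begin subscribe topic " + topic);
    this.subscriber.startAsync().awaitRunning();
    LOGGER.info("Subscriber started successfully");
    try {
        // Blocks until the subscriber stops; throws if it transitions to FAILED,
        // which is where credential or subscription problems become visible.
        this.subscriber.awaitTerminated();
    } catch (IllegalStateException e) {
        LOGGER.error("Subscriber failed", e);
    }
}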

How to distribute kafka load into openshift pods using kafka partitions

I have a Spring Boot application and I would like to distribute the load of a Kafka topic across 3 OpenShift pods. In the following example I can listen to 3 Kafka partitions on three different threads, but this Spring Boot application runs in a single OpenShift pod. I want each pod to listen to one Kafka partition, so that when I run 3 pods on OpenShift each pod consumes one partition. This would allow me to scale the application to N partitions on N pods. I am not sure if this is possible or whether I need a different approach. Thanks
public class DepAcctInqConsumerController {
private static final Logger LOGGER = LoggerFactory.getLogger(DepAcctInqConsumerController.class);
@Value("${kafka.topic.acct-info.request}")
private String requestTopic;
@KafkaListener(id = "id-0", containerFactory = "requestReplyListenerContainerFactory",
topicPartitions = { @TopicPartition(topic = "${kafka.topic.acct-info.request}", partitions = "0" )})
public Message<?> listenPartition0(InGetAccountInfo accountInfo, @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
@Header(KafkaHeaders.CORRELATION_ID) byte[] correlation, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
try {
LOGGER.info("Received request for partition id = " + id);
AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
return MessageBuilder.withPayload(accountInfoDto)
.setHeader(KafkaHeaders.TOPIC, replyTo)
.setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
.setHeader(KafkaHeaders.CORRELATION_ID, correlation)
.build();
} catch (Exception e) {
LOGGER.error(e.toString(),e);
}
return null;
}
@KafkaListener(id = "id-1", containerFactory = "requestReplyListenerContainerFactory",
topicPartitions = { @TopicPartition(topic = "${kafka.topic.acct-info.request}", partitions = "#{#finder.partitions(${kafka.topic.acct-info.request)}" )})
public Message<?> listenPartition1(InGetAccountInfo accountInfo, @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
@Header(KafkaHeaders.CORRELATION_ID) byte[] correlation, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
try {
LOGGER.info("Received request for partition id = " + id);
AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
return MessageBuilder.withPayload(accountInfoDto)
.setHeader(KafkaHeaders.TOPIC, replyTo)
.setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
.setHeader(KafkaHeaders.CORRELATION_ID, correlation)
.build();
} catch (Exception e) {
LOGGER.error(e.toString(),e);
}
return null;
}
@KafkaListener(id = "id-2", containerFactory = "requestReplyListenerContainerFactory",
topicPartitions = { @TopicPartition(topic = "${kafka.topic.acct-info.request}", partitions = "2" )})
public Message<?> listenPartition2(InGetAccountInfo accountInfo, @Header(KafkaHeaders.REPLY_TOPIC) byte[] replyTo,
@Header(KafkaHeaders.CORRELATION_ID) byte[] correlation, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int id) {
try {
LOGGER.info("Received request for partition id = " + id);
AccountInquiryDto accountInfoDto = getAccountInquiryDto(accountInfo);
return MessageBuilder.withPayload(accountInfoDto)
.setHeader(KafkaHeaders.TOPIC, replyTo)
.setHeader(KafkaHeaders.RECEIVED_PARTITION_ID, id)
.setHeader(KafkaHeaders.CORRELATION_ID, correlation)
.build();
} catch (Exception e) {
LOGGER.error(e.toString(),e);
}
return null;
}
We don't need a separate Kafka listener for each partition; we just need one listener.
If you run a single pod, messages from all three partitions will be consumed by that single pod.
If you run more than one pod, the partitions will be distributed across the pods.
We can run as many pods as there are partitions.
All pods must use the same consumer group name.
This is all we need:
@KafkaListener(topics = "${kafka.topic.acct-info.request}")
public void receive(ConsumerRecord<String, String> record)
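A slightly fuller sketch of that idea (the groupId value is an illustrative name, not taken from the original answer): every pod runs the same listener with the same group, and the broker assigns each pod its share of the partitions.
// Sketch: one listener per application instance, all instances in one consumer group.
@KafkaListener(topics = "${kafka.topic.acct-info.request}", groupId = "acct-info-consumers")
public void receive(ConsumerRecord<String, String> record) {
    LOGGER.info("partition={} offset={} key={}", record.partition(), record.offset(), record.key());
    // process record.value() here; with 3 partitions and 3 pods, each pod ends up with one partition
}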

I can't send messages using the Google Pub/Sub emulator in Spring Boot

I'm trying to send a push message using the Pub/Sub emulator. I'm using Spring Boot too; this is my configuration:
Dependency:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-gcp-starter-pubsub</artifactId>
</dependency>
My bean:
@Configuration
@AutoConfigureBefore(value = GcpPubSubAutoConfiguration.class)
@EnableConfigurationProperties(value = GcpPubSubProperties.class)
public class EmulatorPubSubConfiguration {
@Value("${spring.gcp.pubsub.projectid}")
private String projectId;
@Value("${spring.gcp.pubsub.subscriptorid}")
private String subscriptorId;
@Value("${spring.gcp.pubsub.topicid}")
private String topicId;
@Bean
public Publisher pubsubEmulator() throws IOException {
String hostport = System.getenv("PUBSUB_EMULATOR_HOST");
ManagedChannel channel = ManagedChannelBuilder.forTarget(hostport).usePlaintext().build();
try {
TransportChannelProvider channelProvider =
FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
CredentialsProvider credentialsProvider = NoCredentialsProvider.create();
// Set the channel and credentials provider when creating a `TopicAdminClient`.
// Similarly for SubscriptionAdminClient
TopicAdminClient topicClient =
TopicAdminClient.create(
TopicAdminSettings.newBuilder()
.setTransportChannelProvider(channelProvider)
.setCredentialsProvider(credentialsProvider)
.build());
ProjectTopicName topicName = ProjectTopicName.of(projectId, topicId);
// Set the channel and credentials provider when creating a `Publisher`.
// Similarly for Subscriber
return Publisher.newBuilder(topicName)
.setChannelProvider(channelProvider)
.setCredentialsProvider(credentialsProvider)
.build();
} finally {
channel.shutdown();
}
}
}
Of course, I have set the PUBSUB_EMULATOR_HOST environment variable to localhost:8085, where the emulator is running.
I created a REST controller for testing.
To send a push message:
@Autowired
private Publisher pubsubPublisher;
@PostMapping("/send1")
public String publishMessage(@RequestParam("message") String message) throws InterruptedException, IOException {
Publisher pubsubPublisher = this.getPublisher();
ByteString data = ByteString.copyFromUtf8(message);
PubsubMessage pubsubMessage = PubsubMessage.newBuilder().setData(data).build();
ApiFuture<String> future = pubsubPublisher.publish(pubsubMessage);
//pubsubPublisher.publishAllOutstanding();
try {
// Add an asynchronous callback to handle success / failure
ApiFutures.addCallback(future,
new ApiFutureCallback<String>() {
@Override
public void onFailure(Throwable throwable) {
if (throwable instanceof ApiException) {
ApiException apiException = ((ApiException) throwable);
// details on the API exception
System.out.println(apiException.getStatusCode().getCode());
System.out.println(apiException.isRetryable());
}
System.out.println("Error publishing message : " + message);
System.out.println("Error publishing error : " + throwable.getMessage());
System.out.println("Error publishing cause : " + throwable.getCause());
}
@Override
public void onSuccess(String messageId) {
// Once published, returns server-assigned message ids (unique within the topic)
System.out.println(messageId);
}
},
MoreExecutors.directExecutor());
}
finally {
if (pubsubPublisher != null) {
// When finished with the publisher, shutdown to free up resources.
pubsubPublisher.shutdown();
pubsubPublisher.awaitTermination(1, TimeUnit.MINUTES);
}
}
return "ok";
}
To receive the push message:
@PostMapping("/pushtest")
public String pushTest(@RequestBody CloudPubSubPushMessage request) {
System.out.println( "------> message received: " + decode(request.getMessage().getData()) );
return request.toString();
}
I have created my topic and subscription in the emulator, I followed this tutorial:
https://cloud.google.com/pubsub/docs/emulator
I set the "/pushtest" endpoint to receive push messages from the emulator, with this command:
python subscriber.py PUBSUB_PROJECT_ID create-push TOPIC_ID SUBSCRIPTION_ID PUSH_ENDPOINT
But when I run the test, it doesn't reach the "/pushtest" endpoint and I get this error:
Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask#265d5d05
[Not completed, task = java.util.concurrent.Executors$RunnableAdapter#a8c8be3
[Wrapped task = com.google.common.util.concurrent.TrustedListenableFutureTask#1a53c57c
[status=PENDING, info=[task=[running=[NOT STARTED YET], com.google.api.gax.rpc.AttemptCallable#3866e1d0]]]]]
rejected from java.util.concurrent.ScheduledThreadPoolExecutor#3f34809a
[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
To make sure the emulator is running correctly, I ran the test in Python with the following command:
python publisher.py PUBSUB_PROJECT_ID publish TOPIC_ID
And I get the messages correctly at the "/pushtest" endpoint.
I don't know why this happens; sorry for my unclear explanation.
Thanks for your help.
I found the problem.
Just comment out this line in the bean:
channel.shutdown();
HAHA, very simple.
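As a hedged sketch of why that helps (assuming the same emulator setup as above): the Publisher keeps using the channel after the bean method returns, so the channel has to stay open for the bean's lifetime and should only be shut down when the application stops, for example:
import com.google.api.gax.core.NoCredentialsProvider;
import com.google.api.gax.grpc.GrpcTransportChannel;
import com.google.api.gax.rpc.FixedTransportChannelProvider;
import com.google.api.gax.rpc.TransportChannelProvider;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.ProjectTopicName;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.io.IOException;
import javax.annotation.PreDestroy;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: keep the emulator channel alive as long as the Publisher bean lives.
@Configuration
public class EmulatorPubSubConfiguration {

    @Value("${spring.gcp.pubsub.projectid}")
    private String projectId;
    @Value("${spring.gcp.pubsub.topicid}")
    private String topicId;

    private ManagedChannel channel; // closed when the application shuts down, not in the bean method

    @Bean(destroyMethod = "shutdown") // Spring calls Publisher#shutdown() when the context closes
    public Publisher pubsubEmulator() throws IOException {
        String hostport = System.getenv("PUBSUB_EMULATOR_HOST");
        channel = ManagedChannelBuilder.forTarget(hostport).usePlaintext().build();
        TransportChannelProvider channelProvider =
                FixedTransportChannelProvider.create(GrpcTransportChannel.create(channel));
        return Publisher.newBuilder(ProjectTopicName.of(projectId, topicId))
                .setChannelProvider(channelProvider)
                .setCredentialsProvider(NoCredentialsProvider.create())
                .build();
    }

    @PreDestroy
    public void closeChannel() {
        if (channel != null) {
            channel.shutdown(); // safe now: the application is going down
        }
    }
}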

Jedis related: how could the sub thread started from the main thread have stopped?

I'll give some context information and hope you can get an idea of how this issue could happen.
Firstly, here is the main-thread code for the whole app.
public static void main(String args[]) throws Exception {
AppConfig appConfig = AppConfig.getInstance();
appConfig.initBean("applicationContext.xml");
SchedulerFactory factory=new StdSchedulerFactory();
Scheduler _scheduler=factory.getScheduler();
_scheduler.start();
Thread t = new Thread((Runnable) appConfig.getBean("consumeGpzjDataLoopTask"));
t.start();
}
The main method does just 3 things: it initializes beans the Spring way, starts the Quartz job scheduler, and starts the sub thread, which subscribes to one channel in Jedis and listens for messages continuously. Here is the code for the sub thread that starts subscribing:
@Override
public void run() {
Properties pros = new Properties();
Jedis sub = new Jedis(server, defaultPort, 0);
sub.subscribe(subscriber, channelId);
}
(I also captured the thread stack from when a message was received.)
But something weird happened in the production environment. The Quartz job scheduler keeps running properly, while the consumeGpzjDataLoopTask thread seems to have exited somehow! I really can't see how this could even happen. As you can see, the sub thread creates one Jedis instance with a timeout of 0, which means it blocks indefinitely, so I thought the sub thread should never stop unless something terrible happened in the main thread. But in production the message publisher published messages normally, the messages disappeared, and nothing related could be found in the log file, as if the subscriber thread were already dead. BTW, I never ran into this situation when testing on my local machine.
Could you help me figure out what might cause this issue? Comment if any extra info is needed for the analysis. Thanks.
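Not an answer from the original thread, just a defensive sketch (it assumes the same server, defaultPort, subscriber, and channelId fields and an SLF4J logger in the task class): looping around the blocking subscribe() makes a dropped connection or an unexpected return visible in the logs instead of silently ending the thread.
@Override
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        // Jedis is Closeable, so try-with-resources releases the socket on every exit path.
        try (Jedis sub = new Jedis(server, defaultPort, 0)) {
            sub.subscribe(subscriber, channelId);   // blocks until unsubscribed or the connection drops
            logger.warn("subscribe() returned; will reconnect and resubscribe");
        } catch (Exception e) {
            logger.error("subscriber thread caught an exception; will retry", e);
        }
        try {
            Thread.sleep(1000L);                    // small back-off before reconnecting
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();     // preserve the interrupt and exit the loop
        }
    }
}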
Edit: here's the code for the subscriber.
public class GpzjDataSubscriber extends JedisPubSub {
private static final Logger logger = LoggerFactory.getLogger(GpzjDataSubscriber.class);
private static final String META_INSERT_SQL = "insert into dbo.t_cl_tj_transaction_meta_attributes\n" +
"(transaction_id, meta_key, meta_value) VALUES (%d, '%s', '%s')";
private static final String GET_EVENT_ID_SQL = "select id from t_cl_tj_monthly_golden_events_dict where target = ?";
private static final String TRANSACTION_TB_NAME = "t_cl_tj_monthly_golden_stock_transactions";
private static Map<String, Object> insertParams = new HashMap<String, Object>();
private static Collection<String> metaSqlContainer = new ArrayList<String>();
@Autowired(required = false)
@Qualifier(value = "gpzjDao")
private GPZJDao gpzjDao;
public GpzjDataSubscriber() {}
public void onMessage(String channel, String message) {
consumeTransactionMessage(message);
logger.info(String.format("gpzj data subscriber receives redis published message, channel %s, message %s", channel, message));
}
public void onSubscribe(String channel, int subscribedChannels) {
logger.info(String.format("gpzj data subscriber subscribes redis channel success, channel %s, subscribedChannels %d",
channel, subscribedChannels));
}
@Transactional(isolation = Isolation.READ_COMMITTED)
private void consumeTransactionMessage(String msg) {
final GpzjDataTransactionOrm jsonOrm = JSON.parseObject(msg, GpzjDataTransactionOrm.class);
if (jsonOrm != null) {
Map<String, String> extendedAttrs = (jsonOrm.getAttr() == null || jsonOrm.getAttr().isEmpty()) ? null : JSON.parseObject(jsonOrm.getAttr(), HashMap.class);
SimpleJdbcInsert insertActor = gpzjDao.getInsertActor(TRANSACTION_TB_NAME);
initInsertParams(jsonOrm);
Long transactionId = insertActor.executeAndReturnKey(insertParams).longValue();
if (extendedAttrs == null || extendedAttrs.isEmpty()) {
return;
}
metaSqlContainer.clear();
for (Map.Entry e: extendedAttrs.entrySet()) {
metaSqlContainer.add(String.format(META_INSERT_SQL, transactionId.intValue(), e.getKey(), e.getValue()));
}
int[] insertMetaResult = gpzjDao.batchUpdate(metaSqlContainer.toArray(new String[0]));
}
}
private void initInsertParams(GpzjDataTransactionOrm orm) {
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Integer eventId = gpzjDao.queryForInt(GET_EVENT_ID_SQL, orm.getTarget());
insertParams.clear();
insertParams.put("khid", orm.getKhid());
insertParams.put("attr", orm.getAttr());
insertParams.put("event_id", eventId);
insertParams.put("user_agent", orm.getUser_agent());
insertParams.put("referrer", orm.getReferrer());
insertParams.put("page_url", orm.getPage_url());
insertParams.put("channel", orm.getChannel());
insertParams.put("os", orm.getOs());
insertParams.put("screen_width", orm.getScreen_width());
insertParams.put("screen_height", orm.getScreen_height());
insertParams.put("note", orm.getNote());
insertParams.put("create_time", df.format(new Date()));
insertParams.put("already_handled", 0);
}
}
