Hazelcast Jet not allowing Tomcat to stop - java

I am using Hazelcast Jet for some aggregation and grouping, but after being idle for some time, Tomcat will not stop when I try to shut it down and I have to restart my PC. Below is the error I am getting. Can anyone guide me as to what exactly this error means and how to shut it down gracefully?
Sending multicast datagram failed. Exception message saying the operation is not permitted
usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.
java.net.NoRouteToHostException: No route to host: Datagram send failed
at java.net.TwoStacksPlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at com.hazelcast.internal.cluster.impl.MulticastService.send(MulticastService.java:291)
at com.hazelcast.internal.cluster.impl.MulticastJoiner.searchForOtherClusters(MulticastJoiner.java:113)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.searchForOtherClusters(SplitBrainHandler.java:75)
at com.hazelcast.internal.cluster.impl.SplitBrainHandler.run(SplitBrainHandler.java:42)
at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
The code base is quite large, but I have tried to show a sample; it may not work as-is since it is just a glimpse of the code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

import com.hazelcast.collection.IList;
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.config.JetConfig;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.BatchSource;
import com.hazelcast.jet.pipeline.BatchStage;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;
import com.hazelcast.map.IMap;

class Abc {

    // Fields used by the pipelines below; uid is the name of the target IMap
    private final JetInstance jetInstance;
    private final String uid = UUID.randomUUID().toString();

    Abc() {
        // Create the Jet instance
        JetConfig jetConfig = new JetConfig();
        jetConfig.getHazelcastConfig().setProperty("hazelcast.logging.type", "log4j");
        jetConfig.getInstanceConfig().setCooperativeThreadCount(5);
        jetConfig.configureHazelcast(c -> {
            c.getNetworkConfig().setReuseAddress(true);
            c.setClusterName("DATA" + UUID.randomUUID().toString());
            c.getNetworkConfig().setPort(9093);
            c.getNetworkConfig().setPublicAddress("localhost");
            c.getNetworkConfig().setPortAutoIncrement(true);
        });
        jetInstance = Jet.newJetInstance(jetConfig);
    }

    public Pipeline createPipeline() {
        return Pipeline.create();
    }

    // Submit the pipeline to Jet as a job and wait for it to complete
    public void joinPipeToJet(Pipeline pl, String name) {
        JobConfig j = new JobConfig();
        //j.setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE);
        j.setName(name);
        jetInstance.newJob(pl, j).join();
    }

    public void readJsonFile(final Map<String, Object> data) {
        // Random id for the job so the IMaps of two jobs can be kept apart
        String jobid = UUID.randomUUID().toString();
        try {
            Pipeline pl = createPipeline();
            UUID idOne = UUID.randomUUID();
            final IMap<Object, Object> abc = jetInstance.getMap(idOne.toString());
            abc.putAll(data);
            // Read the data from the staging map and write it to the target map
            final BatchSource<Map.Entry<Object, Object>> batchSource = Sources.map(abc);
            pl.readFrom(batchSource)
              .writeTo(Sinks.map(this.uid));
            joinPipeToJet(pl, jobid);
            abc.destroy();
        } catch (Exception e) {
            // On failure, cancel the job if it was submitted
            Job j1 = jetInstance.getJob(jobid);
            if (j1 != null) {
                j1.cancel();
            }
        } finally {
            Job j1 = jetInstance.getJob(jobid);
            if (j1 != null) {
                j1.cancel();
            }
        }
    }

    // Process to manipulate the data and return it, writing a BatchStage to an IList
    public Map<String, Object> runProcess(final Pipeline pl) {
        String jobid = UUID.randomUUID().toString();
        UUID idOne = UUID.randomUUID();
        BatchStage<Object> bd1 = null; // get the data by calling a method (elided in this glimpse)
        bd1.writeTo(Sinks.list(idOne.toString()));
        joinPipeToJet(pl, jobid);
        IList<Object> abc = jetInstance.getList(idOne.toString());
        List<Object> result = new ArrayList<>(abc);
        final Map<String, Object> finalresult = new HashMap<String, Object>();
        finalresult.put("datas", result.get(0));
        abc.destroy();
        return finalresult;
    }

    public static void main(String... args) {
        Abc app = new Abc();
        Map<String, Object> p = new HashMap<String, Object>();
        p.put("someKey", "Some Data"); // placeholder for the real data
        app.readJsonFile(p);
        Pipeline pl = app.createPipeline();
        app.runProcess(pl);
    }
}
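On the graceful-shutdown part of the question: a minimal sketch, assuming the Jet member is started inside the web application, is to shut it down when Tomcat undeploys the app so its non-daemon threads do not keep the JVM alive (the listener class name is illustrative):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import com.hazelcast.jet.Jet;

@WebListener
public class JetShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stops every Jet (and underlying Hazelcast) instance started in this JVM,
        // so their non-daemon threads do not prevent Tomcat from stopping.
        Jet.shutdownAll();
    }
}

Alternatively, keeping a reference to the JetInstance and calling jetInstance.shutdown() in the same place works for a single member.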


Unable to insert data into db via an Observable

I have the following Observable where I expect some DB insertions to occur upon subscribing to it.
But nothing happens: no DB inserts, and no errors either.
However, if I directly subscribe to the method that does the DB calls, the DB insert occurs as expected.
How can I fix this so that subscribing to the Observable below performs the DB insert?
Please advise. Thanks.
This is the Observable where no DB insert occurs and no errors are raised. I want to change it so that the DB insertion occurs when I subscribe to this Observable.
public Observable<KafkaConsumerRecord<String, RequestObj>> apply(KafkaConsumerRecords<String, RequestObj> records) {
Observable.from(records.getDelegate().records().records("TOPIC_NAME"))
.buffer(2)
.map(this::convertToEventRequest)
.doOnNext(this::handleEventInsertions)
.doOnSubscribe(() -> System.out.println("Subscribed!"))
.subscribe(); // purposely subscribing here itself to test
return null; // even if I return this observable and subscribe at the caller, same outcome.
}
Just to test that the query works, I directly subscribed to the method that does the insertion (in debug mode), and it works as expected:
client.rxQueryWithParams(query, new JsonArray(params)).subscribe() // works
For reference, here is what happens inside the convertToEventRequest and handleEventInsertions methods:
private Map<String, List<?>> convertToEventRequest(Object records) {
List<ConsumerRecord<String, RequestObj>> consumerRecords = (List<ConsumerRecord<String, RequestObj>>) records;
List<AddEventRequest> addEventRequests = new ArrayList<>();
List<UpdateEventRequest> updateEventRequests = new ArrayList<>();
consumerRecords.forEach(record -> {
String eventType = new String(record.headers().headers("type").iterator().next().value(), StandardCharsets.UTF_8);
if("add".equals(eventType)) {
AddEventRequest request = AddEventRequest.builder()
.count(Integer.parseInt(new String(record.headers().headers("count").iterator().next().value(), StandardCharsets.UTF_8)))
.data(record.value())
.build();
addEventRequests.add(request);
} else {
UpdateEventRequest request = UpdateEventRequest.builder()
.id(new String(record.headers().headers("id").iterator().next().value(), StandardCharsets.UTF_8))
.status(Integer.parseInt(new String(record.headers().headers("status").iterator().next().value(), StandardCharsets.UTF_8)))
.build();
updateEventRequests.add(request);
}
});
return new HashMap<String, List<?>>() {{
put("add", addEventRequests);
put("update", updateEventRequests);
}};
}
private void handleEventInsertions(Object eventObject) {
Map<String, List<?>> eventMap = (Map<String, List<?>>) eventObject;
List<AddEventRequest> addEventRequests = (List<AddEventRequest>) eventMap.get("add");
List<UpdateEventRequest> updateEventRequests = (List<UpdateEventRequest>) eventMap.get("update");
if(addEventRequests != null && !addEventRequests.isEmpty()) {
insertAddEvents(addEventRequests);
}
if(updateEventRequests != null && !updateEventRequests.isEmpty()) {
insertUpdateEvents(updateEventRequests);
}
}
private Single<ResultSet> insertAddEvents(List<AddEventRequest> requests) {
AddEventRequest request = requests.get(0);
List<Object> params = Arrays.asList(request.getCount(), request.getData());
String query = "INSERT INTO mytable(count, data, creat_ts) " +
"VALUES (?, ?, current_timestamp)";
return client.rxQueryWithParams(query, new JsonArray(params));
}
private Single<ResultSet> insertUpdateEvents(List<UpdateEventRequest> requests) {
UpdateEventRequest request = requests.get(0);
return client.rxQueryWithParams(
"UPDATE mytable SET status=?, creat_ts=current_timestamp WHERE id=?",
new JsonArray(Arrays.asList(request.getStatus(), request.getId())));
}
Can you try to wrap it into Observable.defer?
Observable.defer(() -> Observable.from(records.getDelegate().records().records("TOPIC_NAME"))...
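A fuller sketch of that suggestion, reusing the chain from the question (RxJava 1.x assumed). Note the element type of the returned Observable becomes the map produced by convertToEventRequest, and the caller is expected to subscribe instead of apply subscribing internally:

public Observable<Map<String, List<?>>> apply(KafkaConsumerRecords<String, RequestObj> records) {
    // defer builds the chain lazily, only when a subscriber actually arrives
    return Observable.defer(() ->
            Observable.from(records.getDelegate().records().records("TOPIC_NAME"))
                    .buffer(2)
                    .map(this::convertToEventRequest)
                    .doOnNext(this::handleEventInsertions)
                    .doOnSubscribe(() -> System.out.println("Subscribed!")));
}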

OpenSubtitles API Response

I can communicate with the API, but I expected to get a list of subtitles in the Object. Here is my code:
public static void makerequest(){
Thread thread = new Thread() {
@Override
public void run() {
try {
XMLRPCClient client = new XMLRPCClient(new URL("https://api.opensubtitles.org/xml-rpc"));
HashMap ed = (HashMap<Object,String>) client.call("LogIn",username,password,"en",useragent);
String Token = (String) ed.get("token");
Map<String, String> videoProperties = new HashMap<>();
videoProperties.put("sublanguageid", "en");
videoProperties.put("imdbid", "528809");
Object[] videoParams = {videoProperties};
Object[] params = {Token, videoParams};
HashMap test2 = (HashMap<Object,String>) client.call("SearchSubtitles",params);
Object[] d = (Object[]) test2.get("data");
Log.d("diditworkstring", String.valueOf(d));
} catch (Exception ex) {
// Any other exception
Log.d("diditworkexception", String.valueOf(ex));
}
}
};
thread.start();
}
In my log I get the following:
Log: {seconds=0.188, data=[Ljava.lang.Object;@2ec1b40, status=200 OK}
I thought I would see a list of subtitle information. All I see in the response is (data=[Ljava.lang.Object;@2ec1b40). Is there something in that Object?
Below is the code that ultimately worked. I don't know the proper terminology, but here is my best shot at explaining what I was doing wrong: I was trying to look at the Object directly as a String. After viewing it with Arrays.asList() I was able to see the data. Then I cast each item in the list to a Map. After that I was able to get/change anything my heart desired.
Hope this helps someone some day :)
Thread thread = new Thread() {
@Override
public void run() {
try {
// Setup XMLRPC Client
XMLRPCClient client = new XMLRPCClient(new URL("https://api.opensubtitles.org/xml-rpc"));
HashMap ed = (HashMap<Object,String>) client.call("LogIn",username,password,"en",useragent);
// separate my Token from the reply
String Token = (String) ed.get("token");
// setup Parameters for next call to search for subs
Map<String, String> videoProperties = new HashMap<>();
videoProperties.put("sublanguageid", "en");
videoProperties.put("query", "blade 2");
Object[] videoParams = {videoProperties};
Object[] params = {Token, videoParams};
// Make next call include method and Parameters
HashMap test2 = (HashMap<String, Object>) client.call("SearchSubtitles", params);
// select data key from test2
Object[] d = (Object[]) test2.get("data");
// change d Object to List
List ee = Arrays.asList(d);
// Grab Map from list
Map xx = (Map) ee.get(1);
Log.d("diditworkstring", String.valueOf(xx.get("ZipDownloadLink")));
} catch (Exception ex) {
// Any other exception
Log.d("diditworkexception", String.valueOf(ex));
}
}
};

How to track progress status of async tasks running in multiple servers

I have multiple async tasks running in Spring Boot. These tasks read an Excel file and insert all of that data into the database.
The task is started when a request is made from the front-end. The front-end then periodically polls for the progress status of the task.
I need to track the progress of each of these tasks and know when they are completed.
This is the controller file that takes in requests for tasks and for polling their progress status:
public class TaskController {

    @Autowired
    private TaskAsyncService taskAsyncService;

    @RequestMapping(method = RequestMethod.POST, value = "/uploadExcel")
    public ResponseEntity<?> uploadExcel(String excelFilePath) {
        String taskId = UUID.randomUUID().toString();
        taskAsyncService.AsyncManager(taskId, excelFilePath);
        HashMap<String, String> responseMap = new HashMap<>();
        responseMap.put("taskId", taskId);
        return new ResponseEntity<>(responseMap, HttpStatus.ACCEPTED);
    }

    // This will be polled to get the progress of the tasks being executed
    @RequestMapping(method = RequestMethod.GET, value = "/tasks/progress/{id}")
    public ResponseEntity<?> getTaskProgress(@PathVariable("id") String taskId) {
        HashMap<String, String> map = new HashMap<>();
        if (!taskAsyncService.containsTaskEntry(taskId)) {
            map.put("Error", "TaskId does not exist");
            return new ResponseEntity<>(map, HttpStatus.BAD_REQUEST);
        }
        boolean taskProgress = taskAsyncService.getTaskProgress(taskId);
        if (taskProgress) {
            map.put("message", "Task complete");
            taskAsyncService.removeTaskProgressEntry(taskId);
            return new ResponseEntity<>(map, HttpStatus.OK);
        }
        // Otherwise the task is still running
        map.put("progressStatus", "Task running");
        return new ResponseEntity<>(map, HttpStatus.PARTIAL_CONTENT);
    }
}
This is the code that executes the async tasks.
public class TaskAsyncService {

    // Initialize the map so the first put/get does not throw a NullPointerException
    private final AtomicReference<ConcurrentHashMap<String, Boolean>> isTaskCompleteMap =
            new AtomicReference<>(new ConcurrentHashMap<String, Boolean>());

    protected boolean containsTaskEntry(String taskId) {
        return isTaskCompleteMap.get().get(taskId) != null;
    }

    protected boolean getTaskProgress(String taskId) {
        return isTaskCompleteMap.get().get(taskId);
    }

    protected void removeTaskProgressEntry(String taskId) {
        if (isTaskCompleteMap.get() != null) {
            isTaskCompleteMap.get().remove(taskId);
        }
    }

    @Async
    public CompletableFuture<?> AsyncManager(String taskId, String excelFilePath) {
        HashMap<String, String> map = new HashMap<>();
        // Add a new entry to isTaskCompleteMap
        isTaskCompleteMap.get().put(taskId, false);
        // Insert the excel rows into the database
        // Task completed, set the value to true
        isTaskCompleteMap.get().put(taskId, true);
        map.put("Success", "Task completed");
        return CompletableFuture.completedFuture(map);
    }
}
I am using AWS EC2 with a load balancer. Therefore, sometimes a polling request gets handled by a newly spawned server, which cannot access the isTaskCompleteMap and returns saying that "TaskId does not exist".
How do I track the status of the tasks in this case? I understand I need a distributed data structure, but I don't understand what kind or how to implement it.
You can use Hazelcast or similar distributed solutions (Redis, etc.).
maps - https://docs.hazelcast.org/docs/3.0/manual/html/ch02.html#Map
Use a distributed map from Hazelcast instead of the ConcurrentHashMap.
A get from such a map will return the task status even if the task is being processed on another pod (server).
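A minimal sketch of that idea, assuming an embedded Hazelcast member on each server and an illustrative map name "task-progress" (imports shown for Hazelcast 4.x; in 3.x, IMap lives in com.hazelcast.core):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class TaskProgressStore {
    // Every server joins the same Hazelcast cluster, so all members see the same map
    private final HazelcastInstance hz = Hazelcast.newHazelcastInstance();
    private final IMap<String, Boolean> progress = hz.getMap("task-progress");

    public void markStarted(String taskId)     { progress.put(taskId, Boolean.FALSE); }
    public void markComplete(String taskId)    { progress.put(taskId, Boolean.TRUE); }
    public boolean containsTask(String taskId) { return progress.containsKey(taskId); }
    public boolean isComplete(String taskId)   { return Boolean.TRUE.equals(progress.get(taskId)); }
    public void removeTask(String taskId)      { progress.remove(taskId); }
}

TaskAsyncService can then delegate to such a store instead of the AtomicReference<ConcurrentHashMap>, so any server behind the load balancer sees the same task status.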

How to get the processing kafka topic name dynamically in Flink Kafka Consumer?

Currently, I have a Flink cluster that consumes Kafka topics matching a pattern; this way, we don't need to maintain a hard-coded Kafka topic list.
import java.util.regex.Pattern;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
...
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");
...
FlinkKafkaConsumer010<KafkaMessage> kafkaConsumer = new FlinkKafkaConsumer010<>(
topicPattern, deserializerClazz.newInstance(), kafkaConsumerProps);
DataStream<KafkaMessage> input = env.addSource(kafkaConsumer);
I just want to know: using the above approach, how can I get the actual Kafka topic name during processing?
Thanks.
--Update--
The reason I need the topic information is that we need the topic name as a parameter in the upcoming Flink sink part.
You can implement your own custom KafkaDeserializationSchema, like this:
public class CustomKafkaDeserializationSchema implements KafkaDeserializationSchema<Tuple2<String, String>> {
@Override
public boolean isEndOfStream(Tuple2<String, String> nextElement) {
return false;
}
@Override
public Tuple2<String, String> deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
return new Tuple2<>(record.topic(), new String(record.value(), "UTF-8"));
}
@Override
public TypeInformation<Tuple2<String, String>> getProducedType() {
return new TupleTypeInfo<>(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
}
}
With the custom KafkaDeserializationSchema, you can create a DataStream whose elements contain the topic info. In my demo case the element type is Tuple2<String, String>, so you can access the topic name via Tuple2#f0.
FlinkKafkaConsumer010<Tuple2<String, String>> kafkaConsumer = new FlinkKafkaConsumer010<>(
        topicPattern, new CustomKafkaDeserializationSchema(), kafkaConsumerProps);
DataStream<Tuple2<String, String>> input = env.addSource(kafkaConsumer);
input.process(new ProcessFunction<Tuple2<String,String>, String>() {
@Override
public void processElement(Tuple2<String, String> value, Context ctx, Collector<String> out) throws Exception {
String topicName = value.f0;
// your processing logic here.
out.collect(value.f1);
}
});
There are two ways to do that.
Option 1:
You can use the kafka-clients library to access the Kafka metadata and get the topic list. Add the Maven dependency or its equivalent.
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.3.0</version>
</dependency>
You can fetch the topics from the Kafka cluster and filter them using the regex, as given below:
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");

Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("client.id", "java-admin-client");
try (AdminClient client = AdminClient.create(properties)) {
    ListTopicsOptions options = new ListTopicsOptions();
    options.listInternal(false);
    Collection<TopicListing> listings = client.listTopics(options).listings().get();
    List<String> allTopicsList = listings.stream().map(TopicListing::name)
            .collect(Collectors.toList());
    List<String> matchedTopics = allTopicsList.stream()
            .filter(topicPattern.asPredicate())
            .collect(Collectors.toList());
} catch (Exception e) {
    e.printStackTrace();
}
Once you have the matchedTopics list, you can pass it to FlinkKafkaConsumer.
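For example (a sketch only, assuming a plain String schema and that the properties object above also carries the consumer settings Flink needs, such as group.id):

FlinkKafkaConsumer011<String> consumer =
        new FlinkKafkaConsumer011<>(matchedTopics, new SimpleStringSchema(), properties);
DataStream<String> stream = env.addSource(consumer);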
Option 2:
FlinkKafkaConsumer011 in Flink release 1.8 supports dynamic topic and partition discovery based on a pattern. Below is an example:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");
FlinkKafkaConsumer011<String> myConsumer = new FlinkKafkaConsumer011<>(
topicPattern ,
new SimpleStringSchema(),
properties);
Link : https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/connectors/kafka.html#kafka-consumers-topic-and-partition-discovery
In your case, Option 2 suits best.
Since you want to access the topic metadata as part of KafkaMessage, you need to implement the KafkaDeserializationSchema interface, as given below:
public class CustomKafkaDeserializationSchema implements KafkaDeserializationSchema<KafkaMessage> {

    /**
     * Deserializes the Kafka record.
     *
     * @param record the consumer record; gives access to the key, value, topic, partition and offset.
     * @return The deserialized message as an object (null if the message cannot be deserialized).
     */
    @Override
    public KafkaMessage deserialize(ConsumerRecord<byte[], byte[]> record) throws IOException {
        // You can access record.key(), record.value(), record.topic(), record.partition(), record.offset() to get topic information.
        KafkaMessage kafkaMessage = new KafkaMessage();
        kafkaMessage.setTopic(record.topic());
        // Build your Kafka message here and assign the remaining values like above.
        return kafkaMessage;
    }

    @Override
    public boolean isEndOfStream(KafkaMessage nextElement) {
        return false;
    }

    @Override
    public TypeInformation<KafkaMessage> getProducedType() {
        // Required by the KafkaDeserializationSchema interface
        return TypeInformation.of(KafkaMessage.class);
    }
}
And then call:
FlinkKafkaConsumer010<KafkaMessage> kafkaConsumer = new FlinkKafkaConsumer010<>(
        topicPattern, new CustomKafkaDeserializationSchema(), kafkaConsumerProps);

Jedis related: how could the sub thread started from the main thread have stopped?

I'll give some context information and hope you can get an idea of how this issue could happen.
Firstly, the main-thread code for the whole app is attached here.
public static void main(String args[]) throws Exception {
AppConfig appConfig = AppConfig.getInstance();
appConfig.initBean("applicationContext.xml");
SchedulerFactory factory=new StdSchedulerFactory();
Scheduler _scheduler=factory.getScheduler();
_scheduler.start();
Thread t = new Thread((Runnable) appConfig.getBean("consumeGpzjDataLoopTask"));
t.start();
}
The main method does just three things: it initializes the beans the Spring way, starts the Quartz job thread, and starts the sub thread which subscribes to one channel in Jedis and listens for messages continuously. Next, here is the code for the sub thread that starts subscribing:
@Override
public void run() {
Properties pros = new Properties();
Jedis sub = new Jedis(server, defaultPort, 0);
sub.subscribe(subscriber, channelId);
}
and the thread stack when a message is received:
But something weird happened in the production environment. The Quartz job scheduler keeps running properly, while the consumeGpzjDataLoopTask seems to have exited somehow! I really can't see how this issue could even happen. As you can see, the sub thread creates one Jedis instance with a timeout of 0, which stands for blocking infinitely, so I thought the sub thread should not stop unless some terrible issue occurred in the main thread. But in the production environment the message publisher published messages normally and the messages disappeared, with nothing related to be found in the log file, as if the subscriber thread were already dead. BTW, I never ran into this situation when testing on my local machine.
Could you help me with the possible causes of this issue? Comment if any extra info is needed for the analysis. Thanks.
Edited: here's the code for the subscriber.
public class GpzjDataSubscriber extends JedisPubSub {
private static final Logger logger = LoggerFactory.getLogger(GpzjDataSubscriber.class);
private static final String META_INSERT_SQL = "insert into dbo.t_cl_tj_transaction_meta_attributes\n" +
"(transaction_id, meta_key, meta_value) VALUES (%d, '%s', '%s')";
private static final String GET_EVENT_ID_SQL = "select id from t_cl_tj_monthly_golden_events_dict where target = ?";
private static final String TRANSACTION_TB_NAME = "t_cl_tj_monthly_golden_stock_transactions";
private static Map<String, Object> insertParams = new HashMap<String, Object>();
private static Collection<String> metaSqlContainer = new ArrayList<String>();
@Autowired(required = false)
@Qualifier(value = "gpzjDao")
private GPZJDao gpzjDao;
public GpzjDataSubscriber() {}
public void onMessage(String channel, String message) {
consumeTransactionMessage(message);
logger.info(String.format("gpzj data subscriber receives redis published message, channel %s, message %s", channel, message));
}
public void onSubscribe(String channel, int subscribedChannels) {
logger.info(String.format("gpzj data subscriber subscribes redis channel success, channel %s, subscribedChannels %d",
channel, subscribedChannels));
}
@Transactional(isolation = Isolation.READ_COMMITTED)
private void consumeTransactionMessage(String msg) {
final GpzjDataTransactionOrm jsonOrm = JSON.parseObject(msg, GpzjDataTransactionOrm.class);
Map<String, String> extendedAttrs = (jsonOrm.getAttr() == null || jsonOrm.getAttr().isEmpty())? null : JSON.parseObject(jsonOrm.getAttr(), HashMap.class);
if (jsonOrm != null) {
SimpleJdbcInsert insertActor = gpzjDao.getInsertActor(TRANSACTION_TB_NAME);
initInsertParams(jsonOrm);
Long transactionId = insertActor.executeAndReturnKey(insertParams).longValue();
if (extendedAttrs == null || extendedAttrs.isEmpty()) {
return;
}
metaSqlContainer.clear();
for (Map.Entry e: extendedAttrs.entrySet()) {
metaSqlContainer.add(String.format(META_INSERT_SQL, transactionId.intValue(), e.getKey(), e.getValue()));
}
int[] insertMetaResult = gpzjDao.batchUpdate(metaSqlContainer.toArray(new String[0]));
}
}
private void initInsertParams(GpzjDataTransactionOrm orm) {
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Integer eventId = gpzjDao.queryForInt(GET_EVENT_ID_SQL, orm.getTarget());
insertParams.clear();
insertParams.put("khid", orm.getKhid());
insertParams.put("attr", orm.getAttr());
insertParams.put("event_id", eventId);
insertParams.put("user_agent", orm.getUser_agent());
insertParams.put("referrer", orm.getReferrer());
insertParams.put("page_url", orm.getPage_url());
insertParams.put("channel", orm.getChannel());
insertParams.put("os", orm.getOs());
insertParams.put("screen_width", orm.getScreen_width());
insertParams.put("screen_height", orm.getScreen_height());
insertParams.put("note", orm.getNote());
insertParams.put("create_time", df.format(new Date()));
insertParams.put("already_handled", 0);
}
}
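A sketch of one way to harden the run() method shown above: Jedis.subscribe blocks until the connection is closed or an error is thrown, so wrapping it in a retry loop logs a dropped connection and resubscribes instead of letting the thread end silently (server, defaultPort, subscriber and channelId are the fields from the question; the logger and the 5-second back-off are illustrative):

@Override
public void run() {
    // subscribe() returns or throws when the connection is closed (server-side timeout,
    // network failure, Redis restart), even with a client timeout of 0.
    while (!Thread.currentThread().isInterrupted()) {
        try (Jedis sub = new Jedis(server, defaultPort, 0)) {
            sub.subscribe(subscriber, channelId);   // blocks while subscribed
        } catch (Exception e) {
            logger.warn("Subscriber connection lost, retrying in 5 seconds", e);
            try {
                Thread.sleep(5000);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }
}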
