Solr Trigger Optimize And Check Progress From Java Code

From this topic there are two ways to trigger a Solr optimize from Java code: either send an HTTP request, or use the SolrJ API.
But how can I check its progress?
Say, an API which returns the progress of the optimize as a percentage,
or strings like RUNNING/COMPLETED/FAILED.
Is there such an API?

Yes, optimize() in the SolrJ API is a synchronous method. Here is what I used to monitor the optimization progress.
CloudSolrClient client = null;
try {
    client = new CloudSolrClient(zkClientUrl);
    client.setDefaultCollection(collectionName);
    m_logger.info("Explicit optimize of collection " + collectionName);
    long optimizeStart = System.currentTimeMillis();
    UpdateResponse optimizeResponse = client.optimize();
    for (Object object : optimizeResponse.getResponse()) {
        m_logger.info("Solr optimizeResponse: " + object.toString());
    }
    m_logger.info(String.format(
            "Elapsed Time (in ms) - %d, QTime (in ms) - %d",
            optimizeResponse.getElapsedTime(),
            optimizeResponse.getQTime()));
    m_logger.info(String.format(
            "Time spent on optimizing collection %s: %d seconds",
            collectionName,
            (System.currentTimeMillis() - optimizeStart) / 1000));
} catch (Exception e) {
    m_logger.error("Failed during explicit optimize on collection "
            + collectionName, e);
} finally {
    if (client != null) {
        try {
            client.close();
        } catch (IOException e) {
            throw new RuntimeException(
                    "Failed to close CloudSolrClient connection.", e);
        }
        client = null;
    }
}
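Since optimize() blocks until Solr finishes, you can derive a RUNNING/COMPLETED/FAILED status yourself by running the call on a worker thread and tracking its outcome. Below is a minimal, untested sketch using a CompletableFuture; the OptimizeMonitor class name is made up for illustration, and as far as I know SolrJ does not expose a percentage-complete figure for an optimize.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class OptimizeMonitor {

    public enum Status { RUNNING, COMPLETED, FAILED }

    private final AtomicReference<Status> status = new AtomicReference<>(Status.RUNNING);

    // Runs the blocking optimize() on a worker thread and records how it ended.
    public CompletableFuture<Void> optimizeAsync(CloudSolrClient client) {
        return CompletableFuture.runAsync(() -> {
            try {
                client.optimize(); // blocks until Solr finishes the forced merge
                status.set(Status.COMPLETED);
            } catch (Exception e) {
                status.set(Status.FAILED);
                throw new RuntimeException(e);
            }
        });
    }

    // Poll this from another thread (e.g. a status endpoint) while the optimize runs.
    public Status getStatus() {
        return status.get();
    }
}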

Related

JackMidi.eventWrite - time parameter

I'm using this library: https://github.com/jaudiolibs/jnajack
I created a simple project to reproduce my issue: https://github.com/sc3sc3/MidiJnaJackTest
I have a JackPort outputPort running, and it appears in QjackCtl under 'Output Ports'.
In QjackCtl this outputPort is connected to GMIDImonitor, to observe MIDI traffic.
I send MidiMessages to GMIDImonitor via the method below.
I can't figure out what value the time parameter should have.
When I set time = jackClient.getFrameTime(), the message does not arrive in GMIDImonitor.
When I set it to, for example, 300, one message is sent endlessly in a loop.
Any help? Thanks.
public void processMidiMessage(ShortMessage shortMessage) {
    System.out.println("processMidiMessage: " + shortMessage + ", on port: " + this.outputPort.getName());
    try {
        JackMidi.clearBuffer(this.outputPort);
    } catch (JackException e) {
        e.printStackTrace();
    }
    try {
        int time = 300;
        JackMidi.eventWrite(this.outputPort, time, shortMessage.getMessage(), shortMessage.getLength());
    } catch (JackException e) {
        e.printStackTrace();
    }
}
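If I read the JACK documentation right, the time argument of the underlying jack_midi_event_write is a frame offset within the current process cycle (so it must be less than the period size), not an absolute frame time, and MIDI port buffers are only meant to be cleared and written from inside the process callback. That would explain both symptoms: getFrameTime() is far out of range, and a buffer written once outside the callback and never cleared keeps being resent. A rough, untested sketch of the callback pattern is below; it assumes jnajack's JackProcessCallback interface (boolean process(JackClient client, int nframes)) and that you register it with setProcessCallback before activating the client.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.sound.midi.ShortMessage;

import org.jaudiolibs.jnajack.JackClient;
import org.jaudiolibs.jnajack.JackException;
import org.jaudiolibs.jnajack.JackMidi;
import org.jaudiolibs.jnajack.JackPort;
import org.jaudiolibs.jnajack.JackProcessCallback;

public class MidiSender implements JackProcessCallback {

    private final JackPort outputPort;
    // Messages queued from other threads; drained inside the JACK process callback.
    private final Queue<ShortMessage> pending = new ConcurrentLinkedQueue<>();

    public MidiSender(JackPort outputPort) {
        this.outputPort = outputPort;
    }

    // Call this from your application thread instead of writing to the port directly.
    public void send(ShortMessage message) {
        pending.add(message);
    }

    @Override
    public boolean process(JackClient client, int nframes) {
        try {
            JackMidi.clearBuffer(outputPort); // clear once per cycle
            ShortMessage msg;
            while ((msg = pending.poll()) != null) {
                // time = 0 places the event at the start of this cycle's buffer;
                // any value must be < nframes, never an absolute frame time.
                JackMidi.eventWrite(outputPort, 0, msg.getMessage(), msg.getLength());
            }
            return true;
        } catch (JackException e) {
            e.printStackTrace();
            return false;
        }
    }
}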

Changing default limit in Vaadin 8 lazy loading with grid

I have implemented lazy loading in Vaadin 8 with a grid.
My backend runs on AWS Lambda, which has a limit of 6 MB on the response object.
The lazy loading implementation passes the default limit (40) to the server, which makes my program crash with a "body too large" error.
I want to change the default limit of lazy loading in Vaadin.
Below is my code snippet:
grid.setDataProvider((sortorder, offset, limit) -> {
    try {
        return billingClient.getInvoiceListByCriteria(criteria, (long) offset, (long) limit).stream();
    } catch (Exception e) {
        logger.error("Exception while getInvoiceListByCriteria", e);
        return null;
    }
}, () -> {
    try {
        totalInvoices = billingClient.getCountInvoiceListByCriteria(criteria).longValue();
        Integer count = totalInvoices.intValue();
        if (count == 0)
            Notification.show("No Invoices found.", Notification.Type.HUMANIZED_MESSAGE);
        return count;
    } catch (Exception e) {
        logger.error("Error occurred while getting count calling getCountInvoiceListByCriteria", e);
        Notification.show("Error while getting count", Notification.Type.ERROR_MESSAGE);
        return null;
    }
});
That's strange that 40 rows are larger than 6 MB.
I've never tried it, but you can use grid.getDataCommunicator().setMinPushSize(size) to set the minimum number of items. It is initialized to 40, so I guess you can lower it to prevent your response from getting too large. But the "min" in the name suggests that other factors may also influence it, so you need to test it thoroughly.
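For example (untested, and the value 20 is just a guess at what fits under 6 MB), the call would go right after the grid is created:

// Lower the minimum number of rows fetched per round trip from the default 40.
grid.getDataCommunicator().setMinPushSize(20);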
The problem is resolved by manually changing the value of limit.
grid.setDataProvider((sortorder, offset, limit) -> {
    try {
        limit = 20; // override the default fetch size of 40
        return billingClient.getInvoiceListByCriteria(criteria, (long) offset, (long) limit).stream();
    } catch (Exception e) {
        logger.error("Exception while getInvoiceListByCriteria", e);
        return null;
    }
}, () -> {
    try {
        totalInvoices = billingClient.getCountInvoiceListByCriteria(criteria).longValue();
        Integer count = totalInvoices.intValue();
        if (count == 0)
            Notification.show("No Invoices found.", Notification.Type.HUMANIZED_MESSAGE);
        return count;
    } catch (Exception e) {
        logger.error("Error occurred while getting count calling getCountInvoiceListByCriteria", e);
        Notification.show("Error while getting count", Notification.Type.ERROR_MESSAGE);
        return null;
    }
});
The offset is adjusted according to the limit I have set.

java.lang.OutOfMemoryError | Reading and posting huge data

I have been scratching my head over this for quite a while now. I have a large CSV file with hundreds of billions of records.
I have a simple task at hand: create JSON out of this CSV file and post it to a server. I want to make this task as quick as possible. So far my code to read the CSV is as follows:
protected void readIdentityCsvDynamicFetch() {
    String csvFile = pathOfIdentities;
    CSVReader reader = null;
    PayloadEngine payloadEngine = new PayloadEngine();
    long counter = 0;
    int size;
    List<IdentityJmPojo> identityJmList = new ArrayList<IdentityJmPojo>();
    try {
        ExecutorService uploaderPoolService = Executors.newFixedThreadPool(3);
        long lineCount = lineCount(pathOfIdentities);
        logger.info("Line Count: " + lineCount);
        reader = new CSVReader(new BufferedReader(new FileReader(csvFile)), ',', '\'', OFFSET);
        String[] line;
        long startTime = System.currentTimeMillis();
        while ((line = reader.readNext()) != null) {
            // logger.info("Lines" + line[0] + line[1]);
            IdentityJmPojo identityJmPojo = new IdentityJmPojo();
            identityJmPojo.setIdentity(line[0]);
            identityJmPojo.setJM(line.length > 1 ? line[1] : (jsonValue /*!=null?"":jsonValue*/));
            identityJmList.add(identityJmPojo);
            size = identityJmList.size();
            switch (size) {
                case STEP:
                    counter = counter + STEP;
                    payloadEngine.prepareJson(identityJmList, uploaderPoolService, jsonKey);
                    identityJmList = new ArrayList<IdentityJmPojo>();
                    long stopTime = System.currentTimeMillis();
                    long elapsedTime = stopTime - startTime;
                    logger.info("=================== Time taken to read " + STEP + " records from CSV: " + elapsedTime
                            + " and total records read: " + counter + "===================");
            }
        }
        if (identityJmList.size() > 0) {
            logger.info("=================== Executing Last Loop - Payload Size: " + identityJmList.size() + " ================= ");
            payloadEngine.prepareJson(identityJmList, uploaderPoolService, jsonKey);
        }
        uploaderPoolService.shutdown();
    } catch (Throwable e) {
        e.printStackTrace();
        logger.error("CsvReader || readIdentityCsvDynamicFetch method ", e);
    } finally {
        try {
            if (reader != null)
                reader.close();
        } catch (IOException e) {
            e.printStackTrace();
            logger.error("CsvReader || readIdentityCsvDynamicFetch method ", e);
        }
    }
}
Now I use a thread-pool executor service; in its run() method I have an Apache HttpClient set up to post the JSON to the server. (I am using connection pooling and a keep-alive strategy, and I open and close the connection only once.)
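The SendPushNotification task itself isn't shown in the question; a hypothetical reconstruction of what such a task might look like with a pooled Apache HttpClient 4.x is sketched below. The endpoint URL, the shared static client, and the use of Jackson's ObjectMapper for serialization are all assumptions, not the original code.

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class SendPushNotification implements Runnable {

    // One pooled client shared by all upload tasks (hypothetical setup).
    private static final CloseableHttpClient HTTP_CLIENT = HttpClients.custom()
            .setConnectionManager(new PoolingHttpClientConnectionManager())
            .build();
    private static final ObjectMapper MAPPER = new ObjectMapper();

    private final ObjectNode payload;

    public SendPushNotification(ObjectNode payload) {
        this.payload = payload;
    }

    @Override
    public void run() {
        try {
            HttpPost post = new HttpPost("https://example.com/upload"); // placeholder URL
            post.setEntity(new StringEntity(
                    MAPPER.writeValueAsString(payload),
                    ContentType.APPLICATION_JSON));
            try (CloseableHttpResponse response = HTTP_CLIENT.execute(post)) {
                // Always consume the entity so the connection returns to the pool.
                EntityUtils.consume(response.getEntity());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}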
I create & post my JSON like below:
public void prepareJson(List<IdentityJmPojo> identities, ExecutorService notificationService, String key) {
    try {
        notificationService.submit(new SendPushNotification(prepareLowLevelJson(identities, key)));
        // prepareLowLevelJson(identities, key);
    } catch (Exception e) {
        e.printStackTrace();
        logger.error("PayloadEngine || readIdentityCsvDynamicFetch method ", e);
    }
}

private ObjectNode prepareLowLevelJson(List<IdentityJmPojo> identities, String key) {
    long startTime = System.currentTimeMillis();
    ObjectNode mainJacksonObject = JsonNodeFactory.instance.objectNode();
    ArrayNode dJacksonArray = JsonNodeFactory.instance.arrayNode();
    for (IdentityJmPojo identityJmPojo : identities) {
        ObjectNode dSingleObject = JsonNodeFactory.instance.objectNode();
        ObjectNode dProfileInnerObject = JsonNodeFactory.instance.objectNode();
        dSingleObject.put("identity", identityJmPojo.getIdentity());
        dSingleObject.put("ts", ts);
        dSingleObject.put("type", "profile");
        dProfileInnerObject.put(key, identityJmPojo.getJM());
        dSingleObject.set("profileData", dProfileInnerObject);
        dJacksonArray.add(dSingleObject);
    }
    mainJacksonObject.set("d", dJacksonArray);
    long stopTime = System.currentTimeMillis();
    long elapsedTime = stopTime - startTime;
    logger.info("=================== Time to create JSON: " + elapsedTime + " ===================");
    return mainJacksonObject;
}
Now comes the strange part. When I comment out the notification service:
// notificationService.submit(new SendPushNotification(prepareLowLevelJson(identities, key)));
everything works smoothly; I can read the CSV and prepare the JSON in under 29,000 ms.
But when the actual task is to be performed, it fails and I get an OutOfMemoryError. I think there is a design flaw here. How can I handle this huge amount of data quickly? Any tips will be greatly appreciated.
I think creating the JSON objects and the array inside the for loop is also taking a lot of my memory, but I don't seem to find an alternative to this.
Here is the stack trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.LinkedHashMap.createEntry(LinkedHashMap.java:442)
    at java.util.HashMap.addEntry(HashMap.java:884)
    at java.util.LinkedHashMap.addEntry(LinkedHashMap.java:427)
    at java.util.HashMap.put(HashMap.java:505)
    at com.fasterxml.jackson.databind.node.ObjectNode._put(ObjectNode.java:861)
    at com.fasterxml.jackson.databind.node.ObjectNode.put(ObjectNode.java:769)
    at uploader.PayloadEngine.prepareLowLevelJson(PayloadEngine.java:50)
    at uploader.PayloadEngine.prepareJson(PayloadEngine.java:24)
    at uploader.CsvReader.readIdentityCsvDynamicFetch(CsvReader.java:97)
    at uploader.Main.main(Main.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
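One thing worth noting about the code above: Executors.newFixedThreadPool(3) backs its three workers with an unbounded queue, so if the CSV reader outruns the uploaders, every queued SendPushNotification task (each holding a full ObjectNode batch) stays on the heap, which would produce exactly this kind of GC-overhead error. A rough sketch of adding back-pressure with a bounded queue follows, using only java.util.concurrent; the queue capacity of 10 is an arbitrary assumption you would tune.

// Drop-in replacement for Executors.newFixedThreadPool(3): same three workers,
// but the work queue is bounded and the submitting thread runs the task itself
// (CallerRunsPolicy) once the queue is full, instead of queueing batches forever.
ExecutorService uploaderPoolService = new ThreadPoolExecutor(
        3,                                     // core pool size
        3,                                     // maximum pool size
        0L, TimeUnit.MILLISECONDS,             // keep-alive for idle threads
        new ArrayBlockingQueue<Runnable>(10),  // bounded queue instead of an unbounded one
        new ThreadPoolExecutor.CallerRunsPolicy());

With this in place the reader thread simply performs the upload itself whenever the queue is full, so memory use stays bounded at roughly (threads + queue capacity) payloads in flight.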

J2ME - OutOfMemory in HTTP request

I have a method in a J2ME project which, after 2 days of working normally, collapses. The error presented is the following:
Contacting website...
java.lang.OutOfMemoryError
(stack trace incomplete)
The method in question is the one used to communicate with a website. It receives a string and a mode selector (i = 1 or something else) and proceeds with the request. Here's the code:
void enviarPost(int i, String comando) throws IOException, IllegalStateException, IllegalArgumentException, ATCommandFailedException {
    System.out.println("Contacting website...");
    if (i == 1) {
        url = "http://websitedummy.com.pl/index.php?IMEI=" + imeiX + "&IP=" + ipX;
    }
    //53543D303B44723D4E616F
    else {
        url2 = comando;
        url = "http://websitedummy.com.pl/index.php?data={\"IMEI\":\"" + imeiX + "\",\"TS\":\"20/04/13-08:31:44\",\"SER\":\"" + url2 + "\"}";
    }
    try {
        Thread.sleep(1000);
        connection = (HttpConnection) Connector.open(url);
        Thread.sleep(500);
        connection.setRequestMethod(HttpConnection.GET);
        Thread.sleep(500);
        connection.setRequestProperty("Content-Type", "text/plain");
        Thread.sleep(500);
        int con = 0;
        try {
            con = connection.getResponseCode();
        } catch (Exception e4) {
            System.out.println(e4);
        }
        if (con == HttpConnection.HTTP_OK) {
            System.out.println("Vamos");
            inputstream_ = connection.openInputStream();
            int ch;
            while ((ch = inputstream_.read()) != -1) {
                dataReceived.append((char) ch);
            }
            System.out.println("Atualizado.");
            acabouatualizar = 1;
            inputstream_.close();
            connection.close();
        } else {
            System.out.println("Error");
            // Connection not ok
        }
    } catch (Exception e) {
        System.out.println("EXCEÇÂO 1 - " + e);
    } finally {
        if (inputstream_ != null) {
            try {
                inputstream_.close();
            } catch (Exception e1) {
                System.out.println("EXCEÇÂO 2- " + e1);
            }
        }
        if (connection == null) {
            try {
                System.out.println("Fechou conexao.");
                connection.close();
            } catch (Exception e2) {
                System.out.println("EXCEÇÂO 3- " + e2);
            }
        }
    }
}
To solve the issue I thought about clearing all the variables used in the connection. The problem is that I more or less have to know what the issue is beforehand, because the error takes 2 days to happen, and each attempt costs me a great amount of time.
Any suggestions?
Thanks guys.
It's hard to say what causes the OutOfMemoryError, but there are ways to work with it. The root cause usually comes down to one of these:
there isn't enough memory in the JVM;
there aren't enough native threads, so no more threads can be created.
You can use jconsole to debug this problem; it is a tool for looking at memory, thread, and class usage in the JVM. Besides, the exception message can point out the root cause in some cases.
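One concrete thing stands out in the posted code: the finally block only closes the connection when connection == null, which can never do anything useful, so whenever the response is not HTTP_OK or an exception is thrown, the HttpConnection is left open. Leaking one connection per failed request over two days could plausibly exhaust memory on a constrained J2ME device. Also note that dataReceived is appended to on every call and never appears to be cleared in the code shown. A corrected finally block, using the same variables as above, would look like this:

} finally {
    if (inputstream_ != null) {
        try {
            inputstream_.close();
        } catch (Exception e1) {
            System.out.println("EXCEÇÂO 2 - " + e1);
        }
        inputstream_ = null;
    }
    if (connection != null) { // was: connection == null
        try {
            System.out.println("Fechou conexao.");
            connection.close();
        } catch (Exception e2) {
            System.out.println("EXCEÇÂO 3 - " + e2);
        }
        connection = null;
    }
}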

Put multiple items into DynamoDB by Java code

I would like to use the batchWriteItem method of the Amazon SDK to put a lot of items into a table.
I retrieve the items from Kinesis, and it has a lot of shards.
I used this method for one item:
public static void addSingleRecord(Item thingRecord) {
    // Add an item
    try {
        DynamoDB dynamo = new DynamoDB(dynamoDB);
        Table table = dynamo.getTable(dataTable);
        table.putItem(thingRecord);
    } catch (AmazonServiceException ase) {
        System.out.println("addThingsData request "
                + "to AWS was rejected with an error response for some reason.");
        System.out.println("Error Message: " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code: " + ase.getErrorCode());
        System.out.println("Error Type: " + ase.getErrorType());
        System.out.println("Request ID: " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("addThingsData - Caught an AmazonClientException, which means the client encountered "
                + "a serious internal problem while trying to communicate with AWS, "
                + "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
}

public static void addThings(String thingDatum) {
    Item itemJ2;
    itemJ2 = Item.fromJSON(thingDatum);
    addSingleRecord(itemJ2);
}
The item is passed from:
private void processSingleRecord(Record record) {
    // TODO Add your own record processing logic here
    String data = null;
    try {
        // For this app, we interpret the payload as UTF-8 chars.
        data = decoder.decode(record.getData()).toString();
        System.out.println("**processSingleRecord - data " + data);
        AmazonDynamoDBSample.addThings(data);
    } catch (NumberFormatException e) {
        LOG.info("Record does not match sample record format. Ignoring record with data; " + data);
    } catch (CharacterCodingException e) {
        LOG.error("Malformed data: " + data, e);
    }
}
Now if I want to put a lot of records, I will use:
public static void writeMultipleItemsBatchWrite(Item thingRecord) {
    try {
        dataTableWriteItems.addItemToPut(thingRecord);
        System.out.println("Making the request.");
        BatchWriteItemOutcome outcome = dynamo.batchWriteItem(dataTableWriteItems);
        do {
            // Check for unprocessed keys which could happen if you exceed provisioned throughput
            Map<String, List<WriteRequest>> unprocessedItems = outcome.getUnprocessedItems();
            if (outcome.getUnprocessedItems().size() == 0) {
                System.out.println("No unprocessed items found");
            } else {
                System.out.println("Retrieving the unprocessed items");
                outcome = dynamo.batchWriteItemUnprocessed(unprocessedItems);
            }
        } while (outcome.getUnprocessedItems().size() > 0);
    } catch (Exception e) {
        System.err.println("Failed to retrieve items: ");
        e.printStackTrace(System.err);
    }
}
But how can I send the last group? I only send the batch when I have 25 items, and at the end the number of remaining items is lower.
You can write items to your DynamoDB table one at a time using the Document SDK in a Lambda function attached to your Kinesis Stream using PutItem or UpdateItem. This way, you can react to Stream Records as they appear in the Stream without worrying about whether there are any more records to process. Behind the scenes, BatchWriteItem consumes the same amount of write capacity units as the corresponding PutItem calls. A BatchWriteItem will be as latent as the PUT in the batch that takes the longest. Therefore, using BatchWriteItem, you may experience higher average latency than with parallel PutItem/UpdateItem calls.
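If you prefer to keep the BatchWriteItem approach instead of switching to per-record PutItem calls, the usual pattern is to buffer items, flush whenever the buffer reaches 25 (the BatchWriteItem limit), and flush once more when the shard processor shuts down or checkpoints. Below is a rough sketch using the same Document API calls as above; the class name, the table-name parameter, and the point where flush() is called on shutdown are assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.document.BatchWriteItemOutcome;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableWriteItems;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

public class BufferedBatchWriter {

    private static final int BATCH_LIMIT = 25; // BatchWriteItem accepts at most 25 put/delete requests

    private final DynamoDB dynamo;
    private final String tableName;
    private final List<Item> buffer = new ArrayList<Item>();

    public BufferedBatchWriter(DynamoDB dynamo, String tableName) {
        this.dynamo = dynamo;
        this.tableName = tableName;
    }

    // Call this for every record coming out of Kinesis.
    public void add(Item item) {
        buffer.add(item);
        if (buffer.size() == BATCH_LIMIT) {
            flush();
        }
    }

    // Call this when the record processor shuts down (or checkpoints) to send the final, smaller group.
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        TableWriteItems writeItems = new TableWriteItems(tableName)
                .withItemsToPut(buffer.toArray(new Item[buffer.size()]));
        BatchWriteItemOutcome outcome = dynamo.batchWriteItem(writeItems);
        // Retry anything DynamoDB could not process (e.g. due to throttling).
        Map<String, List<WriteRequest>> unprocessed = outcome.getUnprocessedItems();
        while (!unprocessed.isEmpty()) {
            outcome = dynamo.batchWriteItemUnprocessed(unprocessed);
            unprocessed = outcome.getUnprocessedItems();
        }
        buffer.clear();
    }
}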
