I'm trying out Elasticsearch 7.9 and wanted to run a benchmark on 1M documents. I use the 'single node' Docker image.
I use the high-level Java client to index documents using BulkRequest. I consistently get a Too Many Requests exception after 360k requests, even if I add some sleep(1000) statements after every 10k docs.
I tried increasing the memory in jvm.options from 1G to 8G, but that did not affect it.
Is there an option to increase this limit on requests?
My laptop has 4 cores and 16GB of RAM, and Docker is not limited in any way.
Error details:
{"error":{"root_cause":[{"type":"es_rejected_execution_exception","reason":"rejected execution of coordinating operation [coordinating_and_primary_bytes=0, replica_bytes=0, all_bytes=0, coordinating_operation_bytes=108400734, max_coordinating_and_primary_bytes=107374182]"}],"type":"es_rejected_execution_exception","reason":"rejected execution of coordinating operation [coordinating_and_primary_bytes=0, replica_bytes=0, all_bytes=0, coordinating_operation_bytes=108400734, max_coordinating_and_primary_bytes=107374182]"},"status":429}
Indexing code
CreateIndexRequest createIndexRequest = new CreateIndexRequest(index);
createIndexRequest.mapping(
        "{\n" +
        "  \"properties\": {\n" +
        "    \"category\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    },\n" +
        "    \"title\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    },\n" +
        "    \"naam\": {\n" +
        "      \"type\": \"keyword\"\n" +
        "    }\n" +
        "  }\n" +
        "}",
        XContentType.JSON);
CreateIndexResponse createIndexResponse = client.indices().create(createIndexRequest, RequestOptions.DEFAULT);
for (int b=0;b<100; b++) {
List<Book> bookList = new ArrayList<>();
for (int i = 0; i < 10_000; i++) {
int item = b*100_000 + i;
bookList.add(new Book("" + item,
item % 2 == 0 ? "aap" : "banaan",
item % 4 == 0 ? "naam1" : "naam2",
"Rob" + item,
"The great start" + item/100,
item));
}
bookList.forEach(book -> {
IndexRequest indexRequest = new IndexRequest().
source(objectMapper.convertValue(book, Map.class)).index(index).id(book.id());
bulkRequest.add(indexRequest);
});
System.out.println("Ok, batch: " + b);
bulkRequest.timeout(TimeValue.timeValueSeconds(20));
try {
Thread.sleep(1_000);
} catch (InterruptedException e) {
e.printStackTrace();
}
try {
client.bulk(bulkRequest, RequestOptions.DEFAULT);
System.out.println("Ok2");
} catch (IOException e) {
e.printStackTrace();
// System.out.println(objectMapper.convertValue(book, Map.class));
}
}
OK, I found it. I just kept adding requests to the BulkRequest instead of clearing it.
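In other words, the bulk payload grew by another 10k documents every batch until it crossed the coordinating limit. A minimal sketch of the fix, using the same names as the snippet above: build a fresh BulkRequest per batch.

for (int b = 0; b < 100; b++) {
    BulkRequest bulkRequest = new BulkRequest();          // new request per batch instead of reusing one
    bulkRequest.timeout(TimeValue.timeValueSeconds(20));
    // ... add this batch's 10k IndexRequests to bulkRequest ...
    client.bulk(bulkRequest, RequestOptions.DEFAULT);
}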
My problem is that I do a POST request to get the total number of elements in my DB, and then I need a loop that runs until it reaches that number divided (integer division) by 10.
My current, non-working code:
protected Mono<List<Long>> getAllSubscriptionIds(ProductCode productCode) {
List<Long> subscriptionIds = new ArrayList<>();
String body = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": 0,\n" +
" \"limit\": 10" +
"\n }\n" +
" }";
//first post where I get the number of elements in my db
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
body,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(response -> {
log.debug(response);
ResponseModel<DataLakeCallResponse<JsonNode>> variable = null;
try {
variable = JsonUtil.fromString(response, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
log.debug(response);
} catch (IOException e) {
throw new RuntimeException(e);
}
variable.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
//if number of elements > 10
if(variable.getPayload().getPagination().getResultCount() > 10){
//for loop on that number / 10 (so that I can use an offset
for (int i = 0; i < variable.getPayload().getPagination().getResultCount() / 10; i++){
String bodyI = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": " + (i + 1) * 10 + ",\n" +
" \"limit\": 10\n" +
" }\n" +
" }";
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
bodyI,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(resp -> {
ResponseModel<DataLakeCallResponse<JsonNode>> varia = null;
try {
varia = JsonUtil.fromString(resp, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
} catch (IOException e) {
throw new RuntimeException(e);
}
varia.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
return Mono.just(subscriptionIds);
});
}
}
return Mono.just(subscriptionIds);
});
}
I do understand why this does not work (it returns inside the for loop), but I don't really understand what alternative I can use to make it work.
I tried an external method, but it still fails. I tried a Mono.zip, but I think I used it wrong.
This is an alternative that I tried, but it still does not work:
protected Mono<Object> getAllSubscriptionIds(ProductCode productCode) {
this.counter = 0;
List<Long> subscriptionIds = new ArrayList<>();
List<Mono<Integer>> resultList = new ArrayList<>();
String body = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": 0,\n" +
" \"limit\": 10" +
"\n }\n" +
" }";
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
body,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(response -> {
log.debug(response);
ResponseModel<DataLakeCallResponse<JsonNode>> variable = null;
try {
variable = JsonUtil.fromString(response, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
log.debug(response);
} catch (IOException e) {
throw new RuntimeException(e);
}
variable.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
if(variable.getPayload().getPagination().getResultCount() > 10){
for (int i = 0; i < variable.getPayload().getPagination().getResultCount() / 10; i++){
resultList.add(Mono.just(i));
}
}
return Mono.zip(resultList, intMono -> {
this.counter++;
String bodyI = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": " + this.counter * 10 + ",\n" +
" \"limit\": 10\n" +
" }\n" +
" }";
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
bodyI,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(resp -> {
ResponseModel<DataLakeCallResponse<JsonNode>> varia = null;
try {
varia = JsonUtil.fromString(resp, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
} catch (IOException e) {
throw new RuntimeException(e);
}
varia.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
return Mono.just(subscriptionIds);
});
});
// return Mono.just(subscriptionIds);
});
}
Any idea how to solve this?
The issue with your code is that you are returning inside the for loop, which causes the function to return after the first iteration of the loop. Instead of returning there, you can use the flatMap operator to keep the pipeline going and add the results of each iteration to subscriptionIds.
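For instance, the pagination can be expressed as a Flux of page offsets that the pipeline flattens and collects, instead of an imperative for loop. A minimal sketch under the same assumptions as the question's code, where fetchCount and fetchPage are hypothetical helpers wrapping the POST calls shown above (fetchCount yields the total result count, fetchPage yields one page of ids as Mono<List<Long>>):

protected Mono<List<Long>> getAllSubscriptionIds(ProductCode productCode) {
    return fetchCount(productCode)                                  // hypothetical: total result count
            .flatMapMany(total -> Flux.range(0, (total + 9) / 10)   // one element per page of 10
                    .concatMap(page -> fetchPage(productCode, page * 10)))
            .flatMapIterable(ids -> ids)                            // flatten the pages into one Flux<Long>
            .collectList();                                         // back to a single Mono<List<Long>>
}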
OK, finally I got a solution:
protected Flux<Object> getAllSubscriptionIds(ProductCode productCode) {
List<Long> subscriptionIds = new ArrayList<>();
AtomicInteger i = new AtomicInteger();
String body = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": 0,\n" +
" \"limit\": 1000" +
"\n }\n" +
" }";
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
body,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMapMany(response -> {
log.debug(response);
ResponseModel<DataLakeCallResponse<JsonNode>> variable = null;
try {
variable = JsonUtil.fromString(response, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
log.debug(response);
} catch (IOException e) {
throw new RuntimeException(e);
}
variable.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
if(variable.getPayload().getPagination().getResultCount() > 1000){
String bodyI = "{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": " + i.incrementAndGet() * 1000 + ",\n" +
" \"limit\": 1000\n" +
" }\n" +
" }";
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
bodyI,
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(resp -> {
return restClient.post(
"https://" + url,
buildRequiredHeaders(),
"{\n" +
" \"productCodes\": [\"" + productCode.name() + "\"],\n" +
" \"pagination\": {\n" +
" \"offset\": " + i.incrementAndGet() * 1000 + ",\n" +
" \"limit\": 1000\n" +
" }\n" +
" }",
String.class
)
.onErrorMap(err-> new RuntimeException(err.getMessage()))
.flatMap(respI -> {
ResponseModel<DataLakeCallResponse<JsonNode>> varia = null;
try {
varia = JsonUtil.fromString(respI, new TypeReference<ResponseModel<DataLakeCallResponse<JsonNode>>>() {
});
} catch (IOException e) {
throw new RuntimeException(e);
}
varia.getPayload().getList().forEach(
object-> subscriptionIds.add(object.get("subscriptionId").asLong()));
return Mono.just(subscriptionIds);
});
}).repeat(variable.getPayload().getPagination().getResultCount() / 1000);
}
return Mono.just(subscriptionIds);
});
}
Basically, I changed the first flatMap to flatMapMany so that inside it I could have a flatMap with a repeat loop. I had to return a Flux instead of my original Mono<List>, but since I know it will always boil down to a Mono<List> anyway, I changed the original caller to:
return getAllSubscriptionIds(request.getEventMetadata().getProductCode())
        .collect(Collectors.reducing((i1, i2) -> i1))
        .flatMap(responseIds -> {
            List<BillableApiCall> queryResults = dataLakeMapper.getBillableCallsApiCheckIban(
                    ((ArrayList<Long>) responseIds.get()),
                    DateUtil.toLocalDateEuropeRome(request.getFromDate()),
                    DateUtil.toLocalDateEuropeRome(request.getToDate()),
                    request.getPagination()
            );
So I had to add .collect(Collectors.reducing((i1, i2) -> i1)) (I copy-pasted this, so I can only guess what it does... it converts the Flux to a Mono), and cast my responseIds with ((ArrayList<Long>) responseIds.get()).
repeat was not the final solution on its own, since it only repeats what's inside the flatMap (it will not repeat the post connected to it), so I had to use a trick: I removed the for loop, which was not necessary, and made a post inside my repeated flatMap with another flatMap. The only missing thing was keeping track of my index, and I found that an AtomicInteger can do that.
It was not an easy task at all, but I tested it and it's working.
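For what it's worth, since every emission of that Flux is the same accumulating list, taking just the first emission is equivalent, and Reactor has a direct operator for it (a sketch of the same caller; note responseIds is then the list itself, so no Optional.get() or cast through reducing is needed):

return getAllSubscriptionIds(request.getEventMetadata().getProductCode())
        .next()   // first (and effectively only) emission, as a Mono
        .flatMap(responseIds -> { /* ... as above, without responseIds.get() ... */ });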
To recap:
flatMapMany with a repeating flatMap inside (repeat only takes a long as an argument, so it repeats until it reaches that value, auto-incrementing... but you cannot use that index, as far as I understand).
Another flatMap inside the repeated flatMap, because you cannot make another post call without this workaround: repeat only repeats what's inside the flatMap (not the post call before it, but it can run the post call inside it).
An AtomicInteger as your index.
Change the return type to Flux, then collect and cast.
Hope someone will benefit from my headache.
I am trying to figure out how to implement a first-in, last-out algorithm for the given Java class, and I am slightly confused about how to approach the problem.
import java.util.Vector;
import java.io.*;

public class SchedulingAlgorithm {
    public static Results Run(int runtime, Vector processVector, Results result) {
        int i = 0;
        int comptime = 0;
        int currentProcess = 0;
        int previousProcess = 0;
        int size = processVector.size();
        int completed = 0;
        String resultsFile = "Summary-Processes";
        result.schedulingType = "Batch (Nonpreemptive)";
        result.schedulingName = "First-Come First-Served";
        try {
            //BufferedWriter out = new BufferedWriter(new FileWriter(resultsFile));
            //OutputStream out = new FileOutputStream(resultsFile);
            PrintStream out = new PrintStream(new FileOutputStream(resultsFile));
            sProcess process = (sProcess) processVector.elementAt(currentProcess);
            out.println("Process: " + currentProcess + " registered... (" + process.cputime + " " + process.ioblocking + " " + process.cpudone + " " + process.cpudone + ")");
            while (comptime < runtime) {
                if (process.cpudone == process.cputime) {
                    completed++;
                    out.println("Process: " + currentProcess + " completed... (" + process.cputime + " " + process.ioblocking + " " + process.cpudone + " " + process.cpudone + ")");
                    if (completed == size) {
                        result.compuTime = comptime;
                        out.close();
                        return result;
                    }
                    for (i = 0; i < size; i++) {
                        process = (sProcess) processVector.elementAt(i);
                        if (process.cpudone < process.cputime) {
                            currentProcess = i;
                        }
                    }
                    process = (sProcess) processVector.elementAt(currentProcess);
                    out.println("Process: " + currentProcess + " registered... (" + process.cputime + " " + process.ioblocking + " " + process.cpudone + " " + process.cpudone + ")");
                }
                if (process.ioblocking == process.ionext) {
                    out.println("Process: " + currentProcess + " I/O blocked... (" + process.cputime + " " + process.ioblocking + " " + process.cpudone + " " + process.cpudone + ")");
                    process.numblocked++;
                    process.ionext = 0;
                    previousProcess = currentProcess;
                    for (i = size - 1; i >= 0; i--) {
                        process = (sProcess) processVector.elementAt(i);
                        if (process.cpudone < process.cputime && previousProcess != i) {
                            currentProcess = i;
                        }
                    }
                    process = (sProcess) processVector.elementAt(currentProcess);
                    out.println("Process: " + currentProcess + " registered... (" + process.cputime + " " + process.ioblocking + " " + process.cpudone + " " + process.cpudone + ")");
                }
                process.cpudone++;
                if (process.ioblocking > 0) {
                    process.ionext++;
                }
                comptime++;
            }
            out.close();
        } catch (IOException e) { /* Handle exceptions */ }
        result.compuTime = comptime;
        return result;
    }
}
The goal of the new algorithm is to change the output of the processes file so that process 2 is processed first, followed by processes 1 and 0. The issue is that I cannot seem to find any written examples of a first-in, last-out scheduling algorithm.
I have tried to modify the for loops in the algorithm to include a stack.push() for currentProcess. currentProcess, in this case, is (I believe) the value that indicates which process # is being evaluated by the program to print out the summary-results file, like so:
for (i = size - 1; i >= 0; i--) {
    process = (sProcess) processVector.elementAt(i);
    if (process.cpudone < process.cputime && previousProcess != i) {
        currentProcess = i;
        stack.push(currentProcess);
    }
}
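One likely culprit, judging only from the loop above: because it counts down from size - 1 and keeps overwriting currentProcess on every match, the final value written is the lowest eligible index, so process 0 always wins. For a first-in, last-out pick you want the highest eligible index, which just means stopping at the first match; a sketch (no stack is needed for this part; keep the previousProcess != i guard where the original loop has it):

// pick the unfinished process with the highest index (the last arrival runs first)
for (i = size - 1; i >= 0; i--) {
    process = (sProcess) processVector.elementAt(i);
    if (process.cpudone < process.cputime) {
        currentProcess = i;
        break;   // stop at the first match instead of scanning down to index 0
    }
}
process = (sProcess) processVector.elementAt(currentProcess);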
Unfortunately, the results still look like this, and process 0 always outputs and completes first in the scheduling:
Scheduling Type: Batch (Nonpreemptive)
Scheduling Name: First-Come First-Served
Simulation Run Time: 2955
Mean: 1100
Standard Deviation: 510
Process # CPU Time IO Blocking CPU Completed CPU Blocked
0 639 (ms) 30 (ms) 639 (ms) 21 times
1 1158 (ms) 30 (ms) 1158 (ms) 38 times
2 1158 (ms) 30 (ms) 1158 (ms) 38 times
If you have any more questions, please let me know, as I am trying to complete this assignment quickly. I will be thankful for any advice regarding this problem.
I am trying to get a count of all models from the cars object, which is part of a SerenityRest response.
Response response = SerenityRest.rest()
        .contentType("application/json")
        .when()
        .get("/api/");

if (response.statusCode() == 200) {
    int numUniqueModels = response.body().path("cars.size()"); // 3
}
Response:
"cars": {
"Acura": [
"ILX",
"MDX",
"TLX"
],
"Audi": [
"A3",
"A4",
"A6",
"A7"
],
"BMW": [
"x",
"y"
]
}
For example, response.body().path("cars.size()") = 3, but I need the sum of cars.Acura.size() + cars.Audi.size() + cars.BMW.size() to get all models. However, I don't know if the exact names Acura, Audi, or BMW will exist in the response, since the vehicles may change dynamically. To solve this, I would need some kind of loop, where:
sum = 0;
for (int i = 0; i < response.body().path("cars.size()"); i++) {
    sum += response.body().path("cars.[i].size()");
}
The sum should give the total number of car models = 9.
The problem is that this syntax, path("cars.[i].size()"), is not correct. What is the correct call?
If you want to make complex requests with REST Assured, you have to follow the Groovy GPath syntax, as mentioned in the REST Assured documentation:
"Note that the JsonPath implementation uses Groovy's GPath syntax and is not to be confused with Jayway's JsonPath implementation."
So you have to play with some Groovy syntax:
int total = JsonPath.from("{"
        + " \"cars\": {\n"
        + "   \"Acura\": [\n"
        + "     \"ILX\",\n"
        + "     \"MDX\",\n"
        + "     \"TLX\"\n"
        + "   ],\n"
        + "   \"Audi\": [\n"
        + "     \"A3\",\n"
        + "     \"A4\",\n"
        + "     \"A6\",\n"
        + "     \"A7\"\n"
        + "   ],\n"
        + "   \"BMW\": [\n"
        + "     \"x\",\n"
        + "     \"y\"\n"
        + "   ]\n"
        + " }"
        + "}")
        .getInt("cars.collect { it.value.size() }.sum()");
So this expression should do the job: cars.collect { it.value.size() }.sum(). The collect method is like a map in functional programming: you map each entry of the cars map to the size() of its value, then collect the sum()!
Edit
So you just have to do:
Response response = SerenityRest.rest()
        .contentType("application/json")
        .when()
        .get("/api/");

if (response.statusCode() == 200) {
    int numUniqueModels = response.body().path("cars.collect { it.value.size() }.sum()"); // 9
}
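If you would rather keep the logic in Java, the same count can be obtained by pulling the cars object out as a map and summing the sizes of its values (a sketch; path("cars") returns the nested JSON object as plain Map/List types, so no GPath aggregation is needed):

Map<String, List<String>> cars = response.body().path("cars");
int numUniqueModels = cars.values().stream()
        .mapToInt(List::size)
        .sum(); // 9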
I wrote a piece of JdbcTemplate code that inserts records in a table, but execution gets stuck on this particular snippet; it seems to hang. I can't figure out the cause, as the query runs fine in SQL Developer.
List<SalaryDetailReport> reports = salaryDetailReportDAO.findAll(tableSuffix, regionId, circleId);
// the above line find the required data, if data is found then it proceeds
if (reports != null && reports.size() > 0) {
for (SalaryDetailReport salaryDetail : reports) {
try {
SalaryDetail sd = new SalaryDetail();
sd.setDetailReport(salaryDetail);
salaryDetailDAO.save(sd, tableSuffix);
} catch (Exception e) {
log.error("Error occured", e);
e.printStackTrace();
throw new MyExceptionHandler(" Error :" + e.getMessage());
}
}
System.out.println("data found");
} else {
log.error("Salary Record Not Found.");
throw new MyExceptionHandler("No record Found.");
}
As you can see in the try-catch above, execution gets stuck inside it. Here is the insertion code in my implementation class. When I comment out the above code, my application works fine, so why does it get stuck here? I am not able to figure it out; kindly help me.
@Override
public void save(SalaryDetail details, String tableSuffix) {
    String tabName = "SALARY_DETAIL_" + tableSuffix;
    // String q = "INSERT INTO " + tabName + "(ID "
    String q = "INSERT INTO SALARY_DETAIL_TBL "
            + " (ID "
            + " ,EMP_NAME "
            + " ,EMP_CODE "
            + " ,NET_SALARY "
            + " ,YYYYMM "
            + " ,PAY_CODE "
            + " ,EMP_ID "
            + " ,PAY_CODE_DESC "
            + " ,REMARK "
            + " ,PAY_MODE ) "
            + " (SELECT (sd.SALARY_REPORT_ID) ID "
            + " ,(sd.emp_name) emp_name "
            + " ,(sd.EMP_CODE) EMP_CODE "
            + " ,(sd.amount) NET_SALARY "
            + " ,(sd.YYYYMM) YYYYMM "
            + " ,(sd.pay_code) pay_code "
            + " ,(sd.emp_id) emp_id "
            + " ,(sd.PAY_CODE_DESC) PAY_CODE_DESC "
            + " ,(sd.REMARK) REMARK "
            + " ,(sd.PAY_MODE) PAY_MODE "
            // + " FROM SALARY_DETAIL_REPORT_" + tableSuffix + " sd "
            + " FROM SALARY_DETAIL_REPORT_TBL sd "
            + " WHERE sd.PAY_CODE = 999 "
            + " AND sd.EMP_ID IS NOT NULL "
            // + " AND sd.EMP_ID NOT IN (SELECT EMP_ID FROM SALARY_DETAIL_" + tableSuffix + ") "
            + " AND sd.EMP_ID NOT IN (SELECT EMP_ID FROM SALARY_DETAIL_TBL) "
            + " ) ";
    MapSqlParameterSource param = new MapSqlParameterSource();
    param.addValue("id", details.getId());
    param.addValue("EMP_NAME", details.getEmpName());
    param.addValue("EMP_CODE", details.getEmpCode());
    param.addValue("NET_SALARY", details.getNetSalary());
    param.addValue("GROSS_EARNING", details.getGrossEarning());
    param.addValue("GROSS_DEDUCTION", details.getGrossDeduction());
    param.addValue("YYYYMM", details.getYyyymm());
    param.addValue("EMP_ID", details.getEmployee() != null ? details.getEmployee().getEmpId() : null);
    KeyHolder keyHolder = new GeneratedKeyHolder();
    getNamedParameterJdbcTemplate().update(q, param);
    // details.setId(((BigDecimal) keyHolder.getKeys().get("ID")).longValue());
}
The main problem in your query is the NOT IN condition; it degrades performance because the SELECT EMP_ID FROM SALARY_DETAIL_TBL subquery fires on every save. Try fetching it in a separate query and passing the result into the NOT IN block of the main query.
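A sketch of that suggestion against the question's save() (assumptions: Spring's NamedParameterJdbcTemplate, which expands a List value bound to :existingEmpIds into the IN list; note the bound list must not be empty):

// Fetch the existing EMP_IDs once, before the save loop.
List<Long> existingEmpIds = getNamedParameterJdbcTemplate().getJdbcOperations()
        .queryForList("SELECT EMP_ID FROM SALARY_DETAIL_TBL WHERE EMP_ID IS NOT NULL", Long.class);

// In save(), bind the list instead of re-running the subselect on every insert:
// ... WHERE sd.PAY_CODE = 999 AND sd.EMP_ID IS NOT NULL
//     AND sd.EMP_ID NOT IN (:existingEmpIds)
param.addValue("existingEmpIds", existingEmpIds);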
You have to decide whether you will insert the records from a SELECT or from the application.
If you don't need to manipulate the data after selecting it, you can simply call one INSERT INTO ... SELECT statement, without any for loop. It will be fast because only a single INSERT statement is executed.
So you would implement a method like copyAllInSalaryDetail(tableSuffix, regionId, circleId) in your SalaryDetailReportDAO that executes INSERT INTO salary_detail_tbl (...) (SELECT ... WHERE ...), using the same WHERE condition as in your findAll() method. All inserts are then done purely on the database layer, as sketched below.
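A minimal sketch of that DAO method, assuming the WHERE condition from the question's save() and the suffixed table names from its commented-out lines:

public int copyAllInSalaryDetail(String tableSuffix) {
    // one INSERT ... SELECT, executed a single time on the database layer
    String q = "INSERT INTO SALARY_DETAIL_" + tableSuffix
            + " (ID, EMP_NAME, EMP_CODE, NET_SALARY, YYYYMM, PAY_CODE, EMP_ID, PAY_CODE_DESC, REMARK, PAY_MODE) "
            + " SELECT sd.SALARY_REPORT_ID, sd.EMP_NAME, sd.EMP_CODE, sd.AMOUNT, sd.YYYYMM, "
            + "        sd.PAY_CODE, sd.EMP_ID, sd.PAY_CODE_DESC, sd.REMARK, sd.PAY_MODE "
            + " FROM SALARY_DETAIL_REPORT_" + tableSuffix + " sd "
            + " WHERE sd.PAY_CODE = 999 "
            + "   AND sd.EMP_ID IS NOT NULL "
            + "   AND sd.EMP_ID NOT IN (SELECT EMP_ID FROM SALARY_DETAIL_" + tableSuffix + ")";
    return getNamedParameterJdbcTemplate().getJdbcOperations().update(q);
}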
If you want to manipulate the data before inserting it, you can continue with your approach using the SalaryDetail bean and the for loop, but you should remove the SELECT part from the INSERT statement and use the values from the provided bean. The save() method can then look like this:
@Override
public void save(SalaryDetail details, String tableSuffix) {
    // use tableSuffix if it is really needed
    String q = "INSERT INTO SALARY_DETAIL_TBL "
            + " (ID "
            + " ,EMP_NAME "
            + " ,EMP_CODE "
            + " ,NET_SALARY "
            + " ,YYYYMM "
            + " ,PAY_CODE "
            + " ,EMP_ID "
            + " ,PAY_CODE_DESC "
            + " ,REMARK "
            + " ,PAY_MODE ) "
            + " VALUES (:id "
            + " ,:emp_name "
            + " ,:emp_code "
            + " ,:net_salary "
            + " ,:yyyymm "
            + " ,:pay_code "
            + " ,:emp_id "
            + " ,:pay_code_desc "
            + " ,:remark "
            + " ,:pay_mode)";
    MapSqlParameterSource param = new MapSqlParameterSource();
    // KeyHolder keyHolder = new GeneratedKeyHolder();
    // details.setId(((BigDecimal) keyHolder.getKeys().get("ID")).longValue());
    param.addValue("id", details.getId());
    param.addValue("emp_name", details.getEmpName());
    param.addValue("emp_code", details.getEmpCode());
    param.addValue("net_salary", details.getNetSalary());
    param.addValue("pay_code", details.getPayCode());
    param.addValue("pay_code_desc", details.getPayCodeDesc());
    param.addValue("pay_mode", details.getPayMode());
    param.addValue("remark", details.getPayRemark());
    param.addValue("yyyymm", details.getYyyymm());
    param.addValue("emp_id", details.getEmployee() != null ? details.getEmployee().getEmpId() : null);
    getNamedParameterJdbcTemplate().update(q, param);
}
I am trying to maintain two MQTT connections using the Paho library. If I keep the same client object, the subscribe hangs on the second subscription, but if I create a new client, it works yet closes the first subscription. Below is my code snippet. What could I be missing?
try {
    logging.info(logPreString + "| "
            + Thread.currentThread().getId() + " | "
            + "Subscribing to topic: ");
    // Construct the connection options object that contains connection parameters
    connectOptions = new MqttConnectOptions();
    connectOptions.setCleanSession(true);
    connectOptions.setKeepAliveInterval(1000);
    // Construct an MQTT blocking-mode client
    client = new MqttClient(broker, clientID, persistence);
    // Set this wrapper as the callback handler
    client.setCallback(this);
    // Connect to the MQTT server
    logging.info(logPreString + "Connecting to broker ..." + broker);
    client.connect(connectOptions);
    logging.info(logPreString + "Connected and subscribed");
    // Subscribe to a topic
    System.out.println("Subscribe to topic(s): " + "| "
            + Thread.currentThread() + " | "
            + Arrays.toString(topic) + " With QOS of:" + props.getQoS());
    client.subscribe(topic, QOS);
} catch (MqttException me) {
    logging.error(logPreString + "MqttException caught "
            + "while connecting to broker. Error: "
            + me.getMessage());
}
}
public void subscribeAll(String[] topic, int[] QOS) {
    try {
        logging.info(logPreString + "subscribing to all ..." + broker);
        // Subscribe to a topic
        System.out.println("Subscribing to topic(s): " + "| "
                + Thread.currentThread() + " | "
                + Arrays.toString(topic)
                + " With QOS of:" + props.getQoS());
        client.subscribe(topic, QOS);
    } catch (MqttException me) {
        logging.error(logPreString + "MqttException caught "
                + "while connecting to broker. Error: "
                + me.getMessage());
    }
}
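One thing worth checking, as an assumption based on the snippet above showing a single shared clientID: MQTT brokers drop an existing session when a second connection uses the same client ID, which would explain the first subscription dying whenever a new client is created. A sketch that keeps two fully independent clients, each with its own ID (the "-1"/"-2" suffixes and the connectClient helper are illustrative, not from the original code):

// Hypothetical helper: build, connect, and return one independent client.
private MqttClient connectClient(String broker, String clientId,
                                 MqttClientPersistence persistence,
                                 MqttCallback callback) throws MqttException {
    MqttConnectOptions options = new MqttConnectOptions();
    options.setCleanSession(true);
    options.setKeepAliveInterval(1000);
    MqttClient c = new MqttClient(broker, clientId, persistence);
    c.setCallback(callback);   // each client keeps its own callback
    c.connect(options);
    return c;
}

// Usage: distinct client IDs let both subscriptions stay alive.
MqttClient first = connectClient(broker, clientID + "-1", persistence, this);
first.subscribe(topic, QOS);
MqttClient second = connectClient(broker, clientID + "-2", persistence, this);
second.subscribe(topic, QOS);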