I have a requirement to fetch thousands of records from the DB, convert them to JSON, and write them to a zip file. I am able to do this with the implementation below.
@Override
public StreamingResponseBody fetchAndWriteOrderAllocationsToFile(LocalDate date) {
    int orderCount = orderDao.getOrderCount(date);
    // Pagination
    return outputStream -> {
        try (ZipOutputStream zipOut = new ZipOutputStream(new BufferedOutputStream(outputStream))) {
            zipOut.putNextEntry(new ZipEntry("report.txt"));
            int startIndex = 0, count = orderCount;
            do {
                List<String> orderSerialNos = orderDao.getOrderSerialNos(date, startIndex, PAGESIZE);
                orderSerialNos.parallelStream().forEach(orderSerialNo -> {
                    try {
                        writeToStream(zipOut, allocationsService.getAllocationsFromOrderItems(orderSerialNo), objectMapper);
                    } catch (Exception e) {
                        // Fall back to an empty Allocations object so the order still appears in the report
                        writeToStream(zipOut, Allocations.builder()
                                .orderSerialNo(orderSerialNo)
                                .build(), objectMapper);
                    }
                });
                count -= PAGESIZE;
                startIndex += PAGESIZE;
            } while (count > 0);
        }
    };
}
private static void writeToStream(OutputStream outputStream,
                                  Object result,
                                  ObjectMapper objectMapper) {
    try {
        objectMapper.writeValue(outputStream, result);
    } catch (IOException e) {
        log.error("Error writing results to stream", e);
    }
}
However, I would like to introduce a newline character (or comma) after every JSON document written to the file.
The closest I got was overriding the PrettyPrinter.writeEndObject method as shown below and using the overridden PrettyPrinter class. This obviously adds a newline after every sub-object of the JSON as well as after every new document. The expectation is to have the newline only after each top-level JSON document.
Is there any way to accomplish this?
@Override
public void writeEndObject(JsonGenerator g, int nrOfEntries) throws IOException {
    g.writeRaw("}\n");
}
The above code gives:
{"orderSerialNo":"1234-ABCD","orderId":1,"shippingAllocations":[{"recipientId":25,"itemId":3893814,"itemSku":"ABC","quantity":1,"shippingItemId":3893815,"shippingSku":"DEF","shipperId":66,"allocation":0}
],"sdAllocations":[],"idAllocations":[]}
{"orderSerialNo":"6789-EFGH","orderId":2,"shippingAllocations":[{"recipientId":45,"itemId":88,"itemSku":"BLAH","quantity":1,"shippingItemId":78,"shippingSku":"HELP","shipperId":99,"allocation":7.95}
],"sdAllocations":[],"idAllocations":[]}
The expectation is:
{"orderSerialNo":"1234-ABCD","orderId":1,"shippingAllocations":[{"recipientId":25,"itemId":3893814,"itemSku":"ABC","quantity":1,"shippingItemId":3893815,"shippingSku":"DEF","shipperId":66,"allocation":0}],"sdAllocations":[],"idAllocations":[]}
{"orderSerialNo":"6789-EFGH","orderId":2,"shippingAllocations":[{"recipientId":45,"itemId":88,"itemSku":"BLAH","quantity":1,"shippingItemId":78,"shippingSku":"HELP","shipperId":99,"allocation":7.95}],"sdAllocations":[],"idAllocations":[]}
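For what it's worth (this is not from the original post), one way to get a separator only between top-level documents, without touching the PrettyPrinter, is Jackson's SequenceWriter with a root value separator. A minimal sketch, assuming the same zipOut, objectMapper, and service calls as above, and sequential writes (sharing one writer across a parallelStream would not be thread-safe):
SequenceWriter seqWriter = objectMapper.writer()
        .withRootValueSeparator("\n") // or "," for comma-separated documents
        .writeValues(zipOut);         // writes each value straight to the zip entry
for (String orderSerialNo : orderSerialNos) {
    // Nested objects are unaffected; the separator only appears between root documents
    seqWriter.write(allocationsService.getAllocationsFromOrderItems(orderSerialNo));
}
seqWriter.flush(); // flush rather than close, so the ZipOutputStream stays usable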
Related
I have output functions that write data from different objects to their corresponding files. Most of these functions follow the same pattern; the only differences are the objects being used and, by extension, the objects' methods being called.
I pass 'obj' to all the write functions, and each individual write function calls a different 'obj.get...' method to obtain the object whose data it writes.
My output functions are called like so:
for (Object obj : objects) {
    writer.writeSubOject1(obj, dir, "subObject1.csv", true);
    writer.writeSubOject2(obj, dir, "subObject2.csv", true);
    ....
}
Code for the write functions:
public class Writer {

    public void writeSubOject1(Object obj, File dir, String filename, Boolean append) {
        ArrayList<SubObject1> subObject1 = obj.getSubObject1();
        try {
            log.info("writing " + filename + "...");
            ArrayList<String[]> so1Data = SubObject1.getData(subObject1);
            final File out = CreateFileObject(dir, filename);
            writeCsv(so1Data, out, append);
        } catch (Exception e) {
            log.info("Error in writeSubOject1");
            log.error(e);
            e.printStackTrace();
        }
    }

    public void writeSubOject2(Object obj, File dir, String filename, Boolean append) {
        ArrayList<SubObject2> subObject2 = obj.getSubObject2();
        try {
            log.info("writing " + filename + "...");
            ArrayList<String[]> so2Data = SubObject2.getData(subObject2);
            final File out = CreateFileObject(dir, filename);
            writeCsv(so2Data, out, append);
        } catch (Exception e) {
            log.info("Error in writeSubOject2");
            log.error(e);
            e.printStackTrace();
        }
    }
}
You can see that the only differences between the two methods are the obj.getSubObjectX() call and the getData() method, which has a unique implementation in SubObject1 and SubObject2.
Is there a better way to do this that gets rid of the duplicate code?
Make a private method:
private void writeSubOject(ArrayList<String[]> soData, File dir, String filename, Boolean append) {
    try {
        log.info("writing " + filename + "...");
        final File out = CreateFileObject(dir, filename);
        writeCsv(soData, out, append);
    } catch (Exception e) {
        log.info("Error in writeSubOject");
        log.error(e);
        e.printStackTrace();
    }
}
then in your public methods:
public void writeSubOject1(Object obj, File dir, String filename, Boolean append) {
    ArrayList<SubObject1> subObject1 = obj.getSubObject1();
    ArrayList<String[]> so1Data = SubObject1.getData(subObject1);
    writeSubOject(so1Data, dir, filename, append);
}

public void writeSubOject2(Object obj, File dir, String filename, Boolean append) {
    ArrayList<SubObject2> subObject2 = obj.getSubObject2();
    ArrayList<String[]> so2Data = SubObject2.getData(subObject2);
    writeSubOject(so2Data, dir, filename, append);
}
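Going one step further (my sketch, not part of the answer above): since the only varying piece is how the row data is extracted from obj, you can pass the extraction in as a java.util.function.Function and keep a single method. The ObjectType name below is hypothetical, standing in for whatever type obj really has:
public void writeSubOject(Function<ObjectType, ArrayList<String[]>> extractor,
                          ObjectType obj, File dir, String filename, Boolean append) {
    try {
        log.info("writing " + filename + "...");
        final File out = CreateFileObject(dir, filename);
        writeCsv(extractor.apply(obj), out, append); // extraction supplied by the caller
    } catch (Exception e) {
        log.error("Error writing " + filename, e);
    }
}
Called as:
writer.writeSubOject(o -> SubObject1.getData(o.getSubObject1()), obj, dir, "subObject1.csv", true);
writer.writeSubOject(o -> SubObject2.getData(o.getSubObject2()), obj, dir, "subObject2.csv", true);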
Your question is not well put, but I will try to answer as best I can with the information provided.
First, I would write an abstract class.
public abstract class WritableObject {

    List<WritableObject> children = new ArrayList<>();

    public abstract List<String> getDataAsStringList();

    List<WritableObject> getChildren() {
        return this.children;
    }

    public void addChild(WritableObject wo) {
        children.add(wo);
    }
}
Now we can extend this class in as many other classes as needed. Let's assume you have two for now.
public class WritableObjectOne extends WritableObject {
    @Override
    public List<String> getDataAsStringList() {
        return Arrays.asList(""); // here the object returns its String representation
    }
}

public class WritableObjectTwo extends WritableObject {
    @Override
    public List<String> getDataAsStringList() {
        return Arrays.asList(""); // here the object returns its String representation
    }
}
Now, the best part is that you can combine these however you want: any WritableObject can have children, those children can have children of their own, and so on.
In your writer, you then need just one method:
public class Writer {
    public void writeSubOject(WritableObject obj, File dir, String filename, Boolean append) {
        List<WritableObject> children = obj.getChildren();
        try {
            log.info("writing " + filename + "...");
            List<String> data = new ArrayList<>();
            for (WritableObject child : children) {
                data.addAll(child.getDataAsStringList());
            }
            final File out = CreateFileObject(dir, filename);
            writeCsv(data, out, append);
        } catch (Exception e) {
            log.info("Error in writeSubOject");
            log.error(e);
            e.printStackTrace();
        }
    }
}
Again, maybe this is not exactly what you want (I find it odd that a list of strings is returned), but it should at least help you get to the right solution.
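For illustration, usage could then look like this (a sketch with made-up data):
WritableObject root = new WritableObjectOne();
root.addChild(new WritableObjectTwo()); // children can themselves have children
root.addChild(new WritableObjectOne());
new Writer().writeSubOject(root, dir, "combined.csv", true);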
I have a method that appends to a .csv file, but the problem is that it adds a header row every time as well. How can I append to the .csv correctly?
I am aware that accumulating the rows in a List would do the job, but this method is called in separate runs.
public static void writeToCSVFileAndSend(String facilityId, int candidateStockTakeContainersCount) throws IOException {
    FileWriter report = new FileWriter("/tmp/MonthlyExpectedComplianceSuggestions.csv", true);
    LocalDate today = java.time.LocalDate.now();
    String[] headers = { "Warehouse", "Expected Count for " + today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH) };
    Map<String, Integer> facilityExpectedMonthlyCountMap = new HashMap<String, Integer>() {
        {
            put(facilityId, candidateStockTakeContainersCount);
        }
    };
    try (CSVPrinter printer = new CSVPrinter(report, CSVFormat.DEFAULT
            .withHeader(headers))) {
        facilityExpectedMonthlyCountMap.forEach((a, b) -> {
            try {
                printer.printRecord(a, b);
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
    }
}
Current Output
Warehouse,Expected Count for Dec
A,2147
Warehouse,Expected Count for Dec
B,0
Expected Output
Warehouse,Expected Count for Dec
A,2147
B,0
To avoid multiple headers, you should create the CSVPrinter object once and reuse it.
Depending on how you are getting the data, you may split the function in two and pass the CSVPrinter object around.
public static void writeToCSVFileAndSend() throws IOException {
    File outputCSV = new File("/tmp/MonthlyExpectedComplianceSuggestions.csv");
    LocalDate today = java.time.LocalDate.now();
    String[] headers = { "Warehouse", "Expected Count for " + today.getMonth().getDisplayName(TextStyle.SHORT, Locale.ENGLISH) };
    // Only write the header when the file is new
    boolean headerRequired = true;
    if (outputCSV.exists()) {
        headerRequired = false;
    }
    FileWriter report = new FileWriter(outputCSV, true); // append mode
    CSVPrinter printer = null;
    if (headerRequired) {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT.withHeader(headers));
    } else {
        printer = new CSVPrinter(report, CSVFormat.DEFAULT);
    }
    // Iterate through combinations of facilityId and candidateStockTakeContainersCount
    // and call printRecord
    Map<String, Integer> facilityExpectedMonthlyCountMap = new HashMap<String, Integer>();
    // fill in your data in the map here
    facilityExpectedMonthlyCountMap.forEach((a, b) -> {
        try {
            printer.printRecord(a, b);
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    printer.close();
}
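If your Commons CSV version supports it (an assumption on my part), withSkipHeaderRecord can replace the if/else entirely: always declare the header, but skip printing it when the file already exists. A sketch:
boolean exists = outputCSV.exists();
CSVFormat format = CSVFormat.DEFAULT
        .withHeader(headers)
        .withSkipHeaderRecord(exists); // header is printed only for a brand-new file
try (CSVPrinter printer = new CSVPrinter(new FileWriter(outputCSV, true), format)) {
    printer.printRecord("A", 2147);
}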
I have multiple BLOBs that I want to concatenate into one joined BLOB. In the process, the contents of the BLOBs must not be held in memory.
My first idea was to merge the streams like this:
long size = blobs.get(0).length();
InputStream res = blobs.get(0).getBinaryStream();
for (int i = 1; i < blobs.size(); i++) {
    res = Stream.concat(res, blobs.get(i).getBinaryStream());
    size += blobs.get(i).length();
}
Blob blob = Hibernate.getLobCreator(session).createBlob(res, size);
However, this obviously only works with Java 8 streams (not plain InputStreams), and we use Java 7 anyway.
My second idea was then to join the BLOBs by writing directly into the result's binary stream:
public Blob joinBlobsForHibernate(final Session session, final List<Blob> blobs) throws SQLException {
    final LobCreator lc = Hibernate.getLobCreator(session);
    final Blob resBlob = lc.createBlob(new byte[0]);
    try {
        OutputStream stream = resBlob.setBinaryStream(1);
        for (final Blob blob : blobs) {
            pipeInputStream(blob.getBinaryStream(), stream);
        }
        return resBlob;
    } catch (IOException | SQLException e) {
        logger.error("Creating the blob threw an exception", e);
        return null;
    }
}
(pipeInputStream merely pipes the content of one stream into the other:
private void pipeInputStream(final InputStream is, final OutputStream os) throws IOException {
    final int buffSize = 128000;
    int n;
    final byte[] buff = new byte[buffSize];
    while ((n = is.read(buff)) >= 0) {
        os.write(buff, 0, n);
    }
}
)
This, however, yields the following exception:
java.lang.UnsupportedOperationException: Blob may not be manipulated from creating session
Besides, I have the suspicion that the BLOB would still temporarily hold the whole content in memory.
As a third attempt, I tried using a custom InputStream:
/**
 * Combines multiple streams into one
 */
public class JoinedInputStream extends InputStream {

    private List<InputStream> parts;
    private List<InputStream> marked;

    public JoinedInputStream(final List<InputStream> parts) {
        this.parts = parts;
    }

    @Override
    public int read() throws IOException {
        int res = -1;
        while (res == -1 && parts.size() > 0) {
            try {
                if ((res = parts.get(0).read()) == -1) {
                    // The stream is done, so we won't try to read from it again
                    parts.remove(0);
                }
            } catch (IOException e) {
                e.printStackTrace();
                throw e;
            }
        }
        return res;
    }

    @Override
    public synchronized void reset() throws IOException {
        parts = new ArrayList<>(marked);
        if (parts.size() > 0) {
            parts.get(0).reset();
        }
    }

    @Override
    public synchronized void mark(final int readlimit) {
        marked = new ArrayList<>(parts);
        if (marked.size() > 0)
            marked.get(0).mark(readlimit);
    }

    @Override
    public boolean markSupported() {
        return true;
    }

    @Override
    public void close() throws IOException {
        super.close();
        for (final InputStream part : parts) {
            part.close();
        }
        parts = new ArrayList<>();
        marked = new ArrayList<>();
    }
}
The BLOBs could then be joined like this (don't mind the extra functions; they have other uses):
@Override
public Blob createBlobForHibernate(final Session session, final InputStream stream, final long length) {
    final LobCreator lc = Hibernate.getLobCreator(session);
    return lc.createBlob(stream, length);
}

@Override
public Blob createBlobForHibernate(final Session session, final List<InputStream> streams, final long length) {
    final InputStream joined = new JoinedInputStream(streams);
    return createBlobForHibernate(session, joined, length);
}

@Override
public Blob joinBlobsForHibernate(final Session session, final List<Blob> blobs) throws SQLException {
    long length = 0;
    List<InputStream> streams = new ArrayList<>(blobs.size());
    for (final Blob blob : blobs) {
        length += blob.length();
        streams.add(blob.getBinaryStream());
    }
    return createBlobForHibernate(session, streams, length);
}
However, this results in the following error (when persisting the newly created entity with the joined BLOB):
Caused by: java.sql.SQLException: LOB-Lese-/Schreibfunktionen aufgerufen, während ein anderer Lese-/Schreibvorgang ausgeführt wird: getBytes()
at oracle.jdbc.driver.T4CConnection.getBytes(T4CConnection.java:3200)
at oracle.sql.BLOB.getBytes(BLOB.java:391)
at oracle.jdbc.driver.OracleBlobInputStream.needBytes(OracleBlobInputStream.java:166)
... 101 more
Or in English:
Lob read/write functions called while another read/write is in progress: getBytes()
I already tried setting hibernate.temp.use_jdbc_metadata_defaults to false (as suggested in this post); in fact, we already had this property set beforehand and it didn't help.
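As an aside (my sketch, not from the post): the hand-rolled JoinedInputStream can be replaced by the JDK's own java.io.SequenceInputStream, which chains streams without buffering them in memory. It does not support mark/reset and does not by itself fix the Oracle LOB error, but it removes custom code:
List<InputStream> streams = new ArrayList<>(blobs.size());
long length = 0;
for (final Blob blob : blobs) {
    length += blob.length();
    streams.add(blob.getBinaryStream());
}
// SequenceInputStream reads each stream to exhaustion, then moves to the next
InputStream joined = new SequenceInputStream(Collections.enumeration(streams));
Blob result = Hibernate.getLobCreator(session).createBlob(joined, length);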
I have a big file and I want to upload it to the server side. It's very important that, when any problem occurs (like an internet interruption or a power cut) and I retry the upload, the file is resumed from where it stopped and doesn't need to be sent from the beginning.
I tried the approach of sending file chunks, but it seems that's not a good way, because I send the chunks (byte arrays) directly in the ResponseEntity, and that isn't a good idea.
In any case, if anybody can develop this approach further and make this code better with better performance, I'd appreciate it. Does anybody know the best-practice way of doing this?
And if you like my code, vote for it.
Thanks :)
RestController
@RestController
@RequestMapping("/files")
public class Controller {

    @Autowired
    private MyService service;

    @PutMapping("/upload/resume")
    public Mono<ResponseEntity> uploadWithResume(@RequestPart("chunk") byte[] chunk,
                                                 @RequestPart("fileName") String fileName,
                                                 @RequestParam("length") Long length) throws ParseException {
        try {
            return service.fileResumeUpload(chunk, fileName, length);
        } catch (IOException e) {
            e.printStackTrace();
            return Mono.just(ResponseEntity.status(HttpStatus.PERMANENT_REDIRECT).build());
        }
    }

    @RequestMapping(value = "/get/uploaded/size", method = RequestMethod.HEAD)
    public Mono<ResponseEntity> getUploadedSize(@RequestParam("fileName") String fileName) throws IOException {
        if (Files.exists(Paths.get("src/main/resources/" + fileName))) {
            String size = String.valueOf(Files.size(Paths.get("src/main/resources/" + fileName)));
            return Mono.just(ResponseEntity.ok()
                    .header("upload-offset", size)
                    .build());
        } else {
            return Mono.just(ResponseEntity.notFound()
                    .header("upload-offset", "0").build());
        }
    }
}
Service
public Mono<ResponseEntity> fileResumeUpload(byte[] chunk, String fileName, long length) throws IOException, ParseException {
    BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("src/main/resources/" + fileName, true));
    boolean uploaded = true;
    try {
        out.write(chunk);
    } catch (IOException e) {
        uploaded = false;
        System.err.println("io exception");
    } finally {
        if (uploaded) {
            out.close();
            return Mono.just(ResponseEntity.ok()
                    .header("expiration-date", getExpirationDate())
                    .build());
        } else {
            out.close();
            return Mono.just(ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build());
        }
    }
}
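One hardening idea (a sketch of mine, not part of the original code): have the client send each chunk's starting offset and write at that position with RandomAccessFile, so a retried chunk overwrites its own bytes instead of being appended twice. The offset parameter here is a hypothetical addition to the request:
try (RandomAccessFile raf = new RandomAccessFile("src/main/resources/" + fileName, "rw")) {
    raf.seek(offset); // position at the chunk's intended start
    raf.write(chunk); // resending the same chunk is now harmless
}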
Sending chunks with webTestClient
@Test
public void test1_upload_Expected_200StatusCode() {
    try {
        String fileName = "film.mkv";
        RandomAccessFile raf = new RandomAccessFile(new File("src/test/resources/" + fileName), "rw");
        long realSize = raf.length();
        List<String> strings = webTestClient.head().uri("/files/get/uploaded/size?fileName=" + fileName)
                .exchange().expectBody().returnResult().getResponseHeaders().get("upload-offset");
        long uploadedSize = Long.valueOf(strings.get(0));
        boolean f = false;
        int sizeBuffer = 256 * 1024;
        byte[] buffer = new byte[sizeBuffer];
        MultiValueMap<String, Object> formData;
        WebTestClient.ResponseSpec exchange = null;
        System.out.println("first uploaded Size ; " + uploadedSize);
        raf.seek(uploadedSize);
        while (raf.read(buffer) != -1) {
            formData = new LinkedMultiValueMap<>();
            formData.add("fileName", fileName);
            formData.add("chunk", buffer);
            formData.add("length", realSize);
            exchange = webTestClient.put().uri("/files/upload/resume")
                    .contentType(MediaType.MULTIPART_FORM_DATA)
                    .body(BodyInserters.fromMultipartData(formData))
                    .exchange();
            exchange.expectStatus().isOk();
            if (exchange.expectBody().returnResult().getStatus().is5xxServerError()) {
                return;
            }
            if (uploadedSize + 256 * 1024 > realSize) {
                sizeBuffer = (int) (realSize - uploadedSize);
                System.out.println(sizeBuffer);
                uploadedSize = uploadedSize + sizeBuffer;
                System.out.println(uploadedSize);
                buffer = new byte[sizeBuffer];
                f = true;
            } else {
                uploadedSize = uploadedSize + sizeBuffer;
            }
            if (f) System.out.println(uploadedSize);
            //System.out.println(uploadedSize);
            float percent = ((float) uploadedSize / realSize * 100);
            System.out.format("%.2f\n", percent);
        }
        if (exchange != null)
            exchange.expectStatus().isOk();
    } catch (Exception e) {
        e.printStackTrace();
        System.err.println("channel closed!!!");
    }
}
Does anyone know where to find a little how-to on using DBpedia Spotlight in Java or Scala? Or could anyone explain how it's done? I can't find any information on this...
The DBpedia Spotlight wiki pages would be a good place to start.
And I believe the installation page lists the most popular ways (using a jar, or setting up a web service) to use the application.
It includes instructions on using the Java/Scala API with your own installation, or calling the web service.
Some additional data needs to be downloaded to run your own server for the full service, so it's a good time to make yourself a coffee.
You need to download DBpedia Spotlight (the jar file). After that you can use the next two classes (author: pablomendes); I have only made some changes.
public class db extends AnnotationClient {

    //private final static String API_URL = "http://jodaiber.dyndns.org:2222/";
    private static String API_URL = "http://spotlight.dbpedia.org:80/";
    private static double CONFIDENCE = 0.0;
    private static int SUPPORT = 0;
    private static String powered_by = "non";
    private static String spotter = "CoOccurrenceBasedSelector"; // "LingPipeSpotter" = Annotate all spots
                                                                 // "AtLeastOneNounSelector" = No verbs and adjs.
                                                                 // "CoOccurrenceBasedSelector" = No 'common words'
                                                                 // "NESpotter" = Only Per., Org., Loc.
    private static String disambiguator = "Default"; // Default; Occurrences = Occurrence-centric; Document = Document-centric
    private static String showScores = "yes";

    @SuppressWarnings("static-access")
    public void configiration(double CONFIDENCE, int SUPPORT,
            String powered_by, String spotter, String disambiguator, String showScores) {
        this.CONFIDENCE = CONFIDENCE;
        this.SUPPORT = SUPPORT;
        this.powered_by = powered_by;
        this.spotter = spotter;
        this.disambiguator = disambiguator;
        this.showScores = showScores;
    }

    public List<DBpediaResource> extract(Text text) throws AnnotationException {
        LOG.info("Querying API.");
        String spotlightResponse;
        try {
            String Query = API_URL + "rest/annotate/?" +
                    "confidence=" + CONFIDENCE
                    + "&support=" + SUPPORT
                    + "&spotter=" + spotter
                    + "&disambiguator=" + disambiguator
                    + "&showScores=" + showScores
                    + "&powered_by=" + powered_by
                    + "&text=" + URLEncoder.encode(text.text(), "utf-8");
            LOG.info(Query);
            GetMethod getMethod = new GetMethod(Query);
            getMethod.addRequestHeader(new Header("Accept", "application/json"));
            spotlightResponse = request(getMethod);
        } catch (UnsupportedEncodingException e) {
            throw new AnnotationException("Could not encode text.", e);
        }
        assert spotlightResponse != null;

        JSONObject resultJSON = null;
        JSONArray entities = null;
        try {
            resultJSON = new JSONObject(spotlightResponse);
            entities = resultJSON.getJSONArray("Resources");
        } catch (JSONException e) {
            //throw new AnnotationException("Received invalid response from DBpedia Spotlight API.");
        }

        LinkedList<DBpediaResource> resources = new LinkedList<DBpediaResource>();
        if (entities != null)
            for (int i = 0; i < entities.length(); i++) {
                try {
                    JSONObject entity = entities.getJSONObject(i);
                    resources.add(
                            new DBpediaResource(entity.getString("@URI"),
                                    Integer.parseInt(entity.getString("@support"))));
                } catch (JSONException e) {
                    LOG.error("JSON exception " + e);
                }
            }
        return resources;
    }
}
The second class:
/**
 * @author pablomendes
 */
public abstract class AnnotationClient {

    public Logger LOG = Logger.getLogger(this.getClass());
    private List<String> RES = new ArrayList<String>();

    // Create an instance of HttpClient.
    private static HttpClient client = new HttpClient();

    public List<String> getResu() {
        return RES;
    }

    public String request(HttpMethod method) throws AnnotationException {
        String response = null;
        // Provide a custom retry handler if necessary
        method.getParams().setParameter(HttpMethodParams.RETRY_HANDLER,
                new DefaultHttpMethodRetryHandler(3, false));
        try {
            // Execute the method.
            int statusCode = client.executeMethod(method);
            if (statusCode != HttpStatus.SC_OK) {
                LOG.error("Method failed: " + method.getStatusLine());
            }
            // Read the response body.
            byte[] responseBody = method.getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
            // Deal with the response.
            // Use caution: ensure correct character encoding and is not binary data
            response = new String(responseBody);
        } catch (HttpException e) {
            LOG.error("Fatal protocol violation: " + e.getMessage());
            throw new AnnotationException("Protocol error executing HTTP request.", e);
        } catch (IOException e) {
            LOG.error("Fatal transport error: " + e.getMessage());
            LOG.error(method.getQueryString());
            throw new AnnotationException("Transport error executing HTTP request.", e);
        } finally {
            // Release the connection.
            method.releaseConnection();
        }
        return response;
    }

    protected static String readFileAsString(String filePath) throws java.io.IOException {
        return readFileAsString(new File(filePath));
    }

    protected static String readFileAsString(File file) throws IOException {
        byte[] buffer = new byte[(int) file.length()];
        @SuppressWarnings("resource")
        BufferedInputStream f = new BufferedInputStream(new FileInputStream(file));
        f.read(buffer);
        return new String(buffer);
    }

    static abstract class LineParser {

        public abstract String parse(String s) throws ParseException;

        static class ManualDatasetLineParser extends LineParser {
            public String parse(String s) throws ParseException {
                return s.trim();
            }
        }

        static class OccTSVLineParser extends LineParser {
            public String parse(String s) throws ParseException {
                String result = s;
                try {
                    result = s.trim().split("\t")[3];
                } catch (ArrayIndexOutOfBoundsException e) {
                    throw new ParseException(e.getMessage(), 3);
                }
                return result;
            }
        }
    }

    public void saveExtractedEntitiesSet(String Question, LineParser parser, int restartFrom) throws Exception {
        String text = Question;
        int i = 0;
        //int correct = 0; int error = 0; int sum = 0;
        for (String snippet : text.split("\n")) {
            String s = parser.parse(snippet);
            if (s != null && !s.equals("")) {
                i++;
                if (i < restartFrom) continue;
                List<DBpediaResource> entities = new ArrayList<DBpediaResource>();
                try {
                    entities = extract(new Text(snippet.replaceAll("\\s+", " ")));
                    System.out.println(entities.get(0).getFullUri());
                } catch (AnnotationException e) {
                    // error++;
                    LOG.error(e);
                    e.printStackTrace();
                }
                for (DBpediaResource e : entities) {
                    RES.add(e.uri());
                }
            }
        }
    }

    public abstract List<DBpediaResource> extract(Text text) throws AnnotationException;

    public void evaluate(String Question) throws Exception {
        evaluateManual(Question, 0);
    }

    public void evaluateManual(String Question, int restartFrom) throws Exception {
        saveExtractedEntitiesSet(Question, new LineParser.ManualDatasetLineParser(), restartFrom);
    }
}
main()
public static void main(String[] args) throws Exception {
    String Question = "Is the Amazon river longer than the Nile River?";
    db c = new db();
    c.configiration(0.0, 0, "non", "CoOccurrenceBasedSelector", "Default", "yes");
    System.out.println("resource : " + c.getResu());
}
I'll just add one little fix to your answer.
Your code runs if you add the evaluate method call:
public static void main(String[] args) throws Exception {
    String question = "Is the Amazon river longer than the Nile River?";
    db c = new db();
    c.configiration(0.0, 0, "non", "CoOccurrenceBasedSelector", "Default", "yes");
    c.evaluate(question);
    System.out.println("resource : " + c.getResu());
}
Lamine
In the request method of the second class (AnnotationClient) in Adel's answer, the author Pablo Mendes left the following TODO unresolved:
TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
It is an annoying warning that can be removed by replacing
byte[] responseBody = method.getResponseBody(); //TODO Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
// Deal with the response.
// Use caution: ensure correct character encoding and is not binary data
response = new String(responseBody);
with
Reader in = new InputStreamReader(method.getResponseBodyAsStream(), "UTF-8");
StringWriter writer = new StringWriter();
org.apache.commons.io.IOUtils.copy(in, writer);
response = writer.toString();
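Note that this assumes commons-io is on the classpath. A try-with-resources variant of the same replacement (my sketch) also guarantees the reader is closed:
try (Reader in = new InputStreamReader(method.getResponseBodyAsStream(), "UTF-8")) {
    StringWriter writer = new StringWriter();
    org.apache.commons.io.IOUtils.copy(in, writer);
    response = writer.toString();
}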