I have the following field inside a StacItem Object:
@JsonProperty
private List<Number> bbox = null;
I made a basic implementation with OpenCSV to write this object to a CSV, and it mostly works with this code (I'm showing just the relevant part):
final StatefulBeanToCsv<Object> beanToCSV = new StatefulBeanToCsvBuilder<>(writer)
        .withSeparator(';')
        .build();
for (StacItem item : items) {
    beanToCSV.write(item);
}
HttpHeaders httpHeaders = new HttpHeaders();
httpHeaders.set(HttpHeaders.CONTENT_TYPE, "text/csv");
httpHeaders.set(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=exportItems.csv");
logger.info("END WRITING");
return new ResponseEntity<>(new FileSystemResource(file), HttpStatus.OK);
Here's the twist! In the logs of my Microservice, I see the full structure of that StacItem, and it should have this bbox field:
bbox=[8.24275148213394, 45.5050129344147, 7.62767704092889, 45.0691351737573]
While my implementation returns just this:
"8.24637830863774"
So when I open my CSV I just find the column "bbox" with one value, but I need the others too. Can you please tell me why it stops at the first one, or how to get the other 3?
UPDATE:
I found that this does the trick! But then it exports just this single field for every StacItem, so I lose every other field in my object.
@CsvBindAndSplitByName(elementType = Number.class, writeDelimiter = ",")
@JsonProperty("bbox")
private List<Number> bbox = null;
Thanks
Try using @CsvBindByName on every field you want to map (specifying the column attribute of the annotation is not mandatory), as sketched below.
You can even use @CsvBindByPosition if you prefer.
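A rough sketch, assuming StacItem is your own bean class: once any OpenCSV binding annotation is present, only annotated fields are written, so every field you want in the CSV needs its own binding. The id field below is hypothetical and only there for illustration.
import java.util.List;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.opencsv.bean.CsvBindAndSplitByName;
import com.opencsv.bean.CsvBindByName;

public class StacItem {

    // hypothetical scalar field, shown only to illustrate binding every column
    @CsvBindByName(column = "id")
    @JsonProperty("id")
    private String id;

    // joins the list values into one cell, separated by ","
    @CsvBindAndSplitByName(column = "bbox", elementType = Number.class, writeDelimiter = ",")
    @JsonProperty("bbox")
    private List<Number> bbox = null;

    // getters and setters omitted
}
With every exported field annotated this way, the bbox column should contain all four values joined by the delimiter while the other columns keep their data.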
Did you try changing
beanToCSV.write(item); -> beanToCSV.writeNext(item);
or
for (StacItem item : items) {
    beanToCSV.write(item);
}
// to
beanToCSV.write(items);
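For the list overload, a minimal sketch, assuming the same writer and items variables as in the question; note that both write overloads declare checked OpenCSV exceptions:
import com.opencsv.bean.StatefulBeanToCsv;
import com.opencsv.bean.StatefulBeanToCsvBuilder;
import com.opencsv.exceptions.CsvDataTypeMismatchException;
import com.opencsv.exceptions.CsvRequiredFieldEmptyException;

StatefulBeanToCsv<StacItem> beanToCSV = new StatefulBeanToCsvBuilder<StacItem>(writer)
        .withSeparator(';')
        .build();
try {
    beanToCSV.write(items); // writes every StacItem in one call
} catch (CsvDataTypeMismatchException | CsvRequiredFieldEmptyException e) {
    // handle beans that could not be mapped to a CSV line
    e.printStackTrace();
}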
Here is the calling code, followed by the method that is supposed to add metadata to the file I just uploaded through SharePoint within our code.
Map<String, String> metaDataValues = new HashMap<>()
metaDataValues.put("DocumentType", "Contract")
metaDataValues.put("User", oidcUser.userEntity.id)
addMetadataToFile(oidcUser, file.id, metaDataValues)
This is the method it calls, and when I PUT or PATCH it returns "Invalid Request" and I have no idea why.
private addMetadataToFile(OidcUser oidcUser, String itemId, Map<String, String> metaData) {
    FieldValueSet fieldValueSet = new FieldValueSet()
    metaData.each { data -> {
        fieldValueSet.additionalDataManager().put(data.key, new JsonPrimitive(data.value))
    }}

    FieldValueSet getting = graphServiceClient.groups(oidcUser.defaultGroupId)
        .drive()
        .items(itemId)
        .listItem()
        .fields()
        .buildRequest()
        .get()

    // If you can get, then the following should work to update.
    graphServiceClient.groups(oidcUser.defaultGroupId)
        .drive()
        .items(itemId)
        .listItem()
        .fields()
        .buildRequest()
        .put(fieldValueSet)
}
I have spent too many hours trying every combination and nothing ever seems to work. I am exhausted. Can anyone help me figure out why it fails?
Apparently, when using the BigQuery API, there is a cacheHit property on a BigQuery result. I've tried to find this property and I'm not sure how I need to access it. Here's my Java code that uses the BigQuery API; cacheHit isn't a property of the TableResult tr that I get:
try {
    QueryJobConfiguration queryJobConfiguration =
        QueryJobConfiguration.newBuilder(
                "mySQLqueryText"
            )
            .setUseLegacySql(false)
            .setAllowLargeResults(false)
            .setUseQueryCache(true)
            .build();
    try {
        TableResult tr = bigQuery.query(queryJobConfiguration);
        Iterable<FieldValueList> rowList = tr.getValues();
        ....
    }
    catch (BigQueryException e) {
        // do stuff
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
I looked at this question - BigQuery cacheHit property
... but that's not Java, and I haven't found any results() property I can use, as suggested in that question.
There's some documentation here about the JobStatistics2 object, which apparently has a cacheHit property.
I can get a JobStatistics (not a JobStatistics2 object), like this:
QueryJobConfiguration queryJobConfiguration =
    QueryJobConfiguration.newBuilder(
            "myQueryString"
        )
        .setUseLegacySql(false)
        .setAllowLargeResults(false)
        .setUseQueryCache(true)
        .build();
JobId jobId = JobId.of(UUID.randomUUID().toString());
Job queryJob = bigQuery.create(JobInfo.newBuilder(queryJobConfiguration).setJobId(jobId).build());
try {
    queryJob = queryJob.waitFor();
    if (queryJob != null) {
        JobStatistics js = queryJob.getStatistics();
        Iterable<FieldValueList> rowList = bigQuery.query(queryJobConfiguration).getValues();
... but I don't see any cacheHit property on js. When I try creating a JobStatistics2 instead, by changing the line where I'm instantiating JobStatistics, like this:
JobStatistics2 js = queryJob.getStatistics();
I get the error "Type parameter S has incompatible upper bounds: JobStatistics and JobStatistics2". This doesn't mean much to me, and when I Google the error there are no useful results.
I'm not finding the Google documentation too useful. How can I access the cacheHit property, and still obtain my rowList as shown in the code example?
QueryStatistics is one of the nested classes of JobStatistics, as can be seen here, and it has a getCacheHit() method:
import com.google.cloud.bigquery.JobStatistics.QueryStatistics;
...
QueryStatistics js = queryJob.getStatistics();
System.out.println(js.getCacheHit());
See full code here for my test.
Regarding JobStatistics2: that class belongs to the com.google.api.services.bigquery library, not com.google.cloud.bigquery. In that case you could use getQuery() from JobStatistics to get a JobStatistics2 object and then call getCacheHit().
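Putting it together, a minimal sketch (with a placeholder query string and default credentials assumed) that reads both the cache flag and the rows from the same job, so the query isn't run twice:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.JobStatistics.QueryStatistics;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class CacheHitExample {
    public static void main(String[] args) throws Exception {
        BigQuery bigQuery = BigQueryOptions.getDefaultInstance().getService();

        QueryJobConfiguration config = QueryJobConfiguration.newBuilder(
                "SELECT 1") // placeholder query
            .setUseLegacySql(false)
            .setUseQueryCache(true)
            .build();

        // Run the query as a job and wait for it to finish (waitFor may return null if the job vanished)
        Job queryJob = bigQuery.create(JobInfo.of(config)).waitFor();

        // QueryStatistics is the JobStatistics subclass for query jobs and exposes getCacheHit()
        QueryStatistics stats = queryJob.getStatistics();
        System.out.println("cacheHit: " + stats.getCacheHit());

        // Reuse the same job for the row list instead of issuing the query again
        TableResult result = queryJob.getQueryResults();
        for (FieldValueList row : result.getValues()) {
            System.out.println(row);
        }
    }
}
Since getStatistics() is generic, assigning the result to a QueryStatistics variable works for query jobs without an explicit cast.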
I'm working with the MarkLogic POJO Databinding Interface at the moment. I'm able to write POJOs to MarkLogic. Now I want to search those POJOs and retrieve the search results. I'm following the instructions from https://docs.marklogic.com/guide/java/binding#id_89573, but the search results don't seem to return the correct objects: I'm getting a JsonMappingException. Here's the code:
HashMap<String, MatchedPropertyInfo> matchedProperties = new HashMap<String, MatchedPropertyInfo>();
PropertyMatches PM = new PropertyMatches(123, "uri/prefix/location2", "uri/prefix", 1234, 0, "/aKey", "/aLocation", true, matchedProperties);
MatchedPropertyInfo MPI1 = new MatchedPropertyInfo("matched/property/uri1", "matched/property/key1", "matched/property/location1", true, "ValueMatch1", 12, 1 * 1.0 / 3, true);
MatchedPropertyInfo MPI2 = new MatchedPropertyInfo("matched/property/uri2", "matched/property/key2", "matched/property/location2", true, "ValueMatch2", 14, 1.0 / 2.0, true);
PM.getMatchedProperties().put("matched/property/prefix/location1", MPI1);
PM.getMatchedProperties().put("matched/property/prefix/location2", MPI2);

PojoRepository myClassRepo = client.newPojoRepository(PropertyMatches.class, Long.class);
myClassRepo.write(PM);

PojoQueryBuilder qb = myClassRepo.getQueryBuilder();
PojoPage<PropertyMatches> matches = myClassRepo.search(qb.value("uri", "uri/prefix/location2"), 1);
if (matches.hasContent()) {
    while (matches.hasNext()) {
        PropertyMatches aPM = matches.next();
        System.out.println(" " + aPM.getURI());
    }
} else {
    System.out.println(" No matches");
}
The PropertyMatches (PM) object is successfully written to the MarkLogic database. This class contains a member private String URI which is initialized with "uri/prefix/location2". matches.hasContent() returns true in the example above; however, I'm getting an error on PropertyMatches aPM = matches.next();
Searching POJOs in MarkLogic and reading them back into your Java program requires the POJOs to have an empty (no-argument) constructor. In this case PropertyMatches should have public PropertyMatches(){} and MatchedPropertyInfo should have public MatchedPropertyInfo(){}, as sketched below.
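A rough sketch of the idea, with the field and parameter lists trimmed down for illustration (the real classes have more members): Jackson, which backs the POJO databinding, instantiates the class through the no-argument constructor when reading search results.
public class PropertyMatches {

    private String URI;
    // ... remaining fields omitted for brevity ...

    // Required so the databinding layer (Jackson) can instantiate the class when reading results
    public PropertyMatches() {
    }

    // The existing constructor used when building the object in application code;
    // the parameter list here is abbreviated and only illustrative
    public PropertyMatches(long id, String uri) {
        this.URI = uri;
    }

    public String getURI() {
        return URI;
    }

    public void setURI(String URI) {
        this.URI = URI;
    }
}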
Thanks @sjoerd999 for posting the answer you found. Just to add some documentation references, this topic is discussed here: http://docs.marklogic.com/guide/java/binding#id_54408 and here: https://docs.marklogic.com/javadoc/client/com/marklogic/client/pojo/PojoRepository.html.
Also worth noting: you can have multiple parameters in the constructor, you just have to do it the Jackson way. Here are examples of two ways (with annotations and without): https://manosnikolaidis.wordpress.com/2015/08/25/jackson-without-annotations/
I'd recommend using annotations as that's built-in with Jackson. But if you want to do it without annotations, here's the code:
import static com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility.ANY;
import static com.fasterxml.jackson.annotation.PropertyAccessor.FIELD;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.paramnames.ParameterNamesModule;

ObjectMapper mapper = new ObjectMapper();
// Avoid having to annotate the Person class
// Requires Java 8, pass -parameters to javac
// and jackson-module-parameter-names as a dependency
mapper.registerModule(new ParameterNamesModule());
// make private fields of Person visible to Jackson
mapper.setVisibility(FIELD, ANY);
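For the annotation route recommended above, a minimal sketch with the field list cut down to two for illustration: @JsonCreator plus @JsonProperty lets Jackson call a multi-parameter constructor directly, without the parameter-names module.
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class MatchedPropertyInfo {

    private final String uri;
    private final String key;

    // Jackson uses this constructor when deserializing, matching JSON fields by the annotated names
    @JsonCreator
    public MatchedPropertyInfo(@JsonProperty("uri") String uri,
                               @JsonProperty("key") String key) {
        this.uri = uri;
        this.key = key;
    }

    public String getUri() { return uri; }

    public String getKey() { return key; }
}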
If you want to do this with PojoRepository you'll have to use the unsupported getObjectMapper method to get the ObjectMapper and call registerModule and setVisibility on that:
ObjectMapper objectMapper = ((PojoRepositoryImpl) myClassRepo).getObjectMapper();
I'm trying to add some Custom Properties to an existing document:
HWPFDocument document = new HWPFDocument(new FileInputStream(sourceFile));
DocumentSummaryInformation docSumInf = document.getDocumentSummaryInformation();
CustomProperties customProperties = docSumInf.getCustomProperties();
CustomProperty singleProp = null;
//...
singleProp = new CustomProperty();
singleProp.setName(entry.getKey());
singleProp.setType(Variant.VT_LPWSTR);
singleProp.setValue((String) entry.getValue());
//..
customProperties.put(entry.getKey(), singleProp);
docSumInf.setCustomProperties(customProperties);
return document;
However, the properties never make it to the file. I tried to
document.getDocumentSummaryInformation().getCustomProperties().putAll(customProperties);
I also tried
document.getDocumentSummaryInformation().getCustomProperties().put(entry.getKey(), singleProp);
System.out.println(document.getDocumentSummaryInformation().getCustomProperties().size() + " Elemente in Map");
in a loop. The printed size was always one.
With the first attempt (docSumInf.setCustomProperties(customProperties);) I printed out customProperties before setting it on docSumInf. All the new properties were there, and they go missing as soon as I set them on the document summary.
I don't see what I am missing...
Either entry.getKey() is null,
or entry.getKey() has the same value for all CustomProperties in the map,
and because of that you end up with only one element in the map of CustomProperties.
You need to set non-null (and unique) values here:
singleProp.setName(entry.getKey());
The CustomProperty class represents custom properties in the document summary information stream. The difference from normal properties is that custom properties have an optional name. If the name is not null, it will be maintained in the section's dictionary.
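A sketch along those lines, with file names and the metadata map as placeholders: each property gets a unique, non-null name, and the document is written back out so the summary information actually reaches the file.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.poi.hpsf.CustomProperties;
import org.apache.poi.hpsf.CustomProperty;
import org.apache.poi.hpsf.DocumentSummaryInformation;
import org.apache.poi.hpsf.Variant;
import org.apache.poi.hwpf.HWPFDocument;

public class CustomPropertiesSketch {
    public static void main(String[] args) throws Exception {
        // placeholder file names
        HWPFDocument document = new HWPFDocument(new FileInputStream("source.doc"));
        DocumentSummaryInformation docSumInf = document.getDocumentSummaryInformation();

        CustomProperties customProperties = docSumInf.getCustomProperties();
        if (customProperties == null) {
            customProperties = new CustomProperties();
        }

        // placeholder metadata; every key is non-null and unique
        Map<String, String> metaData = new HashMap<>();
        metaData.put("DocumentType", "Contract");
        metaData.put("ReviewStatus", "Approved");

        for (Map.Entry<String, String> entry : metaData.entrySet()) {
            CustomProperty singleProp = new CustomProperty();
            singleProp.setName(entry.getKey());   // must not be null, and must differ per property
            singleProp.setType(Variant.VT_LPWSTR);
            singleProp.setValue(entry.getValue());
            customProperties.put(entry.getKey(), singleProp);
        }

        docSumInf.setCustomProperties(customProperties);

        // persist the modified summary information
        try (FileOutputStream out = new FileOutputStream("target.doc")) {
            document.write(out);
        }
    }
}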
I am trying to invoke the OrderLookup droplet API by passing some parameters. I assume the mandatory parameters are userId and organisationIds, which I have passed, and additionally I have also passed the "state" parameter. All these params are passed through the request and then the service method of the droplet is invoked, but the service method returns nothing. My goal is to check whether this droplet retrieves the expected set of orders or not. We can use the droplet invoker, and I tried that way too, but it didn't work; maybe I missed something. Please help me out!
This is my code from when I tried to use the OrderLookup API directly:
DynamoHttpServletRequest request = ServletUtil.getCurrentRequest();
mTestService.setCurrentRequest(request);
if (request == null) {
    mTestService.vlogError("Request is null.");
    Assert.fail("Request is null ");
} else {
    Object droplet = mTestService.getRequestScopedComponent("OrderLookupDroplet");
    OrderLookupDroplet = (OrderLookup) droplet;
    request.setParameter("state", "submitted");
    request.setParameter("organisationIds", organizationIds);
    request.setParameter("userId", userId);
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    DynamoHttpServletRequest dynRequest = (DynamoHttpServletRequest) request;
    TestingDynamoHttpServletRequest wrappedRequest = new TestingDynamoHttpServletRequest(dynRequest, buffer);
    TestingDynamoHttpServletResponse wrappedResponce = new TestingDynamoHttpServletResponse(dynRequest.getResponse());
    OrderLookupDroplet.service(wrappedRequest, wrappedResponce);
}
The above sample is only part of the code.
This is the code from when I tried using the droplet invoker:
DropletInvoker invoker = new DropletInvoker(mNucleus);
invoker.getRequest().setParameter("state", "submitted");
// String [] siteIds = {"siteA", "siteB"};
// invoker.getRequest().setParameter("siteIds", Arrays.asList(siteIds));
String [] organizationIds = {"OrgA", "OrgB"};
invoker.getRequest().setParameter("organizationIds", organizationIds);
String [] orderIds = {"orderautouser001OrgA" , "orderautouser001OrgB"};
invokeDroplet(invoker, "autouser001", orderIds);
......
protected void invokeDroplet(DropletInvoker pInvoker, String pUserId, String[] pOrderIds) throws Exception {
    Map<String, Object> localParams = new HashMap<>();
    localParams.put("userId", pUserId);
    DropletResult result = pInvoker.invokeDroplet("/atg/commerce/order/OrderLookup", localParams);

    RenderedOutputParameter oparam = result.getRenderedOutputParameter("output", 0);
    assertNotNull("'output' oparam was not rendered", oparam);

    assertEquals("Check totalCount.", pOrderIds.length, oparam.getFrameParameter("totalCount"));
    List<Order> orders = (List<Order>) oparam.getFrameParameter("result");
    assertEquals("Check order array length.", pOrderIds.length, orders.size());

    for (int index = 0; index < pOrderIds.length; index++) {
        boolean found = false;
        for (Order order : orders) {
            if (pOrderIds[index].equals(order.getId())) {
                found = true;
                break;
            }
        }
        assertTrue("Expected orderId " + pOrderIds[index] + " not found in result array", found);
    }
}
In the first case I don't know how to retrieve the orders by directly using the OrderLookup API, and in the second case, though I know how to use it, I am still failing. Please help me out; thanks in advance.
You shouldn't use droplets in Java classes; they should be used only inside JSP pages. Documentation of OrderLookup, with an example of how to use it on a JSP page, is here.
If you want to get orders or any other data stored in a repository, you should use the repository API with RQL (Repository Query Language). An example of how to get data from a repository can be found here, and the RQL grammar here; a rough sketch follows below.
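A rough sketch of that repository + RQL approach, under the assumption that the order repository component is resolvable (for example at /atg/commerce/order/OrderRepository) and that the item descriptor and property names below match your repository definition; treat them as illustrative:
import atg.repository.Repository;
import atg.repository.RepositoryItem;
import atg.repository.RepositoryView;
import atg.repository.rql.RqlStatement;

public class OrderRepositoryLookupSketch {

    // orderRepository would typically be injected or resolved through Nucleus
    public RepositoryItem[] findOrdersForProfile(Repository orderRepository, String profileId)
            throws Exception {
        // "order" and "profileId" are assumed names; check your order repository definition
        RepositoryView orderView = orderRepository.getView("order");
        RqlStatement statement = RqlStatement.parseRqlStatement("profileId = ?0");
        Object[] params = new Object[] { profileId };
        return statement.executeQuery(orderView, params);
    }
}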
Thanks for giving your opinions. The good news is that we can invoke droplets from other APIs:
OrderLookup droplet = (OrderLookup) sNucleus.resolveName("/atg/commerce/order/OrderLookup");
ServletTestUtils utils = new ServletTestUtils();
mRequest = utils.createDynamoHttpServletRequestForSession(sNucleus, null, null);
ServletUtil.setCurrentRequest(mRequest);
mResponse = new DynamoHttpServletResponse();
mRequest.setResponse(mResponse);
mResponse.setRequest(mRequest);
mResponse.setResponse(new GenericHttpServletResponse());
mRequest.setParameter("userId", "publishing");
droplet.setSearchByUserId(true);
droplet.service(mRequest, mResponse);
ArrayList<Order> orders = (ArrayList<Order>) mRequest.getObjectParameter("result");
here the "result" param is output param which this droplet sets.and the userId i have hardcoded as "publishing" which i have created.Ignore servletTestUtils class that is created by me which has not much to do with droplet theory here :)
I assume from your code example, and the fact that you mention DropletInvoker, that you are writing a unit test and that this is not functional code.
If it is functional code, you really, really should not invoke a droplet from another Nucleus component. A droplet exists solely to be used in a JSP page. If you need the functionality of the droplet in Java code, you should refactor the droplet into a service that holds the main logic, plus a droplet that simply acts as a façade to the service so it can be invoked from a page.
In the case of the OrderLookup droplet, you don't need to refactor anything. The service to use should be OrderManager or OrderTools, depending on what you need. Note that there is a difference between Order objects and order repository items, and you should prefer to use Order objects - so only use the order repository directly if you really need to.