I am trying to learn how to use the Google Spreadsheets API through the Developer's Guide: Java. My application can authenticate to the spreadsheets service, retrieve a worksheet-based feed, and create a table. The next step, which is what I am trying to do, is to create a table record. My problem is that when I run the application I get this error:
Service failure
com.google.gdata.util.InvalidEntryException: Bad Request
[Line 1, Column 429, element entry] Required extension element http://schemas.google.com/spreadsheets/2006:header not found.
This is the part of the code that creates the table record:
// Creating a table record
String nameValuePairs = "Column A=Rosa";
RecordEntry entryToChange = new RecordEntry();
// Split first by the commas between the different fields.
for (String nameValuePair : nameValuePairs.split(",")) {
    // Then split by the equal sign.
    String[] parts = nameValuePair.split("=", 2);
    String name = parts[0];  // such as "name"
    String value = parts[1]; // such as "Fred"
    entryToChange.addField(new Field(null, name, value));
}
try {
    myService.insert(tableFeedUrl, entryToChange);
} catch (IOException e) {
    System.err.println("I/O problem");
    e.printStackTrace();
} catch (ServiceException e) {
    System.err.println("Service failure");
    e.printStackTrace();
}
where tableFeedUrl is:
tableFeedUrl = factory.getTableFeedUrl(entry.getKey());
and entry is:
entry = spreadsheets.get(0);
Apparently the problem comes from:
myService.insert(tableFeedUrl, entryToChange);
but I am not sure, and I don't understand why...
Thank you for your help.
I have found the solution. I am answering my own question for those who run into the same problem.
I was using:
myService.insert(tableFeedUrl, entryToChange);
"tableFeedUrl" corresponds to :
http://spreadsheets.google.com/feeds/<key>/tables/
this's not the right link to access to the table (the doc).
But the Java developer's guide says we have to use:
myService.insert(recordFeedUrl, entryToChange);
"recordFeedUrl" results from :
URL recordFeedUrl = tableEntry.getRecordFeedUrl();
But the method getRecordFeedUrl() doesn't exist... (the doc)
The solution is to build the URL manually:
recordFeedUrl = new URL(table.getId().toString().replace("tables", "records"));
So the URL corresponding to the table is:
https://spreadsheets.google.com/feeds/<key>/records/60
where "60" is the table number.
I hope this helps!
I have been looking for a way to create a sort of alert when new documents are added to ES via Logstash. I have seen some threads on here, such as stackoverflow.com/a/51980618/4604579, but they do not really serve my purposes, as the plug-ins mentioned do not work with the newest version of ELK and there is no Changes API out yet.
So I have resorted to trying 2 different approaches:
Create a scroll and run over all the documents in a given index using the Search API, retain the last document's ID, and use it after a given timeout period to get all documents that were added after it.
Create a Watcher that checks after a given interval (for example 5 minutes) whether new documents have been added to an index.
I have made progress on approach 1: I can scroll through the roughly 50k documents currently in ES and retrieve the last document's ID (I sort the query by timestamp in ascending order, so I know the last document will be the latest one inserted). But I don't know how efficient this approach is, and I know a scroll may time out after a given delay, so if no new documents are inserted, the scroll will be removed.
I was also looking into using a Watcher, but I don't really understand how to set up the condition that checks whether a new document was inserted into a given index.
I imagine I can do something along these lines:
PUT _watcher/watch/new_docs
{
  "trigger" : {
    "schedule" : {
      "interval" : "5s"
    }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : "logstash",
        "body" : {
          "size" : 0,
          "query" : { "match" : { "#timestamp" : "now-5s" } }
        }
      }
    }
  },
  "condition" : {
    "compare" : { ?? }
  },
  "actions" : {
    "my_webhook" : {
      "webhook" : {
        "method" : "POST",
        "host" : "mylisteninghost",
        "port" : 9200,
        "path" : "/{{watch_id}}",
        "body" : "New document {{document ID}} errors"
      }
    }
  }
}
I am not exactly sure how to define or use the Watcher, or whether it would even work.
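From the Watcher examples I have seen, I guess the compare condition would reference the payload's hit count, something like the snippet below, though I am not sure this is right (the exact field can differ between ES versions):
"condition" : {
  "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
}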
Can anyone let me know what the best course of action would be?
Thank you
EDIT:
For those interested, I found a way to poll the ES REST API using Search After. The difference is that Scroll takes a snapshot of the documents in ES, so any new documents added won't be in that snapshot. In contrast, Search After is stateless: it uses unique sorting parameters (in my case timestamp/ID) and holds on to the last ones fetched; afterwards we query all documents that come after the held parameters. This way, if any new documents are added, they will come after the held timestamp and will be fetched by the query.
Code:
public static void searchAfterElasticData()
        throws FileNotFoundException, IOException, InterruptedException {
    // create a search request for a given index
    SearchRequest search_request = new SearchRequest(elastic_index);
    SearchSourceBuilder source_builder =
            getSearchSourceBuilder("#timestamp", "_id", 100);
    search_request.source(source_builder);
    SearchResponse search_response = null;
    try {
        search_response = client.search(search_request, RequestOptions.DEFAULT);
    } catch (ElasticsearchException | ConnectException ex) {
        log.info("Error while querying Elastic API: {}", ex.toString());
    }
    if (search_response != null) {
        SearchHit[] search_hits = search_response.getHits().getHits();
        Object[] sort_values = null;
        while (search_hits != null) {
            if (search_hits.length > 0) {
                // if there are records retrieved, parse them
                for (SearchHit hit : search_hits) {
                    Map<String, Object> source_map = hit.getSourceAsMap();
                    try {
                        parse((String) source_map.get("message"));
                    } catch (Exception ex) {
                        log.error("Error while parsing: {}",
                                (String) source_map.get("message"));
                    }
                }
                // get sorting value of last record and do new request
                log.info("Getting sorting values");
                sort_values = search_response.getHits()
                        .getAt(search_hits.length - 1).getSortValues();
            } else {
                log.info("Waiting 1 minute for new entries");
                Thread.sleep(60000);
            }
            // Guard: searchAfter() rejects null sort values, which can happen
            // if the very first response was empty.
            if (sort_values != null) {
                source_builder.searchAfter(sort_values);
            }
            search_request.source(source_builder);
            search_response =
                    client.search(search_request, RequestOptions.DEFAULT);
            search_hits = search_response.getHits().getHits();
            log.info("Fetched hits: {}", search_hits.length);
            log.info("Searching after for new hits");
        }
    }
}
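For reference, getSearchSourceBuilder is a small helper of mine not shown above; a simplified sketch of it follows. All it needs to do is sort ascending by both fields so searchAfter() gets stable sort values (uses QueryBuilders from org.elasticsearch.index.query and SortOrder from org.elasticsearch.search.sort):
// Simplified sketch of the helper: sort ascending by the two given fields
// and cap the page size, so searchAfter() has stable sort values to resume from.
private static SearchSourceBuilder getSearchSourceBuilder(
        String primarySortField, String secondarySortField, int pageSize) {
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchAllQuery());
    builder.sort(primarySortField, SortOrder.ASC);
    builder.sort(secondarySortField, SortOrder.ASC);
    builder.size(pageSize);
    return builder;
}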
I would still like to know whether it is possible to do the same using a Watcher. Also, if anyone has suggestions to make the code more elegant, please share.
Thank you
I need to raise a warning during one of my scenarios, but I keep getting this error: "Cannot infer type arguments for Result.Warning<>"
I tried to raise the Warning the same way I have been raising Failure until now:
new Result.Warning<>(targetKey, Messages.format(TaroMessages.WARNING_RESOURCES_VALUE_DIFFERENCE_AFTER_REAFFECTATION, existing_value, new_value), true, oscarAccesClientPage.getCallBack());
The custom step I am using it in is the following: I go over a list of WebElements and check whether each element's current value matches the one saved before.
protected void checkXyResourcesValue(Integer xyIterator, List<WebElement> elements, String keyParameter) throws TechnicalException, FailureException {
    try {
        Integer resIterator = 1;
        for (WebElement element : elements) {
            String targetKey = "XY" + xyIterator + "RES" + resIterator + keyParameter;
            String new_value = element.getAttribute(VALUE) != null ? element.getAttribute(VALUE) : element.getText();
            String existing_value = Context.getValue(targetKey) != null ? Context.getValue(targetKey) : targetKey;
            // use equals() rather than != to compare string contents
            if (!new_value.equals(existing_value)) {
                new Result.Warning<>(targetKey, Messages.format(TaroMessages.WARNING_RESOURCES_VALUE_DIFFERENCE_AFTER_REAFFECTATION, existing_value, new_value), true, oscarAccesClientPage.getCallBack());
            }
            resIterator++;
        }
    } catch (Exception e) {
        new Result.Failure<>(e.getMessage(), Messages.format(TaroMessages.FAIL_MESSAGE_ACCES_CLIENT_XY_CHECK_RESOURCES_VALUE, keyParameter, xyIterator), true, oscarAccesClientPage.getCallBack());
    }
}
For the method that checks and saves values, I took inspiration from NoraUI's code for saving a value to the Context and reading it back.
I'm using Eclipse Luna 4.4.2 and I compile using JDK 1.8.0_131.
It may be more a matter of me not knowing how this works in Java than a real problem, so thank you in advance for your help or insights. Don't hesitate to ask if you need more information on the piece of code or the context.
new Result.Warning<>(targetKey, Messages.format(TaroMessages.WARNING_RESOURCES_VALUE_DIFFERENCE_AFTER_REAFFECTATION, existing_value, new_value), true, 0);
Use 0 if you do not use any Model (serialized data), or use the ID of your object in the serialization.
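As a side note, my guess is that Eclipse's own compiler (Luna does not use javac) fails to infer the diamond's type arguments once the last argument's type does not match what the constructor expects. If you ever need the diamond to resolve anyway, spelling the type argument out explicitly can help, assuming the Warning's type parameter is your key's type:
// Hypothetical variant with an explicit type argument instead of the diamond.
new Result.Warning<String>(targetKey,
        Messages.format(TaroMessages.WARNING_RESOURCES_VALUE_DIFFERENCE_AFTER_REAFFECTATION,
                existing_value, new_value),
        true, 0);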
I am trying to export repeat grid data to Excel. To do this, I have provided a button above the grid (in the same layout) which runs the "MyCustomActivity" activity when clicked. It is also worth pointing out that I am using an article as a guide for the configuration. According to the guide, my "MyCustomActivity" activity contains two steps:
Method: Property-Set, Method Parameters: Param.exportmode = "excel"
Method: Call pzRDExportWrapper, passing the current parameters (there is only one, from the 1st step).
But after I ran into an issue, I changed the 2nd step to Call Rule-Obj-Report-Definition.pzRDExportWrapper.
But as you have already understood, the solution doesn't work. I checked the log files and found an interesting error:
2017-04-11 21:08:27,992 [ WebContainer : 4] [OpenPortal] [ ] [ MyFW:01.01.02] (ctionWrapper._baseclass.Action) ERROR as1|172.22.254.110 bar - Activity 'MyCustomActivity' failed to execute; Failed to find a 'RULE-OBJ-ACTIVITY' with the name 'PZRESOLVECOPYFILTERS' that applies to 'COM-FW-MyFW-Work'. There were 3 rules with this name in the rulebase, but none matched this request. The 3 rules named 'PZRESOLVECOPYFILTERS' defined in the rulebase are:
2017-04-11 21:08:42,807 [ WebContainer : 4] [TABTHREAD1] [ ] [ MyFW:01.01.02] (fileSetup.Code_Security.Action) ERROR as1|172.22.254.110 bar - External authentication failed:
If anyone has any suggestions to share, I would appreciate it.
Thank you.
I wanted to provide the functionality of exporting retrieved works to a CSV file. The functionality should let the user choose which fields to retrieve, all results should be in Ukrainian, and it should be able to use any SearchFilter pages and Report Definition rules.
On a User Portal I have two sections: the first contains text fields and a Search button, and the second contains a Repeat Grid to display results. The text fields are used to filter results, and they use a page Org-Div-Work-SearchFilter.
I made a custom CSV parser. I created two activities and wrote some Java code. I should mention that I took some code from pzRDExportWrapper.
The activities are:
ExportToCSV - takes parameters from the user, gets the data, and invokes ConvertResultsToCSV;
ConvertResultsToCSV - converts the retrieved data to a .CSV file.
Configurations of the ExportToCSV activity:
The Pages And Classes tab:
ReportDefinition is an object of a certain Report Definition.
SearchFilter is a page with the values entered by the user.
ReportDefinitionResults is a list of the retrieved works to export.
ReportDefinitionResults.pxResults denotes the type of a certain work.
The Parameters tab:
FileName is the name of the generated file.
ColumnsNames is a comma-separated list of column names. If the parameter is empty, the CSVProperties names are used as headers.
CSVProperties is a comma-separated list of the properties to display in the spreadsheet.
SearchPageName is the name of the page used to filter results.
ReportDefinitionName is the name of the RD used to retrieve results.
ReportDefinitionClass is the class of the report definition used.
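For instance (hypothetical values), CSVProperties = "pyID,pyLabel" with ColumnsNames = "Номер,Назва" would export those two properties under the given Ukrainian column headers.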
The Step tab:
Let's look through the steps:
1. Get a SearchFilter page, with the name given by a parameter and its fields populated:
2. If SearchFilter is not empty, call a Data Transform to convert SearchFilter's properties into parameter properties:
A fragment of the Data Transform:
3. Get an object of the Report Definition:
4. Set the parameters for the Report Definition:
5. Invoke the Report Definition and save the results to ReportDefinitionResults:
6. Invoke the ConvertResultsToCSV activity:
7. Delete the result page:
An overview of the ConvertResultsToCSV activity follows.
The Parameters tab of the ConvertResultsToCSV activity:
CSVProperties are the properties to retrieve and export, separated by commas.
ColumnsNames are the names of the columns to display.
PageListProperty is the name of the property to be read on the primary page.
FileName is the name of the generated file. Can be empty.
AppendTimeStampToFileName - if true, the time of file generation is appended to the file name.
CSVString is the generated CSV string to be saved to a file.
FileName is the name of the file.
listSeperator is always a semicolon, used to separate fields.
Let's skim all the steps in the activity:
Get the localization from user settings (commented out):
In theory this could support localization in many languages.
Always set the "uk" (Ukrainian) localization.
Get the separator according to the localization. It is always a semicolon in Ukrainian, English, and Russian; other languages would need to be checked.
The next step contains Java code, which forms the CSV string:
StringBuffer csvContent = new StringBuffer(); // the content of the buffer
String pageListProp = tools.getParamValue("PageListProperty");
ClipboardProperty resultsProp = myStepPage.getProperty(pageListProp);

// fill the properties names list
java.util.List<String> propertiesNames = new java.util.LinkedList<String>(); // names of the properties whose values are displayed in the csv
String csvProps = tools.getParamValue("CSVProperties");
propertiesNames = java.util.Arrays.asList(csvProps.split(","));

// get the user's column names
java.util.List<String> columnsNames = new java.util.LinkedList<String>();
String CSVDisplayProps = tools.getParamValue("ColumnsNames");
if (!CSVDisplayProps.isEmpty()) {
    columnsNames = java.util.Arrays.asList(CSVDisplayProps.split(","));
} else {
    columnsNames.addAll(propertiesNames);
}

// add the column headers to the csv file
Iterator columnsIter = columnsNames.iterator();
while (columnsIter.hasNext()) {
    csvContent.append(columnsIter.next().toString());
    if (columnsIter.hasNext()) {
        csvContent.append(listSeperator); // listSeperator - local variable
    }
}
csvContent.append("\r");

for (int i = 1; i <= resultsProp.size(); i++) {
    ClipboardPage propPage = resultsProp.getPageValue(i);
    Iterator iterator = propertiesNames.iterator();
    int propTypeIndex = 0;
    while (iterator.hasNext()) {
        ClipboardProperty clipProp = propPage.getIfPresent((iterator.next()).toString());
        String propValue = "";
        if (clipProp != null && !clipProp.isEmpty()) {
            char propType = clipProp.getType();
            propValue = clipProp.getStringValue();
            if (propType == ImmutablePropertyInfo.TYPE_DATE) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                long mills = dtu.parseDateString(propValue);
                java.util.Date date = new Date(mills);
                String sdate = dtu.formatDateTimeStamp(date);
                propValue = dtu.formatDateTime(sdate, "dd.MM.yyyy", "", "");
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DATETIME) {
                DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
                propValue = dtu.formatDateTime(propValue, "dd.MM.yyyy HH:mm", "", "");
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DECIMAL) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, new BigDecimal(propValue));
            }
            else if (propType == ImmutablePropertyInfo.TYPE_DOUBLE) {
                propValue = PRNumberFormat.format(localeCode, PRNumberFormat.DEFAULT_DECIMAL, false, null, Double.parseDouble(propValue));
            }
            else if (propType == ImmutablePropertyInfo.TYPE_TEXT) {
                propValue = clipProp.getLocalizedText();
            }
            else if (propType == ImmutablePropertyInfo.TYPE_INTEGER) {
                Integer intPropValue = Integer.parseInt(propValue);
                if (intPropValue < 0) {
                    propValue = new String();
                }
            }
        }
        if (propValue.contains(listSeperator)) {
            csvContent.append("\"" + propValue + "\"");
        } else {
            csvContent.append(propValue);
        }
        if (iterator.hasNext()) {
            csvContent.append(listSeperator);
        }
        propTypeIndex++;
    }
    csvContent.append("\r");
}
CSVString = csvContent.toString();
5. This step forms and saves the file in the server's directory tree:
char sep = PRFile.separatorChar;
String exportPath = tools.getProperty("pxProcess.pxServiceExportPath").getStringValue();
DateTimeUtils dtu = ThreadContainer.get().getDateTimeUtils();
String fileNameParam = tools.getParamValue("FileName");
if (fileNameParam.equals("")) {
    fileNameParam = "RecordsToCSV";
}

// append a time stamp
Boolean appendTimeStamp = tools.getParamAsBoolean(ImmutablePropertyInfo.TYPE_TRUEFALSE, "AppendTimeStampToFileName");
FileName += fileNameParam;
if (appendTimeStamp) {
    FileName += "_";
    String currentDateTime = dtu.getCurrentTimeStamp();
    currentDateTime = dtu.formatDateTime(currentDateTime, "HH-mm-ss_dd.MM.yyyy", "", "");
    FileName += currentDateTime;
}
// append the file extension
FileName += ".csv";
String strSQLfullPath = exportPath + sep + FileName;
PRFile f = new PRFile(strSQLfullPath);
PROutputStream stream = null;
PRWriter out = null;
try {
    // create the file
    stream = new PROutputStream(f);
    out = new PRWriter(stream, "UTF-8");
    // Bug with Excel reading a file starting with 'ID' as a SYLK file. If the CSV starts with ID, prepend an empty space.
    if (CSVString.startsWith("ID")) {
        CSVString = " " + CSVString;
    }
    out.write(CSVString);
} catch (Exception e) {
    oLog.error("Error writing csv file: " + e.getMessage());
} finally {
    try {
        // close the output stream
        out.close();
    } catch (Exception e) {
        oLog.error("Error closing the file stream: " + e.getMessage());
    }
}
The last step calls #baseclass.DownloadFile to download the file:
Finally, we can place a button on some section or somewhere else and set up its Actions tab like this:
It also works fine inside a "Refresh Section" action.
A possible result could be:
Thanks for reading.
I have a collection with some documents in it. In my application I create this collection first and then insert documents. Based on the requirements, I also need to truncate the collection (delete all documents). Using the DocumentDB Java API, I have written the following code for this purpose:
DocumentClient documentClient = getConnection(masterkey, server, portNo);
List<Database> databaseList = documentClient.queryDatabases("SELECT * FROM root r WHERE r.id='" + schemaName + "'", null).getQueryIterable().toList();
DocumentCollection collection = null;
Database databaseCache = (Database) databaseList.get(0);
List<DocumentCollection> collectionList = documentClient.queryCollections(databaseCache.getSelfLink(), "SELECT * FROM root r WHERE r.id='" + collectionName + "'", null).getQueryIterable().toList();
// truncate logic
if (collectionList.size() > 0) {
    collection = ((DocumentCollection) collectionList.get(0));
    if (truncate) {
        try {
            documentClient.deleteDocument(collection.getSelfLink(), null);
        } catch (DocumentClientException e) {
            e.printStackTrace();
        }
    }
} else { // create logic
    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setOfferType("S1");
    collection = new DocumentCollection();
    collection.setId(collectionName);
    try {
        collection = documentClient.createCollection(databaseCache.getSelfLink(), collection, requestOptions).getResource();
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}
With the above code I am able to create a new collection successfully, and I am able to insert documents into it as well. But while truncating the collection I am getting the error below:
com.microsoft.azure.documentdb.DocumentClientException: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'delete
colls
eyckqjnw0ae=
I am using Azure Document DB Java API version 1.9.5.
It would be of great help if you could point out the error in my code, or suggest a better way of truncating a collection. I would really appreciate any help here.
According to your description and code, I think the issue is caused by the code below.
try {
    documentClient.deleteDocument(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
    e.printStackTrace();
}
It seems that you want to delete a document with the code above, but you are passing a collection link as the documentLink argument.
So if your real intention is to delete the collection, please use the method DocumentClient.deleteCollection(collectionLink, options).
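If instead the goal really is to truncate (delete every document but keep the collection), a sketch along these lines should work with the 1.x SDK, assuming the collection is not partitioned; it queries all documents and deletes them one by one:
// Sketch: delete every document in the collection individually.
// Assumes `documentClient` and `collection` are set up as in your question.
List<Document> docs = documentClient
        .queryDocuments(collection.getSelfLink(), "SELECT * FROM root r", null)
        .getQueryIterable()
        .toList();
for (Document doc : docs) {
    try {
        documentClient.deleteDocument(doc.getSelfLink(), null);
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}
Deleting and recreating the collection is usually cheaper in request units, but it also drops any triggers or stored procedures defined on it.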
I've searched high and low to find an example, as what I am trying does not seem to be working. I am getting this error:
com.google.gdata.util.InvalidEntryException: Bad Request
Invalid request URI
at com.google.gdata.client.http.HttpGDataRequest.handleErrorResponse(HttpGDataRequest.java:594)
at com.google.gdata.client.http.GoogleGDataRequest.handleErrorResponse(GoogleGDataRequest.java:563)
at com.google.gdata.client.http.HttpGDataRequest.checkResponse(HttpGDataRequest.java:552)
at com.google.gdata.client.http.HttpGDataRequest.execute(HttpGDataRequest.java:530)
at com.google.gdata.client.http.GoogleGDataRequest.execute(GoogleGDataRequest.java:535)
at com.google.gdata.client.Service.update(Service.java:1563)
at com.google.gdata.client.Service.update(Service.java:1530)
at com.google.gdata.client.GoogleService.update(GoogleService.java:583)
at CalendarConnect.deleteEvent(CalendarConnect.java:37)
at ProgMain.main(ProgMain.java:26)
Here is an example of the code I am using:
CalendarService service = new CalendarService("Fourth Year Project");
service.setUserCredentials(username, password);
URL postUrl = new URL("https://www.google.com/calendar/feeds/default/private/full");
CalendarEventEntry myEntry = new CalendarEventEntry();
myEntry.setTitle(new PlainTextConstruct("TEST"));
myEntry.setContent(new PlainTextConstruct("TEST"));
DateTime startTime = DateTime.parseDateTime(StartDateTime);
DateTime endTime = DateTime.parseDateTime(EndDateTime);
When eventTimes = new When();
eventTimes.setStartTime(startTime);
eventTimes.setEndTime(endTime);
myEntry.addTime(eventTimes);
CalendarEventEntry insertedEntry = service.update(postUrl, myEntry);
URL deleteUrl = new URL(insertedEntry.getEditLink().getHref());
service.delete(deleteUrl);
It has been chopped and changed so much that I am not sure where I am with it now. Has anyone encountered this problem? If so, how did you overcome it? Does anyone have an example of some code that works, as Google only provides one line of code in their explanation?
Can you try
CalendarEventEntry insertedEntry = service.insert(postUrl, myEntry);
instead of
CalendarEventEntry insertedEntry = service.update(postUrl, myEntry);
?
I think the rest of your code is for inserting an event entry, and you cannot call update with it. If you change it to insert, it will insert and then delete the entry (if it works), and I don't see the point in that. If you are trying to retrieve an entry and delete it, there are some examples in the post.
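In other words, the minimal flow would be the following, reusing only the calls from your own code:
// Insert the entry first, then delete it through its edit link.
CalendarEventEntry insertedEntry = service.insert(postUrl, myEntry);
URL deleteUrl = new URL(insertedEntry.getEditLink().getHref());
service.delete(deleteUrl);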
The only thing you need is a "handle" on your event. There are different ways to get one; I'll show you how I do it if you want to delete a specific event in your calendar:
String title = "Event title that i want to delete";
try {
    URL calendarUrl = new URL(calendar.getLink(Link.Rel.ALTERNATE, Link.Type.ATOM).getHref());
    CalendarEventFeed resultFeed = service.getFeed(calendarUrl, CalendarEventFeed.class);
    for (int i = 0; i < resultFeed.getEntries().size(); i++) {
        CalendarEventEntry entry = resultFeed.getEntries().get(i);
        if (entry.getTitle().getPlainText().equals(title)) {
            entry.delete();
            break;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} catch (ServiceException e) {
    e.printStackTrace();
}
So you only need to set the title (or any other property) of the event you are looking for. If you don't know how to get the "handle" on the calendar you created earlier, I can explain how. In my code, "calendar" is a CalendarEntry.