I'm working with the Google Spreadsheets API and Android Studio. I'm reading data from a sheet and showing the results in a ListView. My problem is that I can only retrieve the info from the first row, not from the following ones. How can I do that?
I have the following code:
private class MakeRequestTask extends AsyncTask<Void, Void, List<String>> {
private Exception mLastError = null;
MakeRequestTask(GoogleAccountCredential credential) {
}
@Override
protected List<String> doInBackground(Void... params) {
try {
return getDataFromApi();
} catch (Exception e) {
mLastError = e;
cancel(true);
return null;
}
}
private List<String> getDataFromApi() throws IOException {
String range = "Sheet1!A2:H";
List<String> results = new ArrayList<String>();
ValueRange response = mService.spreadsheets().values()
.get(spreadsheet_id, range)
.execute();
List<List<Object>> values = response.getValues();
if (values == null) {
// It doesn't let me show a Toast here
} else {
for (List row : values) {
row.get(0);
row.get(1);
row.get(2);
row.get(5);
System.out.println("resultados:" + row.get(0) + ", " + row.get(1));
}
}
return results;
}
Try to check the sample code on Reading multiple ranges.
To read multiple discontinuous ranges, use spreadsheets.values.batchGet, which lets you specify any number of ranges to retrieve.
Here is some sample Java code:
List<String> ranges = Arrays.asList(
//Range names ...
);
BatchGetValuesResponse result = service.spreadsheets().values().batchGet(spreadsheetId)
.setRanges(ranges).execute();
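The response then contains one ValueRange per requested range. A minimal sketch of reading them back, assuming the getValueRanges() accessor of BatchGetValuesResponse:
// each requested range comes back as its own ValueRange
for (ValueRange valueRange : result.getValueRanges()) {
    List<List<Object>> values = valueRange.getValues();
    if (values != null) {
        for (List row : values) {
            System.out.println(row);
        }
    }
}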
Another thing to try is replacing the for loop with:
for (List row : values) {
// Print columns A and E, which correspond to indices 0 and 4.
System.out.printf("%s, %s\n", row.get(0), row.get(4));
}
You can specify the indices you need; in the example, indices 0 and 4 (columns A and E).
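Also note that in your getDataFromApi() the results list is never populated, so nothing ever reaches the ListView. A minimal sketch of how you might fill it (the column indices are only examples taken from your code):
List<String> results = new ArrayList<String>();
List<List<Object>> values = response.getValues();
if (values != null) {
    for (List row : values) {
        // build one display string per spreadsheet row; adjust the indices to your columns
        results.add(row.get(0) + ", " + row.get(1));
    }
}
return results;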
Hope this is what you are looking for.
Related
I have a categorized Notes view; let's say the first categorized column is TypeOfVehicle, the second categorized column is Model, and the third categorized column is Manufacturer.
I would like to collect only the values for the first category and return them as a JSON object.
I am facing two problems:
- I cannot read the value for the category; the column values are empty, and when I try to access the underlying document it is null.
- The script won't hop over to the category/sibling on the same level.
Can someone explain what I am doing wrong here?
private Object getFirstCategory() {
JsonJavaObject json = new JsonJavaObject();
try{
String server = null;
String filepath = null;
server = props.getProperty("server");
filepath = props.getProperty("filename");
Database db;
db = utils.getSession().getDatabase(server, filepath);
if (db.isOpen()) {
View vw = db.getView("transport");
if (null != vw) {
vw.setAutoUpdate(false);
ViewNavigator nav;
nav = vw.createViewNav();
JsonJavaArray arr = new JsonJavaArray();
Integer count = 0;
ViewEntry tmpentry;
ViewEntry entry = nav.getFirst();
while (null != entry) {
Vector<?> columnValues = entry.getColumnValues();
if(entry.isCategory()){
System.out.println("entry notesid = " + entry.getNoteID());
Document doc = entry.getDocument();
if(null != doc){
if (doc.hasItem("TypeOfVehicle ")){
System.out.println("category has not " + "TypeOfVehicle ");
}
else{
System.out.println("category IS " + doc.getItemValueString("TypeOfVehicle "));
}
} else{
System.out.println("doc is null");
}
JsonJavaObject row = new JsonJavaObject();
JsonJavaObject jo = new JsonJavaObject();
String TypeOfVehicle = String.valueOf(columnValues.get(0));
if (null != TypeOfVehicle ) {
if (!TypeOfVehicle .equals("")){
jo.put("TypeOfVehicle ", TypeOfVehicle );
} else{
jo.put("TypeOfVehicle ", "Not categorized");
}
} else {
jo.put("TypeOfVehicle ", "Not categorized");
}
row.put("request", jo);
arr.put(count, row);
count++;
tmpentry = nav.getNextSibling(entry);
entry.recycle();
entry = tmpentry;
} else{
//tmpentry = nav.getNextCategory();
//entry.recycle();
//entry = tmpentry;
}
}
json.put("data", arr);
vw.setAutoUpdate(true);
vw.recycle();
}
}
} catch (Exception e) {
OpenLogUtil.logErrorEx(e, JSFUtil.getXSPContext().getUrl().toString(), Level.SEVERE, null);
}
return json;
}
What you're doing wrong is trying to treat any single view entry as both a category and a document. A single view entry can only be one of a category, a document, or a total.
If you have an entry for which isCategory() returns true, then for the same entry:
isDocument() will return false.
getDocument() will return null.
getNoteID() will return an empty string.
If the only thing you need is top-level categories, then get the first entry from the navigator and iterate over entries using nav.getNextSibling(entry) as you're already doing, but:
Don't try to get documents, note ids, or fields.
Use entry.getColumnValues().get(0) to get the value of the first column for each category.
If the view contains any uncategorised documents, it's possible that entry.getColumnValues().get(0) might throw an exception, so you should also check that entry.getColumnValues().size() is at least 1 before trying to get a value.
If you need any extra data beyond just top-level categories, then note that subcategories and documents are children of their parent categories.
If an entry has a subcategory, nav.getChild(entry) will get the first subcategory of that entry.
If an entry has no subcategories, but is a category which contains documents, nav.getChild(entry) will get the first document in that category.
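A minimal sketch of that approach, based on the code in the question (same view and JSON helpers; error handling trimmed for brevity):
// iterate only the top-level categories and collect each one's first column value
ViewNavigator nav = vw.createViewNav();
JsonJavaArray arr = new JsonJavaArray();
int count = 0;
ViewEntry tmpentry;
ViewEntry entry = nav.getFirst();
while (entry != null) {
    if (entry.isCategory() && entry.getColumnValues().size() > 0) {
        String typeOfVehicle = String.valueOf(entry.getColumnValues().get(0));
        JsonJavaObject jo = new JsonJavaObject();
        jo.put("TypeOfVehicle", typeOfVehicle.isEmpty() ? "Not categorized" : typeOfVehicle);
        JsonJavaObject row = new JsonJavaObject();
        row.put("request", jo);
        arr.put(count, row);
        count++;
    }
    tmpentry = nav.getNextSibling(entry); // stay on the same (top) level
    entry.recycle();
    entry = tmpentry;
}
json.put("data", arr);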
I have a CSV file and I need to sort its content by two columns: the first column is a date and the fourth is text. I need the file sorted by the first column and then by the fourth.
I'm trying it like this:
// read the file and iterate over its lines
PriorityQueue<String[]> linesOrdenered = new PriorityQueue<>(new LineComparator());
while ((line = br.readLine()) != null) {
String[] row = line.split(CVS_SPLIT_BY);
if ((!isHeader) && row[0].equals("Route Date")) {
header = line;
isHeader = true;
continue;
}
linesOrdenered.add(row);
}
public class LineComparator implements Comparator<String[]> {
//string[3] == key
@Override
public int compare(String[] strings, String[] t1) {
int keyComparator = 0;
try {
Date dataOne = format.parse(strings[0]);
Date dataTwo = format.parse(t1[0]);
keyComparator = dataOne.compareTo(dataTwo);
} catch (ParseException e) {
LOGGER.error("Error on parse Data [{}]", e.getMessage());
}
if (keyComparator == 0) {
keyComparator = strings[3].compareTo(t1[3]);
}
return keyComparator;
}
}
The result is a PriorityQueue that is ordered only by date.
Example content of CSV file:
12/01/2018,6:30,C-993BNT,A1001
12/03/2018,6:30,C-993BNT,A1001
12/04/2018,6:30,C-993BNT,A1001
12/05/2018,6:30,C-993BNT,A1001
12/01/2018,6:30,C-555BQJ,A1003
12/03/2018,6:30,C-555BQJ,A1003
12/04/2018,6:30,C-555BQJ,A1003
12/05/2018,6:30,C-555BQJ,A1003
12/06/2018,6:30,C-555BQJ,A1003
12/07/2018,6:30,C-555BQJ,A1003
Your input file and "format" are not available to us, but my educated guess is that all dates are different when parsed using the format (at least to the extent that comparing them does not return 0).
Carrying on with Roy Shahaf's idea, assuming:
private static DateFormat format = new SimpleDateFormat("MM/dd/yyyy");
I get:
12/01/2018,6:30,C-993BNT,A1001
12/01/2018,6:30,C-555BQJ,A1003
12/03/2018,6:30,C-993BNT,A1001
12/03/2018,6:30,C-555BQJ,A1003
12/04/2018,6:30,C-993BNT,A1001
12/04/2018,6:30,C-555BQJ,A1003
12/05/2018,6:30,C-993BNT,A1001
12/05/2018,6:30,C-555BQJ,A1003
12/06/2018,6:30,C-555BQJ,A1003
12/07/2018,6:30,C-555BQJ,A1003
which looks right to me. See https://jdoodle.com/a/QXP
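For reference, a minimal sketch of how that output can be reproduced with the LineComparator above and that format (the file name and the comma delimiter are assumptions; note that a PriorityQueue only yields elements in comparator order when drained with poll(), not when iterated):
PriorityQueue<String[]> linesOrdenered = new PriorityQueue<>(new LineComparator());
try (BufferedReader br = new BufferedReader(new FileReader("routes.csv"))) { // hypothetical file name
    String line;
    while ((line = br.readLine()) != null) {
        linesOrdenered.add(line.split(","));
    }
}
// drain with poll() to print the rows in sorted order
while (!linesOrdenered.isEmpty()) {
    System.out.println(String.join(",", linesOrdenered.poll()));
}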
I'm in the process of building a basic database using CSV files, and I'm currently testing the select function when I ran into something strange.
private ArrayList<Record> selectField(String selectTerm)
{
Log.log("Selection " + selectTerm,2,"DB_io");
ArrayList<Record> ret = new ArrayList<Record>();
if (titleRow.values.contains(selectTerm))
{
Log.log("Adding values to " + selectTerm);
int ordinal = titleRow.values.indexOf(selectTerm);
Log.log("Ordinal " + ordinal);
List<String> tempList = new ArrayList<String>();
for (Record r : data)
{
tempList.add(r.values.get(ordinal));
Record s = new Record(tempList);
ret.add(s);
tempList.clear();
}
Log.log("Number of records in ret " + ret.size());
for (Record t : ret)
{
Log.log(t.toString());
}
}
else
{
Log.log("keyField does not contain that field");
return null;
}
Log.log("Values " + ret.toString());
return ret;
}
When I do this, the part where it logs t.toString() shows the record to be empty, whereas if I log it before tempList.clear(), it shows the record containing data like it should.
If I move the tempList declaration into the Record r : data loop, then it works fine and the Record t : ret loop outputs the contents of the record like it should.
Why is this?
Edit : Record class
public class Record
{
List<String> values = new ArrayList<String>();
public Record(List<String> terms)
{
this.values = terms;
}
public Record(String[] s)
{
this.values = Arrays.asList(s);
}
public String toString()
{
return values.toString();
}
}
Your Record instance holds a reference to the ArrayList instance you passed to its constructor. Therefore, when you call tempList.clear(), you clear the same List that your Record instance is holding a reference to.
You shouldn't call tempList.clear(), since you are creating a new ArrayList in each iteration of your loop anyway.
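In other words, a sketch of how the corrected loop in selectField might look:
for (Record r : data) {
    // a fresh list per record, so each Record keeps its own backing list
    List<String> tempList = new ArrayList<String>();
    tempList.add(r.values.get(ordinal));
    ret.add(new Record(tempList));
    // no tempList.clear() here - clearing would empty the very list the Record holds
}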
You are referencing the same object from more than one place, and the clear() method empties it by removing all of its elements:
instead of ret.add(s); you could use ret.add(s.clone()); (note that Record as shown would need to provide such a clone method).
I have two ArrayLists, sourceMessageList and targetMessageList, and I need to compare the data in both message lists.
Now let's say List1 has 100 records and List2 has 1000 records.
The 1st record from List1 is compared with each record in List2, then the 2nd record from List1 is compared with each record in List2, and so on.
But for the first source record in List1, the iterator over List2 keeps reporting hasNext() as true.
private void compareMessageList(ArrayList<String> source_messageList, ArrayList<String> target_messageList)
throws Exception {
// TODO Auto-generated method stub
Iterator<String> sourceMessageIterator = source_messageList.iterator();
Iterator<String> targetMessageIterator = null;
while (sourceMessageIterator.hasNext()) {
String sourceMessage = (String) sourceMessageIterator.next();
targetMessageIterator = target_messageList.iterator();
while (targetMessageIterator.hasNext()) {
String targetMessage = (String) targetMessageIterator.next();
if (getCorpValue(sourceMessage).equalsIgnoreCase(getCorpValue(targetMessage))) {
assertXMLEquals(convertSwiftMessageToXML(sourceMessage), convertSwiftMessageToXML(targetMessage));
}
}
}
if (buffer.toString().length() > 0) {
writeDifferenceTofile(buffer.toString());
buffer.delete(0, buffer.length());
throw new CatsException("There are some differences in the files.");
}
System.out.println("Exiting now ...");
}
The above code is taking too much time to execute.
To speed things up, build a lookup map keyed on the lowercased corp value, so each target message is matched in constant time instead of rescanning the whole source list (O(n + m) instead of O(n·m)):
HashMap<String, String> lowers = new HashMap<String, String>();
for (String source : source_messageList) {
lowers.put(getCorpValue(source).toLowerCase(), source);
}
for (String target : target_messageList) {
final String corpTarget = getCorpValue(target).toLowerCase();
if(lowers.containsKey(corpTarget)) {
assertXMLEquals(
convertSwiftMessageToXML(lowers.get(corpTarget)),
convertSwiftMessageToXML(target)
);
}
}
I have created a Twitter stream filtered by some keywords, as follows:
TwitterStream twitterStream = getTwitterStreamInstance();
FilterQuery filtre = new FilterQuery();
String[] keywordsArray = { "iphone", "samsung" , "apple", "amazon"};
filtre.track(keywordsArray);
twitterStream.filter(filtre);
twitterStream.addListener(listener);
What is the best way to segregate tweets based on the keywords matched? E.g. all the tweets that match "iphone" should be stored in an "IPHONE" table, all the tweets that match "samsung" in a "SAMSUNG" table, and so on. NOTE: there are about 500 filter keywords.
It seems that the only way to find out which keyword a tweet belongs to is to iterate over multiple properties of the Status object. The following code requires a database service with a method insertTweet(String tweetText, Date createdAt, String keyword); every tweet is stored in the database multiple times if multiple keywords are found. If at least one keyword is found in the tweet text, the additional properties are not searched for more keywords.
// creates a map of the keywords with a compiled pattern, which matches the keyword
private Map<String, Pattern> keywordsMap = new HashMap<>();
private TwitterStream twitterStream;
private DatabaseService databaseService; // implement and add this service
public void start(List<String> keywords) {
stop(); // stop the streaming first, if it is already running
if(keywords.size() > 0) {
for(String keyword : keywords) {
keywordsMap.put(keyword, Pattern.compile(keyword, Pattern.CASE_INSENSITIVE));
}
twitterStream = new TwitterStreamFactory().getInstance();
StatusListener listener = new StatusListener() {
@Override
public void onStatus(Status status) {
insertTweetWithKeywordIntoDatabase(status);
}
/* add the unimplemented methods from the interface */
};
twitterStream.addListener(listener);
FilterQuery filterQuery = new FilterQuery();
filterQuery.track(keywordsMap.keySet().toArray(new String[keywordsMap.keySet().size()]));
filterQuery.language(new String[]{"en"});
twitterStream.filter(filterQuery);
}
else {
System.err.println("Could not start querying because there are no keywords.");
}
}
public void stop() {
keywordsMap.clear();
if(twitterStream != null) {
twitterStream.shutdown();
}
}
private void insertTweetWithKeywordIntoDatabase(Status status) {
// search for keywords in tweet text
List<String> keywords = getKeywordsFromTweet(status.getText());
if (keywords.isEmpty()) {
StringBuffer additionalDataFromTweets = new StringBuffer();
// get extended urls
if (status.getURLEntities() != null) {
for (URLEntity url : status.getURLEntities()) {
if (url != null && url.getExpandedURL() != null) {
additionalDataFromTweets.append(url.getExpandedURL());
}
}
}
// get retweeted status -> text
if (status.getRetweetedStatus() != null && status.getRetweetedStatus().getText() != null) {
additionalDataFromTweets.append(status.getRetweetedStatus().getText());
}
// get retweeted status -> quoted status -> text
if (status.getRetweetedStatus() != null && status.getRetweetedStatus().getQuotedStatus() != null
&& status.getRetweetedStatus().getQuotedStatus().getText() != null) {
additionalDataFromTweets.append(status.getRetweetedStatus().getQuotedStatus().getText());
}
// get retweeted status -> quoted status -> extended urls
if (status.getRetweetedStatus() != null && status.getRetweetedStatus().getQuotedStatus() != null
&& status.getRetweetedStatus().getQuotedStatus().getURLEntities() != null) {
for (URLEntity url : status.getRetweetedStatus().getQuotedStatus().getURLEntities()) {
if (url != null && url.getExpandedURL() != null) {
additionalDataFromTweets.append(url.getExpandedURL());
}
}
}
// get quoted status -> text
if (status.getQuotedStatus() != null && status.getQuotedStatus().getText() != null) {
additionalDataFromTweets.append(status.getQuotedStatus().getText());
}
// get quoted status -> extended urls
if (status.getQuotedStatus() != null && status.getQuotedStatus().getURLEntities() != null) {
for (URLEntity url : status.getQuotedStatus().getURLEntities()) {
if (url != null && url.getExpandedURL() != null) {
additionalDataFromTweets.append(url.getExpandedURL());
}
}
}
String additionalData = additionalDataFromTweets.toString();
keywords = getKeywordsFromTweet(additionalData);
}
if (keywords.isEmpty()) {
System.err.println("ERROR: No Keyword found for: " + status.toString());
} else {
// insert into database
for(String keyword : keywords) {
databaseService.insertTweet(status.getText(), status.getCreatedAt(), keyword);
}
}
}
// returns a list of keywords which are found in a tweet
private List<String> getKeywordsFromTweet(String tweet) {
List<String> result = new ArrayList<>();
for (String keyword : keywordsMap.keySet()) {
Pattern p = keywordsMap.get(keyword);
if (p.matcher(tweet).find()) {
result.add(keyword);
}
}
return result;
}
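A short usage sketch, assuming the methods above live in a streaming service class you have instantiated (here called streamService) and that a DatabaseService implementation has been wired in:
// hypothetical wiring of the service sketched above
List<String> keywords = Arrays.asList("iphone", "samsung", "apple", "amazon");
streamService.start(keywords);
// ... later, when you no longer want to consume the stream:
streamService.stop();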
Here's how you'd use a StatusListener to interrogate the received Status objects:
final Set<String> keywords = new HashSet<String>();
keywords.add("apple");
keywords.add("samsung");
// ...
final StatusListener listener = new StatusAdapter() {
@Override
public void onStatus(Status status) {
final String statusText = status.getText();
for (String keyword : keywords) {
if (statusText.contains(keyword)) {
dao.insert(keyword, statusText);
}
}
}
};
final TwitterStream twitterStream = getTwitterStreamInstance();
final FilterQuery fq = new FilterQuery();
fq.track(keywords.toArray(new String[0]));
twitterStream.addListener(listener);
twitterStream.filter(fq);
I see the DAO being defined along the lines of:
public interface StatusDao {
void insert(String tableSuffix, Status status);
}
You would then have a DB table corresponding to each keyword. The implementation would use the tableSuffix to store the Status in the correct table; the SQL would roughly look like:
INSERT INTO status_$tableSuffix$ VALUES (...)
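A hedged sketch of one possible JDBC implementation (the status_* schema and column names are assumptions, not from the answer above; a table name cannot be bound as a JDBC parameter, so the suffix is concatenated into the SQL and should be validated against your known keyword list first):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import twitter4j.Status;

public class JdbcStatusDao implements StatusDao {
    private final Connection connection;

    public JdbcStatusDao(Connection connection) {
        this.connection = connection;
    }

    @Override
    public void insert(String tableSuffix, Status status) {
        // table names cannot be bound as parameters, so the validated suffix is concatenated
        String sql = "INSERT INTO status_" + tableSuffix + " (text, created_at) VALUES (?, ?)";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, status.getText());
            ps.setTimestamp(2, new Timestamp(status.getCreatedAt().getTime()));
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("Failed to insert status into status_" + tableSuffix, e);
        }
    }
}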
Notes:
This implementation would insert a Status into multiple tables if a Tweet contained 'apple' and 'samsung' for instance.
Additionally, this is quite a naive implementation, you might want to consider batching inserts into the tables... but it depends on the volume of Tweets you'll be receiving.
As noted in the comments, the API considers other attributes when matching e.g. URLs and an embedded Tweet (if present) so searching the status text for a keyword match may not be sufficient.
Well, you could create a class similar to an ArrayList, but make it so you can create an array of ArrayLists; call it TweetList. This class will need an insert function.
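A minimal sketch of what such a TweetList could look like (the class name and insert method are only the ones suggested here, not an existing API):
import java.util.ArrayList;
import java.util.List;

public class TweetList {
    private final List<String> tweets = new ArrayList<>();

    // add a matching tweet to this keyword's bucket
    public void insert(String tweet) {
        tweets.add(tweet);
    }

    public List<String> getTweets() {
        return tweets;
    }
}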
Then use two for loops to search through the tweets for matching keywords held in a normal ArrayList, and add each matching tweet to the TweetList whose index matches the keyword's index in the keywords ArrayList:
for (int i = 0; i < tweets.length; i++)
{
    String[] split = tweets[i].split(" "); // split the tweet up into words
    for (int j = 0; j < split.length; j++)
    {
        if (keywords.contains(split[j])) // check each word against the keyword list
        {
            // add the tweet to the TweetList whose index matches the keyword's index
            list[keywords.indexOf(split[j])].insert(tweets[i]);
        }
    }
}