I wrote the program below to understand how Elasticsearch can be used for full text search. When I search for individual words it works correctly, but I want to search for combinations of words, and that is not working.
package in.blogspot.randomcompiler.elastic_search_demo;
import in.blogspot.randomcompiler.elastic_search_impl.Event;
import java.util.Date;
import org.elasticsearch.action.count.CountRequestBuilder;
import org.elasticsearch.action.count.CountResponse;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import com.fasterxml.jackson.core.JsonProcessingException;
public class ElasticSearchDemo
{
public static void main( String[] args ) throws JsonProcessingException
{
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress("localhost", 9301));
DeleteResponse deleteResponse1 = client.prepareDelete("chat-data", "event", "1").execute().actionGet();
DeleteResponse deleteResponse2 = client.prepareDelete("chat-data", "event", "2").execute().actionGet();
DeleteResponse deleteResponse3 = client.prepareDelete("chat-data", "event", "3").execute().actionGet();
Event e1 = new Event("LOGIN", new Date(), "Agent1 logged into chat");
String e1Json = e1.prepareJson();
System.out.println("JSON: " + e1Json);
IndexResponse indexResponse1 = client.prepareIndex("chat-data", "event", "1").setSource(e1Json).execute().actionGet();
printIndexResponse("e1", indexResponse1);
Event e2 = new Event("LOGOUT", new Date(), "Agent1 logged out of chat");
String e2Json = e2.prepareJson();
System.out.println("JSON: " + e2Json);
IndexResponse indexResponse2 = client.prepareIndex("chat-data", "event", "2").setSource(e2Json).execute().actionGet();
printIndexResponse("e2", indexResponse2);
Event e3 = new Event("BREAK", new Date(), "Agent1 went on break in the middle of a chat");
String e3Json = e3.prepareJson();
System.out.println("JSON: " + e3Json);
IndexResponse indexResponse3 = client.prepareIndex("chat-data", "event", "3").setSource(e3Json).execute().actionGet();
printIndexResponse("e3", indexResponse3);
FilterBuilder filterBuilder = FilterBuilders.termFilter("value", "break middle");
SearchRequestBuilder searchBuilder = client.prepareSearch();
searchBuilder.setPostFilter(filterBuilder);
CountRequestBuilder countBuilder = client.prepareCount();
countBuilder.setQuery(QueryBuilders.constantScoreQuery(filterBuilder));
CountResponse countResponse1 = countBuilder.execute().actionGet();
System.out.println("HITS: " + countResponse1.getCount());
SearchResponse searchResponse1 = searchBuilder.execute().actionGet();
SearchHits hits = searchResponse1.getHits();
for(int i=0; i<hits.hits().length; i++) {
SearchHit hit = hits.getAt(i);
System.out.println("[" + i + "] " + hit.getId() + " : " +hit.sourceAsString());
}
client.close();
}
private static void printIndexResponse(String description, IndexResponse response) {
System.out.println("Index response for: " + description);
System.out.println("Index name: " + response.getIndex());
System.out.println("Index type: " + response.getType());
System.out.println("Index id: " + response.getId());
System.out.println("Index version: " + response.getVersion());
}
}
The issue I am facing is that when I search for "break middle" it returns nothing; the expectation is that it should return the third event.
I understand that I need to configure a different analyzer, rather than the default one, to make it index appropriately.
Could someone please help me understand how to do that? A complete example would be great to have.
The problem is that you are using the term filter:
FilterBuilder filterBuilder = FilterBuilders.termFilter("value", "break middle");
A Term filter doesn't analyse the data in the query string - so Elasticsearch is looking for the exact string "break middle".
However the third document will probably have been broken down by ES into individual terms as follows:
Agent1
went
on
break
in
the
middle
of
a
chat
To fix the issue, use a filter or query that analyses the string you're passing - for example a query_string query or a match query.
For example:
QueryBuilder qb = QueryBuilders.matchQuery("value", "break middle");
or:
QueryBuilder qb = QueryBuilders.queryString("break middle");
See the Java API documentation for Elasticsearch for more info.
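For instance, a minimal sketch of how the search above could be rewritten with a match query (the field name "value" is taken from the term filter in the question; index and type names are the ones used above):
SearchResponse response = client.prepareSearch("chat-data")
        .setTypes("event")
        // matchQuery analyses "break middle" into the terms "break" and "middle" (OR by default)
        .setQuery(QueryBuilders.matchQuery("value", "break middle"))
        .execute()
        .actionGet();
for (SearchHit hit : response.getHits().getHits()) {
    System.out.println(hit.getId() + " : " + hit.sourceAsString());
}
With the third document indexed as above, this should return event 3.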
I have a client sending in a JWKS like this:
{
  "keys": [
    {
      "kty": "RSA",
      "use": "sig",
      "alg": "RS256",
      "kid": "...",
      "x5t": "...",
      "custom_field_1": "this is some content",
      "custom_field_2": "this is some content, too",
      "n": "...",
      "x5c": "..."
    }
  ]
}
Using com.nimbusds.jose.jwk.JWKSet, I'd like to iterate through the keys via the getKeys() method, which gives me com.nimbusds.jose.jwk.JWK objects, and read those custom fields. I need to perform some custom logic based on these fields.
Is there a way to do this?
There does not appear to be a way to deal with this with standard libraries. I was unable to find anything that indicates that a custom parameter in a JWK is legitimate but the Jose library seems to just ignore it. So the only thing I can find is to read it "by hand". Something like:
import com.jayway.jsonpath.JsonPath;
import java.util.HashMap;
import java.util.List;
public class JWKSParserDirect {
private static final String jwks = "{\"keys\": [{" +
"\"kty\": \"RSA\"," +
"\"use\": \"sig\"," +
"\"alg\": \"RS256\"," +
"\"e\": \"AQAB\"," +
"\"kid\": \"2aktWjYabDofafVZIQc_452eAW9Z_pw7ULGGx87ufVA\"," +
"\"x5t\": \"5FTiZff07R_NuqNy5QXUK7uZNLo\"," +
"\"custom_field_1\": \"this is some content\"," +
"\"custom_field_2\": \"this is some content, too\"," +
"\"n\": \"foofoofoo\"," +
"\"x5c\": [\"blahblahblah\"]" +
"}" +
"," +
"{" +
"\"kty\": \"RSA\"," +
"\"use\": \"sig\"," +
"\"alg\": \"RS256\"," +
"\"e\": \"AQAB\"," +
"\"kid\": \"2aktWjYabDofafVZIQc_452eAW9Z_pw7ULGGx87ufVA\"," +
"\"x5t\": \"5FTiZff07R_NuqNy5QXUK7uZNLo\"," +
"\"custom_field_1\": \"this is some content the second time\"," +
"\"custom_field_2\": \"this is some content, too and two\"," +
"\"n\": \"foofoofoo\"," +
"\"x5c\": [\"blahblahblah\"]" +
"}]}";
#SuppressWarnings("unchecked")
public static void main(String[] argv) {
List<Object> keys = JsonPath.read(jwks, "$.keys[*]");
for (Object key : keys) {
HashMap<String, String> keyContents = (HashMap<String, String>) key;
System.out.println("custom_field_1 is \"" + keyContents.get("custom_field_1") + "\"");
System.out.println("custom_field_2 is \"" + keyContents.get("custom_field_2") + "\"");
}
}
}
or, to go direct to the JWK:
import com.jayway.jsonpath.JsonPath;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.HashMap;
import java.util.List;
public class JWKSParserURL {
#SuppressWarnings("unchecked")
public static void main(String[] argv) {
try {
URL url = new URL("https://someserver.tld/auth/realms/realmname/protocol/openid-connect/certs");
URLConnection urlConnection = url.openConnection();
InputStream inputStream = urlConnection.getInputStream();
List<Object> keys = JsonPath.read(inputStream, "$.keys[*]");
for( Object key: keys) {
HashMap<String, String> keyContents = (HashMap<String, String>)key;
System.out.println("custom_field_1 is \"" + keyContents.get("custom_field_1") + "\"");
System.out.println("custom_field_2 is \"" + keyContents.get("custom_field_2") + "\"");
}
}
catch (IOException ioe) {
ioe.printStackTrace(System.err);
}
}
}
There isn't a way that I can find to use a regex for the JsonPath key, so you'll need to grab them with the full path. You can also do something like:
List<String> customField1 = JsonPath.read(jwks, "$.keys[*].custom_field_1");
to get a list of the "custom_field_1" values. To me this is more awkward, since you get all of the custom field values separately rather than within each key.
Again, I'm not finding support for custom JWK fields anywhere - JWTs, no problem, but not JWKs. So if you need these fields, I think you'll have to extract them without the standard libraries.
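If you also need the standard key material through Nimbus, one workable pattern is to parse the JWKS twice - once with JWKSet for the standard fields and once with JsonPath for the custom ones - and match them up by kid. A rough sketch, assuming a well-formed JWKS string and both libraries on the classpath (the class name JWKSParserCombined is just illustrative):
import com.jayway.jsonpath.JsonPath;
import com.nimbusds.jose.jwk.JWK;
import com.nimbusds.jose.jwk.JWKSet;
import java.util.List;

public class JWKSParserCombined {
    // Parse once with Nimbus for the standard fields, once with JsonPath for the custom ones.
    public static void printCustomField1(String jwks) throws java.text.ParseException {
        JWKSet jwkSet = JWKSet.parse(jwks);
        for (JWK jwk : jwkSet.getKeys()) {
            String kid = jwk.getKeyID();
            // select the custom field of this key from the raw JSON via a JsonPath filter on kid
            List<String> custom = JsonPath.read(jwks, "$.keys[?(@.kid == '" + kid + "')].custom_field_1");
            System.out.println(kid + " -> custom_field_1 = " + custom);
        }
    }
}
Passing it the same jwks string as in the first example prints the custom field per kid while still giving you JWK objects for any cryptographic use.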
I'm having trouble finding examples of what I'm trying to do...
I'd like to create a Lambda function in Java. I thought I'd always use JavaScript for Lambda functions, but in this case I'll end up re-using application logic already written in Java, so it makes sense.
In the past I've written JavaScript Lambda functions that are triggered by Kinesis events. Super simple: the function receives the events as a parameter, does something, and voila. I'd like to do the same thing with Java. Really simple:
Kinesis Event(s) -> Trigger Function -> (Java) Receive Kinesis Events, do something with them
Anyone have experience with this kind of use case?
Here is some sample code I wrote to demonstrate the same concept internally. This code forwards events from one stream to another.
Note this code does not handle retries if there are errors in forwarding, nor is it meant to be performant in a production environment, but it does demonstrate how to handle the records from the publishing stream.
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kinesis.AmazonKinesisClient;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
public class KinesisToKinesis {
private LambdaLogger logger;
final private AmazonKinesisClient kinesisClient = new AmazonKinesisClient();
public PutRecordsResult eventHandler(KinesisEvent event, Context context) {
logger = context.getLogger();
if (event == null || event.getRecords() == null) {
logger.log("Event contains no data" + System.lineSeparator());
return null;
} else {
logger.log("Received " + event.getRecords().size() +
" records from " + event.getRecords().get(0).getEventSourceARN() + System.lineSeparator());
}
final Long startTime = System.currentTimeMillis();
// set up the client
Region region;
final Map<String, String> environmentVariables = System.getenv();
if (environmentVariables.containsKey("AWS_REGION")) {
region = Region.getRegion(Regions.fromName(environmentVariables.get("AWS_REGION")));
} else {
region = Region.getRegion(Regions.US_WEST_2);
logger.log("Using default region: " + region.toString() + System.lineSeparator());
}
kinesisClient.setRegion(region);
Long elapsed = System.currentTimeMillis() - startTime;
logger.log("Finished setup in " + elapsed + " ms" + System.lineSeparator());
PutRecordsRequest putRecordsRequest = new PutRecordsRequest().withStreamName("usagecounters-global");
List<PutRecordsRequestEntry> putRecordsRequestEntryList = event.getRecords().parallelStream()
.map(r -> new PutRecordsRequestEntry()
.withData(ByteBuffer.wrap(r.getKinesis().getData().array()))
.withPartitionKey(r.getKinesis().getPartitionKey()))
.collect(Collectors.toList());
putRecordsRequest.setRecords(putRecordsRequestEntryList);
elapsed = System.currentTimeMillis() - startTime;
logger.log("Processed " + putRecordsRequest.getRecords().size() +
" records in " + elapsed + " ms" + System.lineSeparator());
PutRecordsResult putRecordsResult = kinesisClient.putRecords(putRecordsRequest);
elapsed = System.currentTimeMillis() - startTime;
logger.log("Forwarded " + putRecordsRequest.getRecords().size() +
" records to Kinesis " + putRecordsRequest.getStreamName() +
" in " + elapsed + " ms" + System.lineSeparator());
return putRecordsResult;
}
}
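A note on wiring this up: the handler for the class above would be configured as something like KinesisToKinesis::eventHandler. If you only need the receive-events-and-do-something part of the question, an even smaller variant is to implement the RequestHandler interface from aws-lambda-java-core; a minimal sketch (the class name KinesisLogger is just an example), assuming aws-lambda-java-events is on the classpath:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import java.nio.charset.StandardCharsets;

public class KinesisLogger implements RequestHandler<KinesisEvent, Void> {
    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        // Lambda hands the batch of Kinesis records to the function; decode and log each payload
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            String payload = new String(record.getKinesis().getData().array(), StandardCharsets.UTF_8);
            context.getLogger().log("partitionKey=" + record.getKinesis().getPartitionKey()
                    + " payload=" + payload + System.lineSeparator());
        }
        return null;
    }
}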
Could anyone help me with the issue below?
I want to pass test cases in QC through Java. I used com4j and reached the test sets, but I am unable to fetch the test cases under the respective test set.
Could anyone please help me understand how to pass test cases in QC through com4j?
import com.qc.ClassFactory;
import com.qc.ITDConnection;
import com.qc.ITestLabFolder;
import com.qc.ITestSetFactory;
import com.qc.ITestSetTreeManager;
import com.qc.ITestSetFolder;
import com.qc.IList;
import com.qc.ITSTest;
import com.qc.ITestSet;
import com.qc.ITestFactory;
import com4j.*;
import com4j.stdole.*;
import com4j.tlbimp.*;
import com4j.tlbimp.def.*;
import com4j.tlbimp.driver.*;
import com4j.util.*;
import com4j.COM4J;
import java.util.*;
import com.qc.IRun;
import com.qc.IRunFactory;
public class Qc_Connect {
public static void main(String[] args) {
// TODO Auto-generated method stub
String url="http://abc/qcbin/";
String domain="abc";
String project="xyz";
String username="132222";
String password="Xyz";
String strTestLabPath = "Root\\Test\\";
String strTestSetName = "TestQC";
try{
ITDConnection itd=ClassFactory.createTDConnection();
itd.initConnectionEx(url);
System.out.println("COnnected To QC:"+ itd.connected());
itd.connectProjectEx(domain,project,username,password);
System.out.println("Logged into QC");
//System.out.println("Project_Connected:"+ itd.connected());
ITestSetFactory objTestSetFactory = (itd.testSetFactory()).queryInterface(ITestSetFactory.class);
ITestSetTreeManager objTestSetTreeManager = (itd.testSetTreeManager()).queryInterface(ITestSetTreeManager.class);
ITestSetFolder objTestSetFolder =(objTestSetTreeManager.nodeByPath(strTestLabPath)).queryInterface(ITestSetFolder.class);
IList its1 = objTestSetFolder.findTestSets(strTestSetName, true, null);
//IList ls= objTestSetFolder.findTestSets(strTestSetName, true, null);
System.out.println("No. of Test Set:" + its1.count());
ITestSet tst= (ITestSet) objTestSetFolder.findTestSets(strTestSetName, true, null).queryInterface(ITSTest.class);
System.out.println(tst.name());
//System.out.println( its1.queryInterface(ITestSet.class).name());
/* foreach (ITestSet testSet : its1.queryInterface(ITestSet.class)){
ITestSetFolder tsFolder = (ITestSetFolder)testSet.TestSetFolder;
ITSTestFactory tsTestFactory = (ITSTestFactory)testSet.TSTestFactory;
List tsTestList = tsTestFactory.NewList("");
}*/
/* Com4jObject comObj = (Com4jObject) its1.item(0);
ITestSet tst = comObj.queryInterface(ITestSet.class);
System.out.println("Test Set Name : " + tst.name());
System.out.println("Test Set ID : " + tst.id());
System.out.println("Test Set ID : " + tst.status());
System.out.println("Test Set ID : " );*/
System.out.println(its1.count());
System.out.println("TestSet Present");
Iterator itr = its1.iterator();
System.out.println(itr.hasNext());
while (itr.hasNext())
{
Com4jObject comObj = (Com4jObject) itr.next();
ITestSet sTestSet = comObj.queryInterface(ITestSet.class);
System.out.println(sTestSet.name());
Com4jObject comObj2 = sTestSet.tsTestFactory();
ITestSetFactory test = comObj2.queryInterface(ITestSetFactory.class);
}
// ITSTest tsTest=null;
// tsTest.
//its1.
/* comObj = (Com4jObject) its1.item(1);
ITSTest tst2=comObj.queryInterface(ITSTest.class);*/
// System.out.println( tst2.name());
/* foreach (ITSTest tsTest : tst2)
{
IRun lastRun = (IRun)tsTest.lastRun();
if (lastRun == null)
{
IRunFactory runFactory = (IRunFactory)tsTest.runFactory;
String date = "20160203";
IRun run = (IRun)runFactory.addItem( date);
run.status("Pass");
run.autoPost();
}
}*/
}
catch(Exception e){
e.printStackTrace();
}
}
}
I know the post is quite old. I had to struggle a lot with OTA in Java and couldn't find a complete post solving the issue.
Now I have working code after a lot of research,
so I thought of sharing my code in case someone is looking for help.
Here is the complete solution.
ITestFactory sTestFactory = (connection.testFactory())
.queryInterface(ITestFactory.class);
ITest iTest1 = (sTestFactory.item(12081)).queryInterface(ITest.class);
System.out.println(iTest1.execDate());
System.out.println(iTest1.name());
ITestSetFactory sTestSetFactory = (connection.testSetFactory())
.queryInterface(ITestSetFactory.class);
ITestSet sTestSet = (sTestSetFactory.item(1402))
.queryInterface(ITestSet.class);
System.out.println(sTestSet.name() + "\n Test Set ID" + sTestSet.id());
IBaseFactory testFactory1 = sTestSet.tsTestFactory().queryInterface(
IBaseFactory.class);
testFactory1.addItem(iTest1);
System.out.println("Test case has been Added");
System.out.println(testFactory1.newList("").count());
IList tsTestlist = testFactory1.newList("");
ITSTest tsTest;
for (int tsTestIndex = 1; tsTestIndex <= tsTestlist.count(); tsTestIndex++) {
Com4jObject comObj = (Com4jObject) tsTestlist.item(tsTestIndex);
tsTest = comObj.queryInterface(ITSTest.class);
if (tsTest.name().equalsIgnoreCase("[3]TC_OTA_API_Test")) {
System.out.println("Hostname" + tsTest.hostName() + "\n"
+ tsTest.name() + "\n" + tsTest.status());
IRun lastRun = (IRun) tsTest.lastRun();
// IRun lastRun = comObjRun.queryInterface(IRun.class);
// don't update test if it may have been modified by someone
// else
if (lastRun == null) {
System.out.println("I am here last Run = Null");
IRunFactory runFactory = tsTest.runFactory().queryInterface(
IRunFactory.class);
System.out.println(runFactory.newList("").count());
String runName = "TestRun_Automated";
Com4jObject comObjRunForThisTS = runFactory
.addItem(runName);
IRun runObjectForThisTS = comObjRunForThisTS
.queryInterface(IRun.class);
runObjectForThisTS.status("Passed");
runObjectForThisTS.post();
runObjectForThisTS.refresh();
}
}
}
Why not build a client to access the REST API instead of passing through the OTA interface?
Once you build a basic client, you can post runs and update their status quite easily.
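As a rough illustration of what such a client involves (the resource paths follow the documented ALM REST pattern but are assumptions here, as is the run XML payload and the user:password placeholder - check them against your ALM version):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AlmRestSketch {
    public static void main(String[] args) throws Exception {
        String base = "http://abc/qcbin";
        String basicAuth = Base64.getEncoder().encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        // 1) authenticate once; ALM returns a session cookie that is reused on later calls
        HttpURLConnection login = (HttpURLConnection) new URL(base + "/authentication-point/authenticate").openConnection();
        login.setRequestProperty("Authorization", "Basic " + basicAuth);
        String cookie = login.getHeaderField("Set-Cookie");
        // 2) post a run entity (domain/project and the XML field names are assumptions)
        HttpURLConnection post = (HttpURLConnection) new URL(base + "/rest/domains/abc/projects/xyz/runs").openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Cookie", cookie);
        post.setRequestProperty("Content-Type", "application/xml");
        String body = "<Entity Type=\"run\"><Fields>"
                + "<Field Name=\"status\"><Value>Passed</Value></Field>"
                + "</Fields></Entity>";
        try (OutputStream os = post.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + post.getResponseCode());
    }
}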
If you use C#/VB.NET this is easily done. But since you are working in Java, I would suggest providing an interface on top of the DLLs to deal with these operations. That will be much easier than using com4j.
Similar query - the following may help you. I would suggest dropping the idea of using com4j and using the solution provided in the thread below, which is proven, fail-safe and auto-recoverable:
QC API JAR to connect using java
It has always been difficult to use com4j, especially for HP QC/ALM, as the DLLs for QC are faulty and there are memory leak/allocation problems which crash DLL executions frequently on certain platforms.
In my code example I create three documents in a Lucene index.
Two of them do not store the field LASTNAME but do have a stored term vector for it; one has neither stored.
With Luke I am able to iterate through all terms in this field (LASTNAME).
In my code example I iterate through the TermFreqVectors, which works fine for the documents with stored term vectors.
How can I get all of these non-stored terms? How is Luke doing that?
My original problem is that I want to extend a big index (60GB, nearly 100 fields) with another field without re-creating the index from scratch, because with our DB setup that would take a couple of days even with 40 parallel computing servers.
It is very fast to read all the data from the index and just add this new field to all stored documents.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.RandomIndexWriter;
import org.apache.lucene.index.TermFreqVector;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.NIOFSDirectory;
import org.apache.lucene.util.LuceneTestCase;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
public class TestDocTerms extends LuceneTestCase {
public void testDocTerms() throws IOException, ParseException {
Analyzer analyzer = new MockAnalyzer(random);
String fieldF = "FIRSTNAME";
String fieldL = "LASTNAME";
// To store an index on disk, use this instead:
Directory directory = NIOFSDirectory.open(new File("/tmp/_index_tester/"));
RandomIndexWriter iwriter = new RandomIndexWriter(random, directory, analyzer);
iwriter.w.setInfoStream(VERBOSE ? System.out : null);
Document doc = new Document();
doc.add(newField(fieldF, "Alex", Field.Store.YES, Field.Index.ANALYZED));
doc.add(newField(fieldL, "Miller", Field.Store.NO,Field.Index.ANALYZED,Field.TermVector.YES));
iwriter.addDocument(doc);
doc = new Document();
doc.add(newField(fieldF, "Chris", Field.Store.YES, Field.Index.ANALYZED));
doc.add(newField(fieldL, "Smith", Field.Store.NO, Field.Index.ANALYZED));
iwriter.addDocument(doc);
doc = new Document();
doc.add(newField(fieldF, "Alex", Field.Store.YES, Field.Index.ANALYZED));
doc.add(newField(fieldL, "Beatle", Field.Store.NO, Field.Index.ANALYZED,Field.TermVector.YES));
iwriter.addDocument(doc);
iwriter.close();
// Now search the index:
IndexSearcher isearcher = new IndexSearcher(directory, true); // read-only=true
QueryParser parser = new QueryParser(TEST_VERSION_CURRENT, fieldF, analyzer);
Query query = parser.parse(fieldF + ":" + "Alex");
TopDocs hits = isearcher.search(query, null, 2);
assertEquals(2, hits.totalHits);
// Iterate through the results:
for (int i = 0; i < hits.scoreDocs.length; i++) {
Document hitDoc = isearcher.doc(hits.scoreDocs[i].doc);
assertEquals("Alex", hitDoc.get(fieldF));
System.out.println("query for:" +query.toString()+ " with this results firstN:" + hitDoc.get(fieldF) + " and lastN:" + hitDoc.get(fieldL));
}
parser = new QueryParser(TEST_VERSION_CURRENT, fieldL, analyzer);
query = parser.parse(fieldL + ":" + "Miller");
hits = isearcher.search(query, null, 2);
assertEquals(1, hits.totalHits);
// Iterate through the results:
for (int i = 0; i < hits.scoreDocs.length; i++) {
Document hitDoc = isearcher.doc(hits.scoreDocs[i].doc);
assertEquals("Alex", hitDoc.get(fieldF));
System.out.println("query for:" + query.toString() + " with this results firstN:" +hitDoc.get(fieldF)+ " and lastN:" +hitDoc.get(fieldL));
}
isearcher.close();
// examine terms
IndexReader ireader = IndexReader.open(directory, true); // read-only=true
int numDocs = ireader.numDocs();
for (int i = 0; i < numDocs; i++) {
doc = ireader.document(i);
System.out.println("docNum:" + i + " with:" + doc.toString());
TermFreqVector t = ireader.getTermFreqVector(i, fieldL);
if (t != null){
System.out.println("Field:" + fieldL + " contains terms:" + t.toString());
}
TermFreqVector[] termFreqVectors = ireader.getTermFreqVectors(i);
if (termFreqVectors != null){
for (TermFreqVector tfv : termFreqVectors){
String[] terms = tfv.getTerms();
String field = tfv.getField();
System.out.println("Field:" +field+ " contains terms:" + Arrays.toString(terms));
}
}
}
ireader.close();
}
}
Reconstructing unstored documents is necessarily a best effort. You can't generally reverse changes made to the value by the analyzer.
When term vectors are not available, Luke enumerates the terms associated with the field. This may not respect the ordering of the terms, or any formatting. That may be neither here nor there, though. I don't know what your newField method does exactly, but I suspect its default is not Field.TermVector.NO.
If you want to know more of the implementation details, I would grab the Luke source code, and read org.getopt.luke.DocReconstructor
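For a rough idea of what that term enumeration looks like with the 3.x API used above, here is a sketch (the helper class FieldTermDumper is made up for illustration; it walks the inverted index, which recovers the analysed terms but not the original field text):
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;

public class FieldTermDumper {
    // Enumerate all terms of a field straight from the inverted index (Lucene 3.x API),
    // then list which documents contain each term via TermDocs.
    public static void dumpFieldTerms(IndexReader reader, String field) throws IOException {
        TermEnum terms = reader.terms(new Term(field, ""));
        try {
            do {
                Term term = terms.term();
                if (term == null || !term.field().equals(field)) {
                    break; // walked past the last term of this field
                }
                TermDocs docs = reader.termDocs(term);
                while (docs.next()) {
                    System.out.println("doc " + docs.doc() + " has " + field + ":" + term.text());
                }
                docs.close();
            } while (terms.next());
        } finally {
            terms.close();
        }
    }
}
Calling FieldTermDumper.dumpFieldTerms(ireader, fieldL) just before ireader.close() in the test above should also list the LASTNAME terms of the document that has neither a stored value nor a term vector.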
- What I want to do
I would like to get data from a Google Spreadsheet using the Google Spreadsheet API Java library without authentication.
The Google Spreadsheet is published publicly.
I would like to use the following class:
com.google.gdata.data.spreadsheet.CustomElementCollection
- Issue
CustomElementCollection returns the correct data with authentication,
but CustomElementCollection returns null without authentication.
Since listEntry.getPlainTextContent() shows the data, I think I should be able to get at it somehow.
- Source code attached
With authentication: Auth.java
import java.net.URL;
import java.util.List;
import com.google.gdata.client.spreadsheet.ListQuery;
import com.google.gdata.client.spreadsheet.SpreadsheetService;
import com.google.gdata.data.spreadsheet.CustomElementCollection;
import com.google.gdata.data.spreadsheet.ListEntry;
import com.google.gdata.data.spreadsheet.ListFeed;
import com.google.gdata.data.spreadsheet.SpreadsheetEntry;
import com.google.gdata.data.spreadsheet.WorksheetEntry;
public class Auth {
public static void main(String[] args) throws Exception{
String applicationName = "AppName";
String user = args[0];
String pass = args[1];
String key = args[2];
String query = args[3];
SpreadsheetService service = new SpreadsheetService(applicationName);
service.setUserCredentials(user, pass); //set client auth
URL entryUrl = new URL("http://spreadsheets.google.com/feeds/spreadsheets/" + key);
SpreadsheetEntry spreadsheetEntry = service.getEntry(entryUrl, SpreadsheetEntry.class);
WorksheetEntry worksheetEntry = spreadsheetEntry.getDefaultWorksheet();
ListQuery listQuery = new ListQuery(worksheetEntry.getListFeedUrl());
listQuery.setSpreadsheetQuery( query );
ListFeed listFeed = service.query(listQuery, ListFeed.class);
List<ListEntry> list = listFeed.getEntries();
for( ListEntry listEntry : list )
{
System.out.println( "content=[" + listEntry.getPlainTextContent() + "]");
CustomElementCollection elements = listEntry.getCustomElements();
System.out.println(
" name=" + elements.getValue("name") +
" age=" + elements.getValue("age") );
}
}
}
Without authentication: NoAuth.java
import java.net.URL;
import java.util.List;
import com.google.gdata.client.spreadsheet.FeedURLFactory;
import com.google.gdata.client.spreadsheet.ListQuery;
import com.google.gdata.client.spreadsheet.SpreadsheetService;
import com.google.gdata.data.spreadsheet.CustomElementCollection;
import com.google.gdata.data.spreadsheet.ListEntry;
import com.google.gdata.data.spreadsheet.ListFeed;
import com.google.gdata.data.spreadsheet.WorksheetEntry;
import com.google.gdata.data.spreadsheet.WorksheetFeed;
public class NoAuth {
public static void main(String[] args) throws Exception{
String applicationName = "AppName";
String key = args[0];
String query = args[1];
SpreadsheetService service = new SpreadsheetService(applicationName);
URL url = FeedURLFactory.getDefault().getWorksheetFeedUrl(key, "public", "basic");
WorksheetFeed feed = service.getFeed(url, WorksheetFeed.class);
List<WorksheetEntry> worksheetList = feed.getEntries();
WorksheetEntry worksheetEntry = worksheetList.get(0);
ListQuery listQuery = new ListQuery(worksheetEntry.getListFeedUrl());
listQuery.setSpreadsheetQuery( query );
ListFeed listFeed = service.query( listQuery, ListFeed.class );
List<ListEntry> list = listFeed.getEntries();
for( ListEntry listEntry : list )
{
System.out.println( "content=[" + listEntry.getPlainTextContent() + "]");
CustomElementCollection elements = listEntry.getCustomElements();
System.out.println(
" name=" + elements.getValue("name") +
" age=" + elements.getValue("age") );
}
}
}
Google Spreadsheet:
https://docs.google.com/spreadsheet/pub?key=0Ajawooo6A9OldHV0VHYzVVhTZlB6SHRjbGc5MG1CakE&output=html
- Result
Without authentication
content=[age: 23]
name=null age=null
With authentication
content=[age: 23]
name=Taro age=23
Please let me know any useful information to work around this issue.
I don't know why it works like that, but when you make the request without credentials, you are not able to retrieve the cells via:
CustomElementCollection elements = listEntry.getCustomElements();
System.out.println(" name=" + elements.getValue("name") + " age=" + elements.getValue("age") );
I've tested it and I have found only this way to retrieve data:
List<ListEntry> list = listFeed.getEntries();
for (ListEntry row : list) {
System.out.println(row.getTitle().getPlainText() + "\t"
+ row.getPlainTextContent());
}
It prints:
Taro age: 23
Hanako age: 16
As you can see, you have to parse the text and retrieve the age from the raw String.
I believe the problem is that you are using the "basic" projection for your spreadsheet. If you use the "values" projection, everything should work as expected.
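In the NoAuth example above that means building the worksheet feed URL with the "values" projection instead of "basic" - the same FeedURLFactory call with a different last argument (sketch):
// the "values" projection should include the per-column (gsx:) elements that getCustomElements() reads
URL url = FeedURLFactory.getDefault().getWorksheetFeedUrl(key, "public", "values");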
I was wondering about this as well. I looked at the feed coming in (just paste the URL to the sheet into Chrome), and it seems like there is no XML markup for the individual cells - the values all come in under the <content> tag. So it makes sense that the parser is lumping it all into the text content of the BaseEntry instead of populating the ListEntry's custom elements.