Apache Ignite SqlFieldsQuery on top of a cache storing BinaryObject - java

I cannot seem to get this to work, and I have scoured the net for documentation and examples to no avail.
Goal
To run a simple aggregation query on an Ignite cache backed by BinaryObject values, with UUID as the key.
Put Operation Code
IgniteBinary binary = ignite.binary();
IgniteCache<UUID, BinaryObject> rowCache = ignite.getOrCreateCache(CACHE_NAME).withKeepBinary();

// put
final int NUM_ROW = 100000;
final int NUM_COL = 100;
for (int i = 0; i < NUM_ROW; i++) {
    BinaryObjectBuilder builder = binary.builder(ROW);
    for (int j = 0; j < NUM_COL; j++) {
        builder.setField("col" + j, Math.random(), Double.class);
    }
    BinaryObject obj = builder.build();
    rowCache.put(UUID.randomUUID(), obj);
}
Read Operation Code
IgniteCache<UUID, BinaryObject> cache = ignite.cache(CACHE_NAME).withKeepBinary();
final SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery("SELECT COUNT(col1)" + cache.getName());
FieldsQueryCursor<List<?>> result = cache.query(sqlFieldsQuery);
Error
org.h2.jdbc.JdbcSQLException: Column "COL1" not found; SQL statement
EDIT
I've since added a QueryEntity to the cache configuration, which makes the error go away:
final QueryEntity queryEntity = new QueryEntity();
queryEntity.setTableName(CACHE_NAME);
queryEntity.setKeyFieldName("key");
queryEntity.setKeyType(String.class.getName());
queryEntity.setValueType(Row.class.getName());
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("key", String.class.getName());
for (int i = 0; i < 55; i++) {
    fields.put("col" + i, Double.class.getName());
}
queryEntity.setFields(fields);
return queryEntity;
However, it is unclear to me what QueryEntity's setValueType and setValueFieldName actually do. My value type is an arbitrary BinaryObject with arbitrary keys and values.
I would like to declare these via fields.put(<colName>, <colType>); ...
I am able to get everything to work using POJOs, but not BinaryObject as the value type
Is there anything I am doing wrong?

new SqlFieldsQuery("SELECT COUNT(col1)" + cache.getName())
The cache name is the schema name, and the class name (Row) is the table name. It looks like you have an incorrect table name.
Also make sure that ROW in binary.builder(ROW) equals QueryEntity.valueType.
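To illustrate the schema-vs-table point, here is a minimal, Ignite-free sketch of how the SQL string could be assembled (the names "rowCache" and "Row" are assumptions standing in for the question's CACHE_NAME and value type; note also the FROM keyword, which the original query string was missing):

```java
public class QueryStringDemo {
    // Builds a COUNT query where the schema is the cache name and the
    // table is the value type's simple class name. The schema is quoted
    // because cache names are case-sensitive.
    static String countQuery(String cacheName, String valueType, String column) {
        return "SELECT COUNT(" + column + ") FROM \"" + cacheName + "\"." + valueType;
    }

    public static void main(String[] args) {
        System.out.println(countQuery("rowCache", "Row", "col1"));
        // prints: SELECT COUNT(col1) FROM "rowCache".Row
    }
}
```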

Related

Using dnsjava to query using java?

I am trying to query the Google public DNS server (8.8.8.8) to get the IP address of a known URL. However, I am not able to get it using the following code. I am using the dnsjava library. This is my current code:
try {
    Lookup lookup = new Lookup("stackoverflow.com", Type.NS);
    SimpleResolver resolver = new SimpleResolver("8.8.8.8");
    lookup.setDefaultResolver(resolver);
    lookup.setResolver(resolver);
    Record[] records = lookup.run();
    for (int i = 0; i < records.length; i++) {
        Record r = records[i];
        System.out.println(r.getName() + "," + r.getAdditionalName());
    }
} catch (Exception ex) {
    ex.printStackTrace();
    logger.error(ex.getMessage(), ex);
}
Results:
stackoverflow.com.,ns-1033.awsdns-01.org.
stackoverflow.com.,ns-cloud-e1.googledomains.com.
stackoverflow.com.,ns-cloud-e2.googledomains.com.
stackoverflow.com.,ns-358.awsdns-44.com.
You don’t need a DNS library just to look up an IP address. You can simply use JNDI:
Properties env = new Properties();
env.setProperty(Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.dns.DnsContextFactory");
env.setProperty(Context.PROVIDER_URL, "dns://8.8.8.8");

DirContext context = new InitialDirContext(env);
Attributes list = context.getAttributes("stackoverflow.com",
        new String[] { "A" });

NamingEnumeration<? extends Attribute> records = list.getAll();
while (records.hasMore()) {
    Attribute record = records.next();
    String name = record.get().toString();
    System.out.println(name);
}
If you insist on using the dnsjava library, you need to use Type.A (as your code was originally doing, before your edit).
Looking at the documentation for the Record class, notice the long list under Direct Known Subclasses. You need to cast each Record to the appropriate subclass, which in this case is ARecord.
Once you’ve done that cast, you have an additional method available, getAddress:
for (int i = 0; i < records.length; i++) {
    ARecord r = (ARecord) records[i];
    System.out.println(r.getName() + "," + r.getAdditionalName()
            + " => " + r.getAddress());
}

Insert Multiple Rows realm android

Hello, I've been trying to insert multiple rows into my Realm database using values from ArrayLists. Whenever I try to insert through a for loop, it only adds the last one. If you need anything else (code, XML), please let me know.
here is my code:
realm.executeTransactionAsync(new Realm.Transaction() { // ASYNCHRONOUS TRANSACTION TO EXECUTE THE QUERY ON A DIFFERENT THREAD
    @Override
    public void execute(Realm bgRealm) {
        // increment index
        Invoices inv = bgRealm.createObject(Invoices.class, RealmController.autoincrement(bgRealm, Invoices.class)); // METHOD THAT GIVES US THE AUTOINCREMENT FUNCTION
        //inv.id = nextId; // THE 2ND PARAMETER IN createObject DEFINES THE PK
        //...
        //realm.insertOrUpdate(user); // using insert API
        inv.number = n;
        inv.serial = s;
        inv.client = c;
        inv.subtotal = sub;
        inv.tax = tax;
        inv.total = tot;
        Invoice_lines invl = bgRealm.createObject(Invoice_lines.class, RealmController.autoincrement(bgRealm, Invoice_lines.class)); // ID FROM ANOTHER TABLE (ROW)
        for (int i = 0; i < price.size(); i++) {
            invl.description = description.get(i);
            invl.price = price.get(i);
            invl.quantity = quantity.get(i);
            invl.invoice = inv;
            bgRealm.insert(invl);
        }
    }
});
I'm not sure, but you create only one Realm object, in this line:
Invoice_lines invl = bgRealm.createObject(Invoice_lines.class, RealmController.autoincrement(bgRealm, Invoice_lines.class)); // ID FROM ANOTHER TABLE (ROW)
In the loop you only change invl's fields, but never insert new objects.
Try creating the objects inside the loop.
Because what you wanted to do is:
Invoice_lines invl = new Invoice_lines(); // unmanaged object
for (int i = 0; i < price.size(); i++) {
    invl.setId(RealmController.autoincrement(bgRealm, Invoice_lines.class)); // ID FROM ANOTHER TABLE (ROW)
    invl.description = description.get(i);
    invl.price = price.get(i);
    invl.quantity = quantity.get(i);
    invl.invoice = inv;
    bgRealm.insert(invl); // insert copies the unmanaged object into the Realm
}
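The underlying behavior can be sketched without Realm at all. Plain Java, with a hypothetical Map standing in for the table: when every iteration reuses the same primary key, each write overwrites the previous row, so only the last one survives.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LastWriteWinsDemo {
    public static void main(String[] args) {
        List<String> prices = Arrays.asList("10", "20", "30");
        Map<Integer, String> table = new HashMap<>();

        // Anti-pattern: one id, fetched once, reused for every iteration.
        int id = 1;
        for (String p : prices) {
            table.put(id, p); // same key every time -> row is overwritten
        }
        System.out.println(table.size()); // 1: only the last value survives

        // Fix: generate a new id (i.e. a new object) per iteration.
        table.clear();
        int nextId = 1;
        for (String p : prices) {
            table.put(nextId++, p);
        }
        System.out.println(table.size()); // 3
    }
}
```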

AWS QueryRequest not returning any values from secondary index query

So I am trying to query one of my DynamoDB tables, and for some reason it is not returning any results. I have another user table with an almost identical index, and querying it returns a value. Here is my code:
String fbId = requestInfo.get("requestFbId");
System.out.println("The id is " + fbId);

Map<String, AttributeValue> exAttributeVal = new HashMap<String, AttributeValue>();
exAttributeVal.put(":val", new AttributeValue().withS(fbId));

QueryRequest friendsQuery = new QueryRequest()
        .withTableName(Keys.friendsTable)
        .withIndexName("User-Friends-index")
        .withKeyConditionExpression("userId = :val")
        .withExpressionAttributeValues(exAttributeVal);

QueryResult friendsQueryResult = dynamoDbClient.query(friendsQuery);
System.out.println("The size is " + friendsQueryResult.getItems().size());
for (int i = 0; i < friendsQueryResult.getItems().size(); i++) {
    System.out.println("The result is " + friendsQueryResult.getItems().get(i));
}
Does anybody know what I could be doing wrong here? This also used to work directly in my Android app, but it is not working now that I have moved it into Lambda.

How to create nested object and array in parquet file?

How do I create a parquet file with nested fields? I have the following:
public static void main(String[] args) throws IOException {
    int fileNum = 10;       // num of files constructed
    int fileRecordNum = 50; // record num of each file
    int rowKey = 0;
    for (int i = 0; i < fileNum; ++i) {
        Map<String, String> metas = new HashMap<>();
        metas.put(HConstants.START_KEY, genRowKey("%10d", rowKey + 1));
        metas.put(HConstants.END_KEY, genRowKey("%10d", rowKey + fileRecordNum));
        ParquetWriter<Group> writer = initWriter("pfile/scanner_test_file" + i, metas);
        for (int j = 0; j < fileRecordNum; ++j) {
            rowKey++;
            Group group = sfg.newGroup().append("rowkey", genRowKey("%10d", rowKey))
                    .append("cf:name", "wangxiaoyi" + rowKey)
                    .append("cf:age", String.format("%10d", rowKey))
                    .append("cf:job", "student")
                    .append("timestamp", System.currentTimeMillis());
            writer.write(group);
        }
        writer.close();
    }
}
I want to create two fields:
Hobbies which contains a list of hobbies ("Swimming", "Kickboxing")
A teacher object that contains subfields like:
{
    'teachername': 'Rachel',
    'teacherage': 50
}
Can someone provide an example how to do this in Java?
Parquet is a columnar (mini-storage) key-value format. It does not store arbitrary nested objects directly; instead, logical types of data are converted to a binary format (a byte array plus metadata describing which conversion should be applied to read the data back).
I'm not sure exactly how you should implement your converter, but basically you should work with the Binary class as the data container and create a converter of your own; a sample converter exists for the String data type.
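For what it's worth, nested structures like the ones in the question can be expressed in parquet-mr's message-type schema syntax. A sketch (field names taken from the question; the types and repetition levels are assumptions):

```
message student {
  required binary rowkey (UTF8);
  repeated binary hobbies (UTF8);      // e.g. "Swimming", "Kickboxing"
  optional group teacher {             // nested object
    optional binary teachername (UTF8);
    optional int32 teacherage;
  }
}
```

A schema string in this form can be parsed with MessageTypeParser.parseMessageType(...) and then used to build Group instances with nested groups and repeated fields, in the same style as the flat groups in the question's code.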

JDBC getString(i) significant slower in server

I have an Oracle 12c database query which pulls a table of 13 columns and more than 114,470 rows on a daily basis.
I was not concerned about this until I moved the same code from my DEV server to my PROD server.
In my DEV environment the query takes 3 min 26 sec to complete its execution.
However, on PROD the exact same code takes 15 min 34 sec to finish.
These times were retrieved adding logs on the following code execution:
private List<Map<String, String>> getFieldInformation(ResultSet sqlResult) throws SQLException {
    // Map between each column name and the designated data as String
    List<Map<String, String>> rows = new ArrayList<Map<String, String>>();

    // Count number of returned columns
    ResultSetMetaData rsmd = sqlResult.getMetaData();
    int numberOfColumns = rsmd.getColumnCount();

    boolean continueLoop = sqlResult.next();
    // If there are no results we return an empty list, not null
    if (!continueLoop) {
        return rows;
    }

    while (continueLoop) {
        Map<String, String> columns = new LinkedHashMap<String, String>();
        // Reset variables for data
        String columnLabel = null;
        String dataInformation = null;
        // Append to the map: column name and related data
        for (int i = 1; i <= numberOfColumns; i++) {
            columnLabel = rsmd.getColumnLabel(i);
            dataInformation = sqlResult.getString(i);
            if (columnLabel != null && columnLabel.length() > 0 && (dataInformation == null || dataInformation.length() <= 0)) {
                dataInformation = "";
            }
            columns.put(columnLabel, dataInformation);
        }
        rows.add(columns);
        continueLoop = sqlResult.next();
    }
    return rows;
}
I understand that getString is not the best way to retrieve non-text data, but due to the nature of the project I do not always know the data type.
Furthermore, I checked in PROD under Task Manager that "Memory (Private Working Set)" is being reserved very slowly.
So I would appreciate your help with the following questions:
Why is there a discrepancy in the execution timings between the two environments? Can you please highlight some ways of checking this issue?
Is there a way to see the memory my result set requires and reserve it upfront? Would this improve performance?
How can I improve the performance of the getString(i) method?
Thank you in advance for your assistance.
Best regards,
Ziza
