HBase setting timestamp - Java

I'm having problems setting a row timestamp using the Java API.
When I pass a timestamp value to the Put constructor (or to put.add()), nothing happens: after reading the rows back from the table I still get system-provided timestamps.
public static boolean addRecord(String tableName, String rowKey,
        String family, String qualifier, Object value) {
    try {
        HTable table = new HTable(conf, tableName);
        // Explicit timestamp passed to the Put constructor
        Put put = new Put(Bytes.toBytes(rowKey), 12345678L);
        put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier),
                Bytes.toBytes(value.toString()));
        table.put(put);
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}
HBase 0.92.1 running in standalone mode.
Thanks in advance for any help!

Most likely you already have rows in the table with a timestamp greater than 12345678L. To confirm that this is not the case, try it with a very large timestamp, say Long.MAX_VALUE.
If it is indeed the case, you can simply delete the existing versions (the cells with the larger timestamps); this entry will then show up.
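To see what is actually stored, you can read back every version of the cell. A minimal sketch against the 0.92-era client API, reusing table and rowKey from the question (keep in mind a column family only retains a limited number of versions, three by default):
// Minimal sketch (HBase 0.92-era API): list all stored versions of the row
// so you can see which timestamps are present and which cell shadows yours.
Get get = new Get(Bytes.toBytes(rowKey));
get.setMaxVersions(); // return all versions, not just the newest
Result result = table.get(get);
for (KeyValue kv : result.raw()) {
    System.out.println(Bytes.toString(kv.getQualifier())
            + " @ " + kv.getTimestamp()
            + " = " + Bytes.toString(kv.getValue()));
}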

Related

Oracle STRUCT use OBJECT TYPE with CLOB attribute

I am trying to use STRUCT to bulk-insert data into my table DATA_TABLE, but it generates an error (java.sql.SQLException: Fail to convert to internal representation) on the CLOB field, and I can't find a solution to my problem. My code:
My table
CREATE TABLE DATA_TABLE (DAT_ID NUMBER,
DAT_CODE VARCHAR2(10),
DAT_TEXT CLOB);
Create the object type
CREATE OR REPLACE TYPE TY_OBJ_DATA AS OBJECT (DAT_ID NUMBER,
DAT_CODE VARCHAR2(10),
DAT_TEXT CLOB);
Create the table type from the object type
CREATE OR REPLACE TYPE TY_TABLE_DATA AS TABLE OF SCHEMA.TY_OBJ_DATA;
My simplified Java method
public static void bulkData(List<DataTable> listDataInfo) throws Exception {
    DataSource ds = (DataSource) getEntityManager().getEntityManagerFactory()
            .getProperties().get("javax.persistence.jtaDataSource");
    OracleConnection connection = ds.getConnection().unwrap(OracleConnection.class);
    try {
        StructDescriptor typeTableObject =
                StructDescriptor.createDescriptor("SCHEMA.TY_OBJ_DATA", connection);
        STRUCT[] structData = new STRUCT[listDataInfo.size()];
        int counter = 0;
        for (DataTable d : listDataInfo) {
            Clob clob = connection.createClob();
            STRUCT m = new STRUCT(typeTableObject, connection,
                    new Object[]{d.getDatId(),
                                 d.getDatCode(),
                                 clob.setString(1, d.getDatText())});
            structData[counter++] = m;
        }
        ArrayDescriptor tyTable =
                ArrayDescriptor.createDescriptor("SCHEMA.TY_TABLE_DATA", connection);
        ARRAY array = new ARRAY(tyTable, connection, structData);
        String sqlQuery = "{ CALL PACKAGE_BULK.PL_BULK_DATA(?) }";
        CallableStatement cst = connection.prepareCall(sqlQuery);
        cst.setArray(1, array);
        cst.execute();
    } catch (Exception e) {
        throw new Exception(e);
    } finally {
        try {
            connection.close();
        } catch (SQLException e) {
            throw new Exception(e);
        }
    }
}
I omit the package code because it works correctly and is not my main problem. I use ojdbc6 version 11.2.0, Java 8 and Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit.
Is it possible to use STRUCT with fields of type CLOB? Am I doing something wrong? My DatText field is of type String when remapped, and this was the best String-to-CLOB conversion I managed, but I still have problems. Any idea how I can solve this? Thank you.
I know this is very old, but since I just came by:
I solved this issue by simply skipping the descriptors and using:
cst.setArray(i, ((OracleConnection) connection).createOracleArray("TY_TABLE_DATA",
        shiftArrayOneUp(new Object[]{d.getDatId(),
                                     d.getDatCode(),
                                     d.getDatText()})));
shiftArrayOneUp just adds one empty value at the beginning (Oracle arrays start at 1, not at 0):
private Object[] shiftArrayOneUp(Object[] values) {
    Object[] result = new Object[values.length + 1];
    System.arraycopy(values, 0, result, 1, values.length);
    return result;
}
As I had to adjust my code for this answer, I did not test what I wrote here; maybe some small adjustments are needed.
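For what it's worth, a likely cause of the original error: java.sql.Clob.setString returns an int (the number of characters written), so the Object[] in the question actually contains an Integer where Oracle expects a CLOB. A hedged sketch of populating the CLOB first and then passing the Clob object itself, reusing the question's classes and names:
// Hedged sketch: fill the CLOB, then pass the Clob object itself as the
// STRUCT attribute; clob.setString(...) returns an int, not the Clob.
Clob clob = connection.createClob();
clob.setString(1, d.getDatText());
STRUCT m = new STRUCT(typeTableObject, connection,
        new Object[]{d.getDatId(), d.getDatCode(), clob});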

HBase MapReduce job: all column values are null

I am trying to create a MapReduce job in Java over a table from an HBase database. Using the examples from here and other material from the internet, I managed to successfully write a simple row counter. However, trying to write one that actually does something with the data from a column was unsuccessful: the received bytes are always null.
A part of my Driver from the job is this:
/* Set main, map and reduce classes */
job.setJarByClass(Driver.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);

Scan scan = new Scan();
scan.setCaching(500);
scan.setCacheBlocks(false);

/* Get data only from the last 24h */
Timestamp timestamp = new Timestamp(System.currentTimeMillis());
try {
    long now = timestamp.getTime();
    scan.setTimeRange(now - 24 * 60 * 60 * 1000, now);
} catch (IOException e) {
    e.printStackTrace();
}

/* Initialize the initTableMapperJob */
TableMapReduceUtil.initTableMapperJob(
        "dnsr",
        scan,
        Map.class,
        Text.class,
        Text.class,
        job);

/* Set output parameters */
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setOutputFormatClass(TextOutputFormat.class);
As you can see, the table is called dnsr. My mapper looks like this:
@Override
public void map(ImmutableBytesWritable row, Result value, Context context)
        throws InterruptedException, IOException {
    byte[] columnValue = value.getValue("d".getBytes(), "fqdn".getBytes());
    if (columnValue == null)
        return;
    byte[] firstSeen = value.getValue("d".getBytes(), "fs".getBytes());
    // if (firstSeen == null)
    //     return;
    String fqdn = new String(columnValue).toLowerCase();
    String fs = (firstSeen == null) ? "empty" : new String(firstSeen);
    context.write(new Text(fqdn), new Text(fs));
}
Some notes:
the column family in the dnsr table is just d; there are multiple columns, some of them called fqdn and fs (firstSeen);
even though the fqdn values appear correctly, fs is always the "empty" string (I added this check after getting errors saying you can't construct a String from null);
if I change the fs column name to something else, for example ls (lastSeen), it works;
the reducer doesn't do anything, it just outputs everything it receives.
I created a simple table scanner in JavaScript that queries the exact same table and columns, and I can clearly see the values are there. Using the command line and doing queries manually, I can clearly see the fs values are not null; they are bytes that can be later converted into a string (representing a date).
Why am I always getting null?
Thanks!
Update:
If I get all the columns in a specific column family, I don't receive fs. However, a simple scanner implemented in JavaScript returns fs as a column of the dnsr table.
@Override
public void map(ImmutableBytesWritable row, Result value, Context context)
        throws InterruptedException, IOException {
    byte[] columnValue = value.getValue(columnFamily, fqdnColumnName);
    if (columnValue == null)
        return;
    String fqdn = new String(columnValue).toLowerCase();
    /* Getting all the columns */
    String[] cns = getColumnsInColumnFamily(value, "d");
    StringBuilder sb = new StringBuilder();
    for (String s : cns) {
        sb.append(s).append(";");
    }
    context.write(new Text(fqdn), new Text(sb.toString()));
}
I used an answer from here to get all the column names.
In the end, I managed to find the 'problem'. HBase is a column-oriented datastore: data is stored and retrieved by column, so a query can read only the relevant data when just part of a row is needed. Every column family has one or more column qualifiers (columns), each column has multiple cells, and, the interesting part, every cell has its own timestamp.
Why was this the problem? When you do a ranged scan, only the cells whose timestamp is in that range are returned, so you may end up with a row with "missing" cells. In my case, I had a DNS record and other fields such as firstSeen and lastSeen. lastSeen is updated every time I see that domain, while firstSeen remains unchanged after the first occurrence. As soon as I changed the ranged MapReduce job to a simple one (using all-time data), everything was fine (but the job took longer to finish).
Cheers!
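For reference, a hedged sketch of how to make the per-cell timestamps visible in the mapper (0.9x-era API, where Result.raw() returns the row's KeyValue array):
// Hedged sketch: log each cell's own timestamp; cells whose timestamp
// falls outside the scan's time range simply never reach the mapper.
for (KeyValue kv : value.raw()) {
    System.out.println(Bytes.toString(kv.getFamily()) + ":"
            + Bytes.toString(kv.getQualifier())
            + " @ " + kv.getTimestamp());
}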

Cassandra 3.3.0 - Triggers - data access from Partition

I am using Cassandra 3.3.0 to store some data in a ColumnFamily created like this:
CREATE COLUMNFAMILY inventory(
    objectid varint,
    timestamp varint,
    numerator varint,
    data text,
    PRIMARY KEY ((objectid), timestamp, numerator))
WITH CLUSTERING ORDER BY (timestamp DESC, numerator ASC);
After each new entry, I want to inform a third-party application about the new entry and its properties, but I cannot manage to access the changed data from the augment function.
public Collection<Mutation> augment(Partition update) {
    String key = null;
    String timestamp = null;
    String numerator = null;
    String data = null;
    try {
        UnfilteredRowIterator it = update.unfilteredIterator();
        CFMetaData cfMetaData = Schema.instance.getCFMetaData("keyspace", "inventory");
        while (it.hasNext()) {
            Unfiltered un = it.next();
            Clustering clt = (Clustering) un.clustering();
            Iterator<Cell> cls = update.getRow(clt).cells().iterator();
            while (cls.hasNext()) {
                Cell cell = cls.next();
                data = new String(cell.value().array());
            }
            un.toString(cfMetaData); // returns timestamp, numerator and data as one string...
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
}
update.toString() returns all the information in one String, including what I want, but I do not actually want to parse it. Does anybody know how to access the values of "objectid", "timestamp" and "numerator" directly?
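A hedged sketch of direct access against the Cassandra 3.x internal trigger API; these classes are internal and shift between minor versions, so treat every name here as an assumption to verify against your exact release:
// Hedged sketch (Cassandra 3.x internals; names vary across versions):
// decode the partition key and clustering values using the table metadata.
CFMetaData meta = update.metadata();
String objectid = meta.getKeyValidator().getString(update.partitionKey().getKey());
List<ColumnDefinition> clusteringCols = meta.clusteringColumns();
for (int i = 0; i < clt.size(); i++) {
    String v = clusteringCols.get(i).type.getString(clt.get(i));
    System.out.println(clusteringCols.get(i).name + " = " + v);
}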

Updating a tuple in an SQLite table in Android Studio

Before voting down, please read my question; I have searched a lot but couldn't find the answer yet, so I would appreciate a hand overcoming the problem.
I need to update a tuple in a table named "Demographics", but my code does not seem to work correctly: after running the app I get the result "0" for the update, which means nothing got updated.
12-21 12:34:54.190 2351-2367/? D/Update Result:: =0
I guess my problem is due to not pointing at the right row of the table by primary key. When a user registers in my app, the following should happen:
1- A tuple is created in the "Demographics" table: username, password and email are inserted, and an auto-increment primary key is also constructed and inserted.
2- The user logs in, then he can complete the rest of the information in the "Demographics" table. This MODIFICATION is the "update" process I'm asking about.
Would you please tell me if the following code is wrong or has any implicit error?
DemographicsCRUD.java
public long UpdateDemographics(Demographics_to demoId) {
    //SQLiteDatabase db = dbHelper.getWritableDatabase();
    ContentValues values = new ContentValues();
    values.put(DataBaseHelper.lastName, demoId.getD_lastName());
    values.put(DataBaseHelper.firstName, demoId.getD_firstName());
    values.put(DataBaseHelper.dateOfBirth, demoId.getD_dateOfBirth());
    long result = database.update(dbHelper.Demographics_Table, values,
            WHERE_ID_EQUALS,
            new String[]{String.valueOf(demoId.getD_patientID())});
    Log.d("Update Result:", "=" + result);
    // db.close();
    return result;
}
Here is where I call the above code:
private void updateDemographicsTable() {
    ep_demoId = new Demographics_to();
    String ep_na = ep_name.getText().toString();
    String ep_fa = ep_family.getText().toString();
    .
    .
    .
    ep_demoId.setD_dateOfBirth(ep_bd);
    ep_demoId.setD_firstName(ep_na);
    ep_demoId.setD_lastName(ep_fa);
}

@Override
protected Long doInBackground(Void... arg0) {
    long result = ep_demoCRUD.UpdateDemographics(ep_demoId);
    return result;
}

@Override
protected void onPostExecute(Long result) {
    if (activityWeakRef.get() != null
            && !activityWeakRef.get().isFinishing()) {
        if (result != -1)
            Toast.makeText(activityWeakRef.get(), "Information Updated!",
                    Toast.LENGTH_LONG).show();
    }
}
Looks like whatever you are passing in as the patientID has no matching record in the database, or the data object Demographics_to has the patient ID set incorrectly.
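One quick way to check, as a hedged sketch (WHERE_ID_EQUALS and the helper names are taken from the question's code):
// Hedged sketch: count the rows matching the ID before calling update();
// if this logs 0, the WHERE clause is the problem, not the update itself.
Cursor c = database.query(dbHelper.Demographics_Table, null,
        WHERE_ID_EQUALS,
        new String[]{String.valueOf(demoId.getD_patientID())},
        null, null, null);
Log.d("UpdateDemographics", "matching rows: " + c.getCount());
c.close();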

Duplicate Keys in Oracle Berkeley DB Java Edition

I'm using Oracle Berkeley DB Java Edition with tables in key/value format. I'm trying to insert duplicate keys but keep getting SecondaryIntegrityException. According to Oracle, if setSortedDuplicates() is set to true, duplicates are allowed; that does not work in my case. Below is some code with key=bob, value=smith. The first time I run it, it runs as expected. If I run it a second time, changing only value=johnson, I get SecondaryIntegrityException. Is there something I'm doing wrong? Thanks.
String key = "bob";
String value = "smith";

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
envConfig.setTransactional(false);
Environment myDBenvironment = new Environment(new File(filePath), envConfig);

DatabaseConfig dbConfig = new DatabaseConfig();
dbConfig.setAllowCreate(true);
dbConfig.setTransactional(false);
Database myDatabase = myDBenvironment.openDatabase(null, dbname, dbConfig);

// create secondary database
SecondaryConfig mySecConfig = new SecondaryConfig();
mySecConfig.setAllowCreate(true);
mySecConfig.setSortedDuplicates(true);
mySecConfig.setTransactional(false);
mySecConfig.setKeyCreator(new SecondKeyCreator());
SecondaryDatabase mySecondaryDatabase = myDBenvironment
        .openSecondaryDatabase(null, secdbname, myDatabase, mySecConfig);

DatabaseEntry myKey = new DatabaseEntry(key.getBytes("UTF-8"));
Record mydata = new Record();
mydata.setobjectVal(value);
DatabaseEntry myrecord = new DatabaseEntry();
new RecordTupleBinding().objectToEntry(mydata, myrecord);
myDatabase.put(null, myKey, myrecord);

mySecondaryDatabase.close();
myDatabase.close();
myDBenvironment.close();
public class SecondKeyCreator implements SecondaryKeyCreator {
    @Override
    public boolean createSecondaryKey(SecondaryDatabase arg0,
            DatabaseEntry key, DatabaseEntry data, DatabaseEntry secondKey) {
        RecordTupleBinding binding = new RecordTupleBinding();
        Record record = (Record) binding.entryToObject(data);
        try {
            secondKey.setData(data.getData());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return true;
    }
}
Although I am not an expert on the topic, let me try to help you.
According to the Oracle documentation, "If a primary database is to be associated with one or more secondary databases, it may not be configured for duplicates". Do you have such an association on this database? If so, this may be the reason.
I hope it helps.
A secondary database is required to allow duplicates. The code above works if
secondKey.setData(data.getData());
is changed to
secondKey.setData(((String)record.getobjectVal()).getBytes());
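Put together, the corrected key creator then looks like this (a sketch reusing the asker's Record and RecordTupleBinding classes):
public class SecondKeyCreator implements SecondaryKeyCreator {
    @Override
    public boolean createSecondaryKey(SecondaryDatabase secondary,
            DatabaseEntry key, DatabaseEntry data, DatabaseEntry secondKey) {
        // Build the secondary key from the record's value rather than the
        // raw primary bytes, so duplicate values land under the same key.
        Record record = (Record) new RecordTupleBinding().entryToObject(data);
        secondKey.setData(((String) record.getobjectVal()).getBytes());
        return true;
    }
}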
