I have rows with the following keys in HBase table "mytable":
user_1
user_2
user_3
...
user_9999999
I want to use the HBase shell to delete rows from
user_500 to user_900.
I know there is no built-in way to delete a range of rows, but is there a way I could use the "BulkDeleteProcessor" to do this?
I see here:
https://github.com/apache/hbase/blob/master/hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java
I want to just paste in the imports and then paste this into the shell, but I have no idea how to go about it. Does anyone know how I can use this endpoint from the JRuby HBase shell?
Table ht = TEST_UTIL.getConnection().getTable(TableName.valueOf("my_table"));
long noOfDeletedRows = 0L;
Batch.Call<BulkDeleteService, BulkDeleteResponse> callable =
    new Batch.Call<BulkDeleteService, BulkDeleteResponse>() {
      ServerRpcController controller = new ServerRpcController();
      BlockingRpcCallback<BulkDeleteResponse> rpcCallback =
          new BlockingRpcCallback<BulkDeleteResponse>();
      public BulkDeleteResponse call(BulkDeleteService service) throws IOException {
        Builder builder = BulkDeleteRequest.newBuilder();
        builder.setScan(ProtobufUtil.toScan(scan));
        builder.setDeleteType(deleteType);
        builder.setRowBatchSize(rowBatchSize);
        if (timeStamp != null) {
          builder.setTimestamp(timeStamp);
        }
        service.delete(controller, builder.build(), rpcCallback);
        return rpcCallback.get();
      }
    };
Map<byte[], BulkDeleteResponse> result = ht.coprocessorService(BulkDeleteService.class,
    scan.getStartRow(), scan.getStopRow(), callable);
for (BulkDeleteResponse response : result.values()) {
  noOfDeletedRows += response.getRowsDeleted();
}
ht.close();
If there is no way to do this through JRuby, a Java or other approach to quickly delete multiple rows is fine.
Do you really want to do it in the shell? There are better ways. One is to use the native Java API:
Construct an ArrayList of Delete objects
and pass this list to the Table.delete method.
Method 1: If you already know the range of keys.
public void massDelete(byte[] tableName) throws IOException {
    HTable table = (HTable) hbasePool.getTable(tableName);
    String tablePrefix = "user_";
    int startRange = 500;
    int endRange = 900;
    List<Delete> listOfBatchDelete = new ArrayList<Delete>();
    for (int i = startRange; i <= endRange; i++) {
        String key = tablePrefix + i;
        Delete d = new Delete(Bytes.toBytes(key));
        listOfBatchDelete.add(d);
    }
    try {
        table.delete(listOfBatchDelete);
    } finally {
        if (hbasePool != null && table != null) {
            hbasePool.putTable(table);
        }
    }
}
Method 2: If you want to do a batch delete on the basis of a scan result.
public void bulkDelete(final HTable table) throws IOException {
    Scan s = new Scan();
    List<Delete> listOfBatchDelete = new ArrayList<Delete>();
    // add your filter(s) to the scan, e.g. s.setFilter(yourFilter);
    ResultScanner scanner = table.getScanner(s);
    for (Result rr : scanner) {
        Delete d = new Delete(rr.getRow());
        listOfBatchDelete.add(d);
    }
    scanner.close();
    try {
        table.delete(listOfBatchDelete);
    } catch (Exception e) {
        LOGGER.log(e);
    }
}
Now, coming to the coprocessor: only one piece of advice, 'DON'T USE a coprocessor' unless you are an expert in HBase.
Coprocessors have many built-in issues; if you need, I can provide a detailed description.
Secondly, when you delete anything from HBase it is never deleted immediately; a tombstone marker gets attached to that record, and it is only physically removed later during a major compaction. So there is no need to use a coprocessor, which is highly resource intensive.
Modified code to delete in batches:
int batchSize = 50;
int batchCounter = 0;
for (int i = startRange; i <= endRange; i++) {
    String key = tablePrefix + i;
    Delete d = new Delete(Bytes.toBytes(key));
    listOfBatchDelete.add(d);
    batchCounter++;
    if (batchCounter == batchSize) {
        table.delete(listOfBatchDelete);
        listOfBatchDelete.clear();
        batchCounter = 0;
    }
}
// flush whatever is left in the final, partial batch
if (!listOfBatchDelete.isEmpty()) {
    table.delete(listOfBatchDelete);
}
Creating the HBase configuration and getting a table instance:
Configuration hConf = HBaseConfiguration.create(conf);
hConf.set("hbase.zookeeper.quorum", "Zookeeper IP");
hConf.set("hbase.zookeeper.property.clientPort", ZookeeperPort);
HTable hTable = new HTable(hConf, tableName);
If you are already aware of the row keys of the records that you want to delete from the HBase table, then you can use the following approach.
1. First create a List of Delete objects with these row keys:
List<Delete> deleteList = new ArrayList<Delete>();
for (int rowKey = 1; rowKey <= 10; rowKey++) {
    deleteList.add(new Delete(Bytes.toBytes(rowKey + "")));
}
2. Then get the Table object using an HBase Connection:
Table table = connection.getTable(TableName.valueOf(tableName));
3. Once you have the Table object, call delete() by passing it the list:
table.delete(deleteList);
The complete code will look like this:
Configuration config = HBaseConfiguration.create();
config.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
config.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
String tableName = "users";
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf(tableName));
List<Delete> deleteList = new ArrayList<Delete>();
for (int rowKey = 500; rowKey <= 900; rowKey++) {
    deleteList.add(new Delete(Bytes.toBytes("user_" + rowKey)));
}
table.delete(deleteList);
table.close();
connection.close();
Related
We created a program to make using the database easier in other programs, so the code I'm showing gets used in multiple other programs.
One of those other programs gets about 10,000 records from one of our clients and has to check whether these are in our database already. If not, we insert them into the database (they can also change and then have to be updated).
To make this easy we load all the entries from our whole table (at the moment 120,000), create an object for every entry we get, and put all of them into a HashMap.
Loading the whole table this way takes around 5 minutes. We also sometimes have to restart the program because we run into a GC overhead error, since we work on limited hardware. Do you have an idea of how we can improve the performance?
Here is the code to load all entries (we have a global limit of 10,000 entries per query, so we use a loop):
public Map<String, IMasterDataSet> getAllInformationObjects(ISession session) throws MasterDataException {
    IQueryExpression qe;
    IQueryParameter qp;
    // our main SDP class
    Constructor<?> constructorForSDPbaseClass = getStandardConstructor();
    SimpleDateFormat itaTimestampFormat = new SimpleDateFormat("yyyyMMddHHmmssSSS");
    // search in standard time range (modification date!)
    Calendar cal = Calendar.getInstance();
    cal.set(2010, Calendar.JANUARY, 1);
    Date startDate = cal.getTime();
    Date endDate = new Date();
    Long startDateL = Long.parseLong(itaTimestampFormat.format(startDate));
    Long endDateL = Long.parseLong(itaTimestampFormat.format(endDate));
    IDescriptor modDesc = IBVRIDescriptor.ModificationDate.getDescriptor(session);
    // count once before to determine initial capacities for hash map/set
    IBVRIArchiveClass SDP_ARCHIVECLASS = getMasterDataPropertyBag().getSDP_ARCHIVECLASS();
    qe = SDP_ARCHIVECLASS.getQueryExpression(session);
    qp = session.getDocumentServer().getClassFactory()
            .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
    qp.setExpression(qe);
    qp.setHitLimitThreshold(0);
    qp.setHitLimit(0);
    int nrOfHitsTotal = session.getDocumentServer().queryCount(session, qp, "*");
    int initialCapacity = (int) (nrOfHitsTotal / 0.75 + 1);
    // MD sets; and objects already done (here: document ID)
    HashSet<String> objDone = new HashSet<>(initialCapacity);
    HashMap<String, IMasterDataSet> objRes = new HashMap<>(initialCapacity);
    qp.close();
    // do queries until hit count is smaller than 10,000
    // use modification date
    boolean keepGoing = true;
    while (keepGoing) {
        // construct query expression
        // - basic part: modification date & class type
        // a. doc. class type
        qe = SDP_ARCHIVECLASS.getQueryExpression(session);
        // b. ID
        qe = SearchUtil.appendQueryExpressionWithANDoperator(session, qe,
                new PlainExpression(modDesc.getQueryLiteral() + " BETWEEN " + startDateL + " AND " + endDateL));
        // 2. query parameter: set database; set expression
        qp = session.getDocumentServer().getClassFactory()
                .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
        qp.setExpression(qe);
        // order by modification date; hitlimit = 0 -> no hit limit, but the usual 10,000 max
        qp.setOrderByExpression(session.getDocumentServer().getClassFactory().getOrderByExpressionInstance(modDesc, true));
        qp.setHitLimitThreshold(0);
        qp.setHitLimit(0);
        // do not sort by modification date;
        qp.setHints("+NoDefaultOrderBy");
        keepGoing = false;
        IInformationObject[] hits = null;
        IDocumentHitList hitList = null;
        hitList = session.getDocumentServer().query(qp, session);
        IDocument doc;
        if (hitList.getTotalHitCount() > 0) {
            hits = hitList.getInformationObjects();
            for (IInformationObject hit : hits) {
                String objID = hit.getID();
                if (!objDone.contains(objID)) {
                    // do something with this object and the class
                    // here: construct a new SDP sub class object and give it back via interface
                    doc = (IDocument) hit;
                    IMasterDataSet mdSet;
                    try {
                        mdSet = (IMasterDataSet) constructorForSDPbaseClass.newInstance(session, doc);
                    } catch (Exception e) {
                        // cause for this
                        String cause = (e.getCause() != null) ? e.getCause().toString() : MasterDataException.ERRMSG_PART_UNKNOWN;
                        throw new MasterDataException(MasterDataException.ERRMSG_NOINSTANCE_POSSIBLE, this.getClass().getSimpleName(), e.toString(), cause);
                    }
                    objRes.put(mdSet.getID(), mdSet);
                    objDone.add(objID);
                }
            }
            doc = (IDocument) hits[hits.length - 1];
            Date lastModDate = ((IDateValue) doc.getDescriptor(modDesc).getValues()[0]).getValue();
            startDateL = Long.parseLong(itaTimestampFormat.format(lastModDate));
            keepGoing = (hits.length >= 10000 || hitList.isResultSetTruncated());
        }
        qp.close();
    }
    return objRes;
}
Loading 120,000 rows (and more) each time will not scale very well, and your solution may not work in the future as the table grows. Instead, let the database server handle the problem.
Your table needs to have a primary or unique key based on the columns of the records. Iterate through the 10,000 records, performing a JDBC SQL UPDATE for each one that modifies all the field values, with a WHERE clause that exactly matches the primary/unique key.
update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?; // ... AND PKCOL2 = ? ...
This modifies an existing row or does nothing at all, and JDBC executeUpdate() will return 0 or 1 indicating the number of rows changed. If the number of rows changed is zero, you have detected a new record which does not exist yet, so perform an INSERT for that new record only.
insert into BLAH (COL1, COL2, ... PKCOL) values (?,?, ..., ?);
You can decide whether to run 10,000 updates followed by however many inserts are needed, or to do update + optional insert per record. Remember that JDBC batch statements and turning auto-commit off may help speed things up, as in the sketch below.
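A minimal sketch of that update-then-insert loop with auto-commit off and batched inserts. The table/column names are the placeholders from above; jdbcUrl, user, password and incomingRecords (each record assumed to be a String[] of {col1, col2, pk}, string columns for simplicity) are assumptions, not part of your code:
import java.sql.*;
import java.util.List;

static void upsertAll(String jdbcUrl, String user, String password,
                      List<String[]> incomingRecords) throws SQLException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password)) {
        conn.setAutoCommit(false);                           // one commit at the end
        try (PreparedStatement update = conn.prepareStatement(
                     "update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?");
             PreparedStatement insert = conn.prepareStatement(
                     "insert into BLAH (COL1, COL2, PKCOL) values (?, ?, ?)")) {
            for (String[] rec : incomingRecords) {           // the ~10,000 client records
                update.setString(1, rec[0]);
                update.setString(2, rec[1]);
                update.setString(3, rec[2]);
                if (update.executeUpdate() == 0) {           // 0 rows changed -> record is new
                    insert.setString(1, rec[0]);
                    insert.setString(2, rec[1]);
                    insert.setString(3, rec[2]);
                    insert.addBatch();                       // batch the inserts
                }
            }
            insert.executeBatch();
            conn.commit();
        }
    }
}
This keeps only the 10,000 incoming records in memory instead of the whole table, and the database decides per record whether it is new or changed.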
We are using MongoDB for saving and fetching data.
All calls that put data into collections work fine and go through a common method.
All calls that fetch data from collections also go through a common method, but they only work fine sometimes.
Sometimes, only for one of the collections, my calls get stuck forever and consume CPU. I have to kill the threads manually, otherwise it consumes my whole CPU.
Mongo Connection
MongoClient mongo = new MongoClient(hostName , Integer.valueOf(port));
DB mongoDb = mongo.getDB(dbName);
Code To fetch
DBCollection collection = mongoDb.getCollection(collectionName);
DBObject dbObject = new BasicDBObject("_id" , key);
DBCursor cursor = collection.find(dbObject);
I have figured out which collection is causing the issues, but how can I improve on this, since it occurs only for this particular collection and only sometimes?
EDIT
Code to save
DBCollection collection = mongoDb.getCollection(collectionName);
DBObject query = new BasicDBObject("_id" , key);
DBObject update = new BasicDBObject();
update.put("$set" , JSON.parse(value));
collection.update(query , update , true , false);
Bulk Write / collection
DB mongoDb = controllerFactory.getMongoDB();
DBCollection collection = mongoDb.getCollection(collectionName);
BulkWriteOperation bulkWriteOperation = collection.initializeUnorderedBulkOperation();
Map<String, Object> dataMap = (Map<String, Object>) JSON.parse(value);
for (Entry<String, Object> entrySet : dataMap.entrySet()) {
    BulkWriteRequestBuilder bulkWriteRequestBuilder = bulkWriteOperation.find(new BasicDBObject("_id",
            entrySet.getKey()));
    DBObject update = new BasicDBObject();
    update.put("$set", entrySet.getValue());
    bulkWriteRequestBuilder.upsert().update(update);
}
// the batched upserts are only sent to the server when execute() is called
bulkWriteOperation.execute();
How can I set a timeout for the fetch calls?
A different approach is to use the API proposed for the MongoDB 3.2 driver. Keep in mind that you have to update your .jar libraries (if you haven't already) to the latest version.
import com.mongodb.MongoClient;
import com.mongodb.MongoClientException;
import com.mongodb.MongoException;
import com.mongodb.client.FindIterable;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

// wrapper class added so the snippets compile together
public class MongoExample {

    public static MongoClient connectToClient(String hostName, String port) {
        try {
            return new MongoClient(hostName, Integer.valueOf(port));
        } catch (MongoClientException e) {
            System.err.println("Cannot connect to client.");
            return null;
        }
    }

    public static MongoDatabase connectToDB(MongoClient client, String databaseName) {
        try {
            return client.getDatabase(databaseName);
        } catch (Exception e) {
            System.err.println("Error in connecting to database " + databaseName);
            return null;
        }
    }

    public static void closeConnection(MongoClient client) {
        client.close();
    }

    public static void findDoc(MongoDatabase db, String collectionName, Object key) {
        MongoCollection<Document> collection = db.getCollection(collectionName);
        try {
            FindIterable<Document> iterable = collection.find(new Document("_id", key));
            Document doc = iterable.first();
            // for an Int64 field named 'special_id'
            long specialId = doc.getLong("special_id");
            System.out.println("special_id = " + specialId);
        } catch (MongoException e) {
            System.err.println("Error in retrieving document.");
        } catch (NullPointerException e) {
            System.err.println("Document with _id " + key + " does not exist.");
        }
    }

    public static void insertToDB(MongoDatabase db, String collectionName) {
        try {
            db.getCollection(collectionName).insertOne(new Document()
                    .append("special_id", 5)
                    // append anything else you need
            );
        } catch (MongoException e) {
            System.err.println("Error in inserting new document.");
        }
    }

    public static void updateDoc(MongoDatabase db, String collectionName, long id) {
        MongoCollection<Document> collection = db.getCollection(collectionName);
        try {
            collection.updateOne(new Document("_id", id),
                    new Document("$set", new Document("special_id", 7)));
        } catch (MongoException e) {
            System.err.println("Error in updating document.");
        }
    }

    public static void main(String[] args) {
        String hostName = "myHost";
        String port = "myPort";
        String databaseName = "myDB";
        String collectionName = "myCollection";
        MongoClient client = connectToClient(hostName, port);
        if (client != null) {
            MongoDatabase db = connectToDB(client, databaseName);
            if (db != null) {
                findDoc(db, collectionName, "someId"); // "someId" stands in for the _id you look up
            }
            closeConnection(client);
        }
    }
}
EDIT: As the others suggested, check from the command line whether finding the document by its ID is slow there too. Then maybe this is a problem with your hard drive. The _id field is supposed to be indexed, but for better or for worse, re-create the index on the _id field.
The answers posted by others are great, but they did not solve my problem.
The issue was actually in my existing code itself: my cursor was waiting in a while loop for an infinite time.
I was missing a few checks, which have now been added, and the problem is resolved.
Just some possible explanations/thoughts.
In general a "query by id" has to be fast, since _id is supposed to be indexed, always. The code snippet looks correct, so probably the reason is in mongo itself. This leads me to a couple of suggestions:
Try to connect to mongo directly from the command line and run the "find" from there. The chances are that you'll still be able to observe occasional slowness.
In this case:
Maybe it's about the disks (maybe this particular server is deployed on a slow disk, or at least there is a correlation with some slowness of accessing the disk).
Maybe you have a sharded configuration and one shard is slower than the others.
Maybe it's a network issue that occurs sporadically. If you run mongo locally or on a staging environment with the same collection, does it reproduce?
Maybe (although I hardly believe it) the query runs in a sub-optimal way. In this case you can use explain(), as someone has already suggested here; a sketch of calling it from the Java driver follows this list.
If you happen to have a replica set, please figure out what the read preference is. Who knows, maybe you prefer to get this _id from a sub-optimal server.
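If you want to try the explain() route from Java, here is a minimal sketch with the legacy driver used in the question (mongoDb, collectionName and key are the variables from the snippets above, so treat them as assumptions):
DBCollection collection = mongoDb.getCollection(collectionName);
DBCursor cursor = collection.find(new BasicDBObject("_id", key));
// explain() reports how the server would execute this query,
// e.g. which index it uses and how many documents it examines
DBObject explainOutput = cursor.explain();
System.out.println(explainOutput);
If the plan shows an index being used and few documents examined, the slowness is more likely disk, network, or server load than the query itself.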
How do I return all timestamped versions of an HBase cell with the Get.setMaxVersions(10) method where 10 is an arbitrary number (could be something else like 20 or 5)? The following is a console main method that creates a table, inserts 10 random integers, and tries to retrieve all of them to print out.
public static void main(String[] args)
        throws ZooKeeperConnectionException, MasterNotRunningException, IOException, InterruptedException {
    final String HBASE_ZOOKEEPER_QUORUM_IP = "localhost.localdomain"; //set ip in hosts file
    final String HBASE_ZOOKEEPER_PROPERTY_CLIENTPORT = "2181";
    final String HBASE_MASTER = HBASE_ZOOKEEPER_QUORUM_IP + ":60010";
    //identify a data cell with these properties
    String tablename = "characters";
    String row = "johnsmith";
    String family = "capital";
    String qualifier = "A";
    //config
    Configuration config = HBaseConfiguration.create();
    config.clear();
    config.set("hbase.zookeeper.quorum", HBASE_ZOOKEEPER_QUORUM_IP);
    config.set("hbase.zookeeper.property.clientPort", HBASE_ZOOKEEPER_PROPERTY_CLIENTPORT);
    config.set("hbase.master", HBASE_MASTER);
    //admin
    HBaseAdmin hba = new HBaseAdmin(config);
    //create a table
    HTableDescriptor descriptor = new HTableDescriptor(tablename);
    descriptor.addFamily(new HColumnDescriptor(family));
    hba.createTable(descriptor);
    hba.close();
    //get the table
    HTable htable = new HTable(config, tablename);
    //insert 10 different timestamps into 1 record
    for (int i = 0; i < 10; i++) {
        String value = Integer.toString(i);
        Put put = new Put(Bytes.toBytes(row));
        put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier), System.currentTimeMillis(), Bytes.toBytes(value));
        htable.put(put);
        Thread.sleep(200); //make sure each timestamp is different
    }
    //get 10 timestamp versions of 1 record
    final int MAX_VERSIONS = 10;
    Get get = new Get(Bytes.toBytes(row));
    get.setMaxVersions(MAX_VERSIONS);
    Result result = htable.get(get);
    byte[] value = result.getValue(Bytes.toBytes(family), Bytes.toBytes(qualifier)); // returns MAX_VERSIONS quantity of values
    String output = Bytes.toString(value);
    //show me what you got
    System.out.println(output); //prints 9 instead of 0 through 9
}
The output is 9 (because the loop ended at i=9), and I don't see multiple versions in Hue's HBase Browser web UI. What can I do so that I get 10 individual results for 0 through 9 instead of one result of only the number 9?
You should use getColumnCells on Result to get all versions (up to the maximum you set on the Get via setMaxVersions). getValue returns only the latest value.
Sample Code:
List<Cell> values = result.getColumnCells(Bytes.toBytes(family), Bytes.toBytes(qualifier));
for (Cell cell : values) {
    System.out.println(Bytes.toString(CellUtil.cloneValue(cell)));
}
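If you also want to see which timestamp each version carries (not required by the question, but often handy), the same loop can print it, assuming the values list from above:
for (Cell cell : values) {
    // Cell.getTimestamp() is the write time of that particular version
    System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
}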
This is a deprecated approach which matches the version of HBase I am currently working on.
List<KeyValue> kvpairs = result.getColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier));
String line = "";
for (KeyValue kv : kvpairs) {
    line += Bytes.toString(kv.getValue()) + "\n";
}
System.out.println(line);
Then, going one step further, it is important to note that setMaxVersions must also be called on the column family descriptor at table creation to allow more than the default three versions to be stored in a cell. Here's the updated table creation:
//create a table based on variables from question above
HTableDescriptor tableDescriptor = new HTableDescriptor(tablename);
HColumnDescriptor columnDescriptor = new HColumnDescriptor(family);
columnDescriptor.setMaxVersions(MAX_VERSIONS);
tableDescriptor.addFamily(columnDescriptor);
hba.createTable(tableDescriptor);
hba.close();
I can't seem to update an existing record in my table using a strongly-typed dataset. I can add a new record, but if I make changes to an existing record it doesn't work.
Here is my code:
private void AddEmplMaster()
{
    dsEmplMast dsEmpMst = new dsEmplMast();
    SqlConnection cn = new SqlConnection();
    cn.ConnectionString = System.Configuration.ConfigurationSettings.AppSettings["cn.ConnectionString"];
    SqlDataAdapter da1 = new SqlDataAdapter("SELECT * FROM UPR00100", cn);
    SqlCommandBuilder cb1 = new SqlCommandBuilder(da1);
    da1.Fill(dsEmpMst.UPR00100);
    DataTable dtMst = UpdateEmpMst(dsEmpMst);
    da1.Update(dsEmpMst.UPR00100);
}
This procedure is called from above to assign the changed fields to a record:
private DataTable UpdateEmpMst(dsEmplMast dsEmpMst)
{
    DataTable dtMst = new DataTable();
    try
    {
        dsEmplMast.UPR00100Row empRow = dsEmpMst.UPR00100.NewUPR00100Row();
        empRow.EMPLOYID = txtEmplId.Text.Trim();
        empRow.LASTNAME = txtLastName.Text.Trim();
        empRow.FRSTNAME = txtFirstName.Text.Trim();
        empRow.MIDLNAME = txtMidName.Text.Trim();
        empRow.ADRSCODE = "PRIMARY";
        empRow.SOCSCNUM = txtSSN.Text.Trim();
        empRow.DEPRTMNT = ddlDept.SelectedValue.Trim();
        empRow.JOBTITLE = txtJobTitle.Text.Trim();
        empRow.STRTDATE = DateTime.Today;
        empRow.EMPLOYMENTTYPE = "1";
        dsEmpMst.UPR00100.Rows.Add(empRow);
    }
    catch { }
    return dtMst;
}
Thank you
UPDATE:
OK, I figured it out. In my UpdateEmpMst() procedure I had to check whether the record exists and retrieve it first; if not, I create a new record to add. Here is what I added:
try
{
    dsEmplMast.UPR00100Row empRow;
    empRow = dsEmpMst.UPR00100.FindByEMPLOYID(txtEmplId.Text.Trim());
    if (empRow == null)
    {
        empRow = dsEmpMst.UPR00100.NewUPR00100Row();
        dsEmpMst.UPR00100.Rows.Add(empRow);
    }
Then I assign my data to the empRow I found (or created) and it updates fine.
In order to edit an existing record in a dataset, you need to access a particular column of data in a particular row. The data in both typed and untyped datasets can be accessed in two ways:
With the indices of the tables, rows, and columns collections.
By passing the table and column names as strings to their respective collections.
Although typed datasets can use the same syntax as untyped datasets, they have the additional advantage that tables and columns are exposed as strongly-typed members checked at design time.
To update an existing record in either a typed or untyped dataset, find the row you want to change and assign a value to the specific column(s) within that DataRow object. Keep in mind that the table and column names of untyped datasets are not available at design time and must be accessed through their indices or names.
I have an HBase table where all keys have the following structure: ID,DATE,OTHER_DETAILS
For example:
10,2012-05-01,"some details"
10,2012-05-02,"some details"
10,2012-05-03,"some details"
10,2012-05-04,"some details"
...
How can I write a scan that gets all the rows older than some date?
For example, 2012-05-01 and 2012-05-02 are older than 2012-05-03.
Scan scan = new Scan();
Filter f = ???
scan.setFilter(f);
scan.setCaching(1000);
ResultScanner rs = table.getScanner(scan);
You can create your own Filter and implement the method filterRowKey. To make the scan faster you can also implement the method getNextKeyHint, but this is a bit more complicated. The disadvantage of this approach is that you need to put a jar file with your filter on the HBase classpath and restart the cluster.
This is an approximate implementation of such a filter (getDate and getId are assumed to parse the date and the ID out of the row key):
@Override
public void reset() {
    this.filterOutRow = false;
}

@Override
public Filter.ReturnCode filterKeyValue(KeyValue v) {
    if (this.filterOutRow) {
        return ReturnCode.SEEK_NEXT_USING_HINT;
    }
    return Filter.ReturnCode.INCLUDE;
}

@Override
public boolean filterRowKey(byte[] data, int offset, int length) {
    if (startDate < getDate(data) && endDate > getDate(data)) {
        this.filterOutRow = true;
    }
    return this.filterOutRow;
}

@Override
public KeyValue getNextKeyHint(KeyValue currentKV) {
    if (getDate(currentKV) < startDate) {
        String nextKey = getId(currentKV) + "," + startDate.getTime();
        return KeyValue.createFirstOnRow(Bytes.toBytes(nextKey));
    }
    if (getDate(currentKV) > endDate) {
        String nextKey = (getId(currentKV) + 1) + "," + startDate.getTime();
        return KeyValue.createFirstOnRow(Bytes.toBytes(nextKey));
    }
    return null;
}

@Override
public boolean filterRow() {
    return this.filterOutRow;
}
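Assuming the fragment above lives in a custom filter class, say DateRangeRowFilter with a (startDate, endDate) constructor (both the class name and the constructor are invented here for illustration), that its jar is on the region servers' classpath, and that the filter also has the serialization support HBase needs to reconstruct it on the server side, wiring it into the scan from the question might look roughly like this:
Scan scan = new Scan();
// startDate and endDate are whatever Date boundaries you derived from your cutoff, e.g. 2012-05-03
Filter f = new DateRangeRowFilter(startDate, endDate); // hypothetical custom filter
scan.setFilter(f);
scan.setCaching(1000);
ResultScanner rs = table.getScanner(scan);
for (Result r : rs) {
    // rows that passed the date filter
}
rs.close();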
Store the key of the very first row somewhere. It will always be there in your final result set: being the 'first' row, it is older than all other rows (am I correct?).
Now take the date you want to use to filter the results and create a RowFilter with a RegexStringComparator using this date. This will give you the row matching the specified criteria. Then, using this row and the first row you stored earlier, do a range query.
And if you have multiple rows with the same date, say:
10,2012-05-04,"some details"
10,2012-05-04,"some new details"
take the last row you get after applying the RowFilter and use the same technique.
HTH
I was trying to say that you can use a range query to achieve this, where the "startrowkey" will be the first row of your table. Being the first row, it will always be the oldest row, which means you will always have it in your result. The "stoprowkey" for your range query will be the row which contains the given date. To find the stop row key you can set a "RowFilter" with a "RegexStringComparator".
byte[] startRowKey = FIRST_ROW_OF_THE_TABLE;
// first scan: use the RowFilter to find the row that matches the given date
Scan filterScan = new Scan();
Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new RegexStringComparator("YOUR_REGEX"));
filterScan.setFilter(rowFilter);
byte[] stopRowKey = null;
ResultScanner scanner1 = table.getScanner(filterScan);
for (Result res : scanner1) {
    stopRowKey = res.getRow();
}
scanner1.close();
// second scan: range query between the first row of the table and the row found above
Scan rangeScan = new Scan();
rangeScan.setStartRow(startRowKey);
rangeScan.setStopRow(stopRowKey);
ResultScanner scanner2 = table.getScanner(rangeScan);
for (Result res : scanner2) {
    // your final result
}
scanner2.close();