I want to replace Cosmos transactional batch with a stored procedure, because my requirement is to upsert 100+ records, which Cosmos batch does not support (a transactional batch is limited to 100 operations). I am adding two Java objects and one CosmosPatchOperations to a list and passing it to the method below. Whenever I add the Cosmos patch object, no rows get inserted or updated; otherwise it works fine. I want to perform both the insert and the patch operation in the same transaction. Can somebody please guide me on how to modify the SP so that it supports both insert and patch operations?
String rowsUpserted = "";
try {
    rowsUpserted = container
        .getScripts()
        .getStoredProcedure("createEvent")
        .execute(Arrays.asList(listObj), options)
        .getResponseAsString();
} catch (Exception e) {
    e.printStackTrace();
}
Stored Proc
function createEvent(items) {
    var collection = getContext().getCollection();
    var collectionLink = collection.getSelfLink();
    var count = 0;
    if (!items) throw new Error("The array is undefined or null.");
    var numItems = items.length;
    if (numItems == 0) {
        getContext().getResponse().setBody(0);
        return;
    }
    tryCreate(items[count], callback);

    function tryCreate(item, callback) {
        var options = { disableAutomaticIdGeneration: false };
        // upsertDocument returns false when the request is not accepted
        // (e.g. the SP is near its time/RU limit); report how many items
        // were processed so the caller can resume from there.
        var isAccepted = collection.upsertDocument(collectionLink, item, options, callback);
        if (!isAccepted) getContext().getResponse().setBody(count);
    }

    function callback(err, item, options) {
        if (err) throw err;
        count++;
        if (count >= numItems) {
            getContext().getResponse().setBody(count);
        } else {
            tryCreate(items[count], callback);
        }
    }
}
Patching doesn't appear to be supported by the Collection type in the JavaScript stored procedure API. I suspect this is because patch is mostly an optimisation for remote calls, and stored procedures execute server-side, so it's not really necessary there.
The API reference is here: http://azure.github.io/azure-cosmosdb-js-server/Collection.html
upsertDocument expects the full document.
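One workaround, then, is to do the patch client-side: read the document you would have patched, apply the changes in Java, and pass the full updated document to the stored procedure together with the inserts, so everything is still upserted in one transaction. A minimal sketch against the v4 Java SDK (the id, partition-key value, and "status" field are hypothetical; note the read happens outside the transaction, so a concurrent writer could be overwritten unless you add your own ETag check):

import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

// read the document the patch would have targeted, apply the change locally
ObjectNode doc = container
        .readItem(id, new PartitionKey(partitionKeyValue), ObjectNode.class)
        .getItem();
doc.put("status", "UPDATED"); // what the CosmosPatchOperations would have done

// send full documents only: the two new events plus the patched one
List<Object> listObj = Arrays.asList(eventObj1, eventObj2, doc);
String rowsUpserted = container
        .getScripts()
        .getStoredProcedure("createEvent")
        .execute(Arrays.asList(listObj), options)
        .getResponseAsString();

Alternatively, move the read-modify-upsert into the stored procedure itself (readDocument, mutate, then upsertDocument), which keeps the whole operation inside the transaction.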
I am following this Raywenderlich tutorial, paging-library-for-android-with-kotlin, on how to use the Android paging library. It is one of the easiest tutorials on the net and I have followed it thoroughly. However, I would like to make some changes so that I can intelligently switch between online data and offline data.
That is, I have some old posts in my database. Initially I have an internet connection, so I load the latest data from the internet, then insert it into my database. Finally, I show this latest data in my RecyclerView / PagedListAdapter.
If for some reason there is no internet connection later on, I should show the old posts from the database.
How can I do this?
My attempts:
This is my code on a GitHub repository.
Here, I tried to create a factory pattern: it checks whether I initially have internet, and if so the factory returns a PagedList from the online data source; otherwise it returns a PagedList from the offline data source.
But this does not intelligently switch between the two states.
I tried some random code, such as creating a boundary callback, but I am not sure how to make the necessary changes.
I am not adding that code here (at least for now) to keep things short and precise.
Can anyone help me?
Edit:
To be specific, I am loading paged data primarily from the network. If there is a network error, I don't want to show the user an error. Instead I load paged data from the cache / database and keep showing it to the user for as long as possible. If the network comes back, I switch back to network paged data (that's what Instagram / Facebook do, I think). What is the appropriate way to implement this? See my code / attempt in the answer.
Okay, so after trying out some code for two days, this is what I came up with. However, I really don't know whether this is good practice or not, so I am open to any acceptable answers.
Explanation:
Since I have multiple data sources (network and database), I created ProfilePostDataSource: PageKeyedDataSource<Pair<Long, Long>, ProfilePost>. Here the key is a pair: the first element is for network pagination, the second for database pagination.
I used Kotlin's coroutines to write the asynchronous code in a simple if-else-like manner, so we can express it in pseudocode like this:
Database db;
Retrofit retrofit;

inside loadInitial / loadBefore / loadAfter:
    currNetworkKey = params.key.first;
    currDBKey = params.key.second;
    ArrayList<Model> pagedList;
    coroutine {
        ArrayList<Model> onlineList = retrofit.getNetworkData(currNetworkKey); // <-- we primarily load data from network
        if (onlineList != null) {
            pagedList = onlineList;
            db.insertAll(onlineList); // <-- update our cache
        } else {
            ArrayList<Model> offlineList = db.getOfflineData(currDBKey); // <-- in case the network fails, load the cache from the database
            if (offlineList != null) {
                pagedList = offlineList;
            }
        }
        if (pagedList is neither null nor empty) {
            nextNetworkKey = // update it accordingly
            nextDBKey = // update it accordingly
            Pair<int, int> nextKey = new Pair(nextNetworkKey, nextDBKey);
            pagingLibraryCallBack.onResult(pagedList, nextKey); // <-- submit the data to the paging library via callback; this updates your adapter, RecyclerView, etc.
        }
    }
So in apps like Facebook and Instagram, we see them primarily loading data from the network, but if the network is down they show you cached data. This code makes that switch intelligently.
Here is the relevant code snippet, the PageKeyedDataSource written in Kotlin:
ProfilePostDataSource.kt
/** @brief <Key, Value> = <Pair<Long, Long>, ProfilePost>.
 *
 * We have a situation: we need a 2nd id to fetch profilePosts from the database.
 * So the key is a pair:
 *
 *   key.first  = pageKey used in the api   <-- Warning: don't switch these 2!
 *   key.second = the db's last item id, used as our db page key
 *
 * Value = the data type of a single item in the recyclerView.
 */
class ProfilePostDataSource(
    private val context: Context,
    private val userId: Int,
    private val profilePostLocalData: ProfilePostLocalDataProvider,
    private val liveLoaderState: MutableLiveData<NetworkState>
) : PageKeyedDataSource<Pair<Long, Long>, ProfilePost>() {

    companion object {
        val TAG: String = ProfilePostDataSource::class.java.simpleName
        const val INVALID_KEY: Long = -1
    }
    override fun loadInitial(
        params: LoadInitialParams<Pair<Long, Long>>,
        pagingLibraryCallBack: LoadInitialCallback<Pair<Long, Long>, ProfilePost>
    ) {
        val initialNetworkKey: Long = 1L // suffix "NetworkKey" because we'll add a dbKey later
        var nextNetworkKey = initialNetworkKey + 1
        val prevNetworkKey = null // we won't be using it in this case
        val initialDbKey: Long = Long.MAX_VALUE // don't think I need it
        var nextDBKey: Long = 0L

        GlobalScope.launch(Dispatchers.IO) {
            val pagedProfilePosts: ArrayList<ProfilePost> = ArrayList() // kotlin's emptyList() sometimes gives a weird error, so use an ArrayList and be happy
            val authorization: String = AuthManager.getInstance(context).authenticationToken
            try {
                setLoading()
                val res: Response<ProfileServerResponse> = getAPIService().getFeedProfile(
                    sessionToken = authorization, id = userId, withProfile = false,
                    withPosts = true, page = initialNetworkKey.toInt()
                )
                if (res.isSuccessful && res.body() != null) {
                    pagedProfilePosts.addAll(res.body()!!.posts)
                }
            } catch (x: Exception) {
                Log.e(TAG, "Exception -> " + x.message)
            }
            if (pagedProfilePosts.isNotEmpty()) {
                // the network call was successful
                Log.e(TAG, "key -> $initialNetworkKey size -> ${pagedProfilePosts.size} $pagedProfilePosts")
                nextDBKey = pagedProfilePosts.last().id
                val nextKey: Pair<Long, Long> = Pair(nextNetworkKey, nextDBKey)
                // paging library callback: feeds the pipeline that updates the adapter /
                // recyclerView (see adapter.submitPost(list) in FeedProfileFragment)
                pagingLibraryCallBack.onResult(pagedProfilePosts, prevNetworkKey, nextKey)
                profilePostLocalData.insertProfilePosts(pagedProfilePosts, userId) // cache the latest data
            } else {
                // network failed; fetch data from the cache instead
                val cachedList: List<ProfilePost> = profilePostLocalData.getProfilePosts(userId)
                pagedProfilePosts.addAll(cachedList)
                nextDBKey = if (pagedProfilePosts.isNotEmpty()) cachedList.last().id else INVALID_KEY
                // probably a network error or something like that, so skip further
                // network calls by passing an invalid key
                nextNetworkKey = INVALID_KEY
                val nextKey: Pair<Long, Long> = Pair(nextNetworkKey, nextDBKey)
                pagingLibraryCallBack.onResult(pagedProfilePosts, prevNetworkKey, nextKey)
            }
            setLoaded()
        }
    }
    // we don't need loadBefore in feedProfile
    override fun loadBefore(
        params: LoadParams<Pair<Long, Long>>,
        pagingLibraryCallBack: LoadCallback<Pair<Long, Long>, ProfilePost>
    ) {}

    override fun loadAfter(
        params: LoadParams<Pair<Long, Long>>,
        pagingLibraryCallBack: LoadCallback<Pair<Long, Long>, ProfilePost>
    ) {
        val currentNetworkKey: Long = params.key.first
        var nextNetworkKey = currentNetworkKey // assume an invalid key
        if (nextNetworkKey != INVALID_KEY) {
            nextNetworkKey = currentNetworkKey + 1
        }
        val currentDBKey: Long = params.key.second
        var nextDBKey: Long = 0
        if (currentDBKey != INVALID_KEY || currentNetworkKey != INVALID_KEY) {
            GlobalScope.launch(Dispatchers.IO) {
                val pagedProfilePosts: ArrayList<ProfilePost> = ArrayList() // kotlin's emptyList() sometimes gives a weird error, so use an ArrayList and be happy
                val authorization: String = AuthManager.getInstance(context).authenticationToken
                try {
                    setLoading()
                    if (currentNetworkKey != INVALID_KEY) {
                        val res: Response<ProfileServerResponse> = getAPIService().getFeedProfile(
                            sessionToken = authorization, id = userId, withProfile = false,
                            withPosts = true, page = currentNetworkKey.toInt()
                        )
                        if (res.isSuccessful && res.body() != null) {
                            pagedProfilePosts.addAll(res.body()!!.posts)
                        }
                    }
                } catch (x: Exception) {
                    Log.e(TAG, "Exception -> " + x.message)
                }
                if (pagedProfilePosts.isNotEmpty()) {
                    // the network call was successful
                    Log.e(TAG, "key -> $currentNetworkKey size -> ${pagedProfilePosts.size} $pagedProfilePosts")
                    nextDBKey = pagedProfilePosts.last().id
                    val nextKey: Pair<Long, Long> = Pair(nextNetworkKey, nextDBKey)
                    // paging library callback: feeds the pipeline that updates the adapter /
                    // recyclerView (see adapter.submitPost(list) in FeedProfileFragment)
                    pagingLibraryCallBack.onResult(pagedProfilePosts, nextKey)
                    setLoaded()
                    profilePostLocalData.insertProfilePosts(pagedProfilePosts, userId) // cache the latest data
                } else {
                    // network failed; fetch the next page from the cache,
                    // paging from currentDBKey (nextDBKey is still 0 at this point)
                    val cachedList: List<ProfilePost> = profilePostLocalData.getPagedProfilePosts(userId, currentDBKey, 20)
                    pagedProfilePosts.addAll(cachedList)
                    nextDBKey = if (pagedProfilePosts.isNotEmpty()) cachedList.last().id else INVALID_KEY
                    // probably a network error or something like that, so skip further
                    // network calls by passing an invalid key
                    nextNetworkKey = INVALID_KEY
                    val nextKey: Pair<Long, Long> = Pair(nextNetworkKey, nextDBKey)
                    pagingLibraryCallBack.onResult(pagedProfilePosts, nextKey)
                    setLoaded()
                }
            }
        }
    }
    private suspend fun setLoading() {
        withContext(Dispatchers.Main) {
            liveLoaderState.value = NetworkState.LOADING
        }
    }

    private suspend fun setLoaded() {
        withContext(Dispatchers.Main) {
            liveLoaderState.value = NetworkState.LOADED
        }
    }
}
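For completeness, here is roughly how such a DataSource gets wired into a PagedList. This is a hedged sketch in plain Java against Paging 2.x (the factory class and its field names are my own, not from the repo); the important point is that the factory creates a fresh ProfilePostDataSource on every invalidation, so each new PagedList re-decides between network and cache:

import android.content.Context;
import androidx.lifecycle.LiveData;
import androidx.lifecycle.MutableLiveData;
import androidx.paging.DataSource;
import androidx.paging.LivePagedListBuilder;
import androidx.paging.PagedList;
import kotlin.Pair;

public class ProfilePostDataSourceFactory extends DataSource.Factory<Pair<Long, Long>, ProfilePost> {
    private final Context context;
    private final int userId;
    private final ProfilePostLocalDataProvider localData;
    private final MutableLiveData<NetworkState> loaderState = new MutableLiveData<>();

    public ProfilePostDataSourceFactory(Context context, int userId,
            ProfilePostLocalDataProvider localData) {
        this.context = context;
        this.userId = userId;
        this.localData = localData;
    }

    @Override
    public DataSource<Pair<Long, Long>, ProfilePost> create() {
        // a new DataSource per invalidation: each PagedList re-decides network vs cache
        return new ProfilePostDataSource(context, userId, localData, loaderState);
    }
}

// usage, e.g. in a ViewModel:
// LiveData<PagedList<ProfilePost>> posts = new LivePagedListBuilder<>(factory, 20).build();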
Thank you for reading this far. If you have a better solution, feel free to let me know; I'm open to any working solution.
I have an Android program that passes two values to MySQL: Time In and Time Out. I don't know what happened, but I have a scenario where the data passed is incomplete.
For example, I create a record where the only thing I need to fill in is Time In; after I type the time I click Save, so the record has a Time In and Time Out is null. After saving, my auto passer sends that data to MySQL. Next I edit that record to add the Time Out, so both have data; the auto passer runs again, and when I check my database both columns are filled.
Now this is where the error begins. I have a button called Refresh which retrieves my data from MySQL, creates JSON from it, and sends it back to my Android app. After that process the returned data has no Time Out, and when I check MySQL the row has no Time Out either, even though I added it. I don't know what happened.
What I did in my Java code is create a JSONArray, convert it to a string, and pass it to my PHP file; the PHP file decodes it and loops over it while saving to the database.
This is how I create the JSON:
JSONArray vis_array = new JSONArray();
Cursor unsync_vis = Sync.Unsync_Visit_All();
while (unsync_vis.moveToNext()) {
    JSONObject vis_data = new JSONObject();
    try {
        vis_data.put("t1", formatInsert_n(unsync_vis.getString(unsync_vis.getColumnIndex("t1"))));
        vis_data.put("t2", formatInsert_n(unsync_vis.getString(unsync_vis.getColumnIndex("t2"))));
        vis_array.put(vis_data);
    } catch (JSONException e) {
        e.printStackTrace();
        Log.e("Auto Sync Error", e.getMessage());
    }
}
public String formatInsert_n(String data) {
    if (TextUtils.isEmpty(data) || data.length() == 0) {
        data = "null";
    } else {
        data = data.replace("'", "''");
        data = data.replace("null", "");
        if (data.matches("")) {
            data = "null";
        } else {
            data = "'" + data + "'";
        }
    }
    return data;
}
After creating that JSON, I convert it to a string and pass it using a StringRequest; then my PHP decodes it and uses a for loop to save it to MySQL.
Here is the PHP code:
<?php
header('Content-Type: application/json');
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);

$insert_visit_new = isset($_POST['insert_visit_new']) ? $_POST['insert_visit_new'] : "";
if ($insert_visit_new != "") {
    $vis_array = json_decode($insert_visit_new, true);
    $vis_count = count($vis_array);
    for ($i = 0; $i < $vis_count; $i++) {
        $vis = $vis_array[$i];
        $t1 = $vis['t1'];
        $t2 = $vis['t2'];
        $ins_sql = "INSERT INTO table t1,t2 VALUES ('$t1','$t2') ON DUPLICATE KEY UPDATE t1 = $t1,t2 = $t2";
        $stmt = $DB->prepare($ins_sql);
        $stmt->execute();
    }
}
echo "done";
?>
By the way, the code above runs inside an AsyncTask, and the class is a BroadcastReceiver.
Is the cause that I don't unregister my BroadcastReceiver?
Or that the JSONArray name in this class and the one inside my refresh button are the same?
My question is: what's wrong? It looks like it still passes the old data. Any help is appreciated, TYSM.
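Not a definitive diagnosis, but one thing stands out in the snippets above: formatInsert_n() already wraps each value in single quotes (or turns it into the literal string null), and the PHP INSERT then quotes it again ('$t1'), so MySQL receives values like ''12:30'' or 'null' rather than 12:30 or NULL. A safer pattern is to send raw values in the JSON and let the server bind them as parameters. A sketch of the Java side (formatForJson is a hypothetical helper, not from the original code):

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

JSONArray visArray = new JSONArray();
Cursor unsyncVis = Sync.Unsync_Visit_All();
while (unsyncVis.moveToNext()) {
    JSONObject visData = new JSONObject();
    try {
        // raw values only; JSONObject.NULL becomes a real JSON null,
        // which json_decode() turns into a PHP null
        visData.put("t1", formatForJson(unsyncVis.getString(unsyncVis.getColumnIndex("t1"))));
        visData.put("t2", formatForJson(unsyncVis.getString(unsyncVis.getColumnIndex("t2"))));
        visArray.put(visData);
    } catch (JSONException e) {
        Log.e("Auto Sync Error", e.getMessage(), e);
    }
}

// hypothetical helper: no SQL quoting or escaping at all
public Object formatForJson(String data) {
    return TextUtils.isEmpty(data) ? JSONObject.NULL : data;
}

On the PHP side the values would then go into the statement as bound placeholders (e.g. ? markers with $stmt->execute([$t1, $t2])) instead of being interpolated into the SQL string, which also removes the need for the manual '' escaping.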
I have an application which uses the TFS Java SDK 14.0.3.
I have a shared query on my TFS; how can I run the shared query and get the response back using TFS SDK 14.0.3?
Also, I can see that a query URL expires every 90 days, so is there a better way to execute the shared query?
I already have a method to run a WIQL query (below); I want a method to run a shared query as well.
public void getWorkItem(TFSTeamProjectCollection tpc, Project project) {
    WorkItemClient workItemClient = project.getWorkItemClient();

    // Define the WIQL query.
    String wiqlQuery = "Select ID, Title, Assigned from WorkItems where (State = 'Active') order by Title";

    // Run the query and get the results.
    WorkItemCollection workItems = workItemClient.query(wiqlQuery);
    System.out.println("Found " + workItems.size() + " work items.");
    System.out.println();

    // Write out the heading.
    System.out.println("ID\tTitle");

    // Output the first few results of the query, allowing the TFS SDK
    // to page in data as required.
    final int maxToPrint = 5;
    for (int i = 0; i < workItems.size(); i++) {
        if (i >= maxToPrint) {
            System.out.println("[...]");
            break;
        }
        WorkItem workItem = workItems.getWorkItem(i);
        System.out.println(workItem.getID() + "\t" + workItem.getTitle());
    }
}
A shared query is a query which has been built and saved, so what you need is to get the shared query and then run its text, rather than run it directly. You could refer to the case Access TFS Team Query from Client Object API:
/// Handles nested query folders
private static Guid FindQuery(QueryFolder folder, string queryName)
{
    foreach (var item in folder)
    {
        if (item.Name.Equals(queryName, StringComparison.InvariantCultureIgnoreCase))
        {
            return item.Id;
        }

        var itemFolder = item as QueryFolder;
        if (itemFolder != null)
        {
            var result = FindQuery(itemFolder, queryName);
            if (!result.Equals(Guid.Empty))
            {
                return result;
            }
        }
    }
    return Guid.Empty;
}

static void Main(string[] args)
{
    var collectionUri = new Uri("http://TFS/tfs/DefaultCollection");
    var server = new TfsTeamProjectCollection(collectionUri);
    var workItemStore = server.GetService<WorkItemStore>();

    var teamProject = workItemStore.Projects["TeamProjectName"];
    var x = teamProject.QueryHierarchy;
    var queryId = FindQuery(x, "QueryNameHere");

    var queryDefinition = workItemStore.GetQueryDefinition(queryId);
    var variables = new Dictionary<string, string>() {{"project", "TeamProjectName"}};

    var result = workItemStore.Query(queryDefinition.QueryText, variables);
}
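If you want the same thing through the TFS Java SDK, a rough port might look like the sketch below. This assumes the Java object model mirrors the .NET one above (the query-hierarchy classes live in com.microsoft.tfs.core.clients.workitem.queryhierarchy, but verify the exact method names against your SDK version):

import com.microsoft.tfs.core.clients.workitem.WorkItemClient;
import com.microsoft.tfs.core.clients.workitem.project.Project;
import com.microsoft.tfs.core.clients.workitem.query.WorkItemCollection;
import com.microsoft.tfs.core.clients.workitem.queryhierarchy.QueryDefinition;
import com.microsoft.tfs.core.clients.workitem.queryhierarchy.QueryFolder;
import com.microsoft.tfs.core.clients.workitem.queryhierarchy.QueryItem;

// walk the query hierarchy recursively, like FindQuery above
public static QueryDefinition findQuery(QueryFolder folder, String queryName) {
    for (QueryItem item : folder.getItems()) {
        if (item instanceof QueryDefinition && item.getName().equalsIgnoreCase(queryName)) {
            return (QueryDefinition) item;
        }
        if (item instanceof QueryFolder) {
            QueryDefinition result = findQuery((QueryFolder) item, queryName);
            if (result != null) {
                return result;
            }
        }
    }
    return null;
}

public static void runSharedQuery(Project project, String queryName) {
    QueryDefinition definition = findQuery(project.getQueryHierarchy(), queryName);
    if (definition == null) {
        return; // query not found
    }
    // shared queries often use the @project macro; substitute it before running
    String wiql = definition.getQueryText().replace("@project", "'" + project.getName() + "'");
    WorkItemClient client = project.getWorkItemClient();
    WorkItemCollection results = client.query(wiql);
    System.out.println("Found " + results.size() + " work items.");
}

Because the query is looked up by name on every run, nothing here depends on a temporary query link, which also sidesteps the 90-day expiry concern.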
By the way, you could also check the REST API in the following link:
https://learn.microsoft.com/en-us/rest/api/vsts/wit/queries/get
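For illustration, running a shared query over REST takes two calls: fetch the query's ID by path (the queries endpoint linked above), then run it with the "Wiql - Query By ID" endpoint (_apis/wit/wiql/{id}). A hedged Java sketch, assuming a personal access token and an on-prem collection URL (the exact api-version depends on your TFS release):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public static String runSharedQueryOverRest(String queryId, String pat) throws Exception {
    // runs the stored query and returns the matching work item references as JSON
    URL url = new URL("http://TFS:8080/tfs/DefaultCollection/TeamProjectName/_apis/wit/wiql/"
            + queryId + "?api-version=2.0");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    String basic = Base64.getEncoder()
            .encodeToString((":" + pat).getBytes(StandardCharsets.UTF_8));
    conn.setRequestProperty("Authorization", "Basic " + basic);

    StringBuilder body = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
    }
    return body.toString(); // parse workItems[].id, then fetch fields with a follow-up call
}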
I've got a list of futures that perform data deletion for a given list of studentIds from Cassandra:

val studentIds: List<String> = getStudentIds(...)
val boundStatements: List<BoundStatement> = studentIds.map { bindStudentDelete(it) }
val deleteFutures = boundStatements.map { session.executeAsync(it) }
deleteFutures.forEach {
    // callback that will send metrics for monitoring
    Futures.addCallback(it, MyCallback(...))
}
Above, I have registered a callback, MyCallback(...), on each future to send metrics. Then I do:

Futures.inCompletionOrder(deleteFutures).forEach { it.get() }

to wait for the completion of all the deletes. If for any reason some of the futures end up failing (cancelled, something else goes wrong, etc.), I want to return the list of affected studentIds so that I can deal with them later.
What is the best way to achieve that?
EDIT
The callback could be a way to mutate some state that tracks the success/failure of all the deletions.
class MyCallback(
    private val statsDClient: StatsdClient,
    private val tags: Array<String>,
    val failures: MutableList<String>
) : FutureCallback<Any> {
    override fun onSuccess(result: Any?) {
        // send success metrics
        ...
    }

    override fun onFailure(t: Throwable) {
        // send failure metrics
        ...
        // do something here to get the associated studentId
        val currId = ...
        failures.add(currId)
    }
}
Similarly, I could mutate state in the Futures.inCompletionOrder(deleteFutures).forEach block with a try/catch:
val failedDeletes = mutableListOf<String>()
Futures.inCompletionOrder(deleteFutures).forEach {
    try {
        it.get()
    } catch (e: Exception) {
        // do something to get the studentId for this future
        val currId = ...
        failedDeletes.add(currId)
    }
}
However, there are two things I don't like or am unsure about here. One is that it mutates state that has to be defined outside. The other is that I still don't know how to get the studentId at the point of failure (in onFailure or in the catch block).
I have added a code snippet below in Java; this is the blocking approach.
ResultSet getUninterruptibly()
Waits for the query to return and returns its result. This method is usually more convenient than Future.get() because it:
Waits for the result uninterruptibly, and so doesn't throw InterruptedException.
Returns meaningful exceptions, instead of having to deal with ExecutionException.
As such, it is the preferred way to get the future result.
Check this link: Interface ResultSetFuture
List<ResultSetFuture> futures = new ArrayList<>();
List<Long> futureStudentIds = new ArrayList<>();
// List<Long> successfulIds = new ArrayList<>();
List<Long> unsuccessfulIds = new ArrayList<>();

// fire off all deletes, keeping the ids in a parallel list so that
// futures.get(i) always corresponds to futureStudentIds.get(i)
for (long studentId : studentIds) {
    futures.add(session.executeAsync(statement.deleteStudent(studentId)));
    futureStudentIds.add(studentId);
}
for (int index = 0; index < futures.size(); index++) {
    try {
        futures.get(index).getUninterruptibly();
        // successfulIds.add(futureStudentIds.get(index));
    } catch (Exception e) {
        unsuccessfulIds.add(futureStudentIds.get(index));
        LOGGER.error("", e);
    }
}
return unsuccessfulIds;
For a non-blocking approach you can use ListenableFuture.
Asynchronous queries with the Java driver
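To get the studentId at the point of failure without blocking (the asker's open question), one option is to capture the id in each callback's closure, one callback per future, rather than trying to recover it afterwards. A sketch against Guava and the Datastax driver, reusing statement.deleteStudent() from the snippet above (MoreExecutors.directExecutor() keeps it simple; use a real executor if the callbacks do heavy work):

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<Long> failedIds = Collections.synchronizedList(new ArrayList<Long>());
List<ResultSetFuture> futures = new ArrayList<>();
for (long studentId : studentIds) {
    ResultSetFuture future = session.executeAsync(statement.deleteStudent(studentId));
    final long id = studentId; // captured per future, so onFailure knows which delete failed
    Futures.addCallback(future, new FutureCallback<ResultSet>() {
        @Override public void onSuccess(ResultSet rs) { /* send success metric */ }
        @Override public void onFailure(Throwable t) {
            failedIds.add(id); // send failure metric here as well
        }
    }, MoreExecutors.directExecutor());
    futures.add(future);
}
// after waiting for all futures (e.g. with inCompletionOrder + get as in the question),
// failedIds holds exactly the studentIds whose deletes failed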
I have the following rows with these keys in the HBase table "mytable":
user_1
user_2
user_3
...
user_9999999
I want to use the HBase shell to delete rows from:
user_500 to user_900
I know there is no direct way to delete a range like this, but is there a way I could use the "BulkDeleteProcessor" to do it?
I see here:
https://github.com/apache/hbase/blob/master/hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java
I want to just paste in the imports and then paste this into the shell, but I have no idea how to go about it. Does anyone know how I can use this endpoint from the JRuby HBase shell?
Table ht = TEST_UTIL.getConnection().getTable("my_table");
long noOfDeletedRows = 0L;
Batch.Call<BulkDeleteService, BulkDeleteResponse> callable =
        new Batch.Call<BulkDeleteService, BulkDeleteResponse>() {
    ServerRpcController controller = new ServerRpcController();
    BlockingRpcCallback<BulkDeleteResponse> rpcCallback =
            new BlockingRpcCallback<BulkDeleteResponse>();

    public BulkDeleteResponse call(BulkDeleteService service) throws IOException {
        Builder builder = BulkDeleteRequest.newBuilder();
        builder.setScan(ProtobufUtil.toScan(scan));
        builder.setDeleteType(deleteType);
        builder.setRowBatchSize(rowBatchSize);
        if (timeStamp != null) {
            builder.setTimestamp(timeStamp);
        }
        service.delete(controller, builder.build(), rpcCallback);
        return rpcCallback.get();
    }
};
Map<byte[], BulkDeleteResponse> result = ht.coprocessorService(BulkDeleteService.class,
        scan.getStartRow(), scan.getStopRow(), callable);
for (BulkDeleteResponse response : result.values()) {
    noOfDeletedRows += response.getRowsDeleted();
}
ht.close();
If there is no way to do this through JRuby, a Java or other approach to quickly delete multiple rows is fine.
Do you really want to do it in the shell? There are various better ways. One is using the native Java API:
construct a list of Delete objects;
pass this list to the Table.delete method.
Method 1: if you already know the range of keys.
public void massDelete(byte[] tableName) throws IOException {
    HTable table = (HTable) hbasePool.getTable(tableName);

    String tablePrefix = "user_";
    int startRange = 500;
    int endRange = 999;

    List<Delete> listOfBatchDelete = new ArrayList<Delete>();
    for (int i = startRange; i <= endRange; i++) {
        String key = tablePrefix + i;
        Delete d = new Delete(Bytes.toBytes(key));
        listOfBatchDelete.add(d);
    }
    try {
        table.delete(listOfBatchDelete);
    } finally {
        if (hbasePool != null && table != null) {
            hbasePool.putTable(table);
        }
    }
}
Method 2: If you want to do a batch delete on the basis of a scan result.
public void bulkDelete(final HTable table) throws IOException {
    Scan s = new Scan();
    List<Delete> listOfBatchDelete = new ArrayList<Delete>();

    // add your filter to the scanner (Scan has setFilter, not addFilter;
    // 'yourFilter' is a placeholder for whatever Filter you build)
    s.setFilter(yourFilter);

    ResultScanner scanner = table.getScanner(s);
    for (Result rr : scanner) {
        Delete d = new Delete(rr.getRow());
        listOfBatchDelete.add(d);
    }
    try {
        table.delete(listOfBatchDelete);
    } catch (Exception e) {
        LOGGER.log(e);
    }
}
Now, coming down to using a coprocessor: just one piece of advice, 'DON'T USE CoProcessor', unless you are an expert in HBase.
CoProcessors have many inbuilt issues; if you need, I can provide a detailed description.
Secondly, when you delete anything from HBase it's never directly deleted; a tombstone marker gets attached to that record, and it is only removed later during a major compaction. So there is no need to use a coprocessor, which is highly resource-intensive.
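For example, deletes written through any of the approaches above only create tombstones; the rows physically disappear at the next major compaction, which you can request through the Admin API. A sketch, reusing a Configuration like the one built in the next snippet (majorCompact is an asynchronous request):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

try (Connection connection = ConnectionFactory.createConnection(config);
     Admin admin = connection.getAdmin()) {
    admin.majorCompact(TableName.valueOf("mytable")); // queues a major compaction request
}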
Modified code to support batch operation.
int batchSize = 50;
int batchCounter = 0;
for (int i = startRange; i <= endRange; i++) {
    String key = tablePrefix + i;
    listOfBatchDelete.add(new Delete(Bytes.toBytes(key)));
    batchCounter++;
    if (batchCounter == batchSize) {
        table.delete(listOfBatchDelete);
        listOfBatchDelete.clear();
        batchCounter = 0;
    }
}
// flush any remaining deletes that didn't fill a whole batch
if (!listOfBatchDelete.isEmpty()) {
    table.delete(listOfBatchDelete);
}
Creating HBase conf and getting table instance.
Configuration hConf = HBaseConfiguration.create(conf);
hConf.set("hbase.zookeeper.quorum", "Zookeeper IP");
hConf.set("hbase.zookeeper.property.clientPort", ZookeeperPort);
HTable hTable = new HTable(hConf, tableName);
If you already know the rowkeys of the records that you want to delete from the HBase table, you can use the following approach.
1. First, create a list of Delete objects for these rowkeys:
for (int rowKey = 1; rowKey <= 10; rowKey++) {
    deleteList.add(new Delete(Bytes.toBytes(rowKey + "")));
}
2. Then get the Table object by using an HBase Connection:
Table table = connection.getTable(TableName.valueOf(tableName));
3. Once you have the Table object, call delete() by passing the list:
table.delete(deleteList);
The complete code will look like this:
Configuration config = HBaseConfiguration.create();
config.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
config.addResource(new Path("/etc/hadoop/conf/core-site.xml"));

String tableName = "users";
Connection connection = ConnectionFactory.createConnection(config);
Table table = connection.getTable(TableName.valueOf(tableName));

List<Delete> deleteList = new ArrayList<Delete>();
for (int rowKey = 500; rowKey <= 900; rowKey++) {
    deleteList.add(new Delete(Bytes.toBytes("user_" + rowKey)));
}
table.delete(deleteList);

// release resources when done
table.close();
connection.close();