NullPointerException in getStrings method, Hadoop - Java

I am getting a NullPointerException in my driver class at the conf.getStrings() call. This driver class is invoked from my custom website.
Below are the driver class details:
@SuppressWarnings("unchecked")
public void doGet(HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException
{
Configuration conf = new Configuration();
//conf.set("fs.default.name", "hdfs://localhost:54310");
//conf.set("mapred.job.tracker", "localhost:54311");
//conf.set("mapred.jar","/home/htcuser/Desktop/ResumeLatest.jar");
Job job = new Job(conf, "ResumeSearchClass");
job.setJarByClass(HelloForm.class);
job.setJobName("ResumeParse");
job.setInputFormatClass(FileInputFormat.class);
FileInputFormat.addInputPath(job, new Path("hdfs://localhost:54310/usr/ResumeDirectory"));
job.setMapperClass(ResumeMapper.class);
job.setReducerClass(ResumeReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setSortComparatorClass(ReverseComparator.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(Text.class);
job.setOutputFormatClass(FileOutputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/usr/output" + System.currentTimeMillis()));
long start = System.currentTimeMillis();
int var = job.waitForCompletion(true) ? 0 : 1;
I am getting the NullPointerException from the following two lines of code:
String[] keytextarray = conf.getStrings("Keytext");
for (int i = 0; i < keytextarray.length; i++) // getting NullPointerException here at keytextarray.length
{
//some code here
}
if(var==0)
{
RequestDispatcher dispatcher = request.getRequestDispatcher("/Result.jsp");
dispatcher.forward(request, response);
long finish= System.currentTimeMillis();
System.out.println("Time Taken "+(finish-start));
}
}
I have removed some unwanted code from the above driver class method.
Below is the RecordWriter class, where I use conf.setStrings() in the write() method to set the values:
public class RecordWrite extends org.apache.hadoop.mapreduce.RecordWriter<IntWritable, Text> {
TaskAttemptContext context1;
Configuration conf;
DataOutputStream out;
public RecordWrite(DataOutputStream output, TaskAttemptContext context)
{
out = output;
conf = context.getConfiguration();
HelloForm.context1=context;
try {
out.writeBytes("result:\n");
out.writeBytes("Name:\t\t\t\tExperience\t\t\t\t\tPriority\tPriorityCount\n");
} catch (IOException e) {
e.printStackTrace();
}
}
public RecordWrite() {
// TODO Auto-generated constructor stub
}
@Override
public void close(TaskAttemptContext context) throws IOException,
InterruptedException
{
out.close();
}
int z=0;
@Override
public void write(IntWritable value,Text key) throws IOException,
InterruptedException
{
conf.setStrings("Keytext", key1string); //setting values here
conf.setStrings("valtext", valuestring);
String[] keytext=key.toString().split(Pattern.quote("^"));
//some code here
}
}
I suspect this NullPointerException happens because I call conf.getStrings() after the job has completed (job.waitForCompletion(true)). Please help me fix this issue.
If the above code is not the correct way of passing values from the RecordWriter to the driver class, please let me know how to pass values from the RecordWriter to the driver class.
I have also tried setting the values from the RecordWriter on a custom static class and accessing that object from the driver class, but that again returns null if I am running the code on a cluster.

If you have the values of key1string and valuestring in the job class, try setting them in the job class itself rather than in the RecordWriter.write() method. Configuration properties set inside a task do not propagate back to the driver: on a cluster the RecordWriter runs in a separate JVM with its own copy of the Configuration, so the driver's conf never receives "Keytext" and getStrings() returns null.
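If you actually need to hand values from the tasks back to the driver, Hadoop counters are the standard channel for small results. A minimal sketch (the group name "Keytext" comes from the question; keyValue is an illustrative variable):
// In the mapper/reducer (or in a RecordWriter, via its TaskAttemptContext):
context.getCounter("Keytext", keyValue).increment(1);
// In the driver, after job.waitForCompletion(true) has returned:
for (org.apache.hadoop.mapreduce.Counter counter : job.getCounters().getGroup("Keytext")) {
System.out.println(counter.getName()); // one entry per distinct key
}
For anything larger than a handful of names, write the results from the task to a file on HDFS and read that file in the driver once the job completes.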

Related

How to cover a private method in JUnit Testing

Please help me figure out how to cover a private method that is called from a public method in my class. Whenever I run my JUnit coverage, it says that the private method has a missing branch.
Here is the code that uses that private method:
public String addRecord(Record rec) throws IOException {
GeoPoint geoPoint = locationService.getLocation(rec.getTerminalId());
if (Objects.isNull(geoPoint)) {
loggingService.log(this.getClass().toString(), rec.getTerminalId(), "GET LOCATION",
"No Coordinates found for terminal ID: " + rec.getTerminalId());
return "No Coordinates found for terminal ID: " + rec.getTerminalId();
}
loggingService.log(this.getClass().toString(), rec.getTerminalId(), "GeoPoint",
"Latitude: " + geoPoint.getLat() + " Longitude: " + geoPoint.getLon());
format(rec);
loggingService.log(this.getClass().toString(), rec.getTerminalId(), "addRecord",
"Formatted Payload" + rec.toString());
XContentBuilder builder = XContentFactory.jsonBuilder();
builder.startObject().field("terminalId", rec.getTerminalId())
.field("status", "D".equals(rec.getStatus()) ? 1 : 0).field("recLocation", rec.getLocation())
.field("errorDescription", rec.getErrorDescription()).field("lastTranTime", rec.getLastTranTime())
.field("lastDevStatTime", rec.getLastDevStatTime()).field("errorCode", rec.getErrorCode())
.field("termBrcode", rec.getTermBrcode()).timeField("#timestamp", new Date())
.latlon("location", geoPoint.getLat(), geoPoint.getLon()).endObject();
IndexRequest indexRequest = new IndexRequest(prop.getEsIndex(), prop.getEsType(), rec.getTerminalId())
.source(builder);
IndexResponse response = client.index(indexRequest, RequestOptions.DEFAULT);
loggingService.log(this.getClass().toString(), rec.getTerminalId(), TraceLog.SUCCESSFUL_PUSH_TO_ELASTIC_SEARCH,
util.mapToJsonString(rec));
return response.getResult().name();
}
This is the private method:
private Record format(Record rec) {
if (rec.getLocation() == null) {
rec.setLocation("");
}
if (rec.getTermBrcode() == null) {
rec.setTermBrcode("");
}
if (rec.getErrorDescription() == null) {
rec.setErrorDescription("");
}
return rec;
}
This is my Junit code:
@Before
public void setUp() throws ParseException, IOException {
client = mock(RestHighLevelClient.class);
indexRequest = mock(IndexRequest.class);
indexResponse = mock(IndexResponse.class);
MockitoAnnotations.initMocks(this);
rec= new Record();
rec.setLocation("location");
rec.setStatus("U");
rec.setErrorCode("222");
rec.setErrorDescription("STATUS");
rec.setLastDevStatTime("02-02-2020");
rec.setLastTranTime("02-02-2020");
rec.setTerminalId("123");
rec.setTermBrcode("111");
ReflectionTestUtils.setField(client, "client", restClient);
}
@Test
public void testAddRecordIsNull()
throws IOException, NumberFormatException, IllegalArgumentException, IllegalAccessException {
Mockito.when(locationService.getLocation(Mockito.anyString())).thenReturn(null);
elasticsearchService.addRecord(rec);
assertThat(1).isEqualTo(1);
}
@Test
public void testFormat() throws IOException {
rec = new Record();
rec.setLocation(null);
rec.setStatus(null);
rec.setErrorCode(null);
rec.setErrorDescription(null);
rec.setLastDevStatTime(null);
rec.setLastTranTime(null);
rec.setTerminalId(null);
rec.setTermBrcode(null);
elasticsearchService.addRecord(rec);
//ReflectionTestUtils.invokeMethod(ElasticsearchService.class, "addAtmStatusRecord", rec);
Mockito.when(elasticsearchService.addRecord(null)).thenReturn("");
//elasticsearchService.addRecord(atm);
//Mockito.when(locationService.getLocation(Mockito.anyString())).thenReturn(atm);
//elasticsearchService.addRecord(null);
assertThat(1).isEqualTo(1);
}
Please help me see what I am missing in my JUnit test to cover the private method 'format'. Any help will be much appreciated. Thanks.
In testFormat, if elasticsearchService.addRecord is being tested, it shouldn't be mocked, i.e. remove Mockito.when(elasticsearchService.addRecord(null)).thenReturn("");
What should be mocked are the services / dependencies used in the method, e.g. loggingService.
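A minimal sketch of that setup (the field and class names are assumed from the question):
@Mock
private LoggingService loggingService; // dependency: mock it
@Mock
private LocationService locationService; // dependency: mock it
@InjectMocks
private ElasticsearchService elasticsearchService; // class under test: real instance with mocks injected
With MockitoAnnotations.initMocks(this) in setUp(), the real addRecord() runs while its calls into the mocked dependencies are intercepted.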
Update #1: EclEmma is telling you that the bodies of the if statements are red. This means that testAddRecordIsNull is not configured correctly: it passes a Record object that already has values. Instead of passing rec, pass new Record(). This assumes that the attributes of a new Record default to null. If you need a Record that has values for other attributes, create a new record accordingly.
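For illustration, a sketch of a test that drives format() through the public method with an all-null Record. It assumes the mocks sketched above, that GeoPoint has a (lat, lon) constructor, and that the client/indexResponse mocks from setUp() can be stubbed; any() is used because the terminal ID is null:
@Test
public void testAddRecordCoversFormat() throws IOException {
// The location lookup must succeed, otherwise addRecord() returns before format().
Mockito.when(locationService.getLocation(Mockito.any())).thenReturn(new GeoPoint(1.0, 1.0));
// Stub the indexing call so the method can run to completion.
Mockito.when(client.index(Mockito.any(IndexRequest.class), Mockito.any())).thenReturn(indexResponse);
Mockito.when(indexResponse.getResult()).thenReturn(DocWriteResponse.Result.CREATED);
// Every field is null, so each if-branch inside format() is exercised.
elasticsearchService.addRecord(new Record());
}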
Yes! I finally found the solution.
@Test
public void testFormat() throws IOException, IllegalAccessException, IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException {
rec= new Record();
rec.setLocation(null);
rec.setStatus(null);
rec.setErrorCode(null);
rec.setErrorDescription(null);
rec.setLastDevStatTime(null);
rec.setLastTranTime(null);
rec.setTerminalId(null);
rec.setTermBrcode(null);
java.lang.reflect.Method method = ElasticsearchService.class.getDeclaredMethod("format", Record.class);
method.setAccessible(true);
Record output = (Record) method.invoke(es, rec);
assertEquals(output, rec);
}
Reflection is the key. Sharing it here so others running into the same issue can use it for assistance. Thanks.
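One small refinement: since format() returns the very Record instance it receives, assertEquals(output, rec) compares the object with itself and can never fail. Asserting the normalized fields makes the test meaningful:
assertEquals("", output.getLocation());
assertEquals("", output.getTermBrcode());
assertEquals("", output.getErrorDescription());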

How To Know All Asynchronous HTTP Calls are Completed

I am trying to figure out how to determine if all async HTTP GET requests I've made have completed, so that I can execute another method. For context, I have something similar to the code below:
public void init() throws IOException {
Map<String, CustomObject> mapOfObjects = new HashMap<String, CustomObject>();
ObjectMapper mapper = new ObjectMapper();
// some code to populate the map
mapOfObjects.forEach((k,v) -> {
HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallbackOne(k, mapper));
// HttpClient is just a wrapper class for your standard OkHTTP3 calls,
// e.g. client.newcall(request).enqueue(callback);
HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallbackTwo(k, mapper));
});
}
private Callback createCustomCallbackOne(String id, ObjectMapper mapper) {
return new Callback() {
@Override
public void onResponse(Call call, Response response) throws IOException {
if (response.isSuccessful()) {
try (ResponseBody body = response.body()) {
CustomObject co = mapOfObjects.get(id);
if (co != null) {
co.setFieldOne(mapper.readValue(body.byteStream(), FieldOne.class)));
}
} // implicitly closes the response body
}
}
@Override
public void onFailure(Call call, IOException e) {
// log error
}
};
}
// createCustomCallbackTwo does more or less the same thing,
// just sets a different field and then performs another
// async GET in order to set an additional field
So what would be the best/correct way to monitor all these asynchronous calls to ensure they have completed and I can go about performing another method on the Objects stored inside the map?
The simplest way would be to keep a count of how many requests are 'in flight'. Increment it for each request enqueued, decrement it at the end of the callback. When/if the count is 0, all requests are done. Using a semaphore or a counting lock, you can wait for it to become 0 without polling.
Note that the callbacks run on separate threads, so you must provide some kind of synchronization.
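When the total number of requests is known up front, a CountDownLatch is the least machinery. A sketch, reusing the question's HttpClient wrapper and callback factories (processMap is a hypothetical follow-up method):
// import java.util.concurrent.CountDownLatch;
// Two requests per key, so the latch starts at 2 * size.
final CountDownLatch inFlight = new CountDownLatch(mapOfObjects.size() * 2);
mapOfObjects.forEach((k, v) -> {
HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallbackOne(k, mapper));
HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallbackTwo(k, mapper));
});
// Inside each callback, call inFlight.countDown() in BOTH onResponse and
// onFailure (ideally in a finally block) so the latch always reaches zero.
inFlight.await(); // blocks until every callback has reported
processMap(mapOfObjects); // hypothetical follow-up step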
If you want to create a new callback for every request, you could use something like this:
public class WaitableCallback implements Callback {
private boolean done;
private IOException exception;
private final Object[] signal = new Object[0];
@Override
public void onResponse(Call call, Response response) throws IOException {
...
synchronized (this.signal) {
done = true;
signal.notifyAll();
}
}
@Override
public void onFailure(Call call, IOException e) {
synchronized (signal) {
done = true;
exception = e;
signal.notifyAll();
}
}
public void waitUntilDone() throws InterruptedException {
synchronized (this.signal) {
while (!this.done) {
this.signal.wait();
}
}
}
public boolean isDone() {
synchronized (this.signal) {
return this.done;
}
}
public IOException getException() {
synchronized (this.signal) {
return exception;
}
}
}
Create an instance for every request and put it into e.g. a List<WaitableCallback> pendingRequests.
Then you can just wait for all requests to be done:
for ( WaitableCallback cb : pendingRequests ) {
cb.waitUntilDone();
}
// At this point, all requests have been processed.
However, you probably should not create a new identical callback object for every request. Callback's methods get the Call passed as parameter so that the code can examine it to figure out which request it is processing; and in your case, it seems you don't even need that. So use a single Callback instance for the requests that should be handled identically.
If the function asyncGet calls your function createCustomCallbackOne, then it's easy.
For each key you are calling two pages, "https://fakeurl1.com/item/" and "https://fakeurl2.com/item/" (leaving out the + k).
So you need a map to track that, and just one callback function is enough.
Use a map whose keys identify each call:
static final Map<String, Integer> trackerOfAsyncCalls = new HashMap<>();
public void init() throws IOException {
Map<String, CustomObject> mapOfObjects = new HashMap<String, CustomObject>();
//need to keep a track of the keys in some object
ObjectMapper mapper = new ObjectMapper();
trackerOfAsyncCalls.clear();
// some code to populate the map
mapOfObjects.forEach((k,v) -> {
HttpClient.asyncGet("https://fakeurl1.com/item/" + k, createCustomCallback(k,1 , mapper));
// HttpClient is just a wrapper class for your standard OkHTTP3 calls,
// e.g. client.newcall(request).enqueue(callback);
HttpClient.asyncGet("https://fakeurl2.com/item/" + k, createCustomCallback(k, 2, mapper));
trackerOfAsyncCalls.put(k + "-2", null);
});
}
// the final parameters matter: they are captured by the anonymous class
private Callback createCustomCallback(final String idOuter, final int which, final ObjectMapper mapper) {
return new Callback() {
final String myId = idOuter + "-" + which;
{ trackerOfAsyncCalls.put(myId, null); } // instance initializer: register this call as started
@Override
public void onResponse(Call call, Response response) throws IOException {
if (response.isSuccessful()) {
trackerOfAsyncCalls.put(myId, 1);
// or put it outside the if, if you don't care about success vs. failure vs. partial results
Now set up a thread, or better a scheduler that is called every 5 seconds, to check all keys in mapOfObjects and trackerOfAsyncCalls and see whether every call has been started and has reported a final success, timeout, or error status. Note that the callbacks run on pool threads, so the tracker should be a ConcurrentHashMap (or access to it must be synchronized) rather than a plain HashMap.
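A sketch of that periodic check with a ScheduledExecutorService. It assumes trackerOfAsyncCalls is a ConcurrentHashMap<String, Integer> in which 1 marks a finished call (ConcurrentHashMap forbids null values, so "started" would be 0 rather than null):
// import java.util.concurrent.Executors;
// import java.util.concurrent.ScheduledExecutorService;
// import java.util.concurrent.TimeUnit;
ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
checker.scheduleAtFixedRate(() -> {
// done when every expected call (2 per key) has reported final status 1
boolean allDone = trackerOfAsyncCalls.size() == mapOfObjects.size() * 2
&& trackerOfAsyncCalls.values().stream().allMatch(v -> v == 1);
if (allDone) {
// all calls finished: run the follow-up work, then stop polling
checker.shutdown();
}
}, 5, 5, TimeUnit.SECONDS);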

Random Mockito Stubbing exception

I have the following code snippet from a unit test using Mockito which has happily been passing for months/years.
@Test
public void testAddRemoveTarFile() throws IOException, GeneralSecurityException {
//add a TAR file
TableRowCount downloadRowCount = new TableRowCount(tableDownloads);
//create the item that will appear in the table row
MyHSAItem item = createMyHSAItem();
Mockito.when(model.getConfigurations(Type.TAR)).thenReturn(Arrays.asList(item));
//get the table model
JTable downloadsTable = (JTable)UI.findComponent(getPanel(), "download");
final MyHSATableModel tableModel = (MyHSATableModel ) downloadsTable.getModel();
final MyHSAEvent event = Mockito.mock(MyHSAEvent.class);
Mockito.when(event.getType()).thenReturn(MyHSAEvent.Type.MODEL);
//Fire table event when adding observation
final File xmlFile = Mockito.mock(File.class);
Mockito.doAnswer(new Answer<Void>() {
@Override
public Void answer(InvocationOnMock invocation) throws Throwable {
tableModel.modelChanged(event);
return null;
}
}).when(model).addObservation(xmlFile);
//Fire table event when deleting observation
Mockito.doAnswer(new Answer<Void>() {
@Override
public Void answer(InvocationOnMock invocation) throws Throwable {
tableModel.modelChanged(event);
return null;
}
}).when(model).delete(item, true);
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
UI.findButtonWithText(getPanel(), "Add ...").doClick();
}
});
//select a file, table row should update
chooseFile(xmlFile);
ensureEquals(1, downloadRowCount, TIMEOUT);
// Remove download + cancel
UI.leftClick(tableDownloads);
clickRemove("Cancel");
ensureEquals(1, downloadRowCount, TIMEOUT);
// Remove download + OK
UI.leftClick(tableDownloads);
Mockito.when(model.getConfigurations(Type.TAR)).thenReturn(new ArrayList<MyHSAItem>());
clickRemove("OK");
ensureEquals(0, downloadRowCount, TIMEOUT);
}
Suddenly it failed just once with:
org.mockito.exceptions.misusing.UnfinishedStubbingException:
Unfinished stubbing detected here:
-> at herschel.ia.pal.pool.hsa.gui.MyHsaPreferencePanelTest.testAddRemoveTarFile(MyHsaPreferencePanelTest.java:257)
E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
1. missing thenReturn()
2. although stubbed methods may return mocks, you cannot inline mock creation (mock()) call inside a thenReturn method (see issue 53)
I understand this error, but not how it can happen randomly. The Mockito.doAnswer seems to be the problem. I am not inlining mocks, and the code looks fine and has always worked. What can it be?
Line 257 starts with
Mockito.doAnswer(new Answer<Void>() {
model is indeed a field initialised like so:
@Mock
private MyHSANotifiableModelImpl model;
public void setUpPanel() {
MockitoAnnotations.initMocks(this);
Both the answers return null and have the signature Void, so I am not sure what you mean exactly.
Thanks for any help.

Hadoop - How to extract a taskId from mapred.JobConf?

Is it possible to create a valid *mapreduce*.TaskAttemptID from *mapred*.JobConf?
The background
I need to write a FileInputFormatAdapter for an ExistingFileInputFormat. The problem is that the Adapter needs to extend mapred.InputFormat and the Existing format extends mapreduce.InputFormat.
I need to build a mapreduce.TaskAttemptContextImpl, so that I can instantiate the ExistingRecordReader. However, I can't create a valid TaskId...the taskId comes out as null.
So how can I get the taskId, jobId, etc. from mapred.JobConf?
In particular in the Adapter's getRecordReader I need to do something like:
public org.apache.hadoop.mapred.RecordReader<NullWritable, MyWritable> getRecordReader(
org.apache.hadoop.mapred.InputSplit split, JobConf job, Reporter reporter) throws IOException {
SplitAdapter splitAdapter = (SplitAdapter) split;
final Configuration conf = job;
/*************************************************/
//The problem is here, "mapred.task.id" is not in the conf
/*************************************************/
final TaskAttemptID taskId = TaskAttemptID.forName(conf.get("mapred.task.id"));
final TaskAttemptContext context = new TaskAttemptContextImpl(conf, taskId);
try {
return new RecordReaderAdapter(new ExistingRecordReader(
splitAdapter.getMapRedeuceSplit(),
context));
} catch (InterruptedException e) {
throw new RuntimeException("Failed to create record-reader.", e);
}
}
This code throws an exception:
Caused by: java.lang.NullPointerException
at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.<init>(TaskAttemptContextImpl.java:44)
at org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.<init>(TaskAttemptContextImpl.java:39)
'super(conf, taskId.getJobID());' is throwing the exception, most likely because taskId is null.
I found the answer by looking through HiveHBaseTableInputFormat. Since my solution targets Hive, this works perfectly:
TaskAttemptContext tac = ShimLoader.getHadoopShims().newTaskAttemptContext(
job.getConfiguration(), reporter);
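For non-Hive code, a possible fallback (a sketch, not tested against every Hadoop version) is to synthesize a TaskAttemptID when the conf carries no "mapred.task.id", so the context constructor at least gets a non-null id:
String attempt = conf.get("mapred.task.id");
TaskAttemptID taskId = (attempt != null)
? TaskAttemptID.forName(attempt)
: new TaskAttemptID(); // empty but non-null placeholder id
TaskAttemptContext context = new TaskAttemptContextImpl(conf, taskId);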

MapReduce HBase NullPointerException

I am a beginner at big data. First I want to try out how MapReduce works with HBase. The scenario is to sum the field uas in my HBase table using MapReduce, grouped by the date that serves as the row key. Here is my table:
Hbase::Table - test
ROW COLUMN+CELL
10102010#1 column=cf:nama, timestamp=1418267197429, value=jonru
10102010#1 column=cf:quiz, timestamp=1418267197429, value=\x00\x00\x00d
10102010#1 column=cf:uas, timestamp=1418267197429, value=\x00\x00\x00d
10102010#1 column=cf:uts, timestamp=1418267197429, value=\x00\x00\x00d
10102010#2 column=cf:nama, timestamp=1418267180874, value=jonru
10102010#2 column=cf:quiz, timestamp=1418267180874, value=\x00\x00\x00d
10102010#2 column=cf:uas, timestamp=1418267180874, value=\x00\x00\x00d
10102010#2 column=cf:uts, timestamp=1418267180874, value=\x00\x00\x00d
10102012#1 column=cf:nama, timestamp=1418267156542, value=jonru
10102012#1 column=cf:quiz, timestamp=1418267156542, value=\x00\x00\x00\x0A
10102012#1 column=cf:uas, timestamp=1418267156542, value=\x00\x00\x00\x0A
10102012#1 column=cf:uts, timestamp=1418267156542, value=\x00\x00\x00\x0A
10102012#2 column=cf:nama, timestamp=1418267166524, value=jonru
10102012#2 column=cf:quiz, timestamp=1418267166524, value=\x00\x00\x00\x0A
10102012#2 column=cf:uas, timestamp=1418267166524, value=\x00\x00\x00\x0A
10102012#2 column=cf:uts, timestamp=1418267166524, value=\x00\x00\x00\x0A
My code is as follows:
public class TestMapReduce {
public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "Test");
job.setJarByClass(TestMapReduce.TestMapper.class);
Scan scan = new Scan();
scan.setCaching(500);
scan.setCacheBlocks(false);
TableMapReduceUtil.initTableMapperJob(
"test",
scan,
TestMapReduce.TestMapper.class,
Text.class,
IntWritable.class,
job);
TableMapReduceUtil.initTableReducerJob(
"test",
TestReducer.class,
job);
job.waitForCompletion(true);
}
public static class TestMapper extends TableMapper<Text, IntWritable> {
@Override
protected void map(ImmutableBytesWritable rowKey, Result columns, Mapper.Context context) throws IOException, InterruptedException {
System.out.println("mulai mapping");
try {
//get row key
String inKey = new String(rowKey.get());
//get new key having date only
String onKey = new String(inKey.split("#")[0]);
//get value s_sent column
byte[] bUas = columns.getValue(Bytes.toBytes("cf"), Bytes.toBytes("uas"));
String sUas = new String(bUas);
Integer uas = new Integer(sUas);
//emit date and sent values
context.write(new Text(onKey), new IntWritable(uas));
} catch (RuntimeException ex) {
ex.printStackTrace();
}
}
}
public class TestReducer extends TableReducer {
public void reduce(Text key, Iterable values, Reducer.Context context) throws IOException, InterruptedException {
try {
int sum = 0;
for (Object test : values) {
System.out.println(test.toString());
sum += Integer.parseInt(test.toString());
}
Put inHbase = new Put(key.getBytes());
inHbase.add(Bytes.toBytes("cf"), Bytes.toBytes("sum"), Bytes.toBytes(sum));
context.write(null, inHbase);
} catch (Exception e) {
e.printStackTrace();
}
}
}
I got errors like these:
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1010)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:451)
at org.apache.hadoop.util.Shell.run(Shell.java:424)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:656)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:745)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:728)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
at TestMapReduce.main(TestMapReduce.java:97)
Java Result: 1
Help me please :)
Let's look at this part of your code:
byte[] bUas = columns.getValue(Bytes.toBytes("cf"), Bytes.toBytes("uas"));
String sUas = new String(bUas);
For the current key you are trying to get the value of column uas from column family cf. This is a non-relational DB, so it is easily possible that this key doesn't have a value for that column. In that case, the getValue method returns null, and the String constructor that accepts a byte[] can't handle null input, so it throws a NullPointerException. A quick fix looks like this:
byte[] bUas = columns.getValue(Bytes.toBytes("cf"), Bytes.toBytes("uas"));
String sUas = bUas == null ? "" : new String(bUas);
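One more note, hedged on how the values were written: the cells in the table dump (e.g. \x00\x00\x00d) look like 4-byte serialized ints, so decoding them as text would later fail with a NumberFormatException even after the null is handled. If they were written with Bytes.toBytes(int), read them back symmetrically:
byte[] bUas = columns.getValue(Bytes.toBytes("cf"), Bytes.toBytes("uas"));
int uas = (bUas == null) ? 0 : Bytes.toInt(bUas); // Bytes.toInt reverses Bytes.toBytes(int)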
