I am working with a Get object retrieved from a table in HBase. I want to dynamically retrieve all column values related to that Get, since I don't know the exact names of the column families.
val result1 = hTable.get(g)
if (!result1.isEmpty) {
//binaryEpisodes = result1.getValue(Bytes.toBytes("episodes"),Bytes.toBytes("episodes"))
//instead of above retrieve all values dynamically
}
Simple way:
Get the raw cells from the Result; each cell carries its own column family and qualifier, so you don't need to know them up front. You have to do something like the example below:
public static void printResult(Result result, Logger logger) {
logger.info("Row: ");
for (Cell cell : result.rawCells()) {
byte[] family = CellUtil.cloneFamily(cell);
byte[] column = CellUtil.cloneQualifier(cell);
byte[] value = CellUtil.cloneValue(cell);
logger.info("\t" + Bytes.toString(family) + ":" + Bytes.toString(column) + " = " + Bytes.toString(value));
}
}
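For example, wired into the flow from the question (a minimal Java sketch; hTable, g, and logger are assumed to exist as in the snippets above):
Result result1 = hTable.get(g);
if (!result1.isEmpty()) {
    // Dumps every family:qualifier = value pair without knowing the families up front
    printResult(result1, logger);
}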
HBase Admin way: the HBase client API exposes this through the HBaseAdmin class, as below...
The client would look like this:
package mytest;
import com.usertest.*;
import java.io.IOException;
import java.util.Date;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class ListHbaseTablesAndColumns {
public static void main(String[] args) {
try {
HbaseMetaData hbaseMetaData = new HbaseMetaData();
for (String hbaseTable : hbaseMetaData.getTableNames(".*yourtables.*")) {
for (String column : hbaseMetaData.getColumns(hbaseTable, 10000)) {
System.out.println(hbaseTable + "," + column);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Use the class below to get the HBase metadata:
package com.usertest;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.filter.PageFilter;
import java.io.IOException;
import java.util.*;
import java.util.regex.Pattern;
public class HbaseMetaData {
private HBaseAdmin hBaseAdmin;
private Configuration hBaseConfiguration;
public HbaseMetaData () throws IOException {
this.hBaseConfiguration = HBaseConfiguration.create();
this.hBaseAdmin = new HBaseAdmin(hBaseConfiguration);
}
/** get all Table names **/
public List<String> getTableNames(String regex) throws IOException {
Pattern pattern=Pattern.compile(regex);
List<String> tableList = new ArrayList<String>();
TableName[] tableNames=hBaseAdmin.listTableNames();
for (TableName tableName:tableNames){
if(pattern.matcher(tableName.toString()).find()){
tableList.add(tableName.toString());
}
}
return tableList;
}
/** Get all columns **/
public Set<String> getColumns(String hbaseTable) throws IOException {
return getColumns(hbaseTable, 10000);
}
/** get all columns from the table **/
public Set<String> getColumns(String hbaseTable, int limitScan) throws IOException {
Set<String> columnList = new TreeSet<String>();
HTable hTable=new HTable(hBaseConfiguration, hbaseTable);
Scan scan=new Scan();
scan.setFilter(new PageFilter(limitScan));
ResultScanner results = hTable.getScanner(scan);
for(Result result:results){
for(KeyValue keyValue:result.list()){
columnList.add(
new String(keyValue.getFamily()) + ":" +
new String(keyValue.getQualifier())
);
}
}
return columnList;
}
}
Here is the ExcelDataToDataTable class, which converts Excel data to a Cucumber DataTable:
package com.api.cucumber.transform;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.URI;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.Locale;
import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import com.automation.custom.utilities.ExcelLibrary;
import com.automation.custom.utilities.ExcelReader;
import cucumber.api.DataTable;
import cucumber.api.Transformer;
import cucumber.runtime.ParameterInfo;
import cucumber.runtime.table.TableConverter;
import cucumber.runtime.xstream.LocalizedXStreams;
import gherkin.formatter.model.Comment;
import gherkin.formatter.model.DataTableRow;
public class ExcelDataToDataTable extends Transformer<DataTable> {
@Override
public DataTable transform(String filePath) {
String file[] = filePath.split(";");
String path = getfilePath(file[0]);
ExcelReader reader = new ExcelReader.ExcelReaderBuilder()
.setFileLocation(path)
.setSheet(file[1])
.build();
List<List<String>> excelData = getExcelData(reader);
List<DataTableRow> dataTableRows = getDataTableRows(excelData);
DataTable table = getDataTable(dataTableRows);
return table;
}
public String getfilePath(String path) {
// Resolve the workbook name (e.g. "TestData.xlsx") against the working directory
return new File(path).getAbsolutePath();
}
private DataTable getDataTable(List<DataTableRow> dataTableRows) {
ParameterInfo parameterInfo = new ParameterInfo(null, null, null, null);
TableConverter tableConverter = new TableConverter(new LocalizedXStreams(Thread.currentThread().getContextClassLoader()).get(Locale.getDefault()), parameterInfo);
DataTable table = new DataTable(dataTableRows, tableConverter);
return table;
}
private List<DataTableRow> getDataTableRows(List<List<String>> excelData) {
List<DataTableRow> dataTableRows = new LinkedList<>();
int line = 1;
for(List<String> list : excelData){
Comment commnet = new Comment("", line);
DataTableRow tableRow = new DataTableRow(Arrays.asList(commnet), list, line++);
dataTableRows.add(tableRow);
}
return dataTableRows;
}
private List<List<String>> getExcelData(ExcelReader reader) {
List<List<String>> excelData = new LinkedList<>();
try {
excelData = reader.getSheetDataAt();
} catch (InvalidFormatException | IOException e) {
throw new RuntimeException(e.getMessage());
}
return excelData;
}
}
This is the StepDefinition file:
//Testdata.xlsx;0 sheet
#Then("^I validate list of properties in page zero with data in excel at \"([^\"]*)\"$")
public void i_validate_list_of_properties_in_page_zero_with_data_in_excel_at(#Transform(ExcelDataToDataTable.class) DataTable table) throws Throwable {
System.out.println(table.toString());
List<String> dataList=table.asList(String.class);
for(String str : dataList)
{
System.out.println(str);
}
}
//Testdata.xlsx;1
#Then("^I validate list of properties in page one with data in excel at \"([^\"]*)\"$")
public void i_validate_list_of_properties_in_page_one_with_data_in_excel_at(#Transform(ExcelDataToDataTable.class) DataTable table) throws Throwable {
System.out.println(table.toString());
List<String> dataList=table.asList(String.class);
for(String str : dataList)
{
System.out.println(str);
}
}
This is the feature file:
Feature: Validate values in application pages
Description: Validate dynamic values
@Test
Scenario: User navigates to application site and validates tags
Given As a user I need to validate tags in the application
When I navigate to zero page "application url"
Then I validate list of properties in page zero with data in excel at "TestData.xlsx;0"
@Test1
Scenario: User navigates to application site and validates tags
Then i select country
Then i validate list of properties in page one with data in exel at "TestData.xlsx;1"
This is the TransformData class:
package com.api.cucumber.transform;
import cucumber.api.Transformer;
public class TransformData extends Transformer<String>{
@Override
public String transform(String args) {
return args + " Transform";
}
}
This is the ExcelReader class, used to read the data from Excel:
package com.automation.custom.utilities;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.openqa.selenium.JavascriptExecutor;
import com.api.cucumber.transform.ExcelDataToDataTable;
public class ExcelReader {
public String fileName;
public String sheetName;
public int sheetIndex;
public XSSFWorkbook book;
public ExcelReader(ExcelReaderBuilder excelReaderBuilder) {
this.fileName = excelReaderBuilder.fileName;
this.sheetIndex = excelReaderBuilder.sheetIndex;
this.sheetName = excelReaderBuilder.sheetName;
}
public static class ExcelReaderBuilder{
public String fileName;
public String sheetName;
public int sheetIndex;
public ExcelReaderBuilder setFileLocation(String location) {
this.fileName = location;
return this;
}
public ExcelReaderBuilder setSheet(String sheetName) {
this.sheetName = sheetName;
return this;
}
public ExcelReaderBuilder setSheet(int sheetIndex) {
this.sheetIndex = sheetIndex;
return this;
}
public ExcelReader build() {
return new ExcelReader(this);
}
@Override
public String toString() {
return "fileName = "+this.fileName+" , sheetName= "+this.sheetName+", sheetIndex="+this.sheetIndex;
}
}
public XSSFWorkbook getWorkBook(String filePath) throws InvalidFormatException, IOException {
return new XSSFWorkbook(new File(filePath));
}
public XSSFSheet getWorkBookSheet(String fileName, String sheetName) throws InvalidFormatException, IOException {
this.book = getWorkBook(fileName);
return this.book.getSheet(sheetName);
}
public XSSFSheet getWorkBookSheet(String fileName, int sheetIndex) throws InvalidFormatException, IOException {
this.book = getWorkBook(fileName);
return this.book.getSheetAt(sheetIndex);
}
public List<List<String>> getSheetData() throws IOException{
XSSFSheet sheet;
List<List<String>> outerList = new LinkedList<>();
try {
sheet = getWorkBookSheet(fileName, sheetName);
outerList = getSheetData(sheet);
} catch (InvalidFormatException e) {
throw new RuntimeException(e.getMessage());
}
return outerList;
}
public List<List<String>> getSheetDataAt() throws InvalidFormatException, IOException {
XSSFSheet sheet;
List<List<String>> outerList = new LinkedList<>();
try {
sheet = getWorkBookSheet(fileName, sheetIndex);
outerList = getSheetData(sheet);
} catch (InvalidFormatException e) {
throw new RuntimeException(e.getMessage());
}
return outerList;
}
public List<List<String>> getSheetData(XSSFSheet sheet) {
List<List<String>> outerList = new LinkedList<>();
prepareOutterList(sheet, outerList);
return Collections.unmodifiableList(outerList);
}
public void prepareOutterList(XSSFSheet sheet, List<List<String>> outerList) {
for (int i = sheet.getFirstRowNum(); i <= sheet.getLastRowNum(); i++) {
List<String> innerList = new LinkedList<>();
XSSFRow xssfRow = sheet.getRow(i);
for (int j = xssfRow.getFirstCellNum(); j < xssfRow.getLastCellNum(); j++) {
prepareInnerList(innerList, xssfRow, j);
}
outerList.add(Collections.unmodifiableList(innerList));
}
}
public void prepareInnerList(List<String> innerList, XSSFRow xssfRow, int j) {
switch (xssfRow.getCell(j).getCellType()) {
case Cell.CELL_TYPE_BLANK:
innerList.add("");
break;
case Cell.CELL_TYPE_STRING:
innerList.add(xssfRow.getCell(j).getStringCellValue());
break;
case Cell.CELL_TYPE_NUMERIC:
innerList.add(xssfRow.getCell(j).getNumericCellValue() + "");
break;
case Cell.CELL_TYPE_BOOLEAN:
innerList.add(xssfRow.getCell(j).getBooleanCellValue() + "");
break;
default:
throw new IllegalArgumentException("Cannot read the column : " + j);
}
}
}
Note: I'm unable to print/fetch the data from the second sheet in Cucumber using Java; every time the data is rendered from sheet one only.
In the feature:
Then i validate list of properties in page one with data in exel at "TestData.xlsx;2"
But in the StepDefinition file:
#Then("^I validate list of properties in page zero with data in excel at \"([^\"]*)\"$")
Be careful: are you sure you are writing "excel" with a "c" in both lines? The feature file step says "exel", while the step definition regex expects "excel"; whichever spelling you choose, it has to be the same everywhere, or the step will never match.
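For illustration, the step text in the feature file and the regex in the @Then annotation must match character for character; a matching pair (hypothetical, assuming you standardize on the "excel" spelling) would look like this:
// Feature file step (Gherkin):
//   Then I validate list of properties in page one with data in excel at "TestData.xlsx;1"
// Step definition - note the identical spelling of "excel" in the regex:
@Then("^I validate list of properties in page one with data in excel at \"([^\"]*)\"$")
public void i_validate_list_of_properties_in_page_one(@Transform(ExcelDataToDataTable.class) DataTable table) throws Throwable {
    System.out.println(table.toString());
}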
I have an HBase table with a specific column value for all rows:
901877853087813636 column=metadata:collection-id, timestamp=1514594631532, value=1007
Now how do I change the value from 1007 to 1008 for all rows in the table? All the help I can find points to modifying a specific row. Please help me with this.
Scan the table with a SingleColumnValueFilter to get all the rows where the value is 1007, and then you can put the new value (1008) for all those rows using a bulk put. For example, to scan, build the filter like this:
SingleColumnValueFilter singleColumnValueFilter = new SingleColumnValueFilter("metadata".getBytes(),
"collection-id".getBytes(), CompareOp.EQUAL,
new BinaryComparator(Bytes.toBytes("1007"))); // the value is stored as a string, so compare string bytes rather than int bytes
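To complete the idea, here is a hedged sketch of the bulk put (assuming an open Table named table, and that the values are stored as strings): attach the filter to a Scan, collect the matching row keys, and write the new value back in one batch.
Scan scan = new Scan();
scan.setFilter(singleColumnValueFilter);
List<Put> puts = new ArrayList<Put>();
ResultScanner scanner = table.getScanner(scan);
try {
    for (Result r : scanner) {
        Put p = new Put(r.getRow()); // reuse the matched row key
        p.addColumn(Bytes.toBytes("metadata"), Bytes.toBytes("collection-id"), Bytes.toBytes("1008"));
        puts.add(p);
    }
} finally {
    scanner.close();
}
table.put(puts); // one batched round-trip for all updates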
HBase supports updating records based on the rowKey alone, so we have to fetch all the records that need to be updated and create our own batch of Puts, something like below. Usage:
UpdateAllRows [tableName] [columnFamily] [columnQualifier] [oldValue] [newValue]
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
public class UpdateAllRows {
private static Connection conn;
public static Connection getConnection() throws IOException {
if (conn == null) {
conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
}
return conn;
}
public static void main(String args[]) throws Exception {
getConnection();
updateTable(args[0], args[1], args[2], args[3], args[4]);
}
public static void updateTable(String tableName, String columnFamily, String columnQualifier, String oldValue, String newValue) throws Exception{
Table table = conn.getTable(TableName.valueOf(tableName));
ResultScanner rs = scan(tableName, columnFamily, columnQualifier);
byte[] cfBytes = Bytes.toBytes(columnFamily);
byte[] cqBytes = Bytes.toBytes(columnQualifier);
byte[] oldvalBytes = Bytes.toBytes(oldValue);
byte[] newvalBytes = Bytes.toBytes(newValue);
Result res = null;
List<Put> putList = new ArrayList<Put>();
try {
while ((res = rs.next()) != null) {
if (Arrays.equals(res.getValue(cfBytes, cqBytes), oldvalBytes)){
Put p = new Put(res.getRow());
p.addColumn(cfBytes, cqBytes, newvalBytes);
putList.add(p);
}
}
} finally {
rs.close();
}
table.put(putList);
table.close();
}
public static ResultScanner scan(String tableName, String columnFamily, String columnQualifier) throws IOException {
Table table = conn.getTable(TableName.valueOf(tableName));
return table.getScanner(new Scan().addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes(columnQualifier)));
}
}
I've been looking for an easy way to add IDs to HTML tags and spent a few hours jumping from one tool to another before I came up with this little test that solved my issue. Since my sprint backlog is almost empty, I have some time to share. Feel free to clean it up, and enjoy, those of you who are asked by QA to add IDs. Just change the tag and path, and run :)
I had some issues making a proper lambda due to lack of coffee today...
How do you replace only the first occurrence, in a single lambda? My files had many lines with the same tags.
private void replace(String path, String replace, String replaceWith) {
try (Stream<String> lines = Files.lines(Paths.get(path))) {
List<String> replaced = lines
.map(line -> line.replace(replace, replaceWith))
.collect(Collectors.toList());
Files.write(Paths.get(path), replaced);
} catch (IOException e) {
e.printStackTrace();
}
}
The above replaced all matching lines, since it kept finding the text to replace on subsequent lines as well. A proper matcher with a replacement that auto-increments would be better inside this method body, instead of preparing the replaceWith value before the call. If I ever need this again I'll add another final version.
Final version, to not waste more time (phase green):
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.runners.MockitoJUnitRunner;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;
@RunWith(MockitoJUnitRunner.class)
public class ReplaceInFilesWithAutoIncrement {
private int incremented = 100;
/**
* The tag you would like to add Id to
* */
private static final String tag = "label";
/**
* Regex to find the tag
* */
private static final Pattern TAG_REGEX = Pattern.compile("<" + tag + " (.+?)/>", Pattern.DOTALL);
private static final Pattern ID_REGEX = Pattern.compile("id=", Pattern.DOTALL);
@Test
public void replaceInFiles() throws IOException {
String nextId = " id=\"" + tag + "_%s\" ";
String path = "C:\\YourPath";
try (Stream<Path> paths = Files.walk(Paths.get(path))) {
paths.forEach(filePath -> {
if (Files.isRegularFile(filePath)) {
try {
List<String> foundInFiles = getTagValues(readFile(filePath.toAbsolutePath().toString()));
if (!foundInFiles.isEmpty()) {
for (String tagEl : foundInFiles) {
incremented++;
String id = String.format(nextId, incremented);
String replace = tagEl.split("\\r?\\n")[0];
replace = replace.replace("<" + tag, "<" + tag + id);
replace(filePath.toAbsolutePath().toString(), tagEl.split("\\r?\\n")[0], replace, false);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
});
}
System.out.println(String.format("Finished with (%s) changes", incremented - 100));
}
private String readFile(String path)
throws IOException {
byte[] encoded = Files.readAllBytes(Paths.get(path));
return new String(encoded, StandardCharsets.UTF_8);
}
private List<String> getTagValues(final String str) {
final List<String> tagValues = new ArrayList<>();
final Matcher matcher = TAG_REGEX.matcher(str);
while (matcher.find()) {
if (!ID_REGEX.matcher(matcher.group()).find())
tagValues.add(matcher.group());
}
return tagValues;
}
private void replace(String path, String replace, String replaceWith, boolean log) {
if (log) {
System.out.println("path = [" + path + "], replace = [" + replace + "], replaceWith = [" + replaceWith + "], log = [" + log + "]");
}
try (Stream<String> lines = Files.lines(Paths.get(path))) {
List<String> replaced = new ArrayList<>();
boolean alreadyReplaced = false;
for (String line : lines.collect(Collectors.toList())) {
if (line.contains(replace) && !alreadyReplaced) {
line = line.replace(replace, replaceWith);
alreadyReplaced = true;
}
replaced.add(line);
}
Files.write(Paths.get(path), replaced);
} catch (IOException e) {
e.printStackTrace();
}
}
}
Try it with Jsoup.
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
public class JsoupTest {
public static void main(String argv[]) {
String html = "<html><head><title>Try it with Jsoup</title></head>"
+ "<body><p>P first</p><p>P second</p><p>P third</p></body></html>";
Document doc = Jsoup.parse(html);
Elements ps = doc.select("p"); // The tag you would like to add Id to
int i = 12;
for(Element p : ps){
p.attr("id",String.valueOf(i));
i++;
}
System.out.println(doc.toString());
}
}
I am using the Java Spark API to write a test application. I am using a class that doesn't extend the Serializable interface, so to make the application work I am using the Kryo serializer to serialize the class. But the problem I observed while debugging is that during deserialization the returned class object becomes null, which in turn throws a NullPointerException. It seems to be a closure problem where things are going wrong, but I'm not sure. Since I am new to this kind of serialization, I don't know where to start digging.
Here is the code I am testing:
package org.apache.spark.examples;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
/**
* Spark application to test the Serialization issue in spark
*/
public class Test {
static PrintWriter outputFileWriter;
static FileWriter file;
static JavaSparkContext ssc;
public static void main(String[] args) {
String inputFile = "/home/incubator-spark/examples/src/main/scala/org/apache/spark/examples/InputFile.txt";
String master = "local";
String jobName = "TestSerialization";
String sparkHome = "/home/test/Spark_Installation/spark-0.7.0";
String sparkJar = "/home/test/TestSerializationIssesInSpark/TestSparkSerIssueApp/target/TestSparkSerIssueApp-0.0.1-SNAPSHOT.jar";
SparkConf conf = new SparkConf();
conf.set("spark.closure.serializer","org.apache.spark.serializer.KryoSerializer");
conf.set("spark.kryo.registrator", "org.apache.spark.examples.MyRegistrator");
// create the Spark context
if(master.equals("local")){
ssc = new JavaSparkContext("local", jobName,conf);
//ssc = new JavaSparkContext("local", jobName);
} else {
ssc = new JavaSparkContext(master, jobName, sparkHome, sparkJar);
}
JavaRDD<String> testData = ssc.textFile(inputFile).cache();
final NotSerializableJavaClass notSerializableTestObject= new NotSerializableJavaClass("Hi ");
@SuppressWarnings({ "serial", "unchecked"})
JavaRDD<String> classificationResults = testData.map(
new Function<String, String>() {
@Override
public String call(String inputRecord) throws Exception {
if(!inputRecord.isEmpty()) {
//String[] pointDimensions = inputRecord.split(",");
String result = "";
try {
FileWriter file = new FileWriter("/home/test/TestSerializationIssesInSpark/results/test_result_" + (int) (Math.random() * 100));
PrintWriter outputFile = new PrintWriter(file);
InetAddress ip;
ip = InetAddress.getLocalHost();
outputFile.println("IP of the server: " + ip);
result = notSerializableTestObject.testMethod(inputRecord);
outputFile.println("Result: " + result);
outputFile.flush();
outputFile.close();
file.close();
} catch (UnknownHostException e) {
e.printStackTrace();
}
catch (IOException e1) {
e1.printStackTrace();
}
return result;
} else {
System.out.println("End of elements in the stream.");
String result = "End of elements in the input data";
return result;
}
}
}).cache();
long processedRecords = classificationResults.count();
ssc.stop();
System.out.println("sssssssssss"+processedRecords);
}
}
Here is the KryoRegistrator class
package org.apache.spark.examples;
import org.apache.spark.serializer.KryoRegistrator;
import com.esotericsoftware.kryo.Kryo;
public class MyRegistrator implements KryoRegistrator {
public void registerClasses(Kryo kryo) {
kryo.register(NotSerializableJavaClass.class);
}
}
Here is the class I am serializing:
package org.apache.spark.examples;
public class NotSerializableJavaClass {
public String testVariable;
public NotSerializableJavaClass(String testVariable) {
super();
this.testVariable = testVariable;
}
public String testMethod(String vartoAppend){
return this.testVariable + vartoAppend;
}
}
This is because spark.closure.serializer only supports the Java serializer; see http://spark.apache.org/docs/latest/configuration.html for the notes on spark.closure.serializer.
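A minimal sketch of the two usual workarounds, reusing the classes from the question (an assumption about your constraints, not the only possible fix): either make the captured class Java-serializable, since the closure serializer cannot be switched to Kryo, or avoid capturing it at all by constructing it inside the function.
// Workaround 1: make the class Java-serializable so the closure can capture it.
public class NotSerializableJavaClass implements java.io.Serializable {
    public String testVariable;
    public NotSerializableJavaClass(String testVariable) {
        this.testVariable = testVariable;
    }
    public String testMethod(String vartoAppend) {
        return this.testVariable + vartoAppend;
    }
}

// Workaround 2: construct the object inside the closure so nothing
// non-serializable is captured from the enclosing scope.
JavaRDD<String> classificationResults = testData.map(
    new Function<String, String>() {
        @Override
        public String call(String inputRecord) throws Exception {
            NotSerializableJavaClass local = new NotSerializableJavaClass("Hi ");
            return local.testMethod(inputRecord);
        }
    });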
I'm trying to use this code, which is an auto-complete field that uses the database.
import java.util.Vector;
import net.rim.device.api.collection.util.*;
import net.rim.device.api.database.Cursor;
import net.rim.device.api.database.Database;
import net.rim.device.api.database.DatabaseFactory;
import net.rim.device.api.database.Row;
import net.rim.device.api.database.Statement;
import net.rim.device.api.ui.*;
import net.rim.device.api.ui.container.*;
import net.rim.device.api.ui.component.*;
public class AutoCompleteFieldDemo extends UiApplication
{
public static void main(String[] args)
{
AutoCompleteFieldDemo app = new AutoCompleteFieldDemo();
app.enterEventDispatcher();
}
public AutoCompleteFieldDemo()
{
pushScreen(new AutoCompleteFieldDemoScreen());
}
public static String[] getDataFromDB()
{
Vector names = new Vector();
try
{
Database db = DatabaseFactory.openOrCreate("database1.db");
Statement statement1 = db.createStatement("SELECT name FROM Directory_Items");
statement1.prepare();
statement1.execute();
Cursor c = statement1.getCursor();
Row r;
while(c.next())
{
r = c.getRow();
names.addElement(r.getString(0));
}
statement1.close();
db.close();
}
catch( Exception e )
{
System.out.println( e.getMessage() );
e.printStackTrace();
}
String [] returnValues = new String[names.size()];
for (int i = 0; i < names.size(); i++) {
returnValues[i] = (String) names.elementAt(i);
}
return returnValues;
}
static final class AutoCompleteFieldDemoScreen extends MainScreen
{
AutoCompleteFieldDemoScreen()
{
BasicFilteredList filterLst = new BasicFilteredList();
filterLst.addDataSet(1,getDataFromDB()
,"Names",BasicFilteredList.COMPARISON_IGNORE_CASE);
AutoCompleteField autoFld = new AutoCompleteField(filterLst);
add(autoFld);
}
}
}
I found it here.
The problem I'm facing is that I can't find the database. I created the database on the SD card and put its name in openOrCreate("myDataBase.db"). But when I debug, it jumps to the exception. Any idea why I'm having this problem?
Thank you so much!
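One thing worth checking, as a hedged sketch (assuming the BlackBerry 5.0+ SQLite API): DatabaseFactory.openOrCreate with a bare file name does not look on the SD card, so if the file lives there you need to pass a full file URI instead:
// Hypothetical location - adjust the folder and file name to wherever
// you actually created the database (requires net.rim.device.api.io.URI).
URI myURI = URI.create("file:///SDCard/Databases/myDataBase.db");
Database db = DatabaseFactory.openOrCreate(myURI);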