I'm trying my files test1.txt and test1.model with the code below; my classes are {Business,Friends,Spam}.
When classify() runs, it does not predict any class. I'm new to Weka, so I suspected the type of the class attribute was wrong, but trying other types produces the same output and classify() still does not run correctly. Can anyone please tell me what the problem is?
The output looks like this:
run:
===== Loaded text data: D:\test\test1.txt =====
hello this is a test
===== Loaded model: D:\test\test1.model =====
===== Instance created with reference dataset =====
@relation 'Test relation'
@attribute class {Business,Friends,Spam}
@attribute text string
@data
?,' hello this is a test '
Problem found when classifying the text
BUILD SUCCESSFUL (total time: 2 seconds)
with this code:
package sentimentclassifier;
import weka.core.*;
import weka.core.FastVector;
import weka.classifiers.meta.FilteredClassifier;
import java.util.List;
import java.util.ArrayList;
import java.io.*;
public class SentimentClassifier {
/**
 * String that stores the text to classify.
 */
private String text;
/**
 * Object that stores the instance.
 */
Instances instances;
/**
 * Object that stores the classifier.
 */
FilteredClassifier classifier;
/**
* This method loads the text to be classified.
* @param fileName The name of the file that stores the text.
*/
public void load(String fileName) {
try {
BufferedReader reader = new BufferedReader(new FileReader(fileName));
String line;
text = "";
while ((line = reader.readLine()) != null) {
text = text + " " + line;
}
System.out.println("===== Loaded text data: " + fileName + " =====");
reader.close();
System.out.println(text);
}
catch (IOException e) {
System.out.println("Problem found when reading: " + fileName);
}
}
/**
* This method loads the model to be used as classifier.
* @param fileName The name of the file that stores the model.
*/
public void loadModel(String fileName) {
try {
ObjectInputStream in = new ObjectInputStream(new FileInputStream(fileName));
Object tmp = in.readObject();
classifier = (FilteredClassifier) tmp;
in.close();
System.out.println("===== Loaded model: " + fileName + " =====");
}
catch (Exception e) {
// Given the cast, a ClassNotFoundException must be caught along with the IOException
System.out.println("Problem found when reading: " + fileName);
}
}
/**
* This method creates the instance to be classified, from the text that has been read.
*/
public void makeInstance() {
FastVector fvNominalVal = new FastVector(3);
fvNominalVal.addElement("Business");
fvNominalVal.addElement("Friends");
fvNominalVal.addElement("Spam");
Attribute attribute1 = new Attribute("class", fvNominalVal);
Attribute attribute2 = new Attribute("text",(FastVector) null);
//==========================================
// Create list of instances with one element
FastVector fvWekaAttributes = new FastVector(2);
fvWekaAttributes.addElement(attribute1);
fvWekaAttributes.addElement(attribute2);
instances = new Instances("Test relation", fvWekaAttributes, 1);
// Set class index
instances.setClassIndex(0);
// Create and add the instance
DenseInstance instance = new DenseInstance(2);
instance.setValue(attribute2, text);
// Another way to do it:
// instance.setValue((Attribute)fvWekaAttributes.elementAt(1), text);
instances.add(instance);
System.out.println("===== Instance created with reference dataset =====");
System.out.println(instances);
}
/**
* This method performs the classification of the instance.
* Output is done at the command-line.
*/
public void classify() {
try {
double pred = classifier.classifyInstance(instances.instance(2));
System.out.println("===== Classified instance =====");
System.out.println("Class predicted: " + instances.classAttribute().value((int) pred));
}
catch (Exception e) {
System.out.println("Problem found when classifying the text");
}
}
public static void main(String[] args) {
SentimentClassifier classifier;
classifier = new SentimentClassifier();
classifier.load("D:\\test\\test1.txt");
classifier.loadModel("D:\\test\\test1.model");
classifier.makeInstance();
classifier.classify();
}
}
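A likely culprit, for what it's worth: makeInstance() adds exactly one instance to the dataset, so instances.instance(2) in classify() throws an IndexOutOfBoundsException, which the catch block hides behind the generic message. A hedged sketch of a corrected classify(), assuming everything else stays the same:
public void classify() {
    try {
        // the single instance added in makeInstance() sits at index 0
        double pred = classifier.classifyInstance(instances.instance(0));
        System.out.println("===== Classified instance =====");
        System.out.println("Class predicted: " + instances.classAttribute().value((int) pred));
    } catch (Exception e) {
        // print the real cause instead of a generic message
        e.printStackTrace();
    }
}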
I am trying to use a model that is built in Java. Here is the model's main class:
package madamira;
import edu.columbia.ccls.madamira.MADAMIRAWrapper;
import edu.columbia.ccls.madamira.configuration.MadamiraInput;
import edu.columbia.ccls.madamira.configuration.MadamiraOutput;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;
import java.io.File;
import java.util.concurrent.ExecutionException;
/**
* An example class that shows how MADAMIRA can be called through its API.
*
*/
public class madamira {
// MADAMIRA namespace as defined by its XML schema
private static final String MADAMIRA_NS = "edu.columbia.ccls.madamira.configuration";
private static final String INPUT_FILE = "~/Users/user/Desktop/comp_ling/ACL_arabic_parser/udpipe-master/src/MADAMIRA-release-20170403-2.1/padt_sents.xml";
private static final String OUTPUT_FILE = "~/Users/user/Desktop/comp_ling/ACL_arabic_parser/udpipe-master/src/MADAMIRA-release-20170403-2.1/sampleOutputFile.xml";
public static void main(String [] args) {
final MADAMIRAWrapper wrapper = new MADAMIRAWrapper();
JAXBContext jc = null;
try {
jc = JAXBContext.newInstance(MADAMIRA_NS);
Unmarshaller unmarshaller = jc.createUnmarshaller();
// The structure of the MadamiraInput object is exactly similar to the
// madamira_input element in the XML
final MadamiraInput input = (MadamiraInput)unmarshaller.unmarshal(
new File( INPUT_FILE ) );
{
int numSents = input.getInDoc().getInSeg().size();
String outputAnalysis = input.getMadamiraConfiguration().
getOverallVars().getOutputAnalyses();
String outputEncoding = input.getMadamiraConfiguration().
getOverallVars().getOutputEncoding();
System.out.println("processing " + numSents +
" sentences for analysis type = " + outputAnalysis +
" and output encoding = " + outputEncoding);
}
// The structure of the MadamiraOutput object is exactly similar to the
// madamira_output element in the XML
final MadamiraOutput output = wrapper.processString(input);
{
int numSents = output.getOutDoc().getOutSeg().size();
System.out.println("processed output contains "+numSents+" sentences...");
}
jc.createMarshaller().marshal(output, new File(OUTPUT_FILE));
} catch (JAXBException ex) {
System.out.println("Error marshalling or unmarshalling data: "
+ ex.getMessage());
} catch (InterruptedException ex) {
System.out.println("MADAMIRA thread interrupted: "
+ex.getMessage());
} catch (ExecutionException ex) {
System.out.println("Unable to retrieve result of task. " +
"MADAMIRA task may have been aborted: "+ex.getCause());
}
wrapper.shutdown();
}
}
I am getting this error:
Unable to read form brown file resources/paths in resources dir. resources/paths
Error marshalling or unmarshalling data: null
I am not sure what the problem is. Could you help? I think the problem has to do with some missing or clashing dependencies.
By the way, I am using Java "11.0.2" 2019-01-15 LTS and running the model in Eclipse.
Thank you.
My task is classifying the Iris dataset with libsvm in Weka. First, I ran it in the Weka Explorer and got my ideal result.
Then I coded it in Eclipse, hoping to get the same result the Weka Explorer shows. Here is my code (you can ignore everything except the main function).
package weka;
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Vector;
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.core.Instances;
import weka.core.OptionHandler;
import weka.core.Utils;
import weka.filters.Filter;
import weka.classifiers.functions.LibSVM;
public class ClassifyIriswithLibsvm {
/** the classifier used internally */
protected Classifier m_Classifier = null;
/** the filter to use */
protected Filter m_Filter = null;
/** the training file */
protected String m_TrainingFile = null;
/** the training instances */
protected Instances m_Training = null;
/** for evaluating the classifier */
protected Evaluation m_Evaluation = null;
/**
* initializes the demo
*/
public ClassifyIriswithLibsvm () {
super();
}
/**
* sets the classifier to use
*
* @param name the classname of the classifier
* @param options the options for the classifier
*/
public void setClassifier(String name, String[] options) throws Exception {
m_Classifier = AbstractClassifier.forName(name, options);
}
/**
* sets the filter to use
*
* @param name the classname of the filter
* @param options the options for the filter
*/
public void setFilter(String name, String[] options) throws Exception {
m_Filter = (Filter) Class.forName(name).newInstance();
if (m_Filter instanceof OptionHandler) {
((OptionHandler) m_Filter).setOptions(options);
}
}
/**
* sets the file to use for training
*/
public void setTraining(String name) throws Exception {
m_TrainingFile = name;
m_Training = new Instances(new BufferedReader(
new FileReader(m_TrainingFile)));
m_Training.setClassIndex(m_Training.numAttributes() - 1);
}
/**
* runs 10fold CV over the training file
*/
public void execute() throws Exception {
// run filter
m_Filter.setInputFormat(m_Training);
Instances filtered = Filter.useFilter(m_Training, m_Filter);
// train classifier on complete file for tree
m_Classifier.buildClassifier(filtered);
// 10fold CV with seed=1
m_Evaluation = new Evaluation(filtered);
m_Evaluation.crossValidateModel(m_Classifier, filtered, 10,
m_Training.getRandomNumberGenerator(1));
}
/**
* outputs some data about the classifier
*/
@Override
public String toString() {
StringBuffer result;
result = new StringBuffer();
result.append("Weka - Demo\n===========\n\n");
result.append("Classifier...: " + Utils.toCommandLine(m_Classifier) + "\n");
if (m_Filter instanceof OptionHandler) {
result.append("Filter.......: " + m_Filter.getClass().getName() + " "
+ Utils.joinOptions(((OptionHandler) m_Filter).getOptions()) + "\n");
} else {
result.append("Filter.......: " + m_Filter.getClass().getName() + "\n");
}
result.append("Training file: " + m_TrainingFile + "\n");
result.append("\n");
result.append(m_Classifier.toString() + "\n");
result.append(m_Evaluation.toSummaryString() + "\n");
try {
result.append(m_Evaluation.toMatrixString() + "\n");
} catch (Exception e) {
e.printStackTrace();
}
try {
result.append(m_Evaluation.toClassDetailsString() + "\n");
} catch (Exception e) {
e.printStackTrace();
}
return result.toString();
}
public static void main(String[] args) throws Exception {
String classifier = "weka.classifiers.functions.LibSVM" ;
String options = ( "-S 0 -K 0 -D 3 -G 0.0 -R 0.0 -N 0.5 -M 40.0 -C 1.0 -E 0.001 -P 0.1" );
String[] classifierOptions = options.split( " " );
String filter = "weka.filters.unsupervised.instance.Randomize";
String dataset = "D:\\SoftWare\\weka3.8.2\\Weka-3-8\\data\\iris.arff";
// run
ClassifyIriswithLibsvm demo = new ClassifyIriswithLibsvm();
demo.setClassifier(classifier,
classifierOptions);
demo.setFilter(filter, new String[0]);
demo.setTraining(dataset);
demo.execute();
System.out.println(demo.toString());
}
}
But this error prints out:
Exception in thread "main" java.lang.NoClassDefFoundError: libsvm/svm_print_interface
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at weka.core.WekaPackageClassLoaderManager.forName(WekaPackageClassLoaderManager.java:198)
at weka.core.WekaPackageClassLoaderManager.forName(WekaPackageClassLoaderManager.java:178)
at weka.core.WekaPackageClassLoaderManager.objectForName(WekaPackageClassLoaderManager.java:162)
at weka.Run.findSchemeMatch(Run.java:90)
at weka.core.ResourceUtils.forName(ResourceUtils.java:76)
at weka.core.Utils.forName(Utils.java:1045)
at weka.classifiers.AbstractClassifier.forName(AbstractClassifier.java:91)
at weka.ClassifyIriswithLibsvm.setClassifier(ClassifyIriswithLibsvm.java:46)
at weka.ClassifyIriswithLibsvm.main(ClassifyIriswithLibsvm.java:221)
Caused by: java.lang.ClassNotFoundException: libsvm.svm_print_interface
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 11 more
I cannot figure out why it's wrong. I am a newbie with libsvm and Weka. How can I run the classifier program using libsvm in Weka successfully?
You need to ensure that libsvm.jar is available on your classpath (in Eclipse).
You can check this answer on Stack Overflow for all the necessary dependencies, which are libsvm.jar, wlsvm.jar and (of course) weka.jar.
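If you want to verify the jar is actually visible before running the full program, a minimal sketch (the class name is taken from the stack trace above):
public class LibsvmCheck {
    public static void main(String[] args) {
        try {
            // the class the NoClassDefFoundError reports as missing
            Class.forName("libsvm.svm_print_interface");
            System.out.println("libsvm.jar is on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("libsvm.jar is missing - add it to the Eclipse build path");
        }
    }
}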
I want to load a gzipped rdf file into a org.eclipse.rdf4j.repository.Repository. During the upload, status messages must be logged to the console. The size of my rdf file is ~1GB of uncompressed or ~50MB of compressed data.
Actually, an RDF4J repository will already process a compressed (zip/gzip) file correctly, so you can simply do this:
RepositoryConnection conn = ... ; // your store connection
conn.add(new File("file.zip"), null, RDFFormat.NTRIPLES);
If you want to include reporting, a different (somewhat simpler) approach is to use an org.eclipse.rdf4j.repository.util.RDFLoader class in combination with an RDFInserter:
RepositoryConnection conn = ... ; // your store connection
RDFInserter inserter = new RDFInserter(conn);
RDFLoader loader = new RDFLoader(conn.getParserConfig(), conn.getValueFactory());
loader.load(new File("file.zip"), null, RDFFormat.NTRIPLES, inserter);
The RDFLoader takes care of properly uncompressing the (zip or gzip) file.
To get intermediate reporting you can wrap your RDFInserter in your own custom AbstractRDFHandler that does the counting and reporting (before passing on to the wrapper inserter).
Variant 1
The following sample will load an InputStream with gzipped data into an in-memory rdf repository. The zipped format is supported directly by rdf4j.
Every 100000th statement will be printed to stdout using the RepositoryConnectionListenerAdapter.
import java.io.InputStream;
import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.Value;
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.event.base.NotifyingRepositoryConnectionWrapper;
import org.eclipse.rdf4j.repository.event.base.RepositoryConnectionListenerAdapter;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.sail.memory.MemoryStore;
public class MyTripleStore {
Repository repo;
/**
* Creates an inmemory triple store
*
*/
public MyTripleStore() {
repo = new SailRepository(new MemoryStore());
repo.initialize();
}
/**
* @param in gzip compressed data on an inputstream
* @param format the format of the streamed data
*/
public void loadZippedFile(InputStream in, RDFFormat format) {
System.out.println("Load zip file of format " + format);
try (NotifyingRepositoryConnectionWrapper con =
new NotifyingRepositoryConnectionWrapper(repo, repo.getConnection());) {
RepositoryConnectionListenerAdapter myListener =
new RepositoryConnectionListenerAdapter() {
private long count = 0;
@Override
public void add(RepositoryConnection arg0, Resource arg1, IRI arg2,
Value arg3, Resource... arg4) {
count++;
if (count % 100000 == 0)
System.out.println("Add statement number " + count + "\n"
+ arg1+ " " + arg2 + " " + arg3);
}
};
con.addRepositoryConnectionListener(myListener);
con.add(in, "", format);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
Variant 2
This variant implements an AbstractRDFHandler to provide the reporting.
import java.io.InputStream;
import org.eclipse.rdf4j.model.Statement;
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.repository.util.RDFInserter;
import org.eclipse.rdf4j.repository.util.RDFLoader;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.helpers.AbstractRDFHandler;
import org.eclipse.rdf4j.sail.memory.MemoryStore;
public class MyTripleStore {
Repository repo;
/**
* Creates an inmemory triple store
*
*/
public MyTripleStore() {
repo = new SailRepository(new MemoryStore());
repo.initialize();
}
/**
* @param in gzip compressed data on an inputstream
* @param format the format of the streamed data
*/
public void loadZippedFile1(InputStream in, RDFFormat format) {
try (RepositoryConnection con = repo.getConnection()) {
MyRdfInserter inserter = new MyRdfInserter(con);
RDFLoader loader =
new RDFLoader(con.getParserConfig(), con.getValueFactory());
loader.load(in, "", RDFFormat.NTRIPLES, inserter);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
class MyRdfInserter extends AbstractRDFHandler {
RDFInserter rdfInserter;
int count = 0;
public MyRdfInserter(RepositoryConnection con) {
rdfInserter = new RDFInserter(con);
}
@Override
public void handleStatement(Statement st) {
count++;
if (count % 100000 == 0)
System.out.println("Add statement number " + count + "\n"
+ st.getSubject().stringValue() + " "
+ st.getPredicate().stringValue() + " "
+ st.getObject().stringValue());
rdfInserter.handleStatement(st);
}
}
}
Here is how to call the code:
MyTripleStore ts = new MyTripleStore();
ts.loadZippedFile(new FileInputStream("your-ntriples-zipped.gz"),
RDFFormat.NTRIPLES);
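The same pattern applies to variant 2; assuming the same file, the call would be:
ts.loadZippedFile1(new FileInputStream("your-ntriples-zipped.gz"),
        RDFFormat.NTRIPLES);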
I've been pondering this for a fair amount of time now. I'm trying to download the data from Yahoo!'s Stock API. When you use the API, it gives you a .csv file. I've been looking at opencsv, which seems perfect, except I want to avoid downloading and saving the file, if at all possible.
OpenCSV, according to the examples, can only read from a FileReader. According to Oracle's docs on FileReader, the file needs to be local.
Is it possible to read from a remote file using OpenCSV without downloading?
CSVReader takes a Reader argument according to the documentation, so it isn't limited to a FileReader for the parameter.
To use a CSVReader without saving the file first, you could use a BufferedReader around a stream loading the data:
URL stockURL = new URL("http://example.com/stock.csv");
BufferedReader in = new BufferedReader(new InputStreamReader(stockURL.openStream()));
CSVReader reader = new CSVReader(in);
// use reader
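From there you iterate over the rows just as with a local file. A small sketch, assuming an opencsv version whose readNext() declares only IOException:
String[] row;
while ((row = reader.readNext()) != null) {
    System.out.println(row[0]); // first column of each record
}
reader.close();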
Here is an implementation that uses OpenCSV to read a CSV file and save its contents to a database.
import com.opencsv.CSVParser;
import com.opencsv.CSVParserBuilder;
import com.opencsv.CSVReader;
import com.opencsv.CSVReaderBuilder;
import com.opencsv.bean.CsvBindByPosition;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import javax.persistence.Column;
import java.io.*;
import java.lang.annotation.Annotation;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
@Service
@Slf4j
public class FileUploadService {
@Autowired
private InsertCSVContentToDB csvContentToDB;
/**
 * @param csvFileName location of the physical file.
 * @param type Employee.class
 * @param delimiter can be , | # etc.
 * @param obj new Employee();
 *
 * import com.opencsv.bean.CsvBindByPosition;
 * import lombok.Data;
 *
 * import javax.persistence.Column;
 *
 * @Data
 * public class Employee {
 *
 *     @CsvBindByPosition(position = 0, required = true)
 *     @Column(name = "EMPLOYEE_NAME")
 *     private String employeeName;
 *
 *     @CsvBindByPosition(position = 1)
 *     @Column(name = "Employee_ADDRESS_1")
 *     private String employeeAddress1;
 * }
 *
 * @param sqlQuery query to save the data to the DB
 * @param noOfLineSkip make it 0 (zero) so that no line is skipped.
 * @param auditId apart from the regular CSV columns, an extra column for tracking, like a file id or audit id
 * @return
 */
public <T> void readCSVContentInArray(String csvFileName, Class<? extends T> type, char delimiter, Object obj,
String sqlQuery, int noOfLineSkip, Long auditId) {
List<T> lstCsvContent = new ArrayList<>();
Reader reader = null;
CSVReader csv = null;
try {
reader = new BufferedReader(new InputStreamReader(new FileInputStream(csvFileName), "utf-8"));
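// note: this readLine() consumes the first line of the file before CSV parsing starts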
log.info("Buffer Reader : " + ((BufferedReader) reader).readLine().isEmpty());
CSVParser parser = new CSVParserBuilder().withSeparator(delimiter).withIgnoreQuotations(true).build();
csv = new CSVReaderBuilder(reader).withSkipLines(noOfLineSkip).withCSVParser(parser).build();
String[] nextLine;
int size = 0;
int chunkSize = 10000;
Class params[] = { Long.class };
Object paramsObj[] = { auditId };
long rowNumber = 0;
Field field[] = type.getDeclaredFields();
while ((nextLine = csv.readNext()) != null) {
rowNumber++;
try {
obj = type.newInstance();
for (Field f : field) {
if(!f.isSynthetic()){
f.setAccessible(true);
Annotation ann[] = f.getDeclaredAnnotations();
CsvBindByPosition csv1 = (CsvBindByPosition) ann[0];
Column c = (Column)ann[1];
try {
if (csv1.position() < nextLine.length) {
if (csv1.required() && (nextLine[csv1.position()] == null
|| nextLine[csv1.position()].trim().isEmpty())) {
String message = "Mandatory field is missing in row: " + rowNumber;
log.info("null value in " + rowNumber + ", " + csv1.position());
System.out.println(message);
}
if (f.getType().equals(String.class)) {
f.set(obj, nextLine[csv1.position()]);
}
if (f.getType().equals(Boolean.class)) {
f.set(obj, nextLine[csv1.position()]);
}
if (f.getType().equals(Integer.class)) {
f.set(obj, Integer.parseInt(nextLine[csv1.position()]));
}
if (f.getType().equals(Long.class)) {
f.set(obj, Long.parseLong(nextLine[csv1.position()]));
}
if (f.getType().equals(Double.class) && null!=nextLine[csv1.position()] && !nextLine[csv1.position()].trim().isEmpty() ) {
f.set(obj, Double.parseDouble(nextLine[csv1.position()]));
}
if (f.getType().equals(Double.class) && ((nextLine[csv1.position()] == null) || nextLine[csv1.position()].isEmpty())) {
f.set(obj, new Double("0.0"));
}
if (f.getType().equals(Date.class)) {
f.set(obj, nextLine[csv1.position()]);
}
}
} catch (Exception fttEx) {
log.info("Exception when parsing the file: " + fttEx.getMessage());
System.out.println(fttEx.getMessage());
}
}
}
lstCsvContent.add((T) obj);
if (lstCsvContent.size() > chunkSize) {
size = size + lstCsvContent.size();
//write code to save to data base of file system in chunk.
lstCsvContent = null;
lstCsvContent = new ArrayList<>();
}
} catch (Exception ex) {
log.info("Exception: " + ex.getMessage());
}
}
//write code to save list into DB or file system
System.out.println(lstCsvContent);
} catch (Exception ex) {
log.info("Exception:::::::: " + ex.getMessage());
} finally {
try {
if (csv != null) {
csv.close();
}
if (reader != null) {
reader.close();
}
} catch (IOException ioe) {
log.info("Exception when closing the file: " + ioe.getMessage());
}
}
log.info("File Processed successfully: ");
}
}
I can't seem to find sample code for constructing a Berkeley DB in Java and inserting records into it. Any samples? And I do not mean the Berkeley DB Java Edition either.
http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/BDB_Prog_Reference.pdf
Chapter 5
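For a first orientation, here is a minimal sketch of opening a database and inserting one record with the core (non-Java-Edition) API, assuming db.jar from the distribution's Java bindings is on the classpath; the file and key/value names are just placeholders:
import com.sleepycat.db.*;

public class BdbPutExample {
    public static void main(String[] args) throws Exception {
        DatabaseConfig config = new DatabaseConfig();
        config.setType(DatabaseType.BTREE);
        config.setAllowCreate(true);
        // opens (or creates) my.db in the working directory
        Database db = new Database("my.db", null, config);
        DatabaseEntry key = new DatabaseEntry("akey".getBytes("UTF-8"));
        DatabaseEntry data = new DatabaseEntry("avalue".getBytes("UTF-8"));
        db.put(null, key, data); // insert a single key/value record
        db.close();
    }
}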
If you download db-5.0.21.NC.zip you will see plenty of samples.
Here is one that seems to do what you want
/*-
* See the file LICENSE for redistribution information.
*
* Copyright (c) 2004, 2010 Oracle and/or its affiliates. All rights reserved.
*
* $Id$
*/
// File: ExampleDatabaseLoad.java
package db.GettingStarted;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import java.util.Vector;
import com.sleepycat.bind.EntryBinding;
import com.sleepycat.bind.serial.SerialBinding;
import com.sleepycat.bind.tuple.TupleBinding;
import com.sleepycat.db.DatabaseEntry;
import com.sleepycat.db.DatabaseException;
public class ExampleDatabaseLoad {
private static String myDbsPath = "./";
private static File inventoryFile = new File("./inventory.txt");
private static File vendorsFile = new File("./vendors.txt");
// DatabaseEntries used for loading records
private static DatabaseEntry theKey = new DatabaseEntry();
private static DatabaseEntry theData = new DatabaseEntry();
// Encapsulates the databases.
private static MyDbs myDbs = new MyDbs();
private static void usage() {
System.out.println("ExampleDatabaseLoad [-h <database home>]");
System.out.println(" [-i <inventory file>] [-v <vendors file>]");
System.exit(-1);
}
public static void main(String args[]) {
ExampleDatabaseLoad edl = new ExampleDatabaseLoad();
try {
edl.run(args);
} catch (DatabaseException dbe) {
System.err.println("ExampleDatabaseLoad: " + dbe.toString());
dbe.printStackTrace();
} catch (Exception e) {
System.out.println("Exception: " + e.toString());
e.printStackTrace();
} finally {
myDbs.close();
}
System.out.println("All done.");
}
private void run(String args[])
throws DatabaseException {
// Parse the arguments list
parseArgs(args);
myDbs.setup(myDbsPath);
System.out.println("loading vendors db....");
loadVendorsDb();
System.out.println("loading inventory db....");
loadInventoryDb();
}
private void loadVendorsDb()
throws DatabaseException {
// loadFile opens a flat-text file that contains our data
// and loads it into a list for us to work with. The integer
// parameter represents the number of fields expected in the
// file.
List vendors = loadFile(vendorsFile, 8);
// Now load the data into the database. The vendor's name is the
// key, and the data is a Vendor class object.
// Need a serial binding for the data
EntryBinding dataBinding =
new SerialBinding(myDbs.getClassCatalog(), Vendor.class);
for (int i = 0; i < vendors.size(); i++) {
String[] sArray = (String[])vendors.get(i);
Vendor theVendor = new Vendor();
theVendor.setVendorName(sArray[0]);
theVendor.setAddress(sArray[1]);
theVendor.setCity(sArray[2]);
theVendor.setState(sArray[3]);
theVendor.setZipcode(sArray[4]);
theVendor.setBusinessPhoneNumber(sArray[5]);
theVendor.setRepName(sArray[6]);
theVendor.setRepPhoneNumber(sArray[7]);
// The key is the vendor's name.
// ASSUMES THE VENDOR'S NAME IS UNIQUE!
String vendorName = theVendor.getVendorName();
try {
theKey = new DatabaseEntry(vendorName.getBytes("UTF-8"));
} catch (IOException willNeverOccur) {}
// Convert the Vendor object to a DatabaseEntry object
// using our SerialBinding
dataBinding.objectToEntry(theVendor, theData);
// Put it in the database.
myDbs.getVendorDB().put(null, theKey, theData);
}
}
private void loadInventoryDb()
throws DatabaseException {
// loadFile opens a flat-text file that contains our data
// and loads it into a list for us to work with. The integer
// parameter represents the number of fields expected in the
// file.
List inventoryArray = loadFile(inventoryFile, 6);
// Now load the data into the database. The item's sku is the
// key, and the data is an Inventory class object.
// Need a tuple binding for the Inventory class.
TupleBinding inventoryBinding = new InventoryBinding();
for (int i = 0; i < inventoryArray.size(); i++) {
String[] sArray = (String[])inventoryArray.get(i);
String sku = sArray[1];
try {
theKey = new DatabaseEntry(sku.getBytes("UTF-8"));
} catch (IOException willNeverOccur) {}
Inventory theInventory = new Inventory();
theInventory.setItemName(sArray[0]);
theInventory.setSku(sArray[1]);
theInventory.setVendorPrice((new Float(sArray[2])).floatValue());
theInventory.setVendorInventory((new Integer(sArray[3])).intValue());
theInventory.setCategory(sArray[4]);
theInventory.setVendor(sArray[5]);
// Place the Vendor object on the DatabaseEntry object using our
// the tuple binding we implemented in InventoryBinding.java
inventoryBinding.objectToEntry(theInventory, theData);
// Put it in the database. Note that this causes our secondary database
// to be automatically updated for us.
myDbs.getInventoryDB().put(null, theKey, theData);
}
}
private static void parseArgs(String args[]) {
for(int i = 0; i < args.length; ++i) {
if (args[i].startsWith("-")) {
switch(args[i].charAt(1)) {
case 'h':
myDbsPath = new String(args[++i]);
break;
case 'i':
inventoryFile = new File(args[++i]);
break;
case 'v':
vendorsFile = new File(args[++i]);
break;
default:
usage();
}
}
}
}
private List loadFile(File theFile, int numFields) {
List records = new ArrayList();
try {
String theLine = null;
FileInputStream fis = new FileInputStream(theFile);
BufferedReader br = new BufferedReader(new InputStreamReader(fis));
while((theLine=br.readLine()) != null) {
String[] theLineArray = splitString(theLine, "#");
if (theLineArray.length != numFields) {
System.out.println("Malformed line found in " + theFile.getPath());
System.out.println("Line was: '" + theLine);
System.out.println("length found was: " + theLineArray.length);
System.exit(-1);
}
records.add(theLineArray);
}
fis.close();
} catch (FileNotFoundException e) {
System.err.println(theFile.getPath() + " does not exist.");
e.printStackTrace();
usage();
} catch (IOException e) {
System.err.println("IO Exception: " + e.toString());
e.printStackTrace();
System.exit(-1);
}
return records;
}
private static String[] splitString(String s, String delimiter) {
Vector resultVector = new Vector();
StringTokenizer tokenizer = new StringTokenizer(s, delimiter);
while (tokenizer.hasMoreTokens())
resultVector.add(tokenizer.nextToken());
String[] resultArray = new String[resultVector.size()];
resultVector.copyInto(resultArray);
return resultArray;
}
protected ExampleDatabaseLoad() {}
}
There are a number of good Getting Started Guides for Oracle Berkeley DB Java Edition. They are included in the documentation set, and you'll find example code in the documentation. If you're new to Oracle Berkeley DB Java Edition, that's the right place to start. There are other examples in the download package as well.
I'm the product manager for Oracle Berkeley DB, I hope this addressed your question. If not please let me know how else I can help you.