I'm trying to read .txt text files with more than 10,000 lines per file, split them, and insert the data into an Access database using Java and UCanAccess. The problem is that it becomes slower and slower every time (as the database gets bigger).
Now, after reading 7 text files and inserting them into the database, it takes more than 20 minutes to read another file.
I tried doing just the reading, and it works fine, so the problem is the actual inserting into the database.
N.B.: This is my first time using UCanAccess with Java, because I found that the JDBC-ODBC Bridge is no longer available. Any suggestions for an alternative solution would also be appreciated.
If your current task is simply to import a large amount of data from text files straight into the database, and it does not require any sophisticated SQL manipulations, then you might consider using the Jackcess API directly. For example, to import a CSV file you could do something like this:
import java.io.File;

import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;
import com.healthmarketscience.jackcess.util.ImportUtil;

String csvFileSpec = "C:/Users/Gord/Desktop/BookData.csv";
String dbFileSpec = "C:/Users/Public/JackcessTest.accdb";
String tableName = "Book";
try (Database db = new DatabaseBuilder()
.setFile(new File(dbFileSpec))
.setAutoSync(false)
.open()) {
new ImportUtil.Builder(db, tableName)
.setDelimiter(",")
.setUseExistingTable(true)
.setHeader(false)
.importFile(new File(csvFileSpec));
// this is a try-with-resources block,
// so db.close() happens automatically
}
Or, if you need to manually parse each line of input, insert a row, and retrieve the AutoNumber value for the new row, then the code would be more like this:
// (this block additionally uses com.healthmarketscience.jackcess.Table
//  and com.healthmarketscience.jackcess.Column)
String dbFileSpec = "C:/Users/Public/JackcessTest.accdb";
String tableName = "Book";
try (Database db = new DatabaseBuilder()
.setFile(new File(dbFileSpec))
.setAutoSync(false)
.open()) {
// sample data (e.g., from parsing of an input line)
String title = "So, Anyway";
String author = "Cleese, John";
Table tbl = db.getTable(tableName);
Object[] rowData = tbl.addRow(Column.AUTO_NUMBER, title, author);
int newId = (int)rowData[0]; // retrieve generated AutoNumber
System.out.printf("row inserted with ID = %d%n", newId);
// this is a try-with-resources block,
// so db.close() happens automatically
}
To update an existing row based on its primary key, the code would be
Table tbl = db.getTable(tableName);
Row row = CursorBuilder.findRowByPrimaryKey(tbl, 3); // i.e., ID = 3
if (row != null) {
// Note: column names are case-sensitive
row.put("Title", "The New Title For This Book");
tbl.updateRow(row);
}
Note that for maximum speed I used .setAutoSync(false) when opening the Database. Bear in mind, however, that disabling AutoSync increases the chance of leaving the Access database file in a damaged (and possibly unusable) state if the application terminates abnormally while performing the updates.
Also, if you need to use SQL via UCanAccess, you have to call setAutoCommit(false) on the connection at the beginning, and then commit every 200-300 records. Performance will improve dramatically (by about 99%).
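As a rough illustration, a minimal sketch of that batched-commit pattern might look like the following (the Book table, its columns, and the lines list are hypothetical stand-ins for your parsed file data):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

// Sketch only: the table, columns, and 'lines' are assumptions, not from the question.
try (Connection conn = DriverManager.getConnection(
        "jdbc:ucanaccess://C:/Users/Public/JackcessTest.accdb")) {
    conn.setAutoCommit(false); // disable per-statement commits
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO Book (Title, Author) VALUES (?, ?)")) {
        int count = 0;
        for (String line : lines) { // 'lines' = lines read from the text file
            String[] fields = line.split(",");
            ps.setString(1, fields[0]);
            ps.setString(2, fields[1]);
            ps.executeUpdate();
            if (++count % 250 == 0) {
                conn.commit(); // commit every 250 rows
            }
        }
    }
    conn.commit(); // commit any remaining rows
}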
I'm trying to parse a pipe-delimited file and insert fields into a table. When I start the application, nothing happens in my DB. My DB has 4 columns (account_name, command_name, system_name, and CreateDt). The file I am parsing has the date in the first row, followed by extra data. In the rows that follow, I only need the first 3 fields of each; the rest is extra data. The last row is the row count. I have skipped inserting the date for now, but want to get back to it once I'm at least able to insert the first 3 fields. I have little experience with parsing a file and storing data in a DB. I have looked through JDBC examples to get to this point, but I'm struggling and am sure there is a better way.
File Example
20200310|extra|extra|extra||
Mn1223|01192|windows|extra|extra|extra||
Sd1223|02390|linux|extra|extra|extra||
2
table format
account_name command_name system_name createDt
Mn1223 01192 windows 20200310
Sd1223 02390 linux 20200310
Code to parse and insert into DB
public List insertZygateData (List<ZygateEntity> parseData) throws Exception {
String filePath = "C:\\DEV\\Test_file.xlsx";
List<String> lines = Files.readAllLines(Paths.get(filePath));
// remove date and amount
lines.remove(0);
lines.remove(lines.size() - 1);
for (ZygateEntity zygateInfo : parseData){
new MapSqlParameterSource("account_name", zygateInfo.getAccountName())
.addValue("command_name", zygateInfo.getCommandName())
.addValue("system_name", zygateInfo.getSystemName())
.getValues();
}
return lines.stream()
.map(s -> s.split("[|]")).map(val -> new ZygateEntity(val[0],val[1],val[2])).collect(Collectors.toList());
}
public boolean cleantheTable() throws SQLException {
String sql = "INSERT INTO Landing.midrange_xygate_load (account_name,command_name,system_name)"+
"VALUES (:account_name,:command_name,:system_name)";
boolean truncated = false;
Statement stmt = null;
try {
String sqlTruncate = "truncate table Landing.midrange_xygate_load";
jdbcTemplate.execute(sqlTruncate);
truncated = true;
} catch (Exception e) {
e.printStackTrace();
truncated = false;
return truncated;
} finally {
if (stmt != null) {
jdbcTemplate.execute(sql);
stmt.close();
}
}
log.info("Clean the table return value :" + truncated);
return truncated;
}
}
Entity/Model
public ZygateEntity(String accountName, String commandName, String systemName){
this.accountName=accountName;
this.commandName=commandName;
this.systemName=systemName;
}
//getters and setters
@Override
public String toString() {
return "ZygateEntity [accountName=" + accountName + ", commandName=" + commandName + ", systemName=" + systemName + ", createDt=" + createDt +"]";
}
}
Taking a look at what you've provided, it seems you have a jumbled collection of bits of code, and while most of it is there, it's not all there and not quite all in the right order.
To get some kind of clarity, try to break down what it is you're doing into separate steps, and have a method that focuses on each step. In particular, you write
I'm trying to parse a pipe-delimited file and insert fields into a table
This naturally breaks down into two parts:
parsing the pipe-delimited file, and
inserting fields into a table.
For the first part, you seem to have most of the parts already in your insertZygateData method. In particular, this line reads all the lines of a file into a list:
List<String> lines = Files.readAllLines(Paths.get(filePath));
These lines then remove the first and last lines from the list of lines read:
// remove date and amount
lines.remove(0);
lines.remove(lines.size() - 1);
You then have some code that looks a bit out of place: this seems to be something to do with inserting into the database, but we haven't created our list of ZygateEntity objects as we haven't yet finished reading the file. Let's put this for loop to one side for the moment.
Finally, we take the list of lines we read, split them using pipes, create ZygateEntity objects from the parts and create a List of these objects, which we then return.
return lines.stream()
.map(s -> s.split("[|]")).map(val -> new ZygateEntity(val[0],val[1],val[2])).collect(Collectors.toList());
Putting this lot together, we have a useful method that parses the file, completing the first part of the task:
private List<ZygateEntity> parseZygateData() throws IOException {
String filePath = "C:\\DEV\\Test_file.xlsx";
List<String> lines = Files.readAllLines(Paths.get(filePath));
// remove date and amount
lines.remove(0);
lines.remove(lines.size() - 1);
return lines.stream()
.map(s -> s.split("[|]")).map(val -> new ZygateEntity(val[0],val[1],val[2])).collect(Collectors.toList());
}
(Of course, we could add a parameter for the file path to read, but in the interest of getting something working, it's OK to stick with the current hard-coded file path.)
So, we've got our list of ZygateEntity objects. How do we write a method to insert them into the database?
We can find a couple of the ingredients we need in your code sample. First, we need the SQL statement to insert the data. This is in your cleantheTable method:
String sql = "INSERT INTO Landing.midrange_xygate_load (account_name,command_name,system_name)"+
"VALUES (:account_name,:command_name,:system_name)";
We then have this loop:
for (ZygateEntity zygateInfo : parseData){
new MapSqlParameterSource("account_name", zygateInfo.getAccountName())
.addValue("command_name", zygateInfo.getCommandName())
.addValue("system_name", zygateInfo.getSystemName())
.getValues();
}
This loop creates a MapSqlParameterSource out of each ZygateEntity object, and then converts it to a Map<String, Object> by calling the getValues() method. But then it does nothing with this value. Effectively you're creating these objects and getting rid of them again without doing anything with them. This isn't ideal.
A MapSqlParameterSource is used with a Spring NamedParameterJdbcTemplate. Your code mentions a jdbcTemplate, which appears to be a field within the class that parses data and inserts into the database, but you don't show the full code of this class. I'm going to have to assume it's a NamedParameterJdbcTemplate rather than a 'plain' JdbcTemplate.
A NamedParameterJdbcTemplate contains a method update that takes a SQL string and a SqlParameterSource. We have a SQL string, and we're creating MapSqlParameterSource objects, so we can use these to carry out the insert. There's not a lot of point in creating one of these MapSqlParameterSource objects only to convert it to a map, so let's remove the call to getValues().
So, we now have a method to insert the data into the database:
public void insertZygateData(List<ZygateEntity> parseData) {
String sql = "INSERT INTO Landing.midrange_xygate_load (account_name,command_name,system_name)"+
"VALUES (:account_name,:command_name,:system_name)";
for (ZygateEntity zygateInfo : parseData){
SqlParameterSource source = new MapSqlParameterSource("account_name", zygateInfo.getAccountName())
.addValue("command_name", zygateInfo.getCommandName())
.addValue("system_name", zygateInfo.getSystemName());
jdbcTemplate.update(sql, source);
}
}
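As an optional refinement once this is working (a sketch, not part of your original code): NamedParameterJdbcTemplate also offers a batchUpdate method, which sends all the rows to the database in one batch instead of calling update once per row.

public void insertZygateDataBatch(List<ZygateEntity> parseData) {
    String sql = "INSERT INTO Landing.midrange_xygate_load (account_name,command_name,system_name) " +
            "VALUES (:account_name,:command_name,:system_name)";
    // Build one parameter source per entity, then send them as a single batch
    SqlParameterSource[] batch = parseData.stream()
            .map(z -> new MapSqlParameterSource("account_name", z.getAccountName())
                    .addValue("command_name", z.getCommandName())
                    .addValue("system_name", z.getSystemName()))
            .toArray(SqlParameterSource[]::new);
    jdbcTemplate.batchUpdate(sql, batch);
}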
Finally, let's take a look at your cleantheTable method. As with the others, let's keep it focused on one task: at the moment it looks like you're trying to delete the data from the table and then insert it in the same method, but let's have it focus just on deleting the data, as we now have a method to insert the data.
We can't immediately get rid of the String sql = ... line, because the finally block in your code uses it. If stmt is not null, then you attempt to run the INSERT statement and then close stmt.
However, stmt is never assigned any value other than null, so it remains null. stmt != null is therefore always false, so the INSERT statement never runs. Your finally block never does anything, so you would be best off removing it altogether. With your finally block gone, you can also get rid of your local variable stmt and the sql string, leaving us with a method whose focus is to truncate the table:
public boolean cleantheTable() throws SQLException {
boolean truncated = false;
try {
String sqlTruncate = "truncate table Landing.midrange_xygate_load";
jdbcTemplate.execute(sqlTruncate);
truncated = true;
} catch (Exception e) {
e.printStackTrace();
truncated = false;
return truncated;
}
log.info("Clean the table return value :" + truncated);
return truncated;
}
I'll leave it up to you to write the code that calls these methods. I wrote some code for this purpose, and it ran successfully and inserted into a database.
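For reference, a minimal sketch of that calling code might look something like this (the method name loadZygateFile is my own invention):

public void loadZygateFile() throws Exception {
    List<ZygateEntity> parsedData = parseZygateData(); // step 1: read and parse the file
    if (cleantheTable()) {                             // step 2: truncate the target table
        insertZygateData(parsedData);                  // step 3: insert the parsed rows
    }
}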
So, in summary, no data was being written to your database because you were never making a call to the database to insert any. In your insertZygateData method you were creating the parameter-source objects but not doing anything useful with them, and in your cleantheTable method it looked like you were trying to insert data, but your line jdbcTemplate.execute(sql) that attempted to do this never ran. Even if stmt hadn't been null, this line wouldn't have worked, as you didn't pass the parameter values in anywhere: you would have got an exception from the database, as it would be expecting values for the parameters but you never gave it any.
Hopefully my explanation gives you a way of getting your code working and helps you understand why it wasn't.
I am using Spark 1.5.0.
I have a set of files on s3 containing json data in sequence file format, worth around 60GB. I have to fire around 40 queries on this dataset and store results back to s3.
All queries are select statements with a condition on same field. Eg. select a,b,c from t where event_type='alpha', select x,y,z from t where event_type='beta' etc.
I am using an AWS EMR 5 node cluster with 2 core nodes and 2 task nodes.
There could be some fields missing in the input. E.g., a could be missing, so the first query, which selects a, would fail. To avoid this I have defined schemas for each event_type. So, for event_type alpha, the schema would be like {"a": "", "b": "", "c": "", "event_type": ""}.
Based on the schemas defined for each event, I'm creating a dataframe from input RDD for each event with the corresponding schema.
I'm using the following code:
JavaPairRDD<LongWritable,BytesWritable> inputRDD = jsc.sequenceFile(bucket, LongWritable.class, BytesWritable.class);
JavaRDD<String> events = inputRDD.map(
new Function<Tuple2<LongWritable,BytesWritable>, String>() {
public String call(Tuple2<LongWritable,BytesWritable> tuple) throws JSONException, UnsupportedEncodingException {
String valueAsString = new String(tuple._2.getBytes(), "UTF-8");
JSONObject data = new JSONObject(valueAsString);
JSONObject payload = new JSONObject(data.getString("payload"));
return payload.toString();
}
}
);
events.cache();
for (String event_type: events_list) {
String query = //read query from another s3 file event_type.query
String jsonSchemaString = //read schema from another s3 file event_type.json
List<String> jsonSchema = Arrays.asList(jsonSchemaString);
JavaRDD<String> jsonSchemaRDD = jsc.parallelize(jsonSchema);
DataFrame df_schema = sqlContext.read().option("header", "true").json(jsonSchemaRDD);
StructType schema = df_schema.schema();
DataFrame df_query = sqlContext.read().schema(schema).option("header", "true").json(events);
df_query.registerTempTable(tableName);
DataFrame df_results = sqlContext.sql(query);
df_results.write().format("com.databricks.spark.csv").save("s3n://some_location");
}
This code is very inefficient; it takes around 6-8 hours to run. How can I optimize it?
Should I try using HiveContext?
I think the current code is taking multiple passes over the data, though I'm not sure, since I have cached the RDD. How can I do it in a single pass, if that is the case?
I'm trying to use a similar example from the sample code found here.
My sample function is:
void query()
{
String nodeResult = "";
String rows = "";
String resultString;
String columnsString;
System.out.println("In query");
// START SNIPPET: execute
ExecutionEngine engine = new ExecutionEngine( graphDb );
ExecutionResult result;
try ( Transaction ignored = graphDb.beginTx() )
{
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// END SNIPPET: execute
// START SNIPPET: items
Iterator<Node> n_column = result.columnAs( "n" );
for ( Node node : IteratorUtil.asIterable( n_column ) )
{
// note: we're grabbing the name property from the node,
// not from the n.name in this case.
nodeResult = node + ": " + node.getProperty( "Name" );
System.out.println("In for loop");
System.out.println(nodeResult);
}
// END SNIPPET: items
// START SNIPPET: columns
List<String> columns = result.columns();
// END SNIPPET: columns
// the result is now empty, get a new one
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// START SNIPPET: rows
for ( Map<String, Object> row : result )
{
for ( Entry<String, Object> column : row.entrySet() )
{
rows += column.getKey() + ": " + column.getValue() + "; ";
System.out.println("nested");
}
rows += "\n";
}
// END SNIPPET: rows
resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
columnsString = columns.toString();
System.out.println(rows);
System.out.println(resultString);
System.out.println(columnsString);
System.out.println("leaving");
}
}
When I run this in the web console I get many results (as there are multiple nodes that have a Name attribute containing the pattern 79), yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean no results are found in the Iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query in the same function that creates my data, I get the correct results. If I run the query by itself, it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;
import ContextEngine.NeoHandle;
import java.util.LinkedList;
/*
* Class to handle streaming data from any coded source
*/
public class Streamer {
private NeoHandle myHandle;
private String contextType;
Streamer()
{
}
public void openStream(String contextType)
{
myHandle = new NeoHandle();
myHandle.createDb();
}
public void streamInput(String dataLine)
{
Context context = new Context();
/*
* get database instance
* write to database
* check for errors
* report errors & success
*/
System.out.println(dataLine);
//apply rules to data (make ContextRules do this, send type and string of data)
ContextRules contextRules = new ContextRules();
context = contextRules.processContextRules("Calls", dataLine);
//write data (using linked list from contextRules)
NeoProcessor processor = new NeoProcessor(myHandle);
processor.processContextData(context);
}
public void runQuery()
{
NeoProcessor processor = new NeoProcessor(myHandle);
processor.query();
}
public void closeStream()
{
/*
* close database instance
*/
myHandle.shutDown();
}
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the Nodes and enter them into the database at runtime just to return a valid query? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
PreparedStatement stmt = conn.prepareStatement("start n=node({1}) return id(n) as id");
stmt.setLong(1, id); // bind the node id as the first positional ({1}) Cypher parameter
ResultSet rs = stmt.executeQuery();
while(rs.next()) {
System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if your Java program still starts, then (unless it is using the Neo4j REST bindings) you are not using the same directory, because two Neo4j databases cannot run against the same database directory simultaneously.
As I was working through the following tutorial, I came across this code :
public void onClickRetrieveStudents(View view) {
// Retrieve student records
String URL = "content://com.example.provider.College/students";
I am interested to see what kind of data this is, so I tried to go to the website http://com.example.provider.College/students to view the data; however, it just gave some kind of error. Therefore my question is: is this URL some kind of XML document? What exactly is the format of this data, and how can I view it?
I would recommend you familiarize yourself with the following documentation:
Content Providers:
http://developer.android.com/guide/topics/providers/content-providers.html
Essentially, when you pass that "URL" to the ContentResolver (presumably you're doing something like this):
// Queries the user dictionary and returns results
mCursor = getContentResolver().query(
    UserDictionary.Words.CONTENT_URI,  // The content URI of the words table
    mProjection,                       // The columns to return for each row
    mSelectionClause,                  // Selection criteria
    mSelectionArgs,                    // Values for the selection criteria
    mSortOrder);                       // The sort order for the returned rows
You're asking Android to resolve that URL to a ContentProvider which is set up to handle it. The URL is not "imaginary" so much as its targets are local objects and processes which exist and are defined by applications that use the ContentProvider mechanism to store data and make it available to other applications.
The goal of that URL (which is converted to a URI in this case) is to specify which ContentProvider you want, and what you want from it.
ContentProviders are generally used by applications that want to manage a database and make that information available to other applications while minimizing access violations, etc.
EDIT:
This code is from your tutorial. See added comments:
// This URL points to the content provider. The content provider uses it to
// reference a specific database which it has knowledge of. The URL doesn't
// represent an actual FILE on your system; rather, it represents a way for
// you to tell the content provider what DATABASE to access and what you
// want from it.
String URL = "content://com.example.provider.College/students";

// This line converts your "URL" into a URI
Uri students = Uri.parse(URL);

// managedQuery takes, as an argument, the URI conversion of the URL - this
// is where you are actually calling the content provider, asking it to run
// a query on the database for some information.
// It returns a Cursor - an object type which contains the results of your
// query in an ordered manner. In this case it is a set of rows, each of
// which has a number of columns corresponding to your query and database,
// and which can be iterated over to pull information from the DB.
Cursor c = managedQuery(students, null, null, null, "name");

// This line moves to the first ROW in the cursor
if (c.moveToFirst()) {
    // Keep looping as long as the do-while condition is true
    do {
        // This line creates a pop-up toast message with the information
        // stored in the columns of the row the cursor is currently on.
        Toast.makeText(this,
            c.getString(c.getColumnIndex(StudentsProvider._ID)) +
            ", " + c.getString(c.getColumnIndex(StudentsProvider.NAME)) +
            ", " + c.getString(c.getColumnIndex(StudentsProvider.GRADE)),
            Toast.LENGTH_SHORT).show();
    } while (c.moveToNext());
}
Your question in the comments was:
"all I need is an example of this file: String URL = "content://com.example.provider.College/students"; , what would the data look like ? "
The answer to this is that you have an SQLite database on your phone somewhere - generally (and in this case definitely) created by the application and/or content provider you are accessing. You also know that the content resolver accepts this URI and some other information and will return you a CURSOR.
This question addresses what a cursor is.
use of cursor in android
If you read the tutorial fully you will find this code:
public class StudentsProvider extends ContentProvider {
static final String PROVIDER_NAME = "com.example.provider.College";
static final String URL = "content://" + PROVIDER_NAME + "/students";
static final Uri CONTENT_URI = Uri.parse(URL);
static final String _ID = "_id";
static final String NAME = "name";
static final String GRADE = "grade";
You will also find, in the manifest of your tutorial:
<provider android:name="StudentsProvider"
android:authorities="com.example.provider.College">
</provider>
Which is the registration of your ContentProvider for the URI at question.
You will note that your URL and the "PROVIDER_NAME" and "URL" have eerie similarities. This is because the ContentProvider is utilizing these values to identify itself to the Android system as the resolver for this particular URI.
You should create the files as described in the tutorial, make the sample app function, and you will be able to start understanding this more clearly.
It's not real, and it's not a web URL. That is an example of a hypothetical content URI.
As an example, you might consult the UserDictionary like so -
// Queries the user dictionary and returns results
mCursor = getContentResolver().query(
    UserDictionary.Words.CONTENT_URI,  // The content URI of the words table
    mProjection,                       // The columns to return for each row
    mSelectionClause,                  // Selection criteria
    mSelectionArgs,                    // Values for the selection criteria
    mSortOrder);                       // The sort order for the returned rows
You might also create your own.
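For instance, a minimal sketch of querying a provider of your own might look like this (the authority and path here are hypothetical, not from the tutorial):

// Hypothetical authority and table path for your own provider.
Uri myUri = Uri.parse("content://com.example.myapp.provider/items");
Cursor c = getContentResolver().query(myUri, null, null, null, null);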
I am attempting to write a simple Java utility that extracts data from SAP into a MySQL database, using JCo. I have read the JCo documentation and tried out the relevant examples mentioned in the SAP help portal, and I am able to retrieve data from a table and insert it into the MySQL DB.
What I would like to have is a facility to filter data in the following two ways:
I would like to fetch only the required fields.
I would like to fetch rows only if the value of a particular field matches a certain pattern.
After doing some research, I didn't find any way to specify query parameters so that only the filtered data is retrieved; it basically queries all the fields from a table. I think I will have to filter out the data that I don't want in my Java client layer. Please let me know if I am missing something here.
Here is a code example :
public static void readTables() throws JCoException, IOException {
final JCoDestination destination = JCoDestinationManager
.getDestination(DESTINATION_NAME2);
final JCoFunction function = destination.getRepository().getFunction(
"RFC_READ_TABLE");
if (function == null) {
throw new RuntimeException("BAPI RFC_READ_TABLE not found in SAP.");
}
function.getImportParameterList().setValue("QUERY_TABLE", "DD02L");
function.getImportParameterList().setValue("DELIMITER", ",");
try {
function.execute(destination);
} catch (final AbapException e) {
System.out.println(e.toString());
return;
}
final JCoTable codes = function.getTableParameterList().getTable(
"FIELDS");
String header = "SN";
for (int i = 0; i < codes.getNumRows(); i++) {
codes.setRow(i);
header += "," + codes.getString("FIELDNAME");
}
final FileWriter outFile = new FileWriter("out.csv");
outFile.write(header + "\n");
final JCoTable rows = function.getTableParameterList().getTable("DATA");
for (int i = 0; i < rows.getNumRows(); i++) {
rows.setRow(i);
outFile.write(i + "," + rows.getString("WA") + "\n");
outFile.flush();
}
outFile.close();
}
This method tries to read the table where SAP stores its metadata (the data dictionary) and writes the output to a CSV file. This works fine, but takes 30-40 seconds and returns around 400,000 records with 32 columns. My intention was to ask if there is a way I can restrict my query to return only particular fields, instead of reading all the fields and discarding them in the client layer.
Thanks.
This works fine: appending rows to the FIELDS table parameter before executing the function restricts the result to just those columns:
JCoTable table = function.getTableParameterList().getTable("FIELDS");
table.appendRow();
table.setValue("FIELDNAME", "TABNAME");
table.appendRow();
table.setValue("FIELDNAME", "TABCLASS");
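That covers the first requirement (fetching only the required fields). For the second (fetching only rows matching a pattern), RFC_READ_TABLE also accepts an OPTIONS table parameter whose TEXT rows are combined into an ABAP WHERE clause. A sketch, where the condition shown is just an example:

// Restrict rows server-side: each TEXT row contributes a fragment
// of the WHERE clause applied to QUERY_TABLE.
JCoTable options = function.getTableParameterList().getTable("OPTIONS");
options.appendRow();
options.setValue("TEXT", "TABNAME LIKE 'Z%'");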
Please check this Thread
Thanks.