I'm using JOOQ and Postgres.
In Postgres I have a column gender:
'gender' AS gender,
(the table itself is a view and the gender column is a placeholder for a value that gets calculated in Java)
In Java when I .fetch() the view, I do some calculations on each record:
for (Record r : skillRecords) {
    idNumber = function(r);
    r.set(id, idNumber);
    r.set(gender, getGender(idNumber));
}
All looks good, and if I println the values they're all correct.
However, when I call intoResultSet() on skillRecords, the gender column has an asterisk next to all the values, e.g. "*Male".
Then I use the ResultSet as input to an OpenCSV CSVWriter, and when I open the CSV the gender column comes out as null.
Any suggestions?
UPDATE:
Following the input from Lukas regarding the asterisk, I realise the issue is likely with OpenCSV.
My code is as follows:
File tempFile = new File("/tmp/file.csv");
BufferedWriter out = new BufferedWriter(new FileWriter(tempFile));
CSVWriter writer = new CSVWriter(out);
// Code for getting records sits here
for (Record r : skillRecords) {
    idNumber = function(r);
    r.set(id, idNumber);
    r.set(gender, getGender(idNumber));
}
writer.writeAll(skillRecords.intoResultSet(), true);
return tempFile;
All the columns in the CSV come back as expected, except the gender column, which has the header "gender" but the column values are empty.
I have the necessary try/catches in the code above but I've excluded them for brevity.
The asterisk in *Male
The asterisk that you see in the ResultSet.toString() output (or in Result.toString()) reflects the record's internal Record.changed(Field) flag, i.e. the information on each record indicating that it was modified after it was retrieved from the database (which you did).
That is just visual information which you can safely ignore.
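If you ever want to inspect or clear those flags yourself, jOOQ exposes them via the Record.changed() methods; a minimal sketch, assuming the same skillRecords loop and field references as above:
Java:
for (Record r : skillRecords) {
    // true after r.set(...) was called on this field
    boolean genderChanged = r.changed(gender);

    // clears the "changed" flag on all fields, so toString() no longer shows asterisks
    r.changed(false);
}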
Solution:
So I found the solution. It turns out that with Postgres, if I have something like:
'gender' AS gender,
the column's type is unknown, not text. So the solution was to define it as:
'gender'::text AS gender
After doing so OpenCSV was happy.
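For illustration, the relevant part of the view definition would then look something like this (the view and table names here are hypothetical):
SQL:
CREATE VIEW skills_view AS
SELECT s.id,
       'gender'::text AS gender  -- placeholder cast to text so the column type is known
FROM skills s;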
Related
I'm migrating a Java application to VB.Net and I'm trying to translate the following Java code:
Statement stmt = conx.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE,
    ResultSet.CONCUR_READ_ONLY
);
ResultSet rsSheet = stmt.executeQuery(sSql);
bStatus = rsSheet.next();
...
bStatus = rsSheet.first();
In this code, a scrollable ResultSet is used. I can read the records returned by the executeQuery() function and, when I have finished reading them, I can read them again without querying the database a second time.
You can find some information on ResultSet here https://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html
My translated code is as follows:
Dim cmd = conx.CreateCommand()
cmd.CommandText = sSql
Dim rsSheet As OracleDataReader = cmd.ExecuteReader()
bStatus = rsSheet.Read()
...
bStatus = rsSheet.? 'how to read first record again ?
But I can't find how to make the OracleDataReader scrollable.
I can read the ResultSet from first to last record but I cannot read it again.
The only simple solution that I found to read all these records again is to call the ExecuteReader() function a second time.
QUESTIONS
Is the OracleDataReader class scrollable? How?
Does another class exist to do the job? Which one?
PS: using Linq is not a solution because the SQL statements are executed in an environment where the database structure is unknown. It is impossible to create entities.
A DataReader is forward-only. Use a DataTable instead: it is an in-memory representation of the result set. You can also use a DataTable as a DataSource for various controls, and you can use Linq on DataTable.AsEnumerable().
Private Sub OPCode()
    Dim sSql = "Your command text"
    Dim dt As New DataTable
    Using cn As New OracleConnection(ConStr),
          cmd As New OracleCommand(sSql, cn)
        cn.Open()
        dt.Load(cmd.ExecuteReader)
    End Using
    'Code to read data.
End Sub
EDIT
The simplest way to see what is in the DataTable is to display it in a DataGridView if this is WinForms.
DataGridView1.DataSource = dt
To access a specific row and column:
Dim s = dt.Rows(1)("ColName").ToString
The Rows collection starts at index 0 and the column name comes from your Select statement. You then need to convert to the datatype with .ToString, CInt(), CDbl(), etc., as this returns an Object.
The following approach allows reading the records while skipping the header:
Iterable<CSVRecord> records = CSVFormat.EXCEL.withHeader().parse(in);
for (CSVRecord record : records) {
    // here, the first record is not the header
}
How can I read the CSV including the header line?
P.S. The approach:
CSVFormat.EXCEL.withHeader().withSkipHeaderRecord(false).parse(in)
doesn't work either; it has the same behaviour.
For me, the following all seem to have the header record as the first one (using commons-csv 1.5):
Iterable<CSVRecord> records = CSVFormat.EXCEL.parse(in);
Iterable<CSVRecord> records = CSVFormat.EXCEL.withSkipHeaderRecord().parse(in); //???
Iterable<CSVRecord> records = CSVFormat.EXCEL.withSkipHeaderRecord(false).parse(in);
Iterable<CSVRecord> records = CSVFormat.EXCEL.withSkipHeaderRecord(true).parse(in); //???
And as you have stated the following does NOT seem to have the header record as the first one:
Iterable<CSVRecord> records = CSVFormat.EXCEL.withHeader().parse(in); //???
It is beyond my understanding why withSkipHeaderRecord() and withSkipHeaderRecord(true) do include the header while withHeader() does not; it seems to be the opposite of the behaviour the method names suggest.
The withHeader() method tells the parser that the file has a header. Perhaps the method name is confusing.
The withFirstRecordAsHeader() method may also be useful.
From the CSVFormat (Apache Commons CSV 1.8 API) JavaDoc page:
Referencing columns safely
If your source contains a header record, you can simplify your code and safely reference columns, by using withHeader(String...) with no arguments:
CSVFormat.EXCEL.withHeader();
This causes the parser to read the first record and use its values as column names. Then, call one of the CSVRecord get methods that take a String column name argument:
String value = record.get("Col1");
This makes your code impervious to changes in column order in the CSV file.
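Putting this together, a minimal runnable sketch (assuming a header row containing a column named Col1; the inline data is just for illustration):
Java:
import java.io.Reader;
import java.io.StringReader;

import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVRecord;

public class HeaderExample {
    public static void main(String[] args) throws Exception {
        Reader in = new StringReader("Col1,Col2\na,b\nc,d\n");

        // withHeader() consumes the first row as column names,
        // so iteration starts at the first data row
        for (CSVRecord record : CSVFormat.EXCEL.withHeader().parse(in)) {
            System.out.println(record.get("Col1")); // prints "a", then "c"
        }
    }
}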
I'm using Astyanax version 1.56.26 with Cassandra version 1.2.2
Ok, a little situational overview:
I have created a very simple column family using cqlsh like so:
CREATE TABLE users (
name text PRIMARY KEY,
age int,
weight int
);
I populated the column family (no empty columns)
Querying users via cqlsh yields expected results
Now I want to programmatically query users, so I try something like:
ColumnFamily<String, String> users =
new ColumnFamily<String, String>("users", StringSerializer.get(), StringSerializer.get());
OperationResult<ColumnList<String>> result = ks.prepareQuery(users)
.getRow("bobbydigital") // valid rowkey
.execute();
ColumnList<String> columns = result.getResult();
int weight = columns.getColumnByName("weight").getIntegerValue();
During the assignment of weight a NPE is thrown! :(
My understanding is that the result should have contained all the columns associated with the row containing "bobbydigital" as its row key. I then tried to assign the value in the column named "weight" to the integer variable weight. I know that the variable columns is getting assigned because when I add some debug code right after the assignment, like so:
System.out.println("Column names = " + columns.getColumnNames());
I get the following output:
Column names = [, age, weight]
So why the null pointer? Can someone tell me where I went wrong? Also, why is there a blank column name?
UPDATE:
Also if I try querying in a different manner, like so:
Column<String> result = ks.prepareQuery(users)
.getKey("bobbydigital")
.getColumn("weight")
.execute().getResult();
int x = result.getIntegerValue();
I get the following exception:
InvalidRequestException(why:Not enough bytes to read value of component 0)
Thanks in advance for any help you can provide!
I figured out what I was doing incorrectly. The style of querying I was attempting is not valid for CQL tables. To query CQL tables with Astyanax you need to chain the .withCql() method to your prepareQuery() call, passing a CQL statement as the argument.
Information specific to using CQL with Astyanax can be found here.
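A rough sketch of that style of query, assuming the same ks and users objects as in the question (the exact CQL string is illustrative):
Java:
OperationResult<CqlResult<String, String>> result = ks.prepareQuery(users)
    .withCql("SELECT * FROM users WHERE name = 'bobbydigital';")
    .execute();

for (Row<String, String> row : result.getResult().getRows()) {
    // with CQL, the column names match the CQL column names
    int weight = row.getColumns().getColumnByName("weight").getIntegerValue();
    System.out.println("weight = " + weight);
}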
I got this fixed by adding setCqlVersion
this.astyanaxContext = new AstyanaxContext.Builder()
    .forCluster("ClusterName")
    .forKeyspace(keyspace)
    .withAstyanaxConfiguration(
        new AstyanaxConfigurationImpl()
            .setCqlVersion("3.0.0")
            .setDiscoveryType(
And adding WITH COMPACT STORAGE while creating the table.
I have data available to me in CSV files. Each CSV is different from the others, i.e. the column names are different. For example, in FileA the unique identifier is called ID but in FileB it is called UID. Similarly, in FileA the amount is called AMT but in FileB it is called CUST_AMT. The meaning is the same but the column names are different.
I want to create a general solution for saving this varying data from CSV files into a DB table. The solution must take into consideration additional formats that may become available in future.
Is there a best approach for such a scenario?
There are many solutions to this problem. But I think the easiest might be to generate a mapping from each input file format to a combined row format. You could create a configuration file that has column name to database field name mappings, and create a program that, given a CSV and a mapping file, can insert all the data into the database.
However, you would still have to alter the table for every new column you want to add.
More design work would require more details on how the data will be used after it enters the database.
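A minimal sketch of that mapping idea (all column and file names here are hypothetical, and the mappings would normally be loaded from the configuration file rather than hard-coded):
Java:
import java.util.HashMap;
import java.util.Map;

public class ColumnMapping {
    public static void main(String[] args) {
        // Per-format mapping from CSV header names to database column names
        Map<String, String> fileAMapping = new HashMap<>();
        fileAMapping.put("ID", "customer_id");
        fileAMapping.put("AMT", "amount");

        Map<String, String> fileBMapping = new HashMap<>();
        fileBMapping.put("UID", "customer_id");
        fileBMapping.put("CUST_AMT", "amount");

        // Given a parsed CSV header, translate each column to its DB column
        String[] header = {"UID", "CUST_AMT"};
        for (String column : header) {
            System.out.println(column + " -> " + fileBMapping.get(column));
        }
    }
}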
I can think of the "Chain of Responsibility" pattern for the start of the execution: you read the header and let the chain of responsibility pick the appropriate parser for that file.
Code could look like this:
interface Parser {
    // returns true if this parser recognizes this format
    boolean accept(String fileHeader);

    // each parser can convert a line in the file into insert parameters
    // to be used with a PreparedStatement
    Object[] getInsertParameters(String row);
}
This allows you to add new file formats by adding a new Parser object to the chain.
You would first initialize the Chain as follows:
List<Parser> parserChain = new ArrayList<Parser>();
parserChain.add(new ParserImplA());
parserChain.add(new ParserImplB());
....
Then you will use it as follows:
// read the header row from file
Parser getParser(String header) {
    for (Parser parser : parserChain) {
        if (parser.accept(header)) {
            return parser;
        }
    }
    throw new IllegalArgumentException("Unrecognized format!");
}
Then you can create a prepared statement for inserting a row into the table.
Processing each row of the file would then be:
preparedStatement.execute(parser.getInsertParameters(row));
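Since PreparedStatement.execute() does not accept the parameters directly, the Object[] returned by the parser would first be bound to the statement; a minimal sketch, assuming the preparedStatement, parser, and row from above:
Java:
Object[] params = parser.getInsertParameters(row);
for (int i = 0; i < params.length; i++) {
    // JDBC parameter indexes are 1-based
    preparedStatement.setObject(i + 1, params[i]);
}
preparedStatement.execute();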
I have a query in which I am fetching the sum of a column from a table formed through a subquery.
Something along the lines of:
select temp.mySum as MySum from (select sum(myColumn) as mySum from mySchema.myTable) temp;
However, I don't want MySum to be null when temp.mySum is null. Instead, I want MySum to carry the string 'value not available' when temp.mySum is null.
Thus I tried to use coalesce in the below manner:
select coalesce(temp.mySum, 'value not available') as MySum from (select sum(myColumn) as mySum from mySchema.myTable) temp;
However above query is throwing error message:
Message: The data type, length or value of argument "2" of routine "SYSIBM.COALESCE" is incorrect.
This message is because of a datatype incompatibility between arguments 1 and 2 of the coalesce function, as mentioned in the answer below.
However, I am using this query directly in Jasper to send values to an Excel report:
hashmap.put("myQuery", this.myQuery);
JasperReport jasperReportOne = JasperCompileManager.compileReport(this.reportJRXML);
JasperPrint jasperPrintOne = JasperFillManager.fillReport(jasperReportOne, hashmap, con);
jprintList.add(jasperPrintOne);
JRXlsExporter exporterXLS = new JRXlsExporter();
exporterXLS.setParameter(JRExporterParameter.JASPER_PRINT_LIST, jprintList);
exporterXLS.exportReport();
In the Excel sheet, I am getting a null value when the value is not available. I want to show 'value not available' in the report.
How can this be achieved?
Thanks for reading!
The arguments to coalesce must be compatible. That's not the case if the first is numeric (as mySum probably is) and the second is a string.
For example, the following PubLib doco has a table indicating compatibility between various types, at least for the DB2 I work with (the mainframe one) - no doubt there are similar restrictions for the iSeries and LUW variants as well.
You can try something like coalesce(temp.mySum, 0) instead or convert the first argument to a string with something like char(). Either of those should work since they make the two arguments compatible.
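For example, applying the second suggestion to the query above would look something like this (a sketch only; the char() formatting of the number may need adjusting):
SQL:
select coalesce(char(temp.mySum), 'value not available') as MySum
from (select sum(myColumn) as mySum from mySchema.myTable) temp;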