DbUnit: NoSuchColumnException and case sensitivity

Before posting this I googled a bit, searched the dbunit-user archives, and skimmed the DbUnit bug list, but I haven't found what I'm looking for.
Unfortunately, answers here did not help me either.
I'm using DbUnit 2.4.8 with MySQL 5.1.x to populate in setUp some JForum tables.
The issue first appears on the jforum_users table, created by this script:
CREATE TABLE `jforum_users` (
`user_id` INT(11) NOT NULL AUTO_INCREMENT,
`user_active` TINYINT(1) NULL DEFAULT NULL,
`username` VARCHAR(50) NOT NULL DEFAULT '',
`user_password` VARCHAR(32) NOT NULL DEFAULT '',
[...]
PRIMARY KEY (`user_id`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=14
Executing REFRESH as the database setup operation raises the following exception:
org.dbunit.dataset.NoSuchColumnException: jforum_users.USER_ID -
(Non-uppercase input column: USER_ID) in ColumnNameToIndexes cache
map. Note that the map's column names are NOT case sensitive.
at org.dbunit.dataset.AbstractTableMetaData.getColumnIndex(AbstractTableMetaData.java:117)
at org.dbunit.operation.AbstractOperation.getOperationMetaData(AbstractOperation.java:89)
at org.dbunit.operation.RefreshOperation.execute(RefreshOperation.java:98)
at org.dbunit.AbstractDatabaseTester.executeOperation(AbstractDatabaseTester.java:190)
at org.dbunit.AbstractDatabaseTester.onSetup(AbstractDatabaseTester.java:103)
at net.jforum.dao.generic.AbstractDaoTest.setUpDatabase(AbstractDaoTest.java:43)
I looked at the AbstractTableMetaData.java sources, and nothing seems (statically) wrong.
The method
private Map createColumnIndexesMap(Column[] columns)
uses
columns[i].getColumnName().toUpperCase()
in writing map keys.
And then the method
public int getColumnIndex(String columnName)
uses
String columnNameUpperCase = columnName.toUpperCase();
Integer colIndex = (Integer) this._columnsToIndexes.get(columnNameUpperCase);
to read the value back from the map.
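For reference, the relevant logic condenses to something like this (a paraphrased sketch of the 2.4.8 sources, not a verbatim copy), which suggests the exception can only fire when the column is genuinely absent from the metadata DbUnit fetched:
import java.util.HashMap;
import java.util.Map;
import org.dbunit.dataset.Column;
import org.dbunit.dataset.NoSuchColumnException;

class ColumnIndexCache {
    private final Map<String, Integer> columnsToIndexes = new HashMap<String, Integer>();

    // Write path: keys go into the map upper-cased...
    void createColumnIndexesMap(Column[] columns) {
        for (int i = 0; i < columns.length; i++) {
            columnsToIndexes.put(columns[i].getColumnName().toUpperCase(), i);
        }
    }

    // ...and the read path upper-cases the lookup too, so a pure case mismatch
    // between dataset and metadata should never miss the cache on its own.
    int getColumnIndex(String tableName, String columnName) throws NoSuchColumnException {
        Integer colIndex = columnsToIndexes.get(columnName.toUpperCase());
        if (colIndex == null) {
            // the branch that produced the exception in the stack trace above
            throw new NoSuchColumnException(tableName, columnName.toUpperCase());
        }
        return colIndex.intValue();
    }
}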
I really can't understand what's going on...
Can anybody help me, please?
Edit after @limc's latest answer
I'm using a PropertiesBasedJdbcDatabaseTester to configure my DbUnit environment, as follows:
Properties dbProperties = new Properties();
dbProperties.load(new FileInputStream(testConfDir + "/db.properties"));
System.setProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_DRIVER_CLASS,
        dbProperties.getProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_DRIVER_CLASS));
System.setProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_CONNECTION_URL,
        dbProperties.getProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_CONNECTION_URL));
System.setProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_USERNAME,
        dbProperties.getProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_USERNAME));
System.setProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_PASSWORD,
        dbProperties.getProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_PASSWORD));
System.setProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_SCHEMA,
        dbProperties.getProperty(PropertiesBasedJdbcDatabaseTester.DBUNIT_SCHEMA));

databaseTester = new PropertiesBasedJdbcDatabaseTester();
databaseTester.setSetUpOperation(getSetUpOperation());
databaseTester.setTearDownOperation(getTearDownOperation());
IDataSet dataSet = getDataSet();
databaseTester.setDataSet(dataSet);
databaseTester.onSetup();

I had a similar problem to this today (using the IDatabaseTester interface added in v2.2 against MySQL) and spent several hours tearing my hair out over it. The OP is using a PropertiesBasedJdbcDatabaseTester, whilst I was using its 'parent' JdbcDatabaseTester.
DBUnit has a FAQ answer related to this NoSuchColumnException (specific to MySQL), but it looks like an oversight to me that it neglects to mention that each connection drawn from the interface's getConnection() method will have a separate config. In fact I'd go so far as to call it a bug, given the wording of the various bits of documentation I looked at today and the names of the classes involved (e.g. DatabaseConfig, yet per Connection?).
Anyway, in sections of code like setup/teardown (example below) you don't even provide the Connection object, so there was no way I could see to set the config there:
dbTester.setDataSet(beforeData);
dbTester.onSetup();
In the end I just extended JdbcDatabaseTester to @Override the getConnection() method and inject the MySQL-specific configuration each time:
import org.dbunit.database.DatabaseConfig;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.ext.mysql.MySqlDataTypeFactory;
import org.dbunit.ext.mysql.MySqlMetadataHandler;

class MySQLJdbcDatabaseTester extends org.dbunit.JdbcDatabaseTester {
    public MySQLJdbcDatabaseTester(String driverClass, String connectionUrl, String username,
            String password, String schema) throws ClassNotFoundException {
        super(driverClass, connectionUrl, username, password, schema);
    }

    @Override
    public IDatabaseConnection getConnection() throws Exception {
        IDatabaseConnection connection = super.getConnection();
        // Every connection gets its own DatabaseConfig, so the MySQL-specific
        // settings have to be re-applied here each time.
        DatabaseConfig dbConfig = connection.getConfig();
        dbConfig.setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new MySqlDataTypeFactory());
        dbConfig.setProperty(DatabaseConfig.PROPERTY_METADATA_HANDLER, new MySqlMetadataHandler());
        return connection;
    }
}
And finally all the errors went away.

I have reason to believe the problem stems from the user_id column being the record ID. I had a similar problem in the past where the row ID was generated natively by SQL Server. I'm not at my work desk now, but try this solution to see if it helps: http://old.nabble.com/case-sensitivity-on-tearDown--td22964025.html
UPDATE - 02-03-11
I have a working solution here. Here's my test code:-
MySQL Script
CREATE TABLE `jforum_users` (
`user_id` INT(11) NOT NULL AUTO_INCREMENT,
`user_active` TINYINT(1) NULL DEFAULT NULL,
`username` VARCHAR(50) NOT NULL DEFAULT '',
PRIMARY KEY (`user_id`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=14
dbunit-test.xml Test File
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
<jforum_users user_id="100" username="First User" />
</dataset>
Java Code
Class.forName("com.mysql.jdbc.Driver");
Connection jdbcConnection = DriverManager.getConnection("jdbc:mysql://localhost:8889/test", "", "");
IDatabaseConnection con = new DatabaseConnection(jdbcConnection);
InputStream is = getClass().getClassLoader().getResourceAsStream("dbunit-test.xml");
IDataSet dataSet = new FlatXmlDataSetBuilder().build(is);
DatabaseOperation.CLEAN_INSERT.execute(con, dataSet);
con.close();
I didn't get any errors, and the row was added into the database.
Just FYI, I did try a REFRESH and that works fine without errors too:-
DatabaseOperation.REFRESH.execute(con, dataSet);
I'm using DBUnit 2.4.8 and MySQL 5.1.44.
Hope this helps.

I came here looking for an answer to this problem. For me the problem was the Hibernate naming strategy. I realised this was the problem because show-sql was true in Spring's application.properties:
spring.jpa.show-sql=true
I could see the generated table SQL, and the field name was 'FACT_NUMBER' instead of the 'factNumber' I had in my DbUnit XML.
This was solved by forcing the default naming strategy (ironically the default seems to be org.hibernate.cfg.ImprovedNamingStrategy, which puts in the '_'):
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.DefaultNamingStrategy
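Alternatively (a hedged sketch; the entity and field are modeled on the 'factNumber' above, not actual code from the project), you can pin the physical column name on the entity so that no naming strategy can rewrite it:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Fact {
    @Id
    private Long id;

    // Explicit column name: the naming strategy can no longer turn
    // factNumber into FACT_NUMBER, so it matches the DbUnit XML again.
    @Column(name = "factNumber")
    private String factNumber;
}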

When I got this error, it was because my schema had a not-null constraint on a column, but that column was missing from my data file.
For example, my table definition had
<table name="mytable">
<column>id</column>
<column>entity_type</column>
<column>deleted</column>
</table>
and my dataset contained
<dataset>
<mytable id="100" entity_type="2"/>
</dataset>
I have a not-null constraint on the deleted column, so when I run the test I get the NoSuchColumnException.
When I change the dataset to
<mytable id="100" entity_type"2" deleted="0"/>
I get past the Exception.

I faced this problem too, and the reason was that the DTD of my dataset file described a table differently from the table I wanted to insert data into.
So check that the table you want to insert data into has the same columns as your DTD file.
When I deleted from the DTD file the column that was not in the target table, the problem disappeared.

Well, in my case it was a CSV file encoded in UTF-8 with a BOM character at the beginning. I was using Notepad to create the CSV files. Use Notepad++ to avoid saving the BOM character.
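If you cannot control the editor, a leading BOM can also be stripped when reading the file; a minimal sketch (the file name is a placeholder):
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BomStrip {
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(
                Paths.get("dataset.csv"), StandardCharsets.UTF_8)) {
            String line = reader.readLine();
            // A UTF-8 BOM decodes to a single U+FEFF char at the start of the first line
            if (line != null && !line.isEmpty() && line.charAt(0) == '\uFEFF') {
                line = line.substring(1);
            }
            System.out.println(line);
        }
    }
}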

I had the same problem, then figured out I had used a different column name in my DB than the one inside my XML file.
I'm sure your problem is user_id vs. USER_ID.

I've just stumbled over this error message myself.
I had to extend an old piece of code: I needed to add a new column to several tables. In one of my entities I had forgotten to create a setter for this column. So you might check your entities to see whether they are "complete".
Sometimes it might be as simple as that.

OK, I faced the same trouble and found the solution. The way we were creating the test data was wrong for the kind of data set we were using: we were using an XML data set (XmlDataSet), for which the following format is correct. If you are using FlatXmlDataSet, the format is different. For more explanation, read the link provided below. The XML should be in the following format:
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
<table>
<column>id</column>
<column>name</column>
<column>department</column>
<column>startDate</column>
<column>endDate</column>
<row>
<value>999</value>
<value>TEMP</value>
<value>TEMP DEPT</value>
<value>2113-10-13</value>
<value>2123-10-13</value>
</row>
</table>
</dataset>
If you wish to know more go to this link : http://dbunit.sourceforge.net/components.html
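In other words, the loader has to match the file format. A minimal sketch of the two loaders (file names are placeholders):
import java.io.FileInputStream;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.dataset.xml.XmlDataSet;

// Nested <table>/<column>/<row> format, as in the example above:
IDataSet structured = new XmlDataSet(new FileInputStream("structured-dataset.xml"));

// Flat, attribute-based format (<jforum_users user_id="100" .../>):
IDataSet flat = new FlatXmlDataSetBuilder().build(new FileInputStream("flat-dataset.xml"));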

Related

How to get an entity-related object correctly

I have approximately the following entity:
public class Article {
private String name;
private Long fileId;
}
As you can see, it has a field fileId that contains the id of the associated file, which is also an entity. However, the file does not know anything about the Article, so the only thing that connects them is the fileId field in the Article; therefore they must be explicitly linked so as not to get lost. Now, to get a linked file, I have to make a separate query to the database for each Article. That is, if I want to get a list of 10 Articles, I need to query the database 10 times to get each file by its id. This looks very inefficient. How can this be done better? I use jOOQ, so I can't use JPA, and so I can't substitute a file object for the fileId field. Any ideas?
I'm going to make an assumption that your underlying tables are something like this:
create table file (
  id bigint primary key,
  content blob
);
create table article (
  name text,
  file_id bigint references file
);
in which case you can fetch all 10 files into memory using a single query like this:
Result<?> result =
ctx.select()
.from(ARTICLE)
.join(FILE).on(ARTICLE.FILE_ID.eq(FILE.ID))
.fetch();
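From that single Result you can then read the article fields and the file content together, with no per-article query (a sketch, assuming the generated ARTICLE and FILE classes used above):
import org.jooq.Record;

for (Record r : result) {
    String name    = r.get(ARTICLE.NAME);
    byte[] content = r.get(FILE.CONTENT);  // already fetched by the join
    // ... build your Article plus its file here
}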

ISIS: Problems with Blob/Clob field serialization

Edit: Solution: Upgrading to ISIS 1.17.0 and setting the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false solved the first two problems.
I am using Apache ISIS 1.16.2 and I try to store Blob/Clob content in a MariaDB database (v10.1.35). Therefore, I use the DB connector org.mariadb.jdbc.mariadb-java-client (v2.3.0) and in the code the @Persistent annotation, as shown in many examples and in the ISIS documentation.
Using the code below, I just get one single column named content_name (in which the Blob object is serialized in binary form) instead of the three columns content_name, content_mimetype and content_bytes.
This is the Document class with the Blob field content:
@PersistenceCapable(identityType = IdentityType.DATASTORE)
@DatastoreIdentity(strategy = IdGeneratorStrategy.IDENTITY, column = "id")
@DomainObject(editing = Editing.DISABLED, autoCompleteRepository = DocumentRepository.class, objectType = "Document")
@Getter
// ...
public class Document implements Comparable<Document> {

    @Persistent(
            defaultFetchGroup = "false",
            columns = {
                    @Column(name = "content_name"),
                    @Column(name = "content_mimetype"),
                    @Column(name = "content_bytes",
                            jdbcType = "BLOB",
                            sqlType = "LONGVARBINARY")
            })
    @Nonnull
    @Column(allowsNull = "false")
    @Property(optionality = Optionality.MANDATORY)
    private Blob content;

    // ...

    @Column(allowsNull = "false")
    @Property
    private Date created = new Date();

    public Date defaultCreated() {
        return new Date();
    }

    @Column(allowsNull = "true")
    @Property
    @Setter
    private String owner;

    // ...
}
This creates the following schema for the DomainObject class Document with just one column for the Blob field:
CREATE TABLE `document` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`content_name` mediumblob,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`owner` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Normally, the class org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisBlobMapping of the ISIS framework should do the mapping. But it seems that this Mapper is somehow not involved...
1. Question: How do I get the Blob field split up into the three columns (as described above and in many demo projects)? Even if I switch to HSQLDB, I still get only one column, so this is probably not a MariaDB issue.
2. Question: If I use a Blob/Clob field in a class that inherits from another DomainObject class, I often get an org.datanucleus.exceptions.NucleusException (stack trace below) and I cannot make head or tail of it. What are potential pitfalls when dealing with inheritance? Why am I getting this exception?
3. Question: I need to store documents belonging to domain objects (as you might have guessed). The proper way of doing so would be to store the documents in a file system tree instead of the database (which also has some default size limitations for object data) and reference the files from the object. In the DataNucleus documentation I found the extension serializeToFileLocation that should do exactly that. I tried it by adding the line @Extension(vendorName = "datanucleus", key = "serializeToFileLocation", value = "document-repository") to the Blob field, but nothing happened. So my question is: is this DataNucleus extension compatible with Apache Isis?
If this extension conflicts with Isis, would it be possible to have a javax.jdo.listener.StoreLifecycleListener or org.apache.isis.applib.AbstractSubscriber that stores the Blob on a file system before persisting the domain object to database and restoring it before loading? Are there better solutions available?
That's it for now. Thank you in advance! ;-)
The stack trace to question 2:
... (other Wicket related stack trace)
Caused by: org.datanucleus.exceptions.NucleusException: Creation of SQLExpression for mapping "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" caused error
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:199)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
... 81 more
Caused by: org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" since not supported
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:189)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
// <-- 8 times the following lines
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
at org.apache.wicket.Component.onBeforeRender(Component.java:3916)
// -->
at org.apache.wicket.Page.onBeforeRender(Page.java:801)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.Component.internalPrepareForRender(Component.java:2236)
at org.apache.wicket.Page.internalPrepareForRender(Page.java:242)
at org.apache.wicket.Component.render(Component.java:2325)
at org.apache.wicket.Page.renderPage(Page.java:1018)
at org.apache.wicket.request.handler.render.WebPageRenderer.renderPage(WebPageRenderer.java:124)
at org.apache.wicket.request.handler.render.WebPageRenderer.respond(WebPageRenderer.java:195)
at org.apache.wicket.core.request.handler.RenderPageRequestHandler.respond(RenderPageRequestHandler.java:175)
at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:895)
at org.apache.wicket.request.RequestHandlerStack.execute(RequestHandlerStack.java:64)
at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:265)
at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:222)
at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:293)
at org.apache.wicket.protocol.http.WicketFilter.processRequestCycle(WicketFilter.java:261)
at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:203)
at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:284)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.isis.core.webapp.diagnostics.IsisLogOnExceptionFilter.doFilter(IsisLogOnExceptionFilter.java:52)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
// ... some Jetty stuff
at java.lang.Thread.run(Thread.java:748)
After some research, I think the problems in questions 1 and 2 are related to this ISIS bug report #1902.
In short: The datanucleus extension plugin resolving mechanism does not seem to find the ISIS value type adapters and therefore cannot know how to serialize ISIS's Blob/Clob types.
According to the aforementioned ISIS bug report, this problem is fixed in 1.17.0, so I am trying to upgrade from 1.16.2 to this version (which introduced many other problems, but that will be an extra topic).
For question 3 I have found Minio, which basically addresses my problem, but it is a bit oversized for my needs. I will keep looking for other solutions to store Blobs/Clobs on the local file system and will keep Minio in mind...
UPDATE:
I upgraded my project to ISIS version 1.17.0. It solved my problem in question 1 (now I get three columns for a Blob/Clob object).
The problem in question 2 (NucleusException) is not solved by the upgrade. I figured out that it is only thrown when returning a list of DomainObjects with Blob/Clob field(s), i.e. when rendered as a standalone table. If I go directly to an object's entity view, no exception is thrown and I can see/modify/download the Blob/Clob content.
In the meantime, I wrote my own DataNucleus plugin that stores the Blobs/Clobs as files on a file system.
UPDATE 2
I found a solution for circumventing the org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisClobMapping" since not supported. It seems to be a problem with bulk-loading (but I do not know any details).
Deactivating bulk-load via the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false (which is initially set to true by ISIS archetypes) solved the problem.

How do I set schema to dataSource? [duplicate]

Is it possible? Can I specify it in the connection URL? How do I do that?
I know this was answered already, but I just ran into the same issue trying to specify the schema to use for the liquibase command line.
Update
As of JDBC driver version 9.4, you can specify the URL with the new currentSchema parameter, like so:
jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
This appears to be based on an earlier patch:
http://web.archive.org/web/20141025044151/http://postgresql.1045698.n5.nabble.com/Patch-to-allow-setting-schema-search-path-in-the-connectionURL-td2174512.html
That patch proposed URLs like so:
jdbc:postgresql://localhost:5432/mydatabase?searchpath=myschema
As of version 9.4, you can use the currentSchema parameter in your connection string.
For example:
jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
If it is possible in your environment, you could also set the user's default schema to your desired schema:
ALTER USER user_name SET search_path to 'schema'
I don't believe there is a way to specify the schema in the connection string. It appears you have to execute
set search_path to 'schema'
after the connection is made to specify the schema.
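For example (a minimal JDBC sketch; the connection details are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/mydatabase", "user", "password");
     Statement stmt = conn.createStatement()) {
    stmt.execute("SET search_path TO myschema");  // applies to this session only
    // ... subsequent statements resolve unqualified names against myschema
}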
DataSource – setCurrentSchema
When instantiating a DataSource implementation, look for a method to set the current/default schema.
For example, on the PGSimpleDataSource class call setCurrentSchema.
org.postgresql.ds.PGSimpleDataSource dataSource = new org.postgresql.ds.PGSimpleDataSource ( );
dataSource.setServerName ( "localhost" );
dataSource.setDatabaseName ( "your_db_here_" );
dataSource.setPortNumber ( 5432 );
dataSource.setUser ( "postgres" );
dataSource.setPassword ( "your_password_here" );
dataSource.setCurrentSchema ( "your_schema_name_here_" ); // <----------
If you leave the schema unspecified, Postgres defaults to a schema named public within the database. See the manual, section 5.9.2, The Public Schema. To quote that manual:
In the previous sections we created tables without specifying any schema names. By default such tables (and other objects) are automatically put into a schema named “public”. Every new database contains such a schema.
I submitted an updated version of a patch to the PostgreSQL JDBC driver to enable this a few years back. You'll have to build the PostgreSQL JDBC driver from source (after adding the patch) to use it:
http://archives.postgresql.org/pgsql-jdbc/2008-07/msg00012.php
http://jdbc.postgresql.org/
In Go with "sql.DB" (note the search_path with underscore):
postgres://user:password@host/dbname?sslmode=disable&search_path=schema
Don't forget SET SCHEMA 'myschema', which you could use in a separate Statement.
SET SCHEMA 'value' is an alias for SET search_path TO value. Only one
schema can be specified using this syntax.
And since 9.4, and possibly earlier versions of the JDBC driver, there is support for the setSchema(String schemaName) method.
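For example (a small sketch; dataSource stands for your configured PostgreSQL DataSource):
try (java.sql.Connection conn = dataSource.getConnection()) {
    conn.setSchema("myschema"); // JDBC 4.1 API; honored by the PostgreSQL driver from 9.4 on
    // ... statements created from conn now resolve unqualified names in myschema
}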

Proper way to insert record with unique attribute

I am using spring, hibernate and postgreSQL.
Let's say I have a table looking like this:
CREATE TABLE test
(
  id integer NOT NULL,
  name character(10),
  CONSTRAINT test_unique UNIQUE (id)
)
So whenever I insert a record, the attribute id should be unique.
I would like to know which is the better way to insert a new record (in my Spring Java app):
1) Check whether a record with the given id exists, and insert only if it doesn't, something like this:
if (testDao.find(id) == null) {
    Test test = new Test(id, name);
    testDao.create(test);
}
2) Call the create method straight away and catch the DataAccessException if it is thrown:
Test test = new Test(id, name);
try {
    testDao.create(test);
} catch (DataAccessException e) {
    System.out.println("Error inserting record");
}
I consider the 1st way more appropriate, but it means more processing for the DB. What is your opinion?
Thank you in advance for any advice.
Option (1) is subject to a race condition, where a concurrent session could create the record between checking for it and inserting it. This window is longer than you might expect, because the record might already be inserted by another transaction that has not yet committed.
Option (2) is better, but will result in a lot of noise in the PostgreSQL error logs.
The best way is to use PostgreSQL 9.5's INSERT ... ON CONFLICT ... support to do a reliable, race-condition-free insert-if-not-exists operation.
On older versions you can use a loop in plpgsql.
Both those options require use of native queries, of course.
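With Hibernate, such a native query could look like this (a sketch; the session handling is schematic, and the table/column names are the ones from the question):
// Race-condition-free insert-if-not-exists on PostgreSQL 9.5+ via native SQL
session.createSQLQuery("INSERT INTO test (id, name) VALUES (:id, :name) "
        + "ON CONFLICT (id) DO NOTHING")
       .setParameter("id", id)
       .setParameter("name", name)
       .executeUpdate();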
That depends on the source of your ID. If you generate it yourself, you can guarantee uniqueness and rely on catching an exception, e.g. by using a UUID: http://docs.oracle.com/javase/1.5.0/docs/api/java/util/UUID.html
Another way would be to let Postgres generate the ID using the SERIAL data type:
http://www.postgresql.org/docs/8.1/interactive/datatype.html#DATATYPE-SERIAL
If you have to take the ID from an untrusted source, do the prior check.

UTF-8 won't persist on Hibernate + MySQL

I'm trying to save some values in a MySQL database using Hibernate, but most Lithuanian characters won't get saved, including ąĄ čČ ęĘ ėĖ įĮ ųŲ ūŪ (they are saved as ?); however, šŠ žŽ do get saved.
If I do inserts manually, then those values are properly saved, so the problem is most likely in Hibernate configuration.
What I have tried so far:
hibernate.charset=UTF-8
hibernate.character_encoding=UTF-8
hibernate.use_unicode=true
---------
properties.put(PROPERTY_NAME_HIBERNATE_USE_UNICODE,
env.getRequiredProperty(PROPERTY_NAME_HIBERNATE_USE_UNICODE));
properties.put(PROPERTY_NAME_HIBERNATE_CHARSET,
env.getRequiredProperty(PROPERTY_NAME_HIBERNATE_CHARSET));
properties
.put(PROPERTY_NAME_HIBERNATE_CHARACTER_ENCODING,
env.getRequiredProperty(PROPERTY_NAME_HIBERNATE_CHARACTER_ENCODING));
---------
private void registerCharacterEncodingFilter(ServletContext aContext) {
    CharacterEncodingFilter cef = new CharacterEncodingFilter();
    cef.setForceEncoding(true);
    cef.setEncoding("UTF-8");
    aContext.addFilter("characterEncodingFilter", cef)
            .addMappingForUrlPatterns(null, true, "/*");
}
As described here,
I tried adding ?useUnicode=true&characterEncoding=utf-8 to the DB connection URL.
As described here,
I ensured that my DB is set to the UTF-8 charset: phpMyAdmin > information_schema > schemata
def db_name utf8 utf8_lithuanian_ci NULL
This is how I save into the DB:
//Controller
buildingService.addBuildings(schema.getBuildings());
List<Building> buildings = buildingService.getBuildings();
System.out.println("-----------");
for (Building b : schema.getBuildings()) {
System.out.println(b.toString());
}
System.out.println("-----------");
for (Building b : buildings) {
System.out.println(b.toString());
}
System.out.println("-----------");
//Service:
@Override
public void addBuildings(List<Building> buildings) {
for (Building b : buildings) {
getCurrentSession().saveOrUpdate(b);
}
}
The first set of println calls prints all Lithuanian characters correctly, while the second replaces most of them with ?.
EDIT: Added details
insert into buildings values (11,'ąĄčČęĘ', 'asda');
select short, hex(short) from buildings;
//Šalt. was inserted via hibernate
//letters are properly displayed:
ąĄčČęĘ | C485C484C48DC48CC499C498
MIF Šalt. | 4D494620C5A0616C742E
select address, hex(address) from buildings;
Šaltini? <...> | C5A0616C74696E693F20672E2031412C2056696C6E697573
//should contain "ų"
--------
show create table buildings;
buildings | CREATE TABLE `buildings` (
`id` int(11) NOT NULL,
`short` varchar(255) COLLATE utf8_lithuanian_ci DEFAULT NULL,
`address` varchar(255) COLLATE utf8_lithuanian_ci DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_lithuanian_ci
EDIT:
I did not find a proper solution, so I came up with a workaround. I ended up escaping/unescaping characters, storing them like this: \uXXXX.
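For the record, such a workaround can be as small as this (a sketch, assuming Apache Commons Lang on the classpath; not the exact code used):
import org.apache.commons.lang3.StringEscapeUtils;

// Before persisting: "ąĄčČęĘ" becomes "\u0105\u0104\u010D\u010C\u0119\u0118",
// plain ASCII that survives any connection or column encoding
String escaped = StringEscapeUtils.escapeJava("ąĄčČęĘ");

// After loading: turn the \uXXXX sequences back into real characters
String restored = StringEscapeUtils.unescapeJava(escaped);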
Let's verify that they were stored correctly... Please do SELECT col, HEX(col) ... to fetch some cell with Lithuanian characters. A correctly stored ą will show C485. The others should show various hex values of C4xx or C5xx. 3F is ?.
But, more importantly, the 4 characters (šŠ žŽ) do show. Š should be C5A0 if properly stored as utf8. However, I suspect you will see 8A, implying that the column in the table is really declared as CHARACTER SET latin1. (The 4 characters show up in the first column of my charset blog.)
Do SHOW CREATE TABLE to see how the column is defined. If it says latin1, then the problem is with the table definition, and you probably ought to start over.
You have to ensure that every component taking part in data entry uses UTF-8 encoding explicitly.
If you enter the values via a browser, make sure that the
page displaying the results has the following header:
Content-Type: text/html; charset=utf-8.
The input form is defined as follows:
<form action="submit" accept-charset="UTF-8">...</form>.
If you are creating String objects from byte array, make sure you
explicitly state the Charset in the constructor.
If your entry happens from a text file, that file has to be UTF-8
encoded.
If it is hardcoded directly in your code, then the source has to be
UTF-8 encoded.
The fact that your DB holds correct UTF-8 (two or more bytes for a special letter) is reassuring.
If you get one single ? for a special letter, a conversion from UTF-8 to some encoding that does not contain those letters was attempted. And that seems to be the case: the letters that are converted correctly are in the ISO-8859-1 or Windows-1252 range; the others are not.
Now ISO-8859-1, aka Latin-1, is the default HTTP encoding and the default in Java EE servers. You might like to do this before writing:
response.setCharacterEncoding("UTF-8");
Now, one problem with System.out.println is that it uses the system default encoding. Logging to a file with a logger is more informative, or debug and inspect the String and its char array.
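If you do want to check console output, you can at least pin its encoding (a small sketch; whether the characters render still depends on the terminal):
import java.io.PrintStream;

PrintStream utf8Out = new PrintStream(System.out, true, "UTF-8"); // throws UnsupportedEncodingException
utf8Out.println("ąĄ čČ ęĘ ėĖ įĮ ųŲ ūŪ");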
That the schema seemingly does work may be because the schema Strings come straight from a Java source file, and the editor encoding and javac compiler encoding differ. This can be checked by u-escaping the string literals in Java: "\u0105" instead of "ą".
Make a unit test that writes and reads from the database.
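Such a round-trip test could be as simple as this (a sketch; Building and buildingService are the ones from the question, the setter/getter names are assumed):
import static org.junit.Assert.assertEquals;
import java.util.Collections;
import org.junit.Test;

@Test
public void lithuanianCharactersSurviveRoundTrip() {
    // u-escaped per the advice above, so editor/compiler encoding cannot interfere
    String lithuanian = "\u0105\u0104\u010D\u010C\u0119\u0118"; // ąĄčČęĘ
    Building b = new Building();
    b.setShort(lithuanian);  // assumed setter for the 'short' column
    buildingService.addBuildings(Collections.singletonList(b));

    Building loaded = buildingService.getBuildings().get(0);
    assertEquals(lithuanian, loaded.getShort());
}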
