I'm working with an inherited MySQL DB schema that I can't change, and I am trying to use DataNucleus (JDO) with it.
I have the following in the schema:
CREATE TABLE `vendors` (
`vendor_id` INT(11) NOT NULL AUTO_INCREMENT,
...
`registration_method` ENUM('Company','Basic Upgrade','Website','Secure Admin','Unknown') NULL DEFAULT NULL,
...
PRIMARY KEY (`vendor_id`),
...
And I have defined a POJO to work with it:
@PersistenceCapable( table="vendors" )
public class VendorRec {
...
public static enum registration_method_enum {
Company( "Company" ), Basic_Upgrade( "Basic Upgrade" ), Website( "Website" ), Secure_Admin( "Secure Admin" ), Unknown( "Unknown" );
private String registration_method;
registration_method_enum( String registration_method ) {
this.registration_method = registration_method;
}
public String getRegistration_method() {
return this.registration_method;
}
};
...
@PrimaryKey
private Integer vendor_id;
...
private registration_method_enum registration_method;
...
I am using DataNucleus's JDO interface to retrieve this class:
Set<Integer> vendorIds = new HashSet<>();
...
Query q = persistenceManager.newQuery( VendorRec.class );
q.declareParameters( "java.util.Set vendorIds" );
q.setFilter( "vendorIds.contains( vendor_id )" );
q.setOrdering("vendor_fname ascending, vendor_lname ascending");
final Iterator<VendorRec> vendorIt = ( ( List<VendorRec> )q.execute( vendorIds ) ).iterator();
I get the following exception:
java.lang.IllegalArgumentException: No enum constant ca.vendorlink.app.schema.VendorRec.registration_method_enum.Secure Admin
...
Well duh! Of course there isn't an enum constant "registration_method_enum.Secure Admin"! Spaces are not allowed! But that's the DB definition I've inherited... :-(
Suggestions for fixing this problem? Is there a different way I should be defining the enum?
Many thanks in advance.
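One direction that sidesteps the mismatch entirely (just a sketch, not a DataNucleus-specific feature; the names below are made up) is to keep the persistent field as the raw String the column stores and convert to an enum at the POJO boundary:
public static enum RegistrationMethod {
    COMPANY( "Company" ), BASIC_UPGRADE( "Basic Upgrade" ), WEBSITE( "Website" ),
    SECURE_ADMIN( "Secure Admin" ), UNKNOWN( "Unknown" );

    private final String dbValue;

    RegistrationMethod( String dbValue ) { this.dbValue = dbValue; }

    public String getDbValue() { return dbValue; }

    // Reverse lookup: map the label stored in the ENUM column (spaces and all)
    // back to a legal Java constant.
    public static RegistrationMethod fromDbValue( String dbValue ) {
        for ( RegistrationMethod m : values() ) {
            if ( m.dbValue.equals( dbValue ) ) return m;
        }
        return null;
    }
}

// The persistent field stays a String, so the column value is stored and loaded verbatim
// and Enum.valueOf() is never asked to parse "Secure Admin".
private String registration_method;

public RegistrationMethod getRegistrationMethod() {
    return RegistrationMethod.fromDbValue( registration_method );
}

public void setRegistrationMethod( RegistrationMethod m ) {
    this.registration_method = ( m == null ) ? null : m.getDbValue();
}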
I need to write the following with JPA Criteria API:
p.id IS NULL AND nv.organizationalstat IN ('EMPLOYEE', 'FELLOW')
My code is:
List<String> corePredicateInList = Arrays.asList("EMPLOYEE", "FELLOW");
In<String> corePredicateIn = cb.in(nvRoot.get("organizationalstat"));
corePredicateInList.forEach(p -> corePredicateIn.value(p)); // Setting opts into IN
Predicate p1CorePredicate = cb.and(
cb.isNull(plans.get("id")),
cb.in(nvRoot.get("organizationalstat"), corePredicateIn)
);
The runtime error is
antlr.NoViableAltException: unexpected token: in
...
where ( ( ( ( ( ( generatedAlias0.id is null )
and ( generatedAlias1.organizationalstat in (:param0, :param1) in () ) )
or ( generatedAlias0.id is not null ) )
It looks like there's a double IN somewhere. There are no syntax errors in the code.
I can't do cb.in(List<String>) directly, that's wrong syntax. I have to go through in.value() as indicated in this thread.
Using the CriteriaBuilder.In you already built (pass it straight to cb.and instead of wrapping it in another cb.in):
Predicate p1CorePredicate = cb.and(cb.isNull(plans.get("id")),corePredicateIn);
Or using Expression.In
Predicate p1CorePredicate = cb.and(cb.isNull(plans.get("id")),
nvRoot.get("organizationalstat").in(corePredicateInList));
A solution that worked for me is to call in(List<String>) on the path expression (not cb.in), as follows:
List<String> corePredicateInList = Arrays.asList("EMPLOYEE", "FELLOW");
Predicate p1CorePredicate = cb.and(
cb.isNull(plans.get("id")),
nvRoot.get("organizationalstat").in(corePredicateInList)
);
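For completeness, a minimal end-to-end sketch of that working variant (the entity, the "plans" association and the entityManager variable are assumptions, not taken from the original code):
CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Nv> cq = cb.createQuery( Nv.class );            // Nv is a placeholder entity
Root<Nv> nvRoot = cq.from( Nv.class );
Join<Nv, Plan> plans = nvRoot.join( "plans", JoinType.LEFT ); // hypothetical association

List<String> corePredicateInList = Arrays.asList( "EMPLOYEE", "FELLOW" );

// p.id IS NULL AND nv.organizationalstat IN ('EMPLOYEE', 'FELLOW')
cq.where( cb.and(
        cb.isNull( plans.get( "id" ) ),
        nvRoot.get( "organizationalstat" ).in( corePredicateInList ) ) );

List<Nv> result = entityManager.createQuery( cq ).getResultList();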
After upgrading MapStruct from 1.2.0 to 1.3.1, I noticed that the annotation @Mapper(nullValueCheckStrategy = NullValueCheckStrategy.ALWAYS) is no longer in effect.
Is this a bug in the new MapStruct version?
Example:
The generated code is:
String id = getTestId( testId);
if ( id != null ) {
testCase.setTestCaseId( id );
}
else {
testCase.setTestCaseId( null );
}
while the expected code is:
String id = getTestId( testId);
if ( id != null ) {
testCase.setTestCaseId( id );
}
The behaviour has been made more consistent with the advent of NullValuePropertyMappingStrategy. I think that was mentioned in the release notes as well. Check out the documentation:
1: update methods (@MappingTarget)
https://mapstruct.org/documentation/stable/reference/html/#mapping-result-for-null-properties
2: regular (non update) methods
https://mapstruct.org/documentation/stable/reference/html/#checking-source-property-for-null-arguments
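To illustrate the distinction (the mapper and type names are invented for the example): nullValueCheckStrategy still controls the null checks on source properties, while for update methods the new nullValuePropertyMappingStrategy decides what happens to the target property when the source value is null:
@Mapper( nullValueCheckStrategy = NullValueCheckStrategy.ALWAYS,
         nullValuePropertyMappingStrategy = NullValuePropertyMappingStrategy.IGNORE )
public interface TestCaseMapper {

    // Regular method: source properties are null-checked before being applied.
    TestCase toTestCase( TestCaseSource source );

    // Update method: with IGNORE, a null source property leaves the existing target value
    // untouched instead of overwriting it with null (the 1.3 default is SET_TO_NULL).
    void updateTestCase( TestCaseSource source, @MappingTarget TestCase target );
}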
After upgrading from Hibernate 5.2.17 to 5.3.6, I get this error:
Caused by: org.h2.jdbc.JdbcSQLException: Schéma "ENHANCED" non trouvé
Schema "ENHANCED" not found; SQL statement:
call next value for enhanced.SequenceStyleGenerator [90079-197]
at org.h2.engine.SessionRemote.done(SessionRemote.java:623) ~[h2-1.4.197.jar:1.4.197]
at org.h2.command.CommandRemote.prepare(CommandRemote.java:85) ~[h2-1.4.197.jar:1.4.197]
at org.h2.command.CommandRemote.<init>(CommandRemote.java:51) ~[h2-1.4.197.jar:1.4.197]
at org.h2.engine.SessionRemote.prepareCommand(SessionRemote.java:493) ~[h2-1.4.197.jar:1.4.197]
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1247) ~[h2-1.4.197.jar:1.4.197]
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:76) ~[h2-1.4.197.jar:1.4.197]
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:304) ~[h2-1.4.197.jar:1.4.197]
at com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement(NewProxyConnection.java:567) ~[c3p0-0.9.5.2.jar:0.9.5.2]
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$1.doPrepare(StatementPreparerImpl.java:87) ~[hibernate-core-5.3.6.Final.jar:5.3.6.Final]
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172) ~[hibernate-core-5.3.6.Final.jar:5.3.6.Final]
The id field is annotated this way:
@Id
@GeneratedValue(generator = "enhanced.SequenceStyleGenerator")
@GenericGenerator(
    name = "enhanced.SequenceStyleGenerator",
    parameters = {
        // this value needs to be used when creating the sequence in "increment-by" clause.
        @Parameter(name = "increment_size", value = "10"),
        // default name : hibernate_sequence
        @Parameter(name = "prefer_sequence_per_entity", value = "false"),
        @Parameter(name = "optimizer", value = "pooled")
    },
    strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator")
@Column(name = "ID", nullable = false)
private Long id;
With Hibernate 5.2 it works as expected, but not anymore with Hibernate 5.3.
The migration guide here, https://github.com/hibernate/hibernate-orm/blob/5.3/migration-guide.adoc, does not mention any change to the sequence generators.
What could be the issue?
I've found that this is a new behavior of Hibernate 5.3:
In the method SequenceStyleGenerator.determineSequenceName, this code was added:
final Boolean preferGeneratorNameAsDefaultName = serviceRegistry.getService( ConfigurationService.class )
.getSetting( AvailableSettings.PREFER_GENERATOR_NAME_AS_DEFAULT_SEQUENCE_NAME, StandardConverters.BOOLEAN, true );
if ( preferGeneratorNameAsDefaultName ) {
final String generatorName = params.getProperty( IdentifierGenerator.GENERATOR_NAME );
if ( StringHelper.isNotEmpty( generatorName ) ) {
fallbackSequenceName = generatorName;
}
}
The new default behavior is to use the generator name as the sequence name. So there are two ways to migrate from Hibernate 5.2 to 5.3:
change the generator name to be the sequence name
revert to the Hibernate 5.2 behavior of not using the generator name, by setting hibernate.model.generator_name_as_sequence_name to false in the Hibernate config (or in the generator parameters)
Your generator name is interpreted as belonging to a specific schema; renaming it should fix this - avoid the dot.
Please check the manual Section 2.6.11 on how to name and parametrize generators: https://docs.jboss.org/hibernate/stable/orm/userguide/html_single/Hibernate_User_Guide.html
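For example, renaming the generator so that it contains no dot and matches the real sequence name might look like this (SEQ_MY_ENTITY is a made-up name; use whatever sequence actually exists in the schema). The alternative is keeping the old naming and setting hibernate.model.generator_name_as_sequence_name to false, as described above.
@Id
@GeneratedValue(generator = "SEQ_MY_ENTITY")
@GenericGenerator(
    name = "SEQ_MY_ENTITY", // no dot, so it is not read as schema-qualified, and under 5.3 it doubles as the sequence name
    strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
    parameters = {
        @Parameter(name = "increment_size", value = "10"),
        @Parameter(name = "optimizer", value = "pooled")
    })
@Column(name = "ID", nullable = false)
private Long id;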
Our project relies on Atomikos to provide lightweight transaction management. However, we find that it logs the DB user name and password in plain text during initialization.
E.g.
2015-10-15 16:43:01,106 [http-bio-8080-exec-4] INFO com.atomikos.jdbc.AtomikosDataSourceBean - AtomikosDataSoureBean 'LAB_Oracle': initializing with [ xaDataSourceClassName=oracle.jdbc.xa.client.OracleXADataSource, uniqueResourceName=LAB_Oracle, maxPoolSize=8, minPoolSize=1, borrowConnectionTimeout=30, maxIdleTime=60, reapTimeout=0, maintenanceInterval=60, testQuery=null, xaProperties=[URL=jdbc:oracle:thin:@***:1537:oocait01,user=***,password=**] loginTimeout=0]
Is there any configuration that can suppress the logging of this confidential information?
As far as configuration, you can set your threshold to WARN for log category com.atomikos.jdbc.AtomikosDataSourceBean. There are some examples for popular logging frameworks here. That would filter out that entire message.
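For instance, if Log4j 1.x happens to be the framework Atomikos picked up, the programmatic equivalent would be roughly the following (in practice you would normally put this in log4j.properties or log4j.xml instead):
// Raise the threshold for the data source category so the INFO-level
// initialization dump (which includes the xaProperties) is filtered out.
org.apache.log4j.Logger
        .getLogger( "com.atomikos.jdbc.AtomikosDataSourceBean" )
        .setLevel( org.apache.log4j.Level.WARN );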
If you only want to filter the confidential properties, you could create a subclass of AtomikosDataSourceBean and override the protected method printXaProperties(). Then you can filter out any confidential properties such as passwords.
package my.com.atomikos.jdbc;
import java.util.Enumeration;
import java.util.Properties;
public class AtomikosDataSourceBean extends com.atomikos.jdbc.AtomikosDataSourceBean {

    private static final long serialVersionUID = 1L;

    @Override
    protected String printXaProperties()
    {
        Properties xaProperties = getXaProperties();
        StringBuffer ret = new StringBuffer();
        if ( xaProperties != null ) {
            Enumeration<?> it = xaProperties.propertyNames();
            ret.append( "[" );
            boolean first = true;
            while ( it.hasMoreElements() ) {
                String name = (String) it.nextElement();
                if ( name.equals( "password" ) ) continue; // never print confidential entries
                if ( !first ) ret.append( "," );
                String value = xaProperties.getProperty( name );
                ret.append( name ).append( "=" ).append( value );
                first = false;
            }
            ret.append( "]" );
        }
        return ret.toString();
    }
}
Since Atomikos automatically detects the logging framework, which could vary depending on how you test, package and deploy your application, using a subclass is probably more foolproof.
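Wiring it up is then just a matter of instantiating (or configuring) the subclass instead of the stock bean; a rough sketch with placeholder connection values:
// All values below are placeholders; the only point is that the bean class is the subclass.
my.com.atomikos.jdbc.AtomikosDataSourceBean ds = new my.com.atomikos.jdbc.AtomikosDataSourceBean();
ds.setUniqueResourceName( "LAB_Oracle" );
ds.setXaDataSourceClassName( "oracle.jdbc.xa.client.OracleXADataSource" );

Properties xaProps = new Properties();
xaProps.setProperty( "URL", "jdbc:oracle:thin:@dbhost:1521:SID" ); // hypothetical URL
xaProps.setProperty( "user", "dbuser" );
xaProps.setProperty( "password", "secret" );
ds.setXaProperties( xaProps );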
I submitted a pull request that could make it into version 4. We'll see.
I have a feeling I'm going about this all wrong. But anyway.
I have an SQL database with an intentionally denormalised table that I've constructed to make this task easier for me, so I can just grab everything from one table.
What I have is a table of pairs, something like this:
user_lo | user_hi | something_else | other stuff
1000 | 1234 | 1231251654 | 123
1050 | 1100 | 1564654 | 45648
1080 | 1234 | 456444894648 | 1
And so on.
So for my Neo4j graph DB, I want each user id as a node; the other columns aren't too important but will basically end up as properties on the relationships.
I only want one node for each user, so my feeling is that if I do something like this:
while (rs.next()) {
    Node node_lo = db.createNode();
    node_lo.setProperty("user_id", rs.getInt(1));
    Node node_hi = db.createNode();
    node_hi.setProperty("user_id", rs.getInt(2));
}
then when we add the node with user_id 1234 for the second time, it will just create another new node, but what I want is for it to grab the existing node instead of creating a duplicate, so I can add the relationship to 1080 in this case.
So what is the way to do this?
Have you looked at CREATE UNIQUE?
If you can't use Cypher, maybe you can use unique nodes?
Use an index to search, and if no result is found, create a new one.
Index<Node> userIndex = graphDatabaseService.index().forNodes( "UserNodes" );
IndexHits<Node> userNodes = userIndex.get( "id", 1234 );
Node userNode;
if ( !userNodes.hasNext() ) {
    // Create the user node and register it in the index so the next lookup finds it
    userNode = graphDatabaseService.createNode();
    userNode.setProperty( "id", 1234 );
    userIndex.add( userNode, "id", 1234 );
} else {
    userNode = userNodes.next();
}
Is this the type of operation you are looking for?
You'll probably want to use the UniqueNodeFactory provided by Neo4j.
public Node getOrCreateUserWithUniqueFactory( String username, GraphDatabaseService graphDb )
{
    UniqueFactory<Node> factory = new UniqueFactory.UniqueNodeFactory( graphDb, "UserNodes" )
    {
        @Override
        protected void initialize( Node created, Map<String, Object> properties )
        {
            created.setProperty( "id", properties.get( "id" ) );
        }
    };
    return factory.getOrCreate( "id", username );
}
Normalize your SQL tables to look like nodes and relationships. Then, with Cypher in your migration, you can make the migration rerunnable with something like:
start a = node:node_auto_index('id:"<PK_Value>"')
delete a
create a = {id: "<PK_VALUE>", etc}
for nodes, and for the relationships, since you should have a many-to-many middle table:
start LHS = node:node_auto_index('id:"<LHS_PK>"'),
RHS = node:node_auto_index('id:"<RHS_PK>"')
create unique LHS-[:<relType> {<rel props>}]->RHS
Now you will end up with no duplicates and can rerun the migration as often as you like.
Use this function, where:
ID is the key you want to check for an existing node
Type is the type of the node (the label)
The function will create the node if necessary and return it; you can then add more properties.
public static Node getOrCreateUserWithUniqueFactory( final long ID, GraphDatabaseService graphDb, final String Type )
{
    UniqueFactory<Node> factory = new UniqueFactory.UniqueNodeFactory( graphDb, Type )
    {
        @Override
        protected void initialize( Node created, Map<String, Object> properties )
        {
            created.addLabel( DynamicLabel.label( Type ) );
            created.setProperty( "ID", properties.get( "ID" ) );
        }
    };
    return factory.getOrCreate( "ID", ID );
}
Using a Cypher query, you can create a unique node with the following syntax:
CYPHER 2.0 merge (x:node_auto_index{id:1})
When making a REST call, you can do batch insertion like this:
$lsNodes[] = array(
'method'=> 'POST', 'to'=> '/cypher',
'body' => array(
'query' => 'CYPHER 2.0 merge (x:node_auto_index{id_item:{id}})',
'params' => array('id'=>1)
),
'id'=>0
);
$sData = json_encode($lsNodes);
Similarly, for creating relationships in a batch request, do the following:
$lsNodes[] = array(
'method'=> 'POST', 'to'=> '/cypher',
'body' => array(
'query' => 'start a=node:node_auto_index(id={id1}), b = node:node_auto_index(id={id2}) create unique a-[:have {property:30}]-b;',
'params' => array(
'id1' => 1, 'id2'=> 2
)
),
'id' => 0
);
$sData = json_encode($lsNodes);