Use a second JpaConnection to a different database in Keycloak 19 - java

I'm trying to set up a Keycloak instance which connects to a separate, second database for its user management. I see a lot of JDBC examples online, but none with JPA.
I tried to set up my Keycloak to use a separate JpaConnectionProvider so I can use JPA in my CustomUserStorageProvider. For this, I created MySecondJpaConnectionProviderFactory, a subclass of DefaultJpaConnectionProviderFactory, which should create my own JpaConnectionProvider for use in the CustomUserStorageProvider. But when obtaining the JpaConnectionProvider, strange things happen in the factory, even though the connection to the second database can be obtained successfully.
(Diagram: CustomUserStorageProvider uses a JpaConnectionProvider created by MySecondJpaConnectionProviderFactory, which extends DefaultJpaConnectionProviderFactory.)
First, when calling migration(...), the factory fails to locate the LiquibaseJpaUpdaterProviderFactory (line 342): the lookup returns null, causing a NullPointerException on line 344.
    void migration(MigrationStrategy strategy, boolean initializeEmpty, String schema, File databaseUpdateFile, Connection connection, KeycloakSession session) {
        JpaUpdaterProvider updater = session.getProvider(JpaUpdaterProvider.class, LiquibaseJpaUpdaterProviderFactory.PROVIDER_ID);
        JpaUpdaterProvider.Status status = updater.validate(connection, schema);
        if (status == JpaUpdaterProvider.Status.VALID) {
            logger.debug("Database is up-to-date");
    (....)
Since I don't need the migration, I copied the class's methods into my own class instead of extending it and removed the migration part, but then Keycloak is not able to find necessary classes when creating the entity manager factory on line 286:
    Caused by: java.lang.ClassNotFoundException: org.jboss.jandex.DotName
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
Code in DefaultJpaConnectionProviderFactory:
    (....)
    properties.put(AvailableSettings.CLASSLOADERS, classLoaders);
    emf = JpaUtils.createEntityManagerFactory(session, unitName, properties, jtaEnabled);
    addSpecificNamedQueries(session, connection);
    logger.trace("EntityManagerFactory created");
    (....)
All in all, looking at these suspicious errors, I'm wondering if I've run into a dead end here: is using a second JPA connection supposed to be possible at all? Am I missing something?
Or should I rather stick to doing it all with JDBC?
The setup:
I'm running Keycloak 19 in a Docker container which I pre-build according to this official guide. All my SPIs are recognized by Keycloak, and the JDBC connection to the second DB is working.
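For what it's worth, one way to sidestep DefaultJpaConnectionProviderFactory (and its migration and class-loading machinery) is to bootstrap a second EntityManagerFactory yourself from a persistence unit bundled in the provider JAR, and hand out EntityManagers in the CustomUserStorageProvider. A minimal sketch, assuming a hypothetical persistence unit named "user-store" declared in the provider's META-INF/persistence.xml (JDBC URL, dialect, and entity classes go there):

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    // Sketch: lazily create one shared EntityManagerFactory for the second
    // database. The persistence unit name "user-store" is hypothetical and
    // must match a persistence.xml bundled with the provider JAR.
    public final class SecondDbJpa {

        private static volatile EntityManagerFactory emf;

        private SecondDbJpa() {
        }

        public static EntityManager createEntityManager() {
            if (emf == null) {
                synchronized (SecondDbJpa.class) {
                    if (emf == null) {
                        emf = Persistence.createEntityManagerFactory("user-store");
                    }
                }
            }
            return emf.createEntityManager();
        }
    }

The trade-off is that Keycloak's Liquibase updater never touches this unit, so the second schema has to be managed outside Keycloak.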

Related

How can I use Hibernate/JPA to tell the DB who the user is before inserts/updates/deletes?

Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the begin transaction or some other way to execute SQL or a stored procedure to tell the db what user is executing the inserts/updates/deletes that are about to happen in the transaction before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions to be reused by multiple application users. Users authenticate themselves to a middle-tier application, which uses a single identity to log in to the database and maintains all the user connections. In this model, application users are users who are authenticated to the middle tier of an application, but who are not known to the database. ... In these situations, the application typically connects as a single database user and all actions are taken as that user. Because all user sessions are created as the same user, this security model makes it very difficult to achieve data separation for each user. These applications can use the CLIENT_IDENTIFIER attribute to preserve the real application user identity through to the database.
From the Spring/JPA side of things, see section 8.2 at the link below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in certain ways that aren't easily supported using standard connection properties. One example would be to set certain session properties in the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter explains how to use a ConnectionPreparer to accomplish this. The example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
    @Component
    @Aspect
    public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
    {
        @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
        public Connection prepare(Connection connection) throws SQLException
        {
            String webAppUser = ...; // from Spring Security Context or wherever
            CallableStatement cs = connection.prepareCall(
                    "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
            cs.setString(1, webAppUser);
            cs.execute();
            cs.close();
            return connection;
        }
    }
Enable AspectJ via a Configuration class:
    @Configuration
    @EnableAspectJAutoProxy
    public class SomeConfigurationClass
    {
    }
Note that while this is hidden away in a section specific to Spring's Oracle extensions, it seems to me that nothing in section 8.2 (unlike 8.1) is Oracle-specific (other than the statement executed), and the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL.
Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't use this approach with:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
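For instance, a Postgres flavour of the preparer could issue SET ROLE instead. A sketch, assuming the same ConnectionPreparer interface as in the Oracle example above (from the Spring Data JDBC Extensions project); note that SET ROLE takes an identifier rather than a bind parameter, so the role name must be validated or come from a trusted source:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch: Postgres variant of the ConnectionPreparer above.
    public class SetRoleConnectionPreparer implements ConnectionPreparer
    {
        @Override
        public Connection prepare(Connection connection) throws SQLException
        {
            String role = lookupCurrentRole(); // hypothetical: from the security context
            try (Statement st = connection.createStatement()) {
                // SET ROLE cannot be parameterized; 'role' must be validated upstream
                st.execute("SET ROLE " + role);
            }
            return connection;
        }

        private String lookupCurrentRole() {
            return "app_user"; // placeholder
        }
    }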
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and just get the login ID of the user from the Security Principal (e.g. Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
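To make that concrete, a minimal entity sketch (class and column names are illustrative; Envers needs the hibernate-envers dependency on the classpath):

    import java.util.Date;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;
    import org.hibernate.annotations.CreationTimestamp;
    import org.hibernate.annotations.UpdateTimestamp;
    import org.hibernate.envers.Audited;

    // Illustrative entity: @Audited makes Envers record every insert/update/delete
    // in a revision table (here PURCHASEORDER_AUD). To record which application
    // user made each revision, add a custom revision entity/listener.
    @Entity
    @Audited
    public class PurchaseOrder
    {
        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "updated_by")
        private String updatedBy; // fill from the Spring Security principal

        @CreationTimestamp
        @Temporal(TemporalType.TIMESTAMP)
        private Date createdAt;

        @UpdateTimestamp
        @Temporal(TemporalType.TIMESTAMP)
        private Date updatedAt;

        // getters and setters omitted
    }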
I think what you are looking for is a @TransactionalEventListener:
    @Service
    public class TransactionalListenerService {

        @Autowired
        SessionFactory sessionFactory;

        @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
        public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
            // use sessionFactory to run a stored procedure
        }
    }
Registering a regular event listener is done via the @EventListener annotation. If you need to bind it to the transaction use @TransactionalEventListener. When you do so, the listener will be bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
    @Service
    public class MyTransactionalService {

        @Autowired
        private ApplicationEventPublisher applicationEventPublisher;

        @Transactional
        public void insertEntityMethod(Entity entity) {
            // insert
            // Publish event after insert operation
            applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
            // more processing
        }
    }
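Note that CreationEvent is not a Spring class; a minimal definition to go with the listener might be (a sketch):

    import org.springframework.context.ApplicationEvent;

    // Minimal custom event carrying the entity that was just inserted.
    public class CreationEvent<T> extends ApplicationEvent
    {
        private final T entity;

        public CreationEvent(Object source, T entity) {
            super(source);
            this.entity = entity;
        }

        public T getEntity() {
            return entity;
        }
    }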
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since we can't honor the required semantics. It is however possible to override that behaviour by setting the fallbackExecution attribute of the annotation to true.
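In code form that is just one extra attribute on the listener shown earlier (sketch):

    // Variant of the listener above: with fallbackExecution = true it also runs
    // when no transaction is active, instead of being skipped.
    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT, fallbackExecution = true)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }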

Keycloak exception "Cannot access delegate without a transaction"

Context: Keycloak 1.9.1.Final and newer versions
Hi,
I have created a custom user federation provider which is a simple variation of Keycloak's classpath property federation provider example. Instead of reading the usernames from a property file, I fetch them from an external web service.
My trouble is that sometimes I get the following exception when trying to authenticate with a test user:
    Failed authentication: java.lang.IllegalStateException: Cannot access delegate without a transaction
        at org.keycloak.models.cache.infinispan.UserCacheSession.getDelegate(UserCacheSession.java:78)
        at org.keycloak.models.cache.infinispan.UserCacheSession.addUser(UserCacheSession.java:442)
        at com.example.keycloak.MyFederationProvider.getUserModel(MyFederationProvider.java:324)
        at com.example.keycloak.MyFederationProvider.getUserByUsername(MyFederationProvider.java:206)
        at org.keycloak.models.UserFederationManager.getUserByUsername(UserFederationManager.java:237)
        at org.keycloak.models.utils.KeycloakModelUtils.findUserByNameOrEmail(KeycloakModelUtils.java:273)
        at org.keycloak.authentication.authenticators.browser.AbstractUsernameFormAuthenticator.validateUserAndPassword(AbstractUsernameFormAuthenticator.java:127)
        at org.keycloak.authentication.authenticators.browser.UsernamePasswordForm.validateForm(UsernamePasswordForm.java:56)
        at org.keycloak.authentication.authenticators.browser.UsernamePasswordForm.action(UsernamePasswordForm.java:49)
        at org.keycloak.authentication.DefaultAuthenticationFlow.processAction(DefaultAuthenticationFlow.java:84)
        at org.keycloak.authentication.AuthenticationProcessor.authenticationAction(AuthenticationProcessor.java:759)
        at org.keycloak.services.resources.LoginActionsService.processFlow(LoginActionsService.java:359)
        at org.keycloak.services.resources.LoginActionsService.processAuthentication(LoginActionsService.java:341)
        at org.keycloak.services.resources.LoginActionsService.authenticateForm(LoginActionsService.java:386)
        ...
I can't figure out why this exception occurs. I looked at the org.keycloak.models.cache.infinispan.UserCacheSession class and I could see that the exception is thrown when the transactionActive variable is false, but I don't understand under what conditions it is set to false.
I tried forcing a transaction with the KeycloakModelUtils.runJobInTransaction() method, or by adding begin() and commit() around the addUser() call, but it didn't solve the issue (I got a new error informing me that the transaction is already active).
Have you experienced this exception before, and do you know how to avoid it?
Thanks a lot
I think I have found my error (or at least a workaround).
The getInstance() method of my user federation provider was always returning the same object (a singleton). I updated it to create a new provider every time the method is called.
This seems to solve the issue.
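For reference, the shape of that fix against the Keycloak 1.x user federation SPI (a sketch; the remaining factory methods are omitted):

    // Sketch: create a fresh provider per call instead of caching a singleton.
    // Each KeycloakSession carries its own transaction, so a provider cached
    // across sessions ends up holding a stale session whose transaction is no
    // longer active, which triggers "Cannot access delegate without a transaction".
    public class MyFederationProviderFactory implements UserFederationProviderFactory {

        @Override
        public UserFederationProvider getInstance(KeycloakSession session, UserFederationProviderModel model) {
            return new MyFederationProvider(session, model); // no shared instance
        }

        // getId(), syncAllUsers(), etc. omitted
    }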

How do I do nested transactions in hibernate using only one connection?

Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class that does some business logic, called DoBusinessLogicSevice.
The DoBusinessLogicSevice opens a new transaction and, using a couple of collaborators, does loads of stuff.
Problem to solve: Regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
    TransactionDefinition transactionDefinition = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    new TransactionTemplate(platformTransactionManager, transactionDefinition).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // do stuff
        }
    });
Everything works fine. When a part of DoBusinessLogicSevice throws an unexpected exception, its transaction is rolled back, but the AuditHttpCommunicationInterceptor manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new DB connection, so for every DoBusinessLogicSevice call I need two DB connections.
Basically, I want to know the solution to the problem: how to make TransactionTemplate "suspend" the current transaction and reuse the connection for a new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by Spring), but I cannot do that because I cannot access all the HTTP fields in there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform, and I don't believe every database platform is capable of handling nested transactions.
I really don't see what the big deal is with taking a connection from a pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
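For completeness, requesting NESTED propagation through TransactionTemplate looks like the snippet below (a sketch; with plain JDBC this maps to a savepoint on the caller's connection, so no second connection is taken, but the nested work is rolled back together with the outer transaction, which conflicts with the audit requirement here):

    // Sketch: PROPAGATION_NESTED reuses the caller's connection via a savepoint.
    TransactionDefinition nestedDefinition =
            new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_NESTED);
    new TransactionTemplate(platformTransactionManager, nestedDefinition).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // audit writes here are only as durable as the outer transaction
        }
    });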

Dynamically configuring datasources

The situation is this: a table used in a query (call it SUMMARY) was formerly in the same server and database where I run all the application's queries (call them server1 and DB1). But recently the SUMMARY table was deleted from this database, which makes it necessary to consult other server/database combinations.
The server name and database to use for access to the SUMMARY table are parameterized in a table for this purpose. These values depend on which database you are connected to, like this: for example, if I'm in database DB1 on server1, the parameters will be server21 and DB21, whereas if someone reads the parameters from DB5 on server1, they will be server16 and DB16.
On that side I have no problem, because I have the SQL query for the two parameters ready to return the name of the server and the database to consult in each case. This query is needed to obtain the server name and database name from which to dynamically generate the datasource to connect to.
The problem, and the topic of this entry, is whether anyone has ever had to dynamically configure the datasource used by hibernate.properties, since this is usually a single, fixed value, and in this case it must allow changes in order to read the SUMMARY table (using the parameters retrieved by my SQL query) only in this specific case, while all other database operations must be performed using the original connection properties.
That is: what I need is to dynamically generate the datasource based on the parameters coming from the query, so approaches that require knowing beforehand how many and which connections are possible should be discarded, because they are not viable for my problem.
The application specifications are:
Database engine: SQL Server 2005
Prog. Language: Java 5.0
Frameworks: Spring 2.0.4, Hibernate 3.0.5
App. Server: WAS 6.1
Thanks in advance to anyone who has this knowledge and is willing to share it.
You can use a ConnectionProvider in Hibernate to decide how the connection used by a session is obtained. We use something like this in our application:
    public Connection getConnection() throws SQLException {
        DataSource ds = (DataSource) BeanFactory.getBean("dataSource" + UserInfo.getDSName());
        return ds.getConnection();
    }
UserInfo is a class that stores stuff in a ThreadLocal. This code will choose a datasource from Spring depending on the name that was passed in the ThreadLocal. What we do is set the name of the datasource we want to use before opening the session (the actual logic is a little more complicated, as it depends on user preferences and other things).
You can do something like that to choose which database to connect to.
You can check the javadocs for the ConnectionProvider interface. Make your own implementation, set the hibernate.connection.provider_class property in your hibernate configuration file to point to your class, and you're done.
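A sketch of such an implementation against the Hibernate 3.x ConnectionProvider interface (check the exact method set for your version; BeanFactory and UserInfo are the application helpers from the snippet above):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.Properties;
    import javax.sql.DataSource;
    import org.hibernate.HibernateException;
    import org.hibernate.connection.ConnectionProvider;

    // Sketch: route each session's connection to a Spring datasource chosen by
    // a name held in a ThreadLocal (see UserInfo above).
    public class RoutingConnectionProvider implements ConnectionProvider {

        public void configure(Properties props) throws HibernateException {
            // nothing to configure; datasources are resolved per call
        }

        public Connection getConnection() throws SQLException {
            DataSource ds = (DataSource) BeanFactory.getBean("dataSource" + UserInfo.getDSName());
            return ds.getConnection();
        }

        public void closeConnection(Connection conn) throws SQLException {
            conn.close();
        }

        public void close() throws HibernateException {
            // no pooled resources of our own to release
        }

        public boolean supportsAggressiveRelease() {
            return false;
        }
    }

Then set hibernate.connection.provider_class to the fully qualified name of this class.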
I guess it depends on the number of database/server combinations and how frequently they are used, but if there are a lot of databases/servers and usage frequency is low, using plain JDBC without Hibernate and datasources might be an option. I don't think Hibernate is meant for situations like that.
Or extend org.apache.commons.dbcp.BasicDataSource:
    public class MyDataSource extends BasicDataSource {

        private void init() {
            username = ...
            password = ...
            url = ...
        }

        @Override
        protected synchronized DataSource createDataSource() throws SQLException {
            init();
            return super.createDataSource();
        }
    }

Google App Engine (Java) + Spring managed PersistenceManager

I've got kind of a problem with JDO persistence of a list of just-retrieved objects.
What I want to do is to:
Fetch list of "Orders"
Modify one property "status"
Make bulk update of "Orders"
What I've got so far is "Object with id ... is managed by a different Object Manager".
But wait, I haven't faced such a problem without Spring!
I tried to debug it like this:
    List<Orderr> orders = orderDao.findByIdAll(ordersKeys);
    for (Orderr o : orders) {
        System.out.println(JDOHelper.getPersistenceManager(o).hashCode());
        // hashcode is 1524670
        o.setSomething(somevalue);
    }
    orderDao.makePersistentAll(orders); // hashcode inside is 31778523
makePersistentAll does nothing but:
    try {
        System.out.println(getPersistenceManager().hashCode());
        getPersistenceManager().makePersistentAll(entities);
    } finally {
        getPersistenceManager().close();
    }
All my DAOs extend JdoDaoSupport. The PMF is injected and managed by Spring.
Finally, here is the question: why is the persistence manager closed after findByIdAll? Or why do I get a new persistence manager instance? My findByIdAll method doesn't call close on the persistence manager, of course.
Of course if I call makePersistent for each "order" it works well. But it breaks the layering of business and database logic...
UPD
Just found out that all calls to makePersistentAll stopped working after migrating to the Spring-managed PersistenceManager. Before Spring I used the plain old PMF.get() helper and everything was shiny!
If your app remains live in response to an HTTP request for longer than 30 seconds, it will be killed. Part of the mode of operation of GAE is that your apps are not long-lived. At all.
Though you wouldn't do this on a site of your own, you'll have to get used to having only short-term access to your DB session manager. A lot of time is sometimes needed to re-open it for every transaction, but that's how GAE makes the process scalable. If you really have a lot of traffic, it can run your application in parallel on several servers.
This is kind of magic: every time I ask a question here, I find the answer to it within 24 hours of posting.
Of course a factory, by definition, always creates a new pm instance. Now I keep a reference to my old pm (like I did before the Spring JDO DAOs) and everything is OK.
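In code, the working shape is roughly this (a sketch; PMF is the GAE helper mentioned above, and Orderr, ordersKeys, and setSomething come from the question):

    // Sketch: load and re-save with the SAME PersistenceManager so the objects
    // stay attached to one ObjectManager for the whole unit of work.
    public void updateOrders(List<Key> ordersKeys, String somevalue) {
        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            List<Orderr> orders = new ArrayList<Orderr>();
            for (Key key : ordersKeys) {
                orders.add(pm.getObjectById(Orderr.class, key));
            }
            for (Orderr o : orders) {
                o.setSomething(somevalue);
            }
            pm.makePersistentAll(orders); // same pm that loaded the objects
        } finally {
            pm.close();
        }
    }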
