Bulk update of users in Keycloak - Java

I have a task to change the status of users in the IDM. The solution I chose is naive: looping over each user and calling Keycloak's REST API.
However, I've noticed that this takes a lot of time. I thought that something like a bulk update (the SQL equivalent) might solve the issue, but I couldn't find one in Keycloak's API.
Does anyone know how to fix this? Thanks for your help!

Do you have access to Keycloak's database? If so, you can update users' data with SQL statements. The schema is fairly straightforward to understand; I've done bulk updates this way before.
What do you mean by "status"? If you mean the "enabled" status, your update will look like this:
UPDATE user_entity SET enabled = <value> WHERE <your conditions>;
AFAIK, there's no way to bulk update through REST or the admin console.
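Until a bulk endpoint exists, one practical mitigation is to parallelize the per-user REST calls. A minimal sketch, assuming a hypothetical updateUserStatus method that would wrap the actual Keycloak admin call; the names and pool size are illustrative, not Keycloak API:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BulkStatusUpdate {
    // Hypothetical stand-in for the real REST call, e.g. a PUT to
    // /admin/realms/{realm}/users/{id} via the admin client.
    static void updateUserStatus(String userId) {
        // real HTTP call would go here
    }

    // Runs the per-user updates on a fixed-size pool instead of a
    // sequential loop; returns the number of completed updates.
    static int updateAll(List<String> userIds, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger done = new AtomicInteger();
        for (String id : userIds) {
            pool.submit(() -> {
                updateUserStatus(id);
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

This only spreads the same N calls over several connections; the total work is unchanged, so it helps with latency, not with server load.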
Good luck!

Related

Using a cache as a layer in front of a database

I'm working on a back-end service that is asynchronous in nature. That is, we have multiple jobs that are run asynchronously, and their results are written to a record.
This record is basically a class wrapping a HashMap of results (keyed by job_id).
The thing is, I don't want to calculate or know in advance how many jobs are going to run (if I knew, I could cache.invalidate() the key once all the jobs have completed).
Instead, I'd like to have the following scheme:
Set an expiry for new records (i.e. expireAfterWrite)
On expiry, write (actually upsert) the record to the database
If a cache miss occurs, load() is called to fetch the record from the database (if not found, create a new one)
The problem:
I tried using Caffeine, but the problem is that records aren't expired at exactly the time they are supposed to be. I then read this SO answer about Guava's cache, and I guess a similar mechanism applies to Caffeine as well.
So the problem is that a record can "wait" in the cache for quite a while, even though it is already complete. Is there a way to overcome this issue? That is, is there a way to "encourage" the cache to invalidate expired items?
That led me to question my solution. Would you consider my approach good practice?
P.S. I'm willing to switch to other caching solutions, if necessary.
You could have a look at Ehcache with write-behind. It certainly takes more setup effort, but it works quite well.
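If you stay with Caffeine, attaching a scheduler (Caffeine.newBuilder()...scheduler(Scheduler.systemScheduler()), available since roughly version 2.8, if I recall correctly) makes expired entries get removed promptly instead of lazily on access. The underlying idea can be sketched with just the JDK; onExpire is a hypothetical callback standing in for your database upsert:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

// Minimal sketch: each entry gets a dedicated eviction task scheduled
// at exactly its expiry time, and the eviction handler performs the
// write-behind upsert. Note: overwriting a key does not reset the
// original timer in this naive version.
public class WriteBehindCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final BiConsumer<K, V> onExpire; // e.g. (k, v) -> upsert(v)
    private final long ttlMillis;

    public WriteBehindCache(long ttlMillis, BiConsumer<K, V> onExpire) {
        this.ttlMillis = ttlMillis;
        this.onExpire = onExpire;
    }

    public void put(K key, V value) {
        map.put(key, value);
        scheduler.schedule(() -> {
            V v = map.remove(key);
            if (v != null) onExpire.accept(key, v); // write-behind
        }, ttlMillis, TimeUnit.MILLISECONDS);
    }

    public V get(K key) { return map.get(key); }

    public void close() { scheduler.shutdown(); }
}
```

The point of the sketch is only that prompt expiry requires an active scheduler; a real cache library handles the timer bookkeeping, re-puts, and load() for you.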

Alternative to checking database value in a while loop

I have a scenario where I check for a specific value in the database every 10 seconds or so, and if the value is YES, I execute a bunch of shell scripts from a Java application.
Now, the value in the database is only updated to YES once in a while, when a user submits a job on a web page. Therefore, running a while loop to check this value in the database seems like very bad design, and I would like to implement a much cleaner approach using listeners (the Observer design pattern).
What would such an implementation look like? Are there any examples I can follow?
Yes, there is a much better way: MySQL's binary log (binlog). Reading it is how master-slave sync is done in a MySQL cluster.
You can either write your own logic on top of https://github.com/shyiko/mysql-binlog-connector-java, which delivers every change event on a table,
or use https://github.com/zendesk/maxwell to read events from a particular table; whenever a value changes, check whether it matches your condition and execute the script or Java application, instead of running it as a cron job.
The general idea is to use DB triggers: register a DB listener on the Java side and let the database notify you when an event has happened.
Please review the solutions proposed in How to implement a db listener in Java.
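Even if the implementation still polls under the hood, you can hide the loop behind an observer so the rest of the application just registers callbacks. A minimal sketch, assuming a hypothetical condition supplier that wraps the actual JDBC query for the YES value:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Polls a condition on a schedule and notifies registered listeners
// when it holds; the Supplier would wrap the real database check.
public class JobStatusWatcher {
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Supplier<Boolean> condition;

    public JobStatusWatcher(Supplier<Boolean> condition) {
        this.condition = condition;
    }

    public void addListener(Runnable listener) { listeners.add(listener); }

    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            if (condition.get()) listeners.forEach(Runnable::run);
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() { scheduler.shutdown(); }
}
```

The shell-script execution then becomes just another listener, and swapping the polling supplier for a binlog- or trigger-based source later doesn't touch the callers.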

Is there an equivalent class/API in Solr like IndexingOperationListener in elasticsearch?

I need to capture create/index/delete events in Solr, and I understand this is possible in Elasticsearch using IndexingOperationListener. Is there an equivalent in Solr? There is SolrEventListener, but I haven't been able to find out whether it provides create/index/delete notifications; it isn't clear from any of the sources I've tried.
Any suggestions, please?
This is the nearest equivalent. The idea is to add an event listener to each handler; Solr has separate handlers for each operation, such as query, update, and delete.

How to make a database listener with Java?

Greetings all
I want to build something like a trigger or a listener (I don't know which) that will watch a specific database table and, for each new record inserted into that table, run some Java code. That is, it should detect that a new record was inserted and, if possible, get its data.
I need some guidance on how this can be accomplished.
I am using Spring-Hibernate-PostgreSQL
This is what LISTEN/NOTIFY was created for.
The only drawback is that you will need some kind of background thread that polls the database regularly to see whether any notifications are available.
You can also use the code from the Postgres wiki as a starting point.
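That background thread could look like the sketch below. The fetch supplier is a hypothetical stand-in; with the PostgreSQL JDBC driver you would unwrap the connection to org.postgresql.PGConnection and drain getNotifications() there:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Background poller for LISTEN/NOTIFY: periodically drains pending
// notifications and hands each payload to a handler.
public class NotificationPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Supplier<List<String>> fetchNotifications;
    private final Consumer<String> handler;

    public NotificationPoller(Supplier<List<String>> fetchNotifications,
                              Consumer<String> handler) {
        this.fetchNotifications = fetchNotifications;
        this.handler = handler;
    }

    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(() -> {
            for (String payload : fetchNotifications.get()) {
                handler.accept(payload); // e.g. react to an inserted row
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() { scheduler.shutdown(); }
}
```

On the database side you would pair this with a trigger that runs NOTIFY on insert, so the payload can carry the new row's key.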
I assume you mean that the DB content is added through your Hibernate code.
If so, consult this previous answer of mine for how to set up Hibernate Event Listeners with Spring.
Otherwise, a-horse-with-no-name's answer is probably your best bet.
You could add an Interceptor to your Hibernate configuration to detect save events.

Way to know table is modified

There are two different processes, developed in Java, running independently.
If either process modifies the table, can I get some notification? My objective is to keep an object always in sync with a table in the database: if any modification happens to the table, I want to update the object.
Does the database provide any facility like this?
We use SQL Server and have triggers that fire when a table is modified and call an external binary. The binary sends a TIBCO Rendezvous message to notify other applications that the table has been updated.
However, I'm not a huge fan of this solution - Much better to control writing to your table through one "custodian" process and have other applications delegate to that. To enforce this you could change permissions on your table so that only your custodian process can write to the database.
The other advantage of this approach is being able to provide a caching layer within your custodian process to cater for common access patterns. Granted that a DBMS performs caching anyway, but by offering it at the application layer you will have more control / visibility over it.
No, the database doesn't provide such a service. You have to query it periodically to check for modifications, or use a JMS solution to send notifications from one application to another.
You could add a timestamp column (last_modified) to the tables and check it periodically for updates, or use sequence numbers that are incremented on updates (similar in concept to optimistic locking).
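The timestamp-column idea can be sketched as below; fetchLastModified is a hypothetical supplier standing in for a query such as SELECT MAX(last_modified) FROM your_table:

```java
import java.sql.Timestamp;
import java.util.function.Supplier;

// Remembers the most recent last_modified value it has seen and
// reports whether the table changed since the previous check.
public class TableChangeDetector {
    private final Supplier<Timestamp> fetchLastModified;
    private Timestamp lastSeen;

    public TableChangeDetector(Supplier<Timestamp> fetchLastModified) {
        this.fetchLastModified = fetchLastModified;
    }

    // Call periodically; returns true when the table was modified
    // since the last call. The first successful check counts as a
    // change, which is where you would do the initial sync.
    public synchronized boolean poll() {
        Timestamp latest = fetchLastModified.get();
        if (latest != null && (lastSeen == null || latest.after(lastSeen))) {
            lastSeen = latest;
            return true; // table changed: resync the object here
        }
        return false;
    }
}
```

This catches updates but not deletes, and it relies on every writer maintaining last_modified, which is usually enforced with a trigger.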
You could use JBoss Cache, which provides update mechanisms.
One way you could do this: enclose your database statement in a method that returns true when it completes successfully, and keep a flag in scope in your code so you can check whether the table has been modified whenever you need to.
If you're willing to take the hack approach, and your database stores tables as files (e.g., MySQL), you could always have something check the modification time of the files on disk to see if a table has changed.
Of course, with databases like Oracle, where tables are assigned to tablespaces and the tablespaces are what have storage on disk, it won't work.
(Yes, I know this is a bad approach; that's why I said it's a hack. But we don't know all of the requirements, and if he needs something quick, without rewriting the whole application, this would technically work for some databases.)
