Keycloak export for a big database is taking too long - java

We are running some tests using Keycloak 3.4.3 and exporting a database with 10k users. This is the code we use:
import org.keycloak.exportimport.ExportImportConfig;
import org.keycloak.exportimport.ExportImportManager;

// Configure a directory ("dir") export with 500 users per file
ExportImportConfig.setAction(ExportImportConfig.ACTION_EXPORT);
ExportImportConfig.setDir(backupFolderPath);
ExportImportConfig.setProvider(ExportImportConfig.PROVIDER_DEFAULT);
ExportImportConfig.setUsersExportStrategy(ExportImportConfig.DEFAULT_USERS_EXPORT_STRATEGY);
ExportImportConfig.setUsersPerFile(500);

// Run the export within the current Keycloak session
ExportImportManager eiManager = new ExportImportManager(keycloakSession);
eiManager.runExport();
Each backup takes about 1 minute per generated file, no matter how many users we put in that file (we tried the default of 50 and went up to 500).
Are we missing some config that could help improve that time?
P.S. We also checked the Keycloak JIRA, but the only issue we found is KEYCLOAK-2413, and the fix for it was to use the "dir" export strategy, which we are already using. It is also a very old question, asked about Keycloak 1.
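For comparison, the same export can be triggered through system properties at server startup; a sketch matching the settings above (the path is a placeholder):

./standalone.sh -Dkeycloak.migration.action=export \
    -Dkeycloak.migration.provider=dir \
    -Dkeycloak.migration.dir=/path/to/backup \
    -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES \
    -Dkeycloak.migration.usersPerFile=500

If the timing is the same this way, that would rule out our embedding code as the bottleneck.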

Related

FileNet Java API retention period not updated

I've been working on a project that uses FileNet for multiple parts of the system.
I've been asked to remove a boatload of files. The reason I was asked is that the assumption was made that we could remove the data through a Java application.
So far so good, honestly. I've been able to use code left by former project members to create a delete application. However, I've run into a single remaining problem: the retention date.
When I delete a file, it displays the error message: Content Engine cannot delete or move content because the retention period for the item has not yet expired. Current time: 9/16/21 8:43 AM; Storage period: 20-12-99 1:00.
I've written code that changes the retention date, yet the error above still appears. I've double-checked the retention date property on the specific file, and it has been changed.
I've looked through all the properties of the file and have not spotted anything else that includes this date.
So my question is: what am I missing? Is this something from FileNet?
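For reference, a minimal sketch of the kind of retention update and delete we attempt, assuming the standard FileNet P8 Java API (the property name, objectStore, and docId are assumptions):

import com.filenet.api.constants.RefreshMode;
import com.filenet.api.core.Document;
import com.filenet.api.core.Factory;
import com.filenet.api.util.Id;

// objectStore and docId are assumed to be obtained elsewhere.
Document doc = Factory.Document.fetchInstance(objectStore, new Id(docId), null);

// Clear the retention date ("CmRetentionDate" is assumed to be the governing property).
doc.getProperties().putValue("CmRetentionDate", (java.util.Date) null);
doc.save(RefreshMode.REFRESH);

// delete() only marks the object; the following save() commits the deletion.
doc.delete();
doc.save(RefreshMode.NO_REFRESH);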
The retention period is probably additionally set on the storage device, and the storage device is not aligned with FileNet.
What type of storage backend are you using?
Please check this doc https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=objects-retention-limitations-constraints
and this one https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=objects-security-permissions-required-setting-modifying-retention
Make sure that the principal you are using for the API calls has sufficient permissions.
Also check this page https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=sweeps-deleting-objects-sweep and this one https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=sweeps-updating-object-retention for an alternative way to delete objects without the API and a custom app.

Tomcat 8.5 takes too long to recognize new content

I have the following problem: I write an Excel file to C:\Tomcat85\webapps\MyWebApp\Excel\myExcel.xls.
As soon as my Java application finishes writing the file, it triggers a download so the user can work with it. This gives a nasty 404 error.
If I wait a few seconds and reload the page, it downloads fine (adding a five-second sleep in Java works just as well).
So what I conclude is that Tomcat takes 5 seconds to recognize that this new Excel file exists, and only then is able to serve it.
Is there any way to make Tomcat perform this task faster? Maybe some configuration in web.xml to treat that "/Excel/" folder differently.
Windows 10 64-bit, Tomcat 8.5, Java 7 (I could try Java 8, but I don't think it will make a difference).
Some code:
new ExcelExport(remoteHandle, context).execute(outFileName, outMessage);
// Thread.sleep(5000);
httpContext.wjLoc = formatLink(outFileName);
The sleep is commented or uncommented depending on the test. Without the sleep I get a 404; with the 5-second sleep it works fine.
httpContext.wjLoc just performs the download, as a link to the file.
The writing itself works fine: I can see the finished, readable file in File Explorer, but if I try to open it by URL I get the same 404.
Resources are cached by default. The amount of time in milliseconds between revalidations of cache entries is defined by the cacheTtl attribute of the Resources element, referenced in this documentation. By default its value is 5000 (5 seconds).
If you want to disable the cache, just set cachingAllowed to false.
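A minimal sketch of that configuration in the web app's META-INF/context.xml (the attribute names come from the Tomcat 8.5 Resources documentation; pick either a shorter TTL or disable caching):

<Context>
    <!-- Revalidate cache entries every second instead of every 5 seconds -->
    <Resources cacheTtl="1000" />
    <!-- Alternatively, disable static resource caching altogether:
         <Resources cachingAllowed="false" /> -->
</Context>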

Best way of cache management with Spring for static files

I want to use cache control in Spring MVC for static files.
I have gone through the following scenarios:
Using WebContentHandlerInterceptor.
Using browser cache headers.
Using mvc:resources.
Version number/build number for the JS files.
My problem is that when the user comes for the first time, the latest static files are loaded. But if I update any JS files on the test or production server before the cache expires, the browser keeps taking them from cache until I reload using F5 or Ctrl+F5.
What I need is this: when the user requests a page, all static files should be checked; if they are not modified, the cache should be used, otherwise the latest version should be taken from the server.
Please help me; I am a newbie here on Stack Overflow.
One way to ensure the browser downloads the latest version of your static files is to add a version parameter to the URL.
For example, your request will look like resources/scripts/menu.js?ver=1.0.
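If you are on Spring 4.1 or later, content-based versioning can do this automatically instead of manual ?ver= parameters. A minimal sketch with Java config (the /resources/** mapping and /static/ location are assumptions to adapt):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.web.servlet.resource.VersionResourceResolver;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        // Far-future caching is safe here because the content hash embedded
        // in each URL changes whenever the file content changes, so browsers
        // fetch the updated file immediately after a deployment.
        registry.addResourceHandler("/resources/**")
                .addResourceLocations("/static/")
                .setCachePeriod(365 * 24 * 60 * 60)
                .resourceChain(true)
                .addResolver(new VersionResourceResolver()
                        .addContentVersionStrategy("/**"));
    }
}

Links in your pages then need to be rewritten to the versioned URLs, for example by registering Spring's ResourceUrlEncodingFilter.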

Caching with Play framework and Java

I am running an application with Play and Java, and I need to set expiration dates for various types of assets: images, CSS, JavaScript, etc.
I have the following in the conf/routes file:
GET /assets/*file controllers.Assets.at(path="/public", file)
I was able to set the expiration date for one individual file in application.conf:
"assets.cache./public/js/pages/validation.js"="max-age=7200"
But I am not able to set it for a whole folder. I have tried
"assets.cache./public/js/pages/*.js"="max-age=7200"
"assets.cache./public/js/pages/*"="max-age=7200"
but nothing happens. I was hoping to set the expiration date for everything in the /js/pages folder.
I've also tried
assets.defaultCache="max-age=7200"
per instructions at
http://www.jamesward.com/2014/04/29/optimizing-static-asset-loading-with-play-framework
as well as
http.cacheControl=7200
per the documentation at http://www.playframework.com/documentation/1.2.3/configuration#http
and none of these work. The changes above were done in application.conf.
I know there is a way to do the same by defining controllers that change the response() for the routes that I want to set the expiration date for:
far future Expires header for static contents
But I would like to know how to configure expiration date for assets from the application.conf file.
Our application is running on S3 Linux instances, so configuring the expiration date on the server is not an option.
Thank you!
Play framework does not support "assets.cache./public/js/pages/*.js"="max-age=7200",
but assets.defaultCache="max-age=7200" should work.
In debug/dev mode (starting the app with play run) assets.defaultCache is ignored, so the header is always 'no-cache'. Make sure you are running in prod mode (using play start).
I can't find any reference in the docs, but the same can be checked in https://github.com/playframework/playframework/blob/master/framework/src/play/src/main/scala/play/api/controllers/Assets.scala in the AssetInfo::cacheControl function.
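If per-folder control is still needed, the custom-controller approach the question mentions can look roughly like this; a sketch for the Play 2.x Java API (the route, folder, and max-age value are assumptions):

import java.io.File;
import play.mvc.Controller;
import play.mvc.Result;

public class CachedAssets extends Controller {

    // Serves files from public/js/pages with an explicit Cache-Control header.
    // NOTE: a real implementation should sanitize 'file' against path traversal.
    public static Result at(String file) {
        response().setHeader(CACHE_CONTROL, "max-age=7200");
        return ok(new File("public/js/pages/" + file));
    }
}

with a matching line in conf/routes:

GET /js/pages/*file controllers.CachedAssets.at(file)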

Why am I getting this exception in GAE

I just tested and redeployed my application to a test instance, and it worked fine; then I changed the app id, redeployed to my production instance, and I got an indexing problem. How do I avoid this in the future? I went to the effort of testing it first, and it worked fine!
Uncaught exception from servlet
com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found.. <datastore-index kind="Article" ancestor="false" source="manual">
<property name="tags" direction="asc"/>
<property name="created" direction="asc"/>
</datastore-index>
at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:40)
at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:67)
The admin console says that it is "Building" the index. It has said that for 20 minutes now! How long does it take!?
When you create new queries and use them for the first time on your local machine, they always work the first time. When you run these new queries for the first time on Google App Engine, they return this exception because the App Engine servers take some time to build an "index" that allows your query to work properly.
I would recommend, when you create new queries, giving them a once-off run in the production environment to get the "index" built, so that when your users hit them, they work the first time.
Secondly, manually predefining your queries before you need them and uploading them to the server means that when you really need them, they may already be built on the server.
The way I work around this problem is to maintain a number of versions for my app.
Typically something like this:
Version 1: Current Default
Version 2: Next release
When I have a new release ready for deployment, I upload it to version 2 in this instance. Once the indexes have been built I make version 2 the default. This way customers never experience any downtime or errors.
So in essence you could swap between version 1 and 2 when releasing a new version.
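In the Java runtime, the version an upload goes to is set in WEB-INF/appengine-web.xml; a sketch (the application id is a placeholder):

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>my-app-id</application>
    <!-- Upload as a non-default version, then switch the default in the
         admin console once its indexes have finished building. -->
    <version>2</version>
</appengine-web-app>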
I would also suggest pre-testing within a different testing "Application" prior to uploading to your deployed "Application".
This happens because the App Engine datastore indexes are not yet initialized, i.e. corydoras's answer is correct. I am adding my fix for Java [I presume Python and index.yaml have a similar fix].
You can see which indexes are serving by logging in with your Google account at https://appengine.google.com/: click the app link on the left and, in the left menu, choose Datastore Indexes under Data.
When one makes a new kind of query to the datastore, it can take hours for the indexes to be updated.
First you should know that debugging in the local environment creates a file called datastore-indexes-auto.xml every time a new "Kind" of entity is stored.
In the local environment the index can be used instantly for a query, but there is a delay in updating datastore-indexes-auto.xml.
When you deploy the application to App Engine, the auto-generated datastore-indexes-auto.xml is submitted and the datastore indexes are updated much faster [to see the results, refresh the page].
So:
Make sure none of your entities have illegal signs, e.g. '&'.
Open the Datastore Indexes view on appengine.google.com.
Make sure you haven't deleted datastore-indexes-auto.xml. [I do this routinely]
Store an entity of each "Kind"!
Use all the "Kinds" in queries!
Make sure datastore-indexes-auto.xml is updated. [I sometimes even restart Eclipse]
Deploy to App Engine.
Refresh the Datastore Indexes view in the browser.
Wait until you see the indexes.
Please tell Google to fix this.
Check your index.yaml file and make sure the proper indices are specified there, etc.
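For the Java runtime, the hand-maintained equivalent of index.yaml is WEB-INF/datastore-indexes.xml. A sketch matching the index from the exception above:

<datastore-indexes autoGenerate="true">
    <datastore-index kind="Article" ancestor="false">
        <property name="tags" direction="asc"/>
        <property name="created" direction="asc"/>
    </datastore-index>
</datastore-indexes>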
