I just tested and redeployed my application to a test instance, and it worked fine. Then I changed the app ID and redeployed to my production instance, and I get an indexing problem. How do I avoid this in the future? I went to the effort of testing it first and it worked fine!
Uncaught exception from servlet
com.google.appengine.api.datastore.DatastoreNeedIndexException: no matching index found.. <datastore-index kind="Article" ancestor="false" source="manual">
<property name="tags" direction="asc"/>
<property name="created" direction="asc"/>
</datastore-index>
at com.google.appengine.api.datastore.DatastoreApiHelper.translateError(DatastoreApiHelper.java:40)
at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:67)
The admin console says that it is "Building" the index. It has said that for 20 minutes now! How long does it take!?
When you create new queries and use them for the first time on your local machine, they always work first time. When you run these new queries for the first time on Google App Engine, they will throw this exception because the App Engine servers take some time to generate an "index" to allow your query to work properly.
I would recommend, when you create new queries, giving them a once-off run in the production environment to get the "index" built, so that when your users hit them they work first time.
Secondly, manually pre-defining your indexes before you need them and uploading them to the server means that when you really need them they may already be built on the server.
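In a Java app, that means declaring the index in war/WEB-INF/datastore-indexes.xml before you deploy. A minimal file mirroring the exact index the exception above asks for:
<?xml version="1.0" encoding="utf-8"?>
<datastore-indexes autoGenerate="true">
    <!-- Composite index for queries that filter on tags and order by created -->
    <datastore-index kind="Article" ancestor="false">
        <property name="tags" direction="asc"/>
        <property name="created" direction="asc"/>
    </datastore-index>
</datastore-indexes>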
The way I work around this problem is to maintain a number of versions for my app.
Typically something like this:
Version 1: Current Default
Version 2: Next release
When I have a new release ready for deployment, I upload it to version 2 (in this example). Once the indexes have been built, I make version 2 the default. This way customers never experience any downtime or errors.
So in essence you could swap between version 1 and 2 when releasing a new version.
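The version an upload targets is set in appengine-web.xml, so the swap is a one-line change before deploying (the application ID below is a placeholder):
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>your-app-id</application> <!-- placeholder ID -->
    <version>2</version> <!-- upload as version 2, then make it the default in the admin console -->
</appengine-web-app>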
I would also suggest that you pre-test within a different testing "Application" prior to uploading to your deployed "Application".
This happens because the App Engine datastore indexes are not initialized, i.e. corydoras's answer is correct. I am adding my fix for Java [I presume Python and index.yaml have a similar fix].
You can see which indexes are serving on your app using your Google account at https://appengine.google.com/. Click the app link on the left, and in the left menu choose Datastore Indexes under Data.
When one makes a new query to the datastore, it can take hours for the datastore indexes to be updated.
First you should know that debugging in the local environment creates a file called datastore-indexes-auto.xml every time a new "Kind" of entity is stored.
In the local environment an index can be used instantly for a query, but there is a delay in updating datastore-indexes-auto.xml.
When deploying an application to App Engine, the auto-generated datastore-indexes-auto.xml is submitted and the datastore indexes are updated much faster [to see the results, refresh the page].
So:
Make sure none of your Entities have illegal signs, e.g. '&'.
Open the Datastore Indexes view on appengine.google.com.
Make sure you haven't deleted datastore-indexes-auto.xml. [I do this routinely]
Store an Entity of each "Kind"!
Use all the "Kinds" in Queries!
Make sure datastore-indexes-auto.xml is updated. [I sometimes even restart Eclipse]
Deploy to App Engine.
Refresh the Datastore Indexes view in the browser.
Wait until you see the indexes.
Please tell Google to fix this.
Check your index.yaml file and make sure the proper indices are specified there, etc.
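On the Python runtime, the equivalent index.yaml entry for the index in the exception above would be (direction defaults to ascending):
indexes:
- kind: Article
  properties:
  - name: tags
  - name: created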
I've been working on a project that uses FileNet for multiple parts of the project.
I've been asked to remove a boatload of files. The reason I was asked is that the assumption was made that we could remove the data through a Java application.
And so far so good, honestly. I've been able to use code left by former project members to create a delete application. However, I've run into a single problem: the retention date.
When I delete a file it will display the error message: Content Engine cannot delete or move content because the retention period for the item has not yet expired. Current time: 9/16/21 8:43 AM; Storage period: 20-12-99 1:00.
I've created code that changes this. However, the above error still appears. I've double-checked the retention date property on the specific file, and it has been changed.
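For context, a retention-date update of this kind with the Content Engine Java API is sketched below (the property name "CmRetentionDate", the already-connected ObjectStore "os", and the document id are assumptions for illustration, not the project's actual code):
import java.util.Date;
import com.filenet.api.constants.RefreshMode;
import com.filenet.api.core.Document;
import com.filenet.api.core.Factory;
import com.filenet.api.core.ObjectStore;
import com.filenet.api.util.Id;

// Assumes an already-connected ObjectStore "os" and a known document Id "docId"
Document doc = Factory.Document.fetchInstance(os, docId, null);
// Move the retention date into the past so the Content Engine no longer blocks the delete
doc.getProperties().putValue("CmRetentionDate", new Date(0L));
doc.save(RefreshMode.REFRESH);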
I've looked through all the properties of the file and have not spotted anything that includes this date.
So my question is: what am I missing? Is this something from FileNet?
The retention period is probably additionally set on the storage device, and the storage device is not aligned with FileNet.
What type of storage backend are you using?
Please check this doc https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=objects-retention-limitations-constraints
And this https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=objects-security-permissions-required-setting-modifying-retention
Make sure that you have enough permissions for the principal that you are using for API calls.
And check this page https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=sweeps-deleting-objects-sweep and this https://www.ibm.com/docs/en/filenet-p8-platform/5.2.1?topic=sweeps-updating-object-retention as an alternative way to delete objects without the API and a custom app.
We are running some tests using Keycloak 3.4.3 and exporting a database with 10k users. This is the code we use:
import org.keycloak.exportimport.ExportImportConfig;
import org.keycloak.exportimport.ExportImportManager;

// Configure a directory export, splitting users across files of 500
ExportImportConfig.setAction(ExportImportConfig.ACTION_EXPORT);
ExportImportConfig.setDir(backupFolderPath);
ExportImportConfig.setProvider(ExportImportConfig.PROVIDER_DEFAULT);
ExportImportConfig.setUsersExportStrategy(ExportImportConfig.DEFAULT_USERS_EXPORT_STRATEGY);
ExportImportConfig.setUsersPerFile(500);

// Run the export within the current Keycloak session
ExportImportManager eiManager = new ExportImportManager(keycloakSession);
eiManager.runExport();
The time it takes to do a backup is about 1 minute per generated file, no matter how many users we put in that file (we tried with the default of 50 and went up to 500).
Are we missing some config that could help improve that time?
P.S. We also checked the Keycloak JIRA, but the only issue we found is KEYCLOAK-2413, and the fix for it was using the "dir" export strategy, which we are already using. It is also a very old issue, filed against Keycloak 1.
I have really low knowledge of Java and JasperReports; I've barely used them to play around, nothing too serious. A friend of mine has been trying to get someone to develop an application for him that will generate PDFs with information from an Access database for each of his clients. However, after 6 months and 7 developers who ditched him, he has found no one, so he asked me if I could help him, to which I said I'd give it a try.
What I have been able to do so far:
So far I've managed to do the following successfully (everything has been done separately; I have like 8 projects in total so far):
Use Jaspersoft Studio/iReport to create a single PDF with the required client information on each sheet.
Create a separate JasperReports project with an input field to get a PDF with a single client's information.
Create a Java app with a JFrame to launch the report generation.
Create a Java app to connect to the Access database through UCanAccess and validate the search criteria.
Questions:
Now, after a few days on Google up and down, I haven't managed to achieve everything that I'd like to achieve, and I'd love it if someone could either point me to good noob-proof guides or (if willing) provide a noob-proof answer so I can continue to move on.
Create a Java app where you can choose to generate all clients' reports or a single report for a specified client (I am assuming this isn't too complicated, since it'd just be a matter of embedding both Jasper reports into the Java app). However, I'd need to pass the input value into a JasperReports field to generate a single report (not sure if this one was clear enough), and run the query for the dataset based on that field's value.
Ideally, though not strictly needed, pass yet another variable as a field to set a date range.
Since this is being done on an MS Access database -*.accdb- (don't blame me, I've been telling him to move to MySQL/SQL for quite a while now), I'd love to know if it's possible to make JasperReports run a query based on a UCanAccess JDBC connection (I tried a few options, none worked).
Finally, I need to show a date range in the report (something like: "Between 1/Jan/2014 and 1/Feb/2014").
I feel like I've made a decent amount of progress so far, but since I am no pro at either JasperReports or Java, I am getting stuck at a point where more knowledge is required to create a more decent and practical piece of software, and I'd love it if someone could point me in a better direction (either telling me if something is impossible or just giving a few links to help me get through).
- Remember to add the ucanaccess jar and all its dependency jars to the Driver Classpath while creating the Data Adapter.
- You have to set Showschema=true:
e.g.
jdbc:ucanaccess://c:/db/database.accdb;Showschema=true
In this way Jaspersoft Studio will be able to navigate the metadata of your database, and you'll find your tables under the PUBLIC schema.
Then you'll be able to create your reports as usual.
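From plain Java code, filling the report through a UCanAccess connection and passing the client and date-range values as report parameters could look roughly like this (a sketch: the parameter names CLIENT_ID/DATE_FROM/DATE_TO, the file paths, and the sample values are assumptions, not taken from your project):
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

public class ReportRunner {
    public static void main(String[] args) throws Exception {
        // Open the Access database through UCanAccess
        Connection conn = DriverManager.getConnection(
                "jdbc:ucanaccess://c:/db/database.accdb;Showschema=true");

        // Values the report query can reference as $P{CLIENT_ID}, $P{DATE_FROM}, $P{DATE_TO}
        Map<String, Object> params = new HashMap<>();
        params.put("CLIENT_ID", 42);          // hypothetical client key
        params.put("DATE_FROM", new Date(0)); // hypothetical range start
        params.put("DATE_TO", new Date());    // hypothetical range end

        // Fill the compiled report (.jasper) and export it to a PDF file
        JasperPrint print = JasperFillManager.fillReport("client_report.jasper", params, conn);
        JasperExportManager.exportReportToPdfFile(print, "client_report.pdf");

        conn.close();
    }
}
Inside the report, the dataset query can then use the same parameters (e.g. WHERE client_id = $P{CLIENT_ID} AND invoice_date BETWEEN $P{DATE_FROM} AND $P{DATE_TO}; the column names here are again placeholders), and the printed "Between ... and ..." text can be built from DATE_FROM and DATE_TO in a text field expression.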
I have to create a jar with a Java application that meets the following requirements:
There is XML data packed in the jar which is read the first time the application is started. With every subsequent start of the application, the data is loaded from a dynamically created binary file.
A customer should not be able to reset the application to its primary state (e.g. if the binary file gets deleted for some reason, the application should fail to run again and give an error message).
All this should not depend on the OS it is running on (which means e.g. setting a registry entry in Windows won't do the job).
Summarizing: I want to prevent a once-started application from being reset, in order to limit illegitimate reuse of the application.
Now to my ideas on how to accomplish that:
Delete the XML from the jar at the first run (so far I have come to the understanding that it is not possible to let an application edit its own jar; is that true?).
Set a variable/property/setting/whatever in the jar permanently at the first run (is that possible?).
Any suggestions/ideas on how to accomplish that?
Update:
I did not find a solution for this exact problem, but I found a simple workaround: along with my software I ship a certain file which gets changed after the program is started the first time. Of course, if someone keeps a copy of the original file, he can always replace it and start over.
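That shipped-file check boils down to something like this (a sketch; the file names, the "FRESH"/"USED" contents, and the error handling are all made up for illustration):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FirstRunGuard {
    private static final Path STATE = Path.of("state.txt"); // shipped alongside the jar, containing "FRESH"
    private static final Path DATA  = Path.of("data.bin");  // binary file created on first run

    static void check() throws IOException {
        String state = Files.readString(STATE, StandardCharsets.UTF_8).trim();
        if (state.equals("FRESH")) {
            // First start: build data.bin from the XML inside the jar (omitted),
            // then mark the shipped file as used.
            Files.writeString(STATE, "USED");
        } else if (!Files.exists(DATA)) {
            // Marked as used but the binary file is gone: refuse to start.
            throw new IOException("Application data missing; cannot start.");
        }
    }
}
The check is deliberately simple; as the answers below point out, nothing that lives only on the user's machine can make it tamper-proof.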
Any user able to delete the binary file will, given enough time, also be able to revert any changes made to the jar. When the only existing part of the application is in the hands of the user, you won't be able to prevent changes to it.
The user can easily just store a backup of the original jar, make a copy, use that for one run, delete it, copy the original jar again, and so on. You would need some sort of mechanism outside the user's machine, like an activation server: the user gets one code to activate an account and can't use that code again.
GAE/J, Eclipse, Local Datastore:
I have changed my data model, including the structure of one of my entity classes. In an attempt to start over with my data, I followed the advice of JohnIdol in this SO answer:
How to delete all datastore in Google App Engine?
Somehow, even after deleting local_db.bin and cleaning, the old structure reappears. In this screenshot, you can see one of the properties with its old name, "organizationAliasKeys", which should now be "orgAliasKeys":
http://dl.dropbox.com/u/6919071/Captures/2010-11-28_2035.png
Where is the old ghost of my data coming from? How do I kill it all the way?
From my experience, you need to stop the dev server before deleting local_db.bin (found under war/WEB-INF/appengine-generated/); a running server holds the datastore in memory and writes it back out when it shuts down.