dspace create-administrator fails - java

I am working on installing the DSpace 6.1 source release on SUSE 12.2. I have followed the installation instructions carefully, followed the ideas in this post, and cleaned, reinstalled, and retried the create-administrator script and the user script numerous times yesterday and today. I always get the same error:
dspace@mycomputer:~/dspace_install> bin/dspace create-administrator
Exception: The schema validator returned: Unable to create requested
service [org.hibernate.engine.spi.CacheImplementor]
org.dspace.core.exception.DatabaseSchemaValidationException: The
schema validator returned: Unable to create requested service
[org.hibernate.engine.spi.CacheImplementor]
at org.dspace.core.Context.init(Context.java:170)
at org.dspace.core.Context.<init>(Context.java:126)
at org.dspace.core.Context.<init>(Context.java:126)
at org.dspace.administer.CreateAdministrator.<init>(CreateAdministrator.java:101)
at org.dspace.administer.CreateAdministrator.main(CreateAdministrator.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.dspace.app.launcher.ScriptLauncher.runOneCommand(ScriptLauncher.java:229)
at org.dspace.app.launcher.ScriptLauncher.main(ScriptLauncher.java:81)
I have not found this error anywhere online. The closest is the posting from 2015 mentioned above. My database did not have default entries in the epersongroup table, so I added the entries suggested in that post (the 'Anonymous' and 'Administrator' groups). I am still getting the same error.
I would be grateful for any help!

Since the error reports an issue with the database schema, I recommend the following.
Run this command to see the status of your database schema:
bin/dspace database info
You will see output like the following:
+----------------+-----------------------------------------------------+---------------------+---------+
| Version | Description | Installed on | State |
+----------------+-----------------------------------------------------+---------------------+---------+
| 1 | << Flyway Baseline >> | 2017-07-25 18:26:49 | Baselin |
| 1.1 | Initial DSpace 1.1 database schema | 2017-07-25 18:26:50 | Success |
| 1.2 | Upgrade to DSpace 1.2 schema | 2017-07-25 18:26:50 | Success |
| 1.3 | Upgrade to DSpace 1.3 schema | 2017-07-25 18:26:50 | Success |
| 1.3.9 | Drop constraint for DSpace 1 4 schema | 2017-07-25 18:26:50 | Success |
| 1.4 | Upgrade to DSpace 1.4 schema | 2017-07-25 18:26:50 | Success |
| 1.5 | Upgrade to DSpace 1.5 schema | 2017-07-25 18:26:51 | Success |
| 1.5.9 | Drop constraint for DSpace 1 6 schema | 2017-07-25 18:26:51 | Success |
| 1.6 | Upgrade to DSpace 1.6 schema | 2017-07-25 18:26:51 | Success |
| 1.7 | Upgrade to DSpace 1.7 schema | 2017-07-25 18:26:51 | Success |
| 1.8 | Upgrade to DSpace 1.8 schema | 2017-07-25 18:26:51 | Success |
| 3.0 | Upgrade to DSpace 3.x schema | 2017-07-25 18:26:51 | Success |
| 4.0 | Upgrade to DSpace 4.x schema | 2017-07-25 18:26:51 | Success |
| 4.9.2015.10.26 | DS-2818 registry update | 2017-07-25 18:26:51 | Success |
| 5.0.2014.08.08 | DS-1945 Helpdesk Request a Copy | 2017-07-25 18:26:51 | Success |
| 5.0.2014.09.25 | DS 1582 Metadata For All Objects drop constraint | 2017-07-25 18:26:51 | Success |
| 5.0.2014.09.26 | DS-1582 Metadata For All Objects | 2017-07-25 18:26:51 | Success |
| 5.6.2016.08.23 | DS-3097 | 2017-07-25 18:26:51 | Success |
| 5.7.2017.04.11 | DS-3563 Index metadatavalue resource type id column | 2017-07-25 18:26:51 | Success |
| 5.7.2017.05.05 | DS 3431 Add Policies for BasicWorkflow | 2017-07-25 18:26:51 | Success |
| 6.0.2015.03.06 | DS 2701 Dso Uuid Migration | 2017-07-25 18:26:51 | Success |
| 6.0.2015.03.07 | DS-2701 Hibernate migration | 2017-07-25 18:26:51 | Success |
| 6.0.2015.08.31 | DS 2701 Hibernate Workflow Migration | 2017-07-25 18:26:52 | Success |
| 6.0.2016.01.03 | DS-3024 | 2017-07-25 18:26:52 | Success |
| 6.0.2016.01.26 | DS 2188 Remove DBMS Browse Tables | 2017-07-25 18:26:52 | Success |
| 6.0.2016.02.25 | DS-3004-slow-searching-as-admin | 2017-07-25 18:26:52 | Success |
| 6.0.2016.04.01 | DS-1955 Increase embargo reason | 2017-07-25 18:26:52 | Success |
| 6.0.2016.04.04 | DS-3086-OAI-Performance-fix | 2017-07-25 18:26:52 | Success |
| 6.0.2016.04.14 | DS-3125-fix-bundle-bitstream-delete-rights | 2017-07-25 18:26:52 | Success |
| 6.0.2016.05.10 | DS-3168-fix-requestitem item id column | 2017-07-25 18:26:52 | Success |
| 6.0.2016.07.21 | DS-2775 | 2017-07-25 18:26:52 | Success |
| 6.0.2016.07.26 | DS-3277 fix handle assignment | 2017-07-25 18:26:52 | Success |
| 6.0.2016.08.23 | DS-3097 | 2017-07-25 18:26:52 | Success |
| 6.1.2017.01.03 | DS 3431 Add Policies for BasicWorkflow | 2017-07-25 18:26:52 | Success |
+----------------+-----------------------------------------------------+---------------------+---------+
If you see any items whose state is not "Success", run the following:
bin/dspace database repair
If the entries do not include release 6.1, run the following:
bin/dspace database migrate
Hopefully this resolves the issue.

I just spoke with my colleague, who found the real problem. We installed two instances of DSpace on the same server, and he found the setting for the Hibernate ehcache (hibernate-ehcache-config.xml) in the [dspace-src]/dspace/config directory. This is the cache referenced in the exception message I posted in my question above. There is a warning in hibernate-ehcache-config.xml that when you have more than one instance of DSpace on the same server, you need separate diskStore folders for the Hibernate ehcache. I created a separate one for my instance, then fixed my setting for dspace.hostname in local.cfg so it shows the server domain name, and now everything works: both the server and create-administrator on the command line.
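For reference, the relevant setting looks roughly like this; a minimal sketch assuming the standard ehcache 2.x configuration format, with an illustrative path rather than the DSpace default:
<!-- hibernate-ehcache-config.xml (excerpt; path is illustrative) -->
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd">
    <!-- Give each DSpace instance on this server its own diskStore
         directory, otherwise the instances collide on the same cache files. -->
    <diskStore path="/var/cache/ehcache/dspace-instance-1"/>
</ehcache>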


Using a previous JDK version that is not available via SDKMAN but is available from AdoptOpenJDK

I have the following situation:
$ sdk list java
AdoptOpenJDK | | 15.0.1.j9 | adpt | | 15.0.1.j9-adpt
| | 15.0.1.hs | adpt | installed | 15.0.1.hs-adpt
| | 14.0.2.j9 | adpt | | 14.0.2.j9-adpt
| | 14.0.2.hs | adpt | | 14.0.2.hs-adpt
| | 13.0.2.j9 | adpt | | 13.0.2.j9-adpt
| | 13.0.2.hs | adpt | | 13.0.2.hs-adpt
| | 12.0.2.j9 | adpt | | 12.0.2.j9-adpt
| | 12.0.2.hs | adpt | | 12.0.2.hs-adpt
| | 11.0.9.j9 | adpt | | 11.0.9.j9-adpt
| >>> | 11.0.9.hs | adpt | installed | 11.0.9.hs-adpt
...
However, on AdoptOpenJDK the version https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/tag/jdk-11.0.8%2B10 is available and can be downloaded.
But when I try to install it via SDKMAN:
$ sdk install java 11.0.8.hs-adpt
Stop! java 11.0.8.hs-adpt is not available. Possible causes:
* 11.0.8.hs-adpt is an invalid version
* java binaries are incompatible with Darwin
* java has not been released yet
Is there a way to handle that via sdkman or not?
SDKMAN doesn't recognize every existing JDK version as a candidate.
However, it lets you add a local version to compensate.
So you just have to download the specific version you need, store it in any path you want, and then use a command to register the local installation with SDKMAN:
https://sdkman.io/usage#localversion
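Concretely, registering a downloaded JDK as a local version looks something like this; the local name and the extracted path are placeholders:
$ sdk install java 11.0.8-local /path/to/jdk-11.0.8+10
$ sdk use java 11.0.8-local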

Can separate OSGi Deployment Packages use the same types of Resources?

The Deployment Admin Specification defines a way of creating Deployment Packages, which can contain Bundles and Resources.
I wanted to use the Deployment Admin to install two types of Deployment Packages. The first Deployment Package would contain the "framework" of the program; including all infrastructure level services and bundles, like the Configuration Admin, Declarative Services, etc. The second Deployment Package would contain the "data" of the program; almost exclusively Resources to be processed by the Resource Processors in the framework package.
In this situation, the Resource Processors would be in Customizer bundles in the framework package.
+--------------------------------------------------+
| |
| Framework Deployment Package |
| |
+--------------------------------------------------+
| |
| Declarative Services : Bundle |
| |
+--------------------------------------------------+
| |
| Configuration Admin : Bundle |
| |
+--------------------------------------------------+
| |
| Species Resource Processor : Bundle (Customizer) |
| |
+--------------------------------------------------+
+-------------------------+
| |
| Data Deployment Package |
| |
+-------------------------+
| |
| Configuration Resources |
| |
+-------------------------+
| |
| Species Resources |
| |
+-------------------------+
When I try to do it this way, though, the second package is recognized as a foreign deployment package and therefore can't be installed. The specification (114.5, Customizer) says:
As a Customizer bundle is started, it should register one or more Resource Processor services. These
Resource Processor services must only be used by resources originating from the same Deployment
Package. Customizer bundles must never process a resource from another Deployment Package,
which must be ensured by the Deployment Admin service.
To check whether this was true, I looked at the Auto Configuration Specification (115), which defines an Autoconf Resource Processor as an extension of the Deployment Package Specification: a Resource Processor implementation that processes Resources from its own Deployment Package.
Based on this, if I wanted to use the Auto Configuration Specification, I would seemingly need to include the Auto Configuration Bundle and the Autoconf Resources within the same Deployment Package. This leads to another problem, though. The Auto Configuration Bundle couldn't be used by another package, since a Resource Processor can only be used by Resources in the same package; additionally, that Bundle couldn't be shipped by a separate, unrelated deployment package, because the Autoconf Bundle is already registered by the first package.
+---------------------------------+ +---------------------------------+
| | | |
| First Deployment Package | | Second Deployment Package |
| | | |
+---------------------------------+ +---------------------------------+
| | | |
| +-----------------------------+ | | +-----------------------------+ |
| | | | | | | |
| | Autoconf Bundle | | | | Autoconf Bundle | |
| | | | | | | |
| | SymbolicName: X | | | | SymbolicName: X | |
| | Version 1.0 | | | | Version 1.0 | |
| | | | | | | |
| +-----------------------------+ | | | Cannot be installed | |
| | | | | | because it's already | |
| | Autoconf Resource Processor | | | | installed by first | |
| | | | | | | |
| | Can only be used by | | | +-----------------------------+ |
| | this Deployment Package | | | | | |
| | | | | | Autoconf Resource Processor | |
| +-----------------------------+ | | | | |
| | | | Can only be used by | |
| Autoconf Resource | | | this Deployment Package | |
| | | | | |
+---------------------------------+ | +-----------------------------+ |
| |
| Autoconf Resource |
| |
| Cannot be used because Bundle |
| containing Resource Processor |
| isn't installed, and therefore |
| can't be used by only this |
| Deployment Package |
| |
+---------------------------------+
It seems that if two Deployment Packages wanted to use Autoconf Resources, we would be blocked: the Resource Processor is either installed as a plain Bundle, and therefore unusable by the Deployment Admin because it is not in a Deployment Package, or installed as part of a single Deployment Package, in which case no other Deployment Package can ever use that version of that Bundle.
Is my understanding correct, then, that two Deployment Packages not only have to be separate, but also completely unrelated, to the point of not reusing even a single Bundle? If so, does that mean it is impossible to have two Deployment Packages with Autoconf Resources, or at least two packages with the same type of Resource?
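For concreteness, the packaging I'm describing corresponds to a deployment package manifest roughly like the following sketch; the symbolic name, version, and resource name are illustrative, and the Resource-Processor PID is the one I believe spec 115 defines for Autoconf:
Manifest-Version: 1.0
DeploymentPackage-SymbolicName: com.example.data
DeploymentPackage-Version: 1.0.0

Name: config/species-defaults.xml
Resource-Processor: org.osgi.deployment.rp.autoconf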

How to connect JDBC remotely

I'm a Mac user. I have an Oracle database in a virtual machine, and I've been working with it without problems. I've also installed Eclipse on my Mac; my problem begins now that I need to connect the two.
Is there any way I can connect to my Oracle database remotely using the virtual machine's IP address?
In other words, how do I connect Eclipse installed on my Mac to the Oracle database installed on my Windows virtual machine?
Yes, this can be done and is a common scenario for development. Depending on the flavour of virtual machine software you are using (VMware, VirtualBox, etc.) the details will differ, but you will either need to set up port forwarding of the relevant Oracle ports onto your host machine (the Mac), or configure bridged networking so the virtual machine running Oracle has its own IP address on the network.
I tend to just use NAT with port forwarding, which means I can connect to the database using the normal JDBC connection string, with the hostname section set to localhost. The schematic diagram below should give you a rough idea of what's going on.
+--------------------------------------+
| |
| |
| v
| +-------+-----------+
| | |
| | 1512 |
+--------------------------------------------+------+-----+------++
| | Host Machine (Mac) | NAT + PortForwarding
| | | |
| | | |
| | | |
| | | |
| | v |
| | +------+-----------+ |
| | | 1512 | |
| | +-------------+------------------+ |
| | | | |
| +---------+-------------+ | | |
| | java application | | | |
| | | | | |
| | | | | |
| |db-url=localhost:1512/ | | ORACLE VM | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +-----------------------+ | | |
| +--------------------------------+ |
+-----------------------------------------------------------------+
For details on how to set up NAT/port forwarding on VirtualBox, see the relevant section of this guide: https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
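With the forwarding in place, the Java side is just a standard thin-driver connection to localhost. Here is a minimal sketch, using the forwarded port 1512 from the diagram (note that Oracle's default listener port is 1521) with a placeholder service name and credentials; it assumes the Oracle JDBC driver (the ojdbc jar) is on the classpath:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleConnectTest {
    public static void main(String[] args) throws Exception {
        // localhost:1512 is the forwarded host port from the diagram above;
        // "ORCL", "scott" and "tiger" are placeholders for your own setup.
        String url = "jdbc:oracle:thin:@//localhost:1512/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT banner FROM v$version")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // prints the Oracle version banner
            }
        }
    }
}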

HA Karaf Cellar cluster

I have three Karaf nodes in one Cellar group on my machine. The first node (lb_node) is used as a load balancer and the other two nodes (1_node and 2_node) are used as service nodes (with deployed features). Both service nodes have the /services address available. I've installed the cellar-http-balancer feature on the cluster. I have also installed a sample feature locally on both 1_node and 2_node.
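For reference, I installed the balancer feature cluster-wide with something like the following (the group name default is an assumption):
karaf@root()> cluster:feature-install default cellar-http-balancer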
The problem is that when I start 1_node and 2_node, their services are not properly registered on lb_node. Here is the http-list output from lb_node:
ID | Servlet | Servlet-Name | State | Alias | Url
103 | CellarBalancerProxyServlet | ServletModel-10 | Failed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-11 | Failed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-12 | Failed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-9 | Failed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-13 | Failed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-8 | Deployed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-14 | Failed | /system/console/res | [/system/console/res/*]
103 | CellarBalancerProxyServlet | ServletModel-15 | Failed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-3 | Deployed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-2 | Deployed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-7 | Deployed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-6 | Deployed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-5 | Deployed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-4 | Deployed | /system/console/res | [/system/console/res/*]
As you can see, only one node's addresses were registered. When I open the lb_node URL in a browser to check whether the feature from the other nodes works, it does. But when I shut down the registered node, lb_node no longer acts as a proxy; it throws java.net.ConnectException: Connection refused.
The tutorial at https://karaf.apache.org/manual/cellar/latest-4/#_http_balancer covers only the case of one node with a service and one balancer, which is not my case.
Is there any way to achieve an active/active cluster with HTTP load balancing using Karaf and Cellar?
I'm posting this as an answer because I can't comment (no reputation). It doesn't answer your question directly, but it may help you.
I found a GitHub project that does what I think you are trying to achieve; I haven't tested it yet.

Apache Tomcat using old resources - changes in Eclipse project not reflected in Web App

I am using the Eclipse Java EE IDE with Apache Tomcat version 6 to develop a web application.
The structure of the application is very simple: there are just two classes, one the servlet class and the other an object that is built by the servlet and does most of the work.
The problem I am having is this: since this morning, changes I have made to both class files have not appeared in the behaviour of the web application. The application essentially behaves as if it were running my code from yesterday. To make sure this is the case, I temporarily altered the behaviour of the program in radical ways and lo, these changes still did not affect the web application.
Some relevant information: I am running Ubuntu 12, my Eclipse project is set to build automatically, and the Tomcat server is configured to auto-load modules by default and to automatically publish whenever a resource changes. I have also cleaned out the server's working directory.
How can I overcome this issue? I need my web application to reflect the changes I have made to the source code of my servlet and the class the servlet uses.
If you add the Tomcat server in the standard way via the Servers tab, the deploy path lives in the .metadata directory of your workspace.
The following directory tree is from my workspace, where I added Tomcat 7.
MY_WORKSPACE
+---.metadata
| \---.plugins
| +---org.eclipse.wst.server.core
| | | monitors.xml
| | | publish.xml
| | | servers.xml
| | | tmp-data.xml
| | |
| | +---publish
| | | publish0.dat
| | | publish1.dat
| | |
| | \---tmp0
| | +---conf
| | | | catalina.policy
| | | | catalina.properties
| | | | context.xml
| | | | server.xml
| | | | tomcat-users.xml
| | | | web.xml
| | | |
| | | \---Catalina
| | | \---localhost
| | +---logs
| | | catalina.2013-07-06.log
| | | catalina.2013-07-11.log
| | | host-manager.2013-07-06.log
| | | host-manager.2013-07-11.log
| | | localhost.2013-07-06.log
| | | localhost.2013-07-11.log
| | | localhost_access_log.2013-07-06.txt
| | | localhost_access_log.2013-07-11.txt
| | | manager.2013-07-06.log
| | | manager.2013-07-11.log
| | |
| | +---temp
| | +---webapps
| | +---work
| | | \---Catalina
| | | \---localhost
| | \---wtpwebapps
| | +---ROOT
| | | \---WEB-INF
| | | web.xml
| | |
| | \---MyWebProject
| | | index.html
| | |
| | \---WEB-INF
| | +---classes
| | | |
| | | \---several-packages-and-clases
| | \---lib
| | log4j-1.2.17.jar
| | slf4j-api-1.7.5.jar
| | slf4j-log4j12-1.7.5.jar
| |
| \---org.other.plugins
|
\---Servers
| .project
|
+---.settings
| org.eclipse.wst.server.core.prefs
|
\---Tomcat v7.0 Server at localhost-config
catalina.policy
catalina.properties
context.xml
server.xml
tomcat-users.xml
web.xml
The real path for my web application MyWebProject is inside the wtpwebapps directory. You can delete MyWebProject there and try again.
If you have several Tomcats in your workspace, you will see directories like tmp0, tmp1, tmp2, and so on.
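For example, forcing a clean redeploy from the command line would look roughly like this; the workspace path and tmp0 are illustrative and should be matched against your own tree:
# Stop the server in Eclipse first, then remove the stale deployment:
rm -rf ~/MY_WORKSPACE/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps/MyWebProject
# Restart the server in Eclipse so it republishes the project from scratch.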
I have faced a similar issue. In my case, even though the tmp0 folder was correctly updated with the new class and resource files, I was still getting the same stale behaviour.
I resolved it by disabling the JRebel plugin for Eclipse. I had JRebel installed in Eclipse, and it was still monitoring and deploying the project from another workspace (not even this workspace); I am going to raise a bug with the JRebel team as well.
Simply disable JRebel for that project and it goes back to working normally.
