JBoss configuration files - java

I've inherited a project that is running a single instance of a JBoss 7.x server, a Java back-end, etc. I'm completely new to JBoss and I'm curious about the configuration file structure: what I need to have and where. The documentation shows a different structure from what I've been handed, and I'm not sure how one would, say, set up a completely new web application server project (i.e. start a project from scratch).
Why, for instance, do I have multiple standalone.xml files? Namely,
standalone.xml
standalone-ha.xml
standalone-full-ha.xml
standalone-full.xml
Basically, I'm looking for a 'You NEED these to get your app running' sort of guide.
My JBoss folder has this structure
|--appclient
| |--configuration
| | `--appclient.xml
| | `-- logging.properties
|
|--bin
| |--client
| | `-- jboss-client.jar
| |--init.d
| | `-- jboss-as.conf
| | `-- jboss-as-standalone.sh
| `--(a lot of .bat and .conf files)
|
|--bundles
| |--javax
| | |--servlet
| | |--api
| | |--v25
| | `--jbosgi-http-api-1.0.5.jar
| |--org
| |--apache
| |--jboss
| |--osgi
| |--projectodd
|
|--docs
| |--examples
| |--schema
|
|--domain
| |--configuration
| | `--domain.xml
| | `--host.xml
| | `--host-master.xml
| | `--host-slave.xml
| |--data
| | |--content
| | `--(empty)
| |--tmp
| | |--auth
| | `--(empty)
|
|--modules
| |--asm
| | |--main
| | |--asm
| | `--asm-3.3.1.jar
| | `--module.xml
| |--ch
| |--com
| |--gnu
| |--javaee
| |--javax
| |--jline
| |--net
| |--nu
| |--org
| |--sun
|
|--standalone
| |--configuration
| | `--(I know the standalone.xml files go here)
| |--data
| |--deployments
| | `-- (I know the .war files go here)
| |--lib
| |--log
| | `-- (whatever could this be?? *sarcasm)
| |--tmp
|
|--welcome-content
*clearly I got tired and didn't label everything in every folder

The link you provided is for an older version of JBoss (4, 5); that's why it all looks different, hehe.
Each standalone*.xml file specifies the services JBoss provides. You can choose the configuration with only the services you need, so you don't waste memory on services you are not going to use.
For example, standalone-full-ha.xml provides all services and also starts JBoss in cluster mode; standalone-ha.xml is the default set of services plus clustering; standalone-full.xml has all the services but without cluster mode; and standalone.xml is the default one, with all the basic services you will need to deploy an app (note that it does not include JMS support).
In the <extensions> section of your standalone*.xml you can see which services are provided.
When you start JBoss, if you don't pass the -c parameter, it will use standalone.xml. If you want to use standalone-full.xml (or any other config, including a custom one), you would run standalone.bat -c standalone-full.xml (standalone.sh on Linux).
As you said, standalone/deployments is where you deploy your apps. Remember to place a .dodeploy marker file to tell JBoss to deploy your app: for example, myExample.war should have a matching myExample.war.dodeploy (if you forget this, the log will tell you there is an app to deploy and that it is waiting for the .dodeploy file).
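If you script your deployments, the marker is just an empty file next to the WAR (on a shell, touch myExample.war.dodeploy does the same thing). A minimal sketch in Java; the JBoss home path is hypothetical, adjust it to your installation:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DeployWithMarker {
    public static void main(String[] args) throws IOException {
        // Hypothetical path: adjust JBOSS_HOME to your setup.
        Path deployments = Paths.get("/opt/jboss-as-7/standalone/deployments");
        Path war = Paths.get("myExample.war");

        // Copy the WAR into the deployment scanner's directory.
        Files.copy(war, deployments.resolve(war.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);

        // The empty marker file tells the scanner to (re)deploy the WAR.
        Files.deleteIfExists(deployments.resolve("myExample.war.dodeploy"));
        Files.createFile(deployments.resolve("myExample.war.dodeploy"));
    }
}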
Hope it helps!

Related

Can separate OSGi Deployment Packages use the same types of Resources?

The Deployment Admin Specification defines a way of creating Deployment Packages, which can contain Bundles and Resources.
I wanted to use the Deployment Admin to install two types of Deployment Packages. The first Deployment Package would contain the "framework" of the program; including all infrastructure level services and bundles, like the Configuration Admin, Declarative Services, etc. The second Deployment Package would contain the "data" of the program; almost exclusively Resources to be processed by the Resource Processors in the framework package.
In this situation, the Resource Processors would be in Customizer bundles in the framework package.
+--------------------------------------------------+
| |
| Framework Deployment Package |
| |
+--------------------------------------------------+
| |
| Declarative Services : Bundle |
| |
+--------------------------------------------------+
| |
| Configuration Admin : Bundle |
| |
+--------------------------------------------------+
| |
| Species Resource Processor : Bundle (Customizer) |
| |
+--------------------------------------------------+
+-------------------------+
| |
| Data Deployment Package |
| |
+-------------------------+
| |
| Configuration Resources |
| |
+-------------------------+
| |
| Species Resources |
| |
+-------------------------+
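For concreteness, a Customizer bundle makes its processor available by registering a ResourceProcessor service under a PID, which the resources in the same package then reference via their Resource-Processor manifest header. A minimal sketch of the registration, with hypothetical PID and class names:

import java.io.InputStream;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.service.deploymentadmin.spi.DeploymentSession;
import org.osgi.service.deploymentadmin.spi.ResourceProcessor;
import org.osgi.service.deploymentadmin.spi.ResourceProcessorException;

public class SpeciesCustomizerActivator implements BundleActivator {

    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<String, Object>();
        // Hypothetical PID; resources reference it in their manifest.
        props.put(Constants.SERVICE_PID, "com.example.species.processor");
        context.registerService(ResourceProcessor.class.getName(),
                new SpeciesResourceProcessor(), props);
    }

    public void stop(BundleContext context) {
    }

    // Skeleton processor; the Deployment Admin drives this lifecycle and,
    // per 114.5, must only hand it resources from the same Deployment Package.
    static class SpeciesResourceProcessor implements ResourceProcessor {
        public void begin(DeploymentSession session) { }
        public void process(String name, InputStream stream)
                throws ResourceProcessorException {
            // Parse and install a species resource here.
        }
        public void dropped(String resource) throws ResourceProcessorException { }
        public void dropAllResources() throws ResourceProcessorException { }
        public void prepare() throws ResourceProcessorException { }
        public void commit() { }
        public void rollback() { }
        public void cancel() { }
    }
}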
When trying to do it this way, though, the second package is recognized as a foreign deployment package and therefore can't be installed. The specification (114.5, Customizer) says:
As a Customizer bundle is started, it should register one or more Resource Processor services. These
Resource Processor services must only be used by resources originating from the same Deployment
Package. Customizer bundles must never process a resource from another Deployment Package,
which must be ensured by the Deployment Admin service.
To check whether this was true, I looked at the Auto Configuration Specification (115), which defines an Autoconf Resource Processor as an extension of the Deployment Package Specification: a Resource Processor implementation that processes Resources from its own Deployment Package.
Based on this, if I wanted to use the Auto Configuration Specification, I would seemingly need to include the Auto Configuration Bundle and the Autoconf Resources within the same Deployment Package. This seems to lead to another problem, though. The Auto Configuration Bundle could not be used by another package, since a Resource Processor can only be used by Resources in the same package; additionally, that Bundle could not be installed by a separate, unrelated Deployment Package, because the Autoconf Bundle is already registered by the first package.
+---------------------------------+ +---------------------------------+
| | | |
| First Deployment Package | | Second Deployment Package |
| | | |
+---------------------------------+ +---------------------------------+
| | | |
| +-----------------------------+ | | +-----------------------------+ |
| | | | | | | |
| | Autoconf Bundle | | | | Autoconf Bundle | |
| | | | | | | |
| | SymbolicName: X | | | | SymbolicName: X | |
| | Version 1.0 | | | | Version 1.0 | |
| | | | | | | |
| +-----------------------------+ | | | Cannot be installed | |
| | | | | | because it's already | |
| | Autoconf Resource Processor | | | | installed by first | |
| | | | | | | |
| | Can only be used by | | | +-----------------------------+ |
| | this Deployment Package | | | | | |
| | | | | | Autoconf Resource Processor | |
| +-----------------------------+ | | | | |
| | | | Can only be used by | |
| Autoconf Resource | | | this Deployment Package | |
| | | | | |
+---------------------------------+ | +-----------------------------+ |
| |
| Autoconf Resource |
| |
| Cannot be used because Bundle |
| containing Resource Processor |
| isn't installed, and therefore |
| can't be used by only this |
| Deployment Package |
| |
+---------------------------------+
It seems that if two Deployment Packages wanted to use Autoconf Resources, we would be blocked: the Resource Processor is either installed as a plain Bundle, and therefore unusable by the Deployment Admin because it is not in a Deployment Package, or as part of a single Deployment Package, in which case no other Deployment Package can ever use that version of that Bundle.
Is my understanding correct, then, that two Deployment Packages not only have to be separate, but also completely unrelated, to the point of not reusing even a single Bundle? If that is the case, does it mean it is impossible to have two Deployment Packages with Autoconf Resources, or at least two packages with the same type of Resource?

How to connect JDBC remotely

I'm a Mac user. I have an Oracle database in a virtual machine, and I've been working with it pretty well. I've also installed Eclipse on my Mac; my problem begins now that I need to connect them.
Is there any way I can connect to my Oracle database remotely using the IP address of the virtual machine?
In other words, connect the Eclipse installed on my Mac to the Oracle database installed on my Windows virtual machine.
Yes, this can be done and is a common scenario for development. Depending on the flavour of virtual machine software you are using (VMware, VirtualBox, etc.) the details will differ, but you will either need to set up port forwarding of the relevant Oracle ports onto your host machine (the Mac), or configure bridged networking so the virtual machine with Oracle has its own IP address on the network.
I tend to just use NAT with port forwarding, which means I can connect to the database using the normal JDBC connection string, with the hostname section set to localhost. The schematic diagram below should give you a rough idea of what's going on.
+--------------------------------------+
| |
| |
| v
| +-------+-----------+
| | |
| | 1512 |
+--------------------------------------------+------+-----+------++
| | Host Machine (Mac) | NAT + PortForwarding
| | | |
| | | |
| | | |
| | | |
| | v |
| | +------+-----------+ |
| | | 1512 | |
| | +-------------+------------------+ |
| | | | |
| +---------+-------------+ | | |
| | java application | | | |
| | | | | |
| | | | | |
| |db-url=localhost:1512/ | | ORACLE VM | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +-----------------------+ | | |
| +--------------------------------+ |
+-----------------------------------------------------------------+
For details on how to set up NAT/port forwarding on VirtualBox, see the relevant section of this guide: https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
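To sanity-check the forwarded port from the Mac side, a plain JDBC test is enough. A minimal sketch, assuming the Oracle JDBC driver is on the classpath; the port matches the diagram above, and the service name, user and password are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleConnectionTest {
    public static void main(String[] args) throws Exception {
        // localhost works because NAT forwards the port to the Oracle VM.
        // 1512 matches the diagram; XE, user and password are placeholders.
        String url = "jdbc:oracle:thin:@//localhost:1512/XE";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            if (rs.next()) {
                System.out.println("Connected, SELECT returned " + rs.getInt(1));
            }
        }
    }
}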

HA karaf cellar cluster

I have three Karaf nodes in one Cellar group on my machine. The first node (lb_node) is used as a load balancer and the other two nodes (1_node and 2_node) are used as service nodes (with deployed features). Both service nodes have the /services address available. I've installed the cellar-http-balancer feature on the cluster, and I've also installed a sample feature locally on both 1_node and 2_node.
The problem is that when I start 1_node and 2_node, their services are not properly registered on lb_node. The http-list output from lb_node:
ID | Servlet | Servlet-Name | State | Alias | Url
103 | CellarBalancerProxyServlet | ServletModel-10 | Failed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-11 | Failed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-12 | Failed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-9 | Failed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-13 | Failed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-8 | Deployed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-14 | Failed | /system/console/res | [/system/console/res/*]
103 | CellarBalancerProxyServlet | ServletModel-15 | Failed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-3 | Deployed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-2 | Deployed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-7 | Deployed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-6 | Deployed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-5 | Deployed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-4 | Deployed | /system/console/res | [/system/console/res/*]
As you can see, only one node's addresses were registered. When I enter the lb_node URL in a browser to check whether a feature from the other nodes works, it does. But when I shut down the registered node, lb_node no longer acts as a proxy; it throws java.net.ConnectException: Connection refused.
The tutorial at https://karaf.apache.org/manual/cellar/latest-4/#_http_balancer covers only the case of one service node and one balancer, which is not my case.
Is there any way to achieve an active/active cluster with HTTP load balancing using Karaf and Cellar?
I'm posting this as an answer because I can't comment (no reputation). It doesn't directly answer your question, but it may help you.
I found a GitHub project that does what I think you are trying to achieve, though I haven't tested it yet.

Web app with JPA doesn't work on external Tomcat server

I have some problems publishing my dynamic web application to Tomcat on my VPS.
I developed an application that contains a servlet (which creates an EntityManager and performs operations on the database) and JAR files; the entity components are also packed in a JAR file.
The application uses EclipseLink and PostgreSQL.
On the Tomcat 7 server installed with Eclipse everything works fine, but when I try to deploy it to the Tomcat 7 server on my VPS I get an exception:
javax.servlet.ServletException: Error instantiating servlet class pl.marekbury.controller.StoreServer
and the root cause
javax.naming.NameNotFoundException: Name [pl.marekbury.controller.StoreServer/PERSISTENCE_UNIT_NAME] is not bound in this Context. Unable to find [pl.marekbury.controller.StoreServer].
I also had the same error on my localhost Eclipse-integrated server, but I found a solution (somewhere here on Stack Overflow): changing the EclipseLink version. After I did that, the error was gone.
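For reference, this NameNotFoundException typically means the servlet relies on container JNDI injection (e.g. @PersistenceUnit), which plain Tomcat does not provide. A common workaround is to bootstrap JPA yourself; a minimal sketch, where the persistence unit name is hypothetical and must match persistence.xml:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JpaBootstrap {
    // "storePU" is a hypothetical unit name from persistence.xml.
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("storePU");

    public static EntityManager newEntityManager() {
        return EMF.createEntityManager();
    }
}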
I'm deploying the app this way:
- export the WAR from Eclipse
- deploy it through the Tomcat web manager
I tried:
- changing the server to TomEE
- placing all JAR libs in WEB-INF/lib
Structure of the application folder after deploying to Tomcat:
----krzyzyk
| |---index.jsp
|---META-INF
| |---MANIFEST.MF
|
|---WEB-INF
| |---web.xml
| |---lib
| | |---all jar files(entities,servlet-api itd..)
| |
| |---classes
| | |---META-INF (left over from building the entities JAR)
| | |---pl
| | | |---marekbury
| | | | |---controller
| | | | | |---StoreServer.class
| | | | |---model
| | | | | |---entities
| | | | | | |---User.class (left over from building the entities JAR)
index.jsp makes an HTTP request to the StoreServer servlet.
Any ideas how to make it run properly?
I have found the solution. The error was caused by different Java versions on the VPS (Oracle 1.7) and on my computer (OpenJDK 1.7). I built the WAR on the VPS and everything works fine.

Apache Tomcat using old resources - changes in Eclipse project not reflected in Web App

I am using the Eclipse Java EE IDE with Apache Tomcat version 6 to develop a web application.
The structure of the application is very simple: there are just two classes, one is the servlet class and the other is an object which is built by the servlet and does most of the work.
The problem that I am having is this: since this morning, changes I have made to both class files have not appeared in the behaviour of the web application. The application essentially behaves as if it is running my code from yesterday. To make sure this was the case, I temporarily altered the behaviour of the program in radical ways, and lo, these changes still did not affect the web application.
Some relevant information: I am running Ubuntu 12, my Eclipse project is set to build automatically, and the Tomcat server is configured to auto-load modules by default and to automatically publish whenever a resource is changed. I have also cleaned out the server's working directory.
How can I overcome this issue? I need my web application to reflect the changes I have made to the source code of my servlet and the class the servlet uses.
If you add the Tomcat server in the standard way in the Servers tab, the deploy path lives in the .metadata directory of your workspace.
The following directory tree is from my workspace, where I added Tomcat 7.
MY_WORKSPACE
+---.metadata
| \---.plugins
| +---org.eclipse.wst.server.core
| | | monitors.xml
| | | publish.xml
| | | servers.xml
| | | tmp-data.xml
| | |
| | +---publish
| | | publish0.dat
| | | publish1.dat
| | |
| | \---tmp0
| | +---conf
| | | | catalina.policy
| | | | catalina.properties
| | | | context.xml
| | | | server.xml
| | | | tomcat-users.xml
| | | | web.xml
| | | |
| | | \---Catalina
| | | \---localhost
| | +---logs
| | | catalina.2013-07-06.log
| | | catalina.2013-07-11.log
| | | host-manager.2013-07-06.log
| | | host-manager.2013-07-11.log
| | | localhost.2013-07-06.log
| | | localhost.2013-07-11.log
| | | localhost_access_log.2013-07-06.txt
| | | localhost_access_log.2013-07-11.txt
| | | manager.2013-07-06.log
| | | manager.2013-07-11.log
| | |
| | +---temp
| | +---webapps
| | +---work
| | | \---Catalina
| | | \---localhost
| | \---wtpwebapps
| | +---ROOT
| | | \---WEB-INF
| | | web.xml
| | |
| | \---MyWebProject
| | | index.html
| | |
| | \---WEB-INF
| | +---classes
| | | |
| | | \---several-packages-and-classes
| | \---lib
| | log4j-1.2.17.jar
| | slf4j-api-1.7.5.jar
| | slf4j-log4j12-1.7.5.jar
| |
| \---org.other.plugins
|
\---Servers
| .project
|
+---.settings
| org.eclipse.wst.server.core.prefs
|
\---Tomcat v7.0 Server at localhost-config
catalina.policy
catalina.properties
context.xml
server.xml
tomcat-users.xml
web.xml
The real path for my web application MyWebProject is in the wtpwebapps directory. You can delete MyWebProject there and try again.
If you have several Tomcats in your workspace, you will see directories like tmp0, tmp1, tmp2, ...
I have faced a similar issue. In my case, even though the tmp0 folder was correctly updated with new class files and resource files, I was still getting the same problem.
I resolved it by disabling the JRebel plugin for Eclipse. I had JRebel installed in Eclipse, and it was still monitoring and deploying the project from another workspace (not even this workspace). I am going to raise a bug with the JRebel team as well.
Simply disable JRebel for that project to get things working back to normal.
