I'm a Mac user. I have an Oracle database running in a Windows virtual machine, and I've been working with it without problems. I've also installed Eclipse on my Mac; my problem begins now that I need to connect the two.
Is there any way I can connect to my Oracle database remotely, using the virtual machine's IP address?
In other words, how do I connect the Eclipse installed on my Mac to the Oracle database installed on my Windows virtual machine?
Yes, this can be done and is a common development scenario. Depending on the flavour of virtual machine you are using (VMware, VirtualBox, etc.) the details will differ, but you will either need to set up port forwarding of the relevant Oracle ports onto your host machine (the Mac), or configure bridged networking so the virtual machine with Oracle has its own IP address on the network.
I tend to just use NAT with port forwarding, which means I can connect to the database using the normal JDBC connection string, with the hostname section set to localhost. The schematic diagram below should give you a rough idea of what's going on.
+--------------------------------------+
| |
| |
| v
| +-------+-----------+
| | |
| | 1521 |
+--------------------------------------------+------+-----+------++
| | Host Machine (Mac) | NAT + PortForwarding
| | | |
| | | |
| | | |
| | | |
| | v |
| | +------+-----------+ |
| | | 1521 | |
| | +-------------+------------------+ |
| | | | |
| +---------+-------------+ | | |
| | java application | | | |
| | | | | |
| | | | | |
| |db-url=localhost:1521/ | | ORACLE VM | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| +-----------------------+ | | |
| +--------------------------------+ |
+-----------------------------------------------------------------+
For details on how to set up NAT/port forwarding in VirtualBox, see the relevant section of this guide: https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
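If you use VirtualBox, the NAT forwarding rule can also be added from the command line. A minimal sketch, assuming the VM is called "Oracle VM" (a placeholder) and Oracle's default listener port 1521 on both host and guest:

$ VBoxManage modifyvm "Oracle VM" --natpf1 "oracle-listener,tcp,,1521,,1521"

Once the port is forwarded, the connection from your Mac looks like a plain local JDBC connection. Below is a minimal sketch; the service name XE and the scott/tiger credentials are placeholders to replace with your own, and the Oracle JDBC driver (ojdbc jar) must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VmOracleSmokeTest {
    public static void main(String[] args) throws Exception {
        // NAT port forwarding makes the VM's listener reachable on localhost
        String url = "jdbc:oracle:thin:@//localhost:1521/XE";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}

The same URL is what you would enter in Eclipse when defining a connection profile for the database.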
I have the following situation:
$ sdk list java
AdoptOpenJDK | | 15.0.1.j9 | adpt | | 15.0.1.j9-adpt
| | 15.0.1.hs | adpt | installed | 15.0.1.hs-adpt
| | 14.0.2.j9 | adpt | | 14.0.2.j9-adpt
| | 14.0.2.hs | adpt | | 14.0.2.hs-adpt
| | 13.0.2.j9 | adpt | | 13.0.2.j9-adpt
| | 13.0.2.hs | adpt | | 13.0.2.hs-adpt
| | 12.0.2.j9 | adpt | | 12.0.2.j9-adpt
| | 12.0.2.hs | adpt | | 12.0.2.hs-adpt
| | 11.0.9.j9 | adpt | | 11.0.9.j9-adpt
| >>> | 11.0.9.hs | adpt | installed | 11.0.9.hs-adpt
...
However, on AdoptOpenJDK the version https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/tag/jdk-11.0.8%2B10 is available and can be downloaded.
But when I try to install it via sdkman:
$ sdk install java 11.0.8.hs-adpt
Stop! java 11.0.8.hs-adpt is not available. Possible causes:
* 11.0.8.hs-adpt is an invalid version
* java binaries are incompatible with Darwin
* java has not been released yet
Is there a way to handle that via sdkman or not?
sdkman doesn't recognize every existing JDK build as a candidate version.
However, it lets you add a local version as compensation.
Thus you'll just have to download the specific version you need, store it in any path you want, and then use a command to register that local version as installed in sdkman:
https://sdkman.io/usage#localversion
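For example, after downloading and unpacking the 11.0.8+10 build linked above, something like this registers it (a sketch; the path and the local name 11.0.8.hs-local are placeholders you choose yourself):

$ sdk install java 11.0.8.hs-local ~/jdks/jdk-11.0.8+10/Contents/Home
$ sdk use java 11.0.8.hs-local

(On macOS the AdoptOpenJDK archive unpacks with the actual JDK home under Contents/Home.)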
The Deployment Admin Specification defines a way of creating Deployment Packages, which can contain Bundles and Resources.
I wanted to use the Deployment Admin to install two types of Deployment Packages. The first Deployment Package would contain the "framework" of the program: all infrastructure-level services and bundles, such as the Configuration Admin, Declarative Services, etc. The second Deployment Package would contain the "data" of the program: almost exclusively Resources to be processed by the Resource Processors in the framework package.
In this situation, the Resource Processors would be in Customizer bundles in the framework package.
+--------------------------------------------------+
| |
| Framework Deployment Package |
| |
+--------------------------------------------------+
| |
| Declarative Services : Bundle |
| |
+--------------------------------------------------+
| |
| Configuration Admin : Bundle |
| |
+--------------------------------------------------+
| |
| Species Resource Processor : Bundle (Customizer) |
| |
+--------------------------------------------------+
+-------------------------+
| |
| Data Deployment Package |
| |
+-------------------------+
| |
| Configuration Resources |
| |
+-------------------------+
| |
| Species Resources |
| |
+-------------------------+
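Concretely, the manifests of the two packages would look roughly like the following sketch (all symbolic names, paths and the Resource Processor PID are hypothetical; the headers are the DeploymentPackage-* and Resource-Processor headers from spec 114).

META-INF/MANIFEST.MF of the framework package:

DeploymentPackage-SymbolicName: com.example.framework
DeploymentPackage-Version: 1.0.0

Name: bundles/species-rp.jar
Bundle-SymbolicName: com.example.species.rp
Bundle-Version: 1.0.0
DeploymentPackage-Customizer: true

META-INF/MANIFEST.MF of the data package:

DeploymentPackage-SymbolicName: com.example.data
DeploymentPackage-Version: 1.0.0

Name: resources/species.xml
Resource-Processor: com.example.species.processor

Here com.example.species.processor is the (hypothetical) service PID under which the Species Resource Processor registers itself.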
When trying to do it this way, though, the second package is recognized as a foreign deployment package, and therefore can't be installed. The Specification (114.5, Customizer) mentions:
As a Customizer bundle is started, it should register one or more Resource Processor services. These Resource Processor services must only be used by resources originating from the same Deployment Package. Customizer bundles must never process a resource from another Deployment Package, which must be ensured by the Deployment Admin service.
To check whether this was true, I looked at the Auto Configuration Specification (115), an extension of the Deployment Package Specification that defines an Autoconf Resource Processor: a Resource Processor implementation that processes Resources from its own Deployment Package.
Based on this, if I wanted to use the Auto Configuration Specification, I would seemingly need to include the Auto Configuration Bundle and the Autoconf Resources within the same Deployment Package. This seems to lead to another problem, though. The Autoconf Resource Processor couldn't be used by any other package, since a Resource Processor can only process Resources from its own package; and the Autoconf Bundle itself couldn't be shipped by a separate, unrelated Deployment Package either, because that Bundle is already registered by the first package.
+---------------------------------+ +---------------------------------+
| | | |
| First Deployment Package | | Second Deployment Package |
| | | |
+---------------------------------+ +---------------------------------+
| | | |
| +-----------------------------+ | | +-----------------------------+ |
| | | | | | | |
| | Autoconf Bundle | | | | Autoconf Bundle | |
| | | | | | | |
| | SymbolicName: X | | | | SymbolicName: X | |
| | Version 1.0 | | | | Version 1.0 | |
| | | | | | | |
| +-----------------------------+ | | | Cannot be installed | |
| | | | | | because it's already | |
| | Autoconf Resource Processor | | | | installed by first | |
| | | | | | | |
| | Can only be used by | | | +-----------------------------+ |
| | this Deployment Package | | | | | |
| | | | | | Autoconf Resource Processor | |
| +-----------------------------+ | | | | |
| | | | Can only be used by | |
| Autoconf Resource | | | this Deployment Package | |
| | | | | |
+---------------------------------+ | +-----------------------------+ |
| |
| Autoconf Resource |
| |
| Cannot be used because Bundle |
| containing Resource Processor |
| isn't installed, and therefore |
| can't be used by only this |
| Deployment Package |
| |
+---------------------------------+
It seems that if two Deployment Packages wanted to use Autoconf Resources, we would be blocked: the Resource Processor is either installed as a plain Bundle, and therefore unusable by the Deployment Admin because it is not in a Deployment Package, or installed as part of a single Deployment Package, in which case no other Deployment Package can ever use that version of that Bundle.
Is my understanding correct, then, that two Deployment Packages not only have to be separate, but also completely unrelated, to the point of not reusing even a single Bundle? If so, does that mean it is impossible to have two Deployment Packages with Autoconf Resources, or at least two packages with the same type of Resource?
I have three Karaf nodes in one Cellar group on my machine. The first node (lb_node) is used as a load balancer, and the other two nodes (1_node and 2_node) are used as service nodes (with deployed features). Both service nodes have the /services address available. I've installed the cellar-http-balancer feature on the cluster, and I have installed a sample feature locally on both 1_node and 2_node.
The problem is that when I start 1_node and 2_node, their services are not properly registered on lb_node. The http-list output from lb_node:
ID | Servlet | Servlet-Name | State | Alias | Url
103 | CellarBalancerProxyServlet | ServletModel-10 | Failed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-11 | Failed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-12 | Failed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-9 | Failed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-13 | Failed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-8 | Deployed | /jolokia | [/jolokia/*]
103 | CellarBalancerProxyServlet | ServletModel-14 | Failed | /system/console/res | [/system/console/res/*]
103 | CellarBalancerProxyServlet | ServletModel-15 | Failed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-3 | Deployed | /gogo | [/gogo/*]
103 | CellarBalancerProxyServlet | ServletModel-2 | Deployed | /instance | [/instance/*]
103 | CellarBalancerProxyServlet | ServletModel-7 | Deployed | /features | [/features/*]
103 | CellarBalancerProxyServlet | ServletModel-6 | Deployed | /services | [/services/*]
103 | CellarBalancerProxyServlet | ServletModel-5 | Deployed | /system/console | [/system/console/*]
103 | CellarBalancerProxyServlet | ServletModel-4 | Deployed | /system/console/res | [/system/console/res/*]
As you can see, only one node's addresses were registered. When I enter the lb_node URL in a browser to check whether the feature from the other nodes works, it does. But when I shut down the registered node, lb_node no longer acts as a proxy; it throws java.net.ConnectException: Connection refused.
The tutorial at https://karaf.apache.org/manual/cellar/latest-4/#_http_balancer covers only the case of a single service node plus one balancer, which is not my setup.
Is there any way to achieve an active/active cluster with HTTP load balancing using Karaf and Cellar?
I'm posting this as an answer because I can't comment (not enough reputation). It doesn't answer your question directly, but it may help you.
I found a GitHub project that does what I think you are trying to achieve; I haven't tested it yet.
When I type java -version at the command prompt in Ubuntu, I get the following output:
The program 'java' can be found in the following packages:
* default-jre
* gcj-4.8-jre-headless
* openjdk-7-jre-headless
* gcj-4.6-jre-headless
* openjdk-6-jre-headless
Try: apt-get install <selected package>
I get the above output because I do not have Java installed. I want to know the difference between openjdk-7-jre-headless and openjdk-7-jre.
To quote Debian's wiki:
There are several virtual packages used in Debian for Java. These cover runtime compatibility and come in two flavours; headless (omits graphical interfaces) and normal.
Or to be more exact, consider this description from Oracle:
Headless mode is a system configuration in which the display device, keyboard, or mouse is lacking. Sounds unexpected, but actually you can perform different operations in this mode, even with graphic data.
As reported in this blog:
Headless is the same version as the latter but without support for keyboard, mouse and display systems. Hence it has fewer dependencies, which makes it more suitable for server applications.
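You can also check at runtime which mode the JVM is running in, using the standard java.awt API; a minimal sketch:

import java.awt.GraphicsEnvironment;

public class HeadlessCheck {
    public static void main(String[] args) {
        // true when the JVM has no display, keyboard or mouse support,
        // e.g. on a headless JRE or when started with -Djava.awt.headless=true
        System.out.println("Headless: " + GraphicsEnvironment.isHeadless());
    }
}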
To add to the previous answers: the normal Java package depends on the headless one and installs some extra packages.
I tried to compare dependencies of:
java-1.8.0-openjdk-headless-1.8.0.191.b12-0.el7_5.x86_64
java-1.8.0-openjdk-1.8.0.191.b12-0.el7_5.x86_64
The comparison was done with yum install on CentOS 7.6.
The normal Java (in contrast to headless) installed the following extras:
=============================|========|=========================|============|=======|=================
Package                      | Arch   | Version                 | Repository | Size  | Vulnerabilities
=============================|========|=========================|============|=======|=================
alsa-lib                     | x86_64 | 1.1.6-2.el7             | centos_7.6 | 424 k | 1 (2005)
dejavu-fonts-common          | noarch | 2.33-6.el7              | centos_7.6 | 64 k  | -
dejavu-sans-fonts            | noarch | 2.33-6.el7              | centos_7.6 | 1.4 M | -
fontconfig                   | x86_64 | 2.13.0-4.3.el7          | centos_7.6 | 254 k | 1 (2016)
fontpackages-filesystem      | noarch | 1.44-8.el7              | centos_7.6 | 9.9 k | -
giflib                       | x86_64 | 4.1.6-9.el7             | centos_7.6 | 40 k  | 5 (2018)
java-1.8.0-openjdk-headless  | x86_64 | 1:1.8.0.191.b12-0.el7_5 | centos_7.6 | 32 M  | ?
libICE                       | x86_64 | 1.0.9-9.el7             | centos_7.6 | 66 k  | 1 (2018)
libSM                        | x86_64 | 1.2.2-2.el7             | centos_7.6 | 39 k  | -
libX11                       | x86_64 | 1.6.5-2.el7             | centos_7.6 | 606 k | 3 (2013)
libX11-common                | noarch | 1.6.5-2.el7             | centos_7.6 | 164 k |
libXau                       | x86_64 | 1.0.8-2.1.el7           | centos_7.6 | 29 k  |
libXcomposite                | x86_64 | 0.4.4-4.1.el7           | centos_7.6 | 22 k  |
libXext                      | x86_64 | 1.3.3-3.el7             | centos_7.6 | 39 k  |
libXi                        | x86_64 | 1.7.9-1.el7             | centos_7.6 | 40 k  |
libXrender                   | x86_64 | 0.9.10-1.el7            | centos_7.6 | 26 k  |
libXtst                      | x86_64 | 1.2.3-1.el7             | centos_7.6 | 20 k  |
libfontenc                   | x86_64 | 1.1.3-3.el7             | centos_7.6 | 31 k  |
libxcb                       | x86_64 | 1.13-1.el7              | centos_7.6 | 214 k |
ttmkfdir                     | x86_64 | 3.0.9-42.el7            | centos_7.6 | 48 k  | -
xorg-x11-font-utils          | x86_64 | 1:7.5-21.el7            | centos_7.6 | 104 k | 1 (2008)
xorg-x11-fonts-Type1         | noarch | 7.5-9.el7               | centos_7.6 | 521 k |
=============================|========|=========================|============|=======|=================
Note that java-1.8.0-openjdk-headless is a dependency of java-1.8.0-openjdk.
Also note that the concrete dependencies may differ on your system.
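If you want to reproduce such a comparison on your own system, one option (an untested sketch; requires the yum-utils package) is to diff the dependency lists:

$ repoquery --requires java-1.8.0-openjdk-headless | sort > headless.txt
$ repoquery --requires java-1.8.0-openjdk | sort > normal.txt
$ diff headless.txt normal.txt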
I am using the Eclipse Java EE IDE with Apache Tomcat version 6 to develop a web application.
The structure of the application is very simple: there are just two classes, one is the servlet class and the other is an object which is built by the servlet and does most of the work.
The problem I am having is this: since this morning, changes I have made to both class files have not appeared in the behaviour of the web application. The application essentially behaves as if it is running my code from yesterday. To make sure this was the case, I temporarily altered the behaviour of the program in radical ways, and lo, these changes still did not affect the web application.
Some relevant information: I am running Ubuntu 12, my Eclipse project is set to build automatically, and the Tomcat server is configured to auto-load modules by default and to publish automatically whenever a resource changes. I have also cleaned out the server's working directory.
How can I overcome this issue? I need my web application to reflect the changes I have made to the source code of my servlet and the class it uses.
If you add the Tomcat server in the standard way via the Servers tab, the deploy path lives in the .metadata directory of your workspace.
The following directory tree is from my workspace, where I added Tomcat 7.
MY_WORKSPACE
+---.metadata
| \---.plugins
| +---org.eclipse.wst.server.core
| | | monitors.xml
| | | publish.xml
| | | servers.xml
| | | tmp-data.xml
| | |
| | +---publish
| | | publish0.dat
| | | publish1.dat
| | |
| | \---tmp0
| | +---conf
| | | | catalina.policy
| | | | catalina.properties
| | | | context.xml
| | | | server.xml
| | | | tomcat-users.xml
| | | | web.xml
| | | |
| | | \---Catalina
| | | \---localhost
| | +---logs
| | | catalina.2013-07-06.log
| | | catalina.2013-07-11.log
| | | host-manager.2013-07-06.log
| | | host-manager.2013-07-11.log
| | | localhost.2013-07-06.log
| | | localhost.2013-07-11.log
| | | localhost_access_log.2013-07-06.txt
| | | localhost_access_log.2013-07-11.txt
| | | manager.2013-07-06.log
| | | manager.2013-07-11.log
| | |
| | +---temp
| | +---webapps
| | +---work
| | | \---Catalina
| | | \---localhost
| | \---wtpwebapps
| | +---ROOT
| | | \---WEB-INF
| | | web.xml
| | |
| | \---MyWebProject
| | | index.html
| | |
| | \---WEB-INF
| | +---classes
| | | |
| | | \---several-packages-and-clases
| | \---lib
| | log4j-1.2.17.jar
| | slf4j-api-1.7.5.jar
| | slf4j-log4j12-1.7.5.jar
| |
| \---org.other.plugins
|
\---Servers
| .project
|
+---.settings
| org.eclipse.wst.server.core.prefs
|
\---Tomcat v7.0 Server at localhost-config
catalina.policy
catalina.properties
context.xml
server.xml
tomcat-users.xml
web.xml
The real path for my web application MyWebProject is under the wtpwebapps directory. You can delete MyWebProject there and try again.
If you have several Tomcats in your workspace, you will see directories like tmp0, tmp1, tmp2, ...
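For example, from a shell (a sketch; substitute your actual workspace path, and pick the tmpN directory of the server in question):

$ cd MY_WORKSPACE/.metadata/.plugins/org.eclipse.wst.server.core/tmp0
$ rm -rf wtpwebapps/MyWebProject

Then clean and republish the project on the server from within Eclipse.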
I have faced a similar issue; in my case, even though the tmp0 folder was correctly updated with the new class and resource files, I still saw the same problem.
I resolved it by disabling the JRebel plugin for Eclipse. I had JRebel installed in Eclipse, and it was still monitoring and deploying the project from another workspace (not even this workspace). I am going to raise a bug with the JRebel team as well.
Simply disable JRebel for that project and it should work normally again.