I've got an Oracle 10g database, and I have a third-party jar file (the MQ jars). I want to be able to run a trigger in my database that ultimately runs code in a stored procedure to drive MQ Series and send messages. I can't figure out how to specify a classpath for my jar file that will be recognized when the trigger executes. How can I do this?
You can use loadjava (or dbms_java.loadjava) to load classes or JARs into the database, though for a third-party JAR that seems unwieldy. If it's your database, you may be able to set the CLASSPATH to include the external files before starting the database. I don't think your user session CLASSPATH, or any other environment variable belonging to whoever takes the action that causes the trigger to fire, will ever have any effect, not least for security reasons: you don't want a user to be able to subvert the expected action by substituting their own Java code.
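For what it's worth, the usual shape of the database-side code is a class with a public static method that you load with loadjava (along with the MQ jars), expose through a PL/SQL call specification, and call from the trigger. Here is a rough sketch assuming the classic com.ibm.mq base classes are available inside the database; the class name, method signature, and queue details are all made up:

import java.io.IOException;

import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

// Hypothetical wrapper intended to be loaded into the database with loadjava
// and exposed as a Java stored procedure via a PL/SQL call spec, e.g.
//   CREATE OR REPLACE PROCEDURE send_mq(qmgr VARCHAR2, queue VARCHAR2, body VARCHAR2)
//   AS LANGUAGE JAVA
//   NAME 'TriggerMqSender.send(java.lang.String, java.lang.String, java.lang.String)';
public class TriggerMqSender {

    // Static so it can be used as a Java stored procedure.
    public static void send(String queueManagerName, String queueName, String body)
            throws MQException, IOException {
        MQQueueManager qMgr = new MQQueueManager(queueManagerName);
        try {
            MQQueue queue = qMgr.accessQueue(queueName, MQC.MQOO_OUTPUT);
            MQMessage message = new MQMessage();
            message.writeString(body);
            queue.put(message, new MQPutMessageOptions());
            queue.close();
        } finally {
            qMgr.disconnect();
        }
    }
}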
I am migrating a legacy project to a new server. Previously the project used an Oracle DB, but now I want it to use Postgres. The queries are simple enough and work the same in Postgres.
However, the project is missing a Postgres JDBC driver. Can I somehow add this dependency alongside the jar without recompiling?
Can I somehow add this dependency to the jar without recompiling?
It depends.
If you are running the server as java -jar myserver.jar ..., then you will at least need to modify the manifest in the JAR file. Strictly speaking this doesn't entail recompiling, but you do need to explode, modify and repack the JAR file.
If the server uses Class.forName to explicitly load an Oracle driver class, then you will need to change that code to load the Postgres driver class instead. (There are other ways to use JDBC that avoid this, but it depends on how your legacy server is implemented.)
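If you do find such a call, the change is typically a one-line edit. A minimal sketch, with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;

public class DriverSwitchSketch {
    public static void main(String[] args) throws Exception {
        // Legacy code typically contains something like:
        //   Class.forName("oracle.jdbc.OracleDriver");
        // The Postgres equivalent is:
        Class.forName("org.postgresql.Driver");

        // With a JDBC 4 driver jar on the classpath the explicit load above is
        // optional, because drivers register themselves via the ServiceLoader.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        conn.close();
    }
}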
If your server uses Oracle-specific database classes, or Oracle-specific SQL features (or needs to do the equivalent in the Postgres world), then more extensive changes will be required.
But without actually examining your codebase in detail, we can't predict what is required.
My advice is to replace the Oracle driver JAR with a Postgres driver JAR, and see what happens when you run your server against a Postgres database with the appropriate schemas and data.
But I wouldn't do this "in production". Do it in a test environment. If you can't set up a suitable test environment ... forget it.
And if you don't have the source code for your server, I would forget it too. If anything goes wrong you will most likely need source code to figure out the problem and fix it.
I've been messing around with Apache Derby inside Eclipse. I've booted up a Network Server, and I've been working with servlets. In my Eclipse project, I have a class called "User", inside the package "base.pack". I have an SQL script open, and I've been trying to convert User, which implements Serializable, into a custom type. When I run the following lines, everything works fine:
CREATE TYPE CARTEBLANCHE.bee
EXTERNAL NAME 'base.pack.User'
LANGUAGE JAVA
This follows the general format they identify here: http://db.apache.org/derby/docs/10.7/ref/rrefsqljcreatetype.html#rrefsqljcreatetype
Now, when I try to create a table using this new type, I get an error. I run the following line:
CREATE TABLE CARTEBLANCHE.TestTabel (ID INTEGER NOT NULL, NAME CARTEBLANCHE.bee, PRIMARY KEY(ID));
And I receive the following error:
The class 'base.pack.User' for column 'NAME' does not exist or is inaccessible. This can happen if the class is not public.
Now, the class is in fact public, and as I noted before, it does implement Serializable. I don't think I'm stating the package name incorrectly, but I could be wrong. I'm wondering, is this an issue with my classpath? If so, how would you suggest I fix this? I admit that I do not know much about the classpath.
Thank you.
(For reference, I have configured my project build path to include derby.jar, derbyclient.jar, derbytools.jar, and derbynet.jar, and I have put these files into my project's lib folder as well).
As politely as I can, may I suggest that if you are uncomfortable with Java's CLASSPATH notion, then writing your own custom data types in Derby is likely to be a challenging project?
In the specific case you describe here, one issue that will arise is that your custom Java code has to be available not only to your client application, but also to the Derby Network Server, which means you will need to be modifying the server's CLASSPATH as well as your application's CLASSPATH.
It's all possible, it's just not a beginner-level project.
To get started with customizing your Derby Network Server, the first question is how you are starting it. Here's an overview of the general process: http://db.apache.org/derby/docs/10.11/adminguide/tadmincbdjhhfd.html
Depending on precisely how you are starting the Derby Network Server, you'll possibly be editing the CLASSPATH setting in the startNetworkServer or startNetworkServer.bat script, or editing the CLASSPATH setting in your own script that you have written to start the server.
If it's a tool like Eclipse or Netbeans which is starting the Derby Network Server, you'll need to dig into the details of that tool to learn more about how to configure its CLASSPATH.
And if you've written a custom Java application to start the Derby Network Server (e.g., as described here: http://db.apache.org/derby/docs/10.11/adminguide/tadminconfig814963.html) then you'd be configuring the CLASSPATH of your custom application.
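For that last case, here is a minimal sketch of such a launcher, using the default host and port; the point is that whatever jar holds base.pack.User must be on this JVM's classpath:

import java.io.PrintWriter;
import java.net.InetAddress;

import org.apache.derby.drda.NetworkServerControl;

// Sketch: start the Network Server from your own code, so it runs in a JVM
// whose classpath you control (and which can therefore include your UDT classes).
public class EmbeddedNetworkServer {
    public static void main(String[] args) throws Exception {
        NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
        server.start(new PrintWriter(System.out, true));
    }
}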
Regardless, as a basic step, you're going to want to deploy your custom Java extension classes in the Derby Network Server's classpath. That means building them into a .jar file and putting that .jar file somewhere the Derby Network Server has access to. You'll also want to make that build-a-jar-and-copy-it-to-the-right-location process straightforward, so integrate it into whatever build tool you're using (Apache Ant?).
And you'll need to consider the Java security policy: the default policy will prevent you from trivially loading custom Java classes into your Derby Network Server, since that looks like a malware attack and the server tries to block it. So study this section of the security manual: http://db.apache.org/derby/docs/10.11/security/tsecnetservrun.html
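For completeness, the class named in EXTERNAL NAME only needs to be an ordinary public, serializable class; something shaped like the sketch below (the field is made up) is enough, provided it is packaged into the jar that the server's CLASSPATH points at:

package base.pack;

import java.io.Serializable;

// Minimal shape of a class usable as a Derby user-defined type:
// public and serializable.
public class User implements Serializable {
    private static final long serialVersionUID = 1L;

    private String name;   // hypothetical field

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }
}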
I have a single .war file with my web application. The root of the problem is that I want to keep one .war file for all customers (who have different environments). For this I should keep different versions of libraries in the .war and decide which of them to use at deployment.
For example:
The first customer has Oracle 10, the second one has Oracle 11. I want to keep both ojdbc.jar files in my war and choose which of them to use according to some parameter in a properties file.
One solution to this problem is writing my own ClassLoader, which will not load the unneeded files (as shown here: https://serverfault.com/questions/317901/tomcat-possible-to-exclude-jars-during-app-deployment).
The problem is that our ClassLoader class would have to go into the "tomcat\lib" directory, which makes installation of my application more complex.
Maybe there are other ways to solve this problem? It would be great to do it programmatically inside my application, using reflection or something else.
Thanks for the help!
IMHO the best practice for the example you give (a database driver) is to provide a datasource through tomcat. This implies that the database driver jar needs to be in the global classpath (tomcat/lib). It also preserves your intent to have the same .war file for all customers - when you update it, you provide an identical file to all of them.
In the case of a database connection, you'll most likely need a customized database for each customer anyway - providing this as a datasource through tomcat makes it necessary to configure it once (on first installation), but you won't need to worry about it afterwards.
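For reference, the webapp side of that then reduces to a JNDI lookup; a sketch, where the resource name jdbc/CustomerDB is an assumption and must match the Resource you define in Tomcat's context configuration:

import java.sql.Connection;

import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch: obtain a connection from a container-managed datasource,
// so the driver jar lives in tomcat/lib instead of the .war.
public class DataSourceLookupSketch {
    public static Connection open() throws Exception {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/CustomerDB");
        return ds.getConnection();
    }
}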
If database drivers are not your only problem, please update your question with more examples of differing environments/jars.
Right now my team deals with about 4-5 different servers and about 2-3 different DB servers, and we're using environment variables to decide which server we're on and what server configuration to use.
Is there a better way to do this as my team continues to expand? I've considered compiler flags/args, but they don't seem as robust.
From my perspective, in Java, you have basically three ways to crack this cookie:
Environment variables
-D JVM parameters (which are System Properties)
Properties files
You've already discovered Environment Variables, and that is pretty much "the unix way" to get the effect you are after: different configuration applied to a common binary, customizing the running application for the environment it is executing on.
System Properties are really the Java "moral equivalent" of Environment Variables. They come in via -D parameters on your application's command line like...
java -Dlogback.configurationFile=/opt/dm/logback.xml -cp app.jar org.rekdev.App
Explicit properties-file processing (http://docs.oracle.com/javase/tutorial/essential/environment/properties.html) is a third variant, which you often see coupled with -D to get default behavior that can be overridden at runtime from the command line. That is basically what is going on with the logback.xml configuration above: the JAR file has a logback.xml inside it that will be used unless a System Property called "logback.configurationFile" exists, at which point the app will load that file instead.
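A minimal sketch of that defaults-plus-override pattern; the property name app.configurationFile and the packaged app.properties are assumptions:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class ConfigSketch {
    public static Properties load() throws Exception {
        Properties props = new Properties();

        // Default configuration packaged inside the jar.
        try (InputStream in = ConfigSketch.class.getResourceAsStream("/app.properties")) {
            if (in != null) {
                props.load(in);
            }
        }

        // A -Dapp.configurationFile=/path/to/file on the command line wins.
        String override = System.getProperty("app.configurationFile");
        if (override != null) {
            try (InputStream in = new FileInputStream(override)) {
                props.load(in);
            }
        }
        return props;
    }
}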
As you try to figure out how to keep this all in sync and working correctly in a multi-server environment, consider using Chef (http://wiki.opscode.com/display/chef/Home) to do the deployments and put each environment's customizations under Chef's control. Put the Chef "recipes" in version control and, voila, full-on configuration management.
SHIP IT!
I can see two scenarios:
You embed all the different properties files within your package (which can be a war, ear, or jar), or keep them on the file system (/yourapp/etc/)
You embed only one properties file, which is created during the build (with Ant or Maven)
Say your app is named foo.
Solution 1
This has the advantage that your app can be put as-is on any of the supported servers (all those that have a properties file in your app package).
Your properties files will be named foo.dev.properties, foo.test.properties, foo.prod.properties, foo.damien.properties, foo.bob.properties.
Another advantage is that every developer has their own dev file that they can safely push to svn/git/whatever and be sure that other developers won't destroy their config.
At runtime the application can check a -D parameter, or even retrieve the hostname dynamically, in order to load the correct properties file.
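A rough sketch of that lookup for solution 1; the -Denv parameter name and the file-naming scheme follow the foo.ENV.properties convention above:

import java.io.InputStream;
import java.net.InetAddress;
import java.util.Properties;

public class FooConfig {
    public static Properties load() throws Exception {
        // Prefer an explicit -Denv=dev/test/prod, fall back to the hostname.
        String env = System.getProperty("env");
        if (env == null) {
            env = InetAddress.getLocalHost().getHostName();
        }

        Properties props = new Properties();
        try (InputStream in = FooConfig.class.getResourceAsStream("/foo." + env + ".properties")) {
            if (in == null) {
                throw new IllegalStateException("No properties file for environment: " + env);
            }
            props.load(in);
        }
        return props;
    }
}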
Solution 2
It has the advantage that your package is not polluted by unnecessary properties files.
You must configure a lot of Ant tasks/Maven targets in order to build for a specific environment. You will also have the properties files for the various environments in your source directory, but only one will be shipped with your app. That one, foo.properties, will only contain placeholders for values, and the values will be filled in from the right foo.ENV.properties by the corresponding Ant task/Maven target.
At my current job, and at the previous one as well, we used solution 1; I think it brings flexibility.
Some parameters (like the database user/password) were fetched directly from environment variables on the Unix servers, though (so that only the server admins knew the credentials).
You can safely mix the solutions to get whatever gives you and your team the most flexibility.
I have a particularly tricky situation here. I am supporting a Java-based packaged web application that runs on JBoss. Because it is a packaged application, I don't really have access to any code or know how the code was developed.
What I am trying to find out is what the JDBC fetchSize is being set to. I don't know where it is being set - at the ResultSet level, the Statement level, etc.
Here is what I have tried so far:
1) I tried to configure tracing for the JDBC driver by downloading the ojdbc_g.jar file from Oracle and following the instructions that they provided. Unfortunately, in those instructions, it states that I need to add this jar file to the CLASSPATH and I am not sure where that is being set in this application. Needless to say, this path was not fruitful, as I was never able to successfully get a trace file output.
2) I tried using WireShark to capture all TNS packets to see if I could find it out that way, and this again was fruitless because at that level, it is next to impossible to actually find how many rows are being passed each time. There are so many other dependencies there, and I was unable to get the info I am after.
3) The last thing I tried was configuring a database-side trace of SQL*Net calls to the database, but I am not sure whether I have to do this in the sqlnet.ora file or by tracing the listener.
This is where I am stuck now. Again, all I want to do is find out what the application's JDBC fetchSize is being set to (if it's even being set at all). I know that this can impact the performance of the application, so that's why I want to find it.
Any guidance is appreciated.
Thanks.
Unpack the jar file and try grepping for fetchSize in the class files; that will tell you which class sets it. Then disassemble, decompile, or debug that class to see what it is set to.
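For reference, what you're grepping for in the decompiled code are calls like these (a sketch; the query and values are made up). If no setFetchSize call exists at all, the Oracle driver's default of 10 rows applies.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeSketch {
    static void example(Connection conn) throws Exception {
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(100);              // applies to result sets created by this statement
        ResultSet rs = stmt.executeQuery("SELECT * FROM some_table");
        rs.setFetchSize(50);                 // can also be overridden per result set
        while (rs.next()) {
            // process rows ...
        }
        rs.close();
        stmt.close();
    }
}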