I am currently working on a legacy application where database calls are scattered all over the place. I need to execute some security-related (business) logic every time any sort of DML is executed. For this, I am thinking of using a Java agent to intercept the calls and then execute the business logic.
The issue is that this agent needs to be secured: I need to ensure that there is no way for a different agent, developed with similar but different logic, to be loaded instead.
Is there any kind of mutual authentication possible between the application and the Java agent which would ensure that a wrong agent is in no way able to be loaded?
Is there any kind of mutual authentication possible between the application and the Java agent which would ensure that a wrong agent is in no way able to be loaded?
No. The application has (almost) no knowledge of the agent. There is certainly no way for the application to properly validate the agent.
I guess you could design a "protocol" where the application only works if an agent calls a particular application method in a particular way. However, that could be circumvented by reverse engineering the application or a genuine agent, and using that knowledge to write a rogue agent that imitates the required behavior.
But I think that you are going about this the wrong way. There are many better (simpler, cleaner, more efficient) ways to inject behavior into a Java application. I suspect that part of your motivation is wanting to modify the legacy application as little as possible, and that this is the primary driver for your complicated agent-based approach.
(My reaction to that is that you are likely to spend more effort on the agent stuff than you would save by not modifying the legacy app. And the result will be much harder to maintain.)
I also suspect that your requirement for mutual authentication of the application and the interceptors (however they are implemented) is not really necessary: it is a consequence of using agents rather than a fundamental requirement.
If I was doing this and I was this concerned about protecting against (insider) attempts to subvert the business rules, I would:
Choose an alternative mechanism
Implement the new code and modifications to the legacy app as required.
Audit the main application and interceptor logic
Join the two parts together (depending on how the mechanism works).
Put the combined system onto a secured machine or container.
The Java attach API is already secured in the sense that an agent must either:
be specified on the command line.
be attached from a JVM running as the same OS user.
In both cases, whoever loads the agent must already hold privileges that exceed those of the code running within the JVM process. For these reasons, it should not be necessary to validate your agent from within the JVM.
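For the second case, dynamic attach uses the com.sun.tools.attach API, roughly as in the following sketch (the PID and jar path are placeholders); the attach call only succeeds if this process runs as the same OS user as the target JVM:

    import com.sun.tools.attach.VirtualMachine;

    // Minimal dynamic-attach sketch (requires the jdk.attach module).
    // "12345" and the jar path are placeholders for the target JVM's PID
    // and your agent jar.
    public class AttachExample {
        public static void main(String[] args) throws Exception {
            VirtualMachine vm = VirtualMachine.attach("12345");
            try {
                vm.loadAgent("/path/to/security-agent.jar");
            } finally {
                vm.detach();
            }
        }
    }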
In theory, you could write a Java agent that instruments the instrumentation API itself and make sure that this agent is attached first. The instrumented instrumentation API could then inspect the jar file of any subsequently attached agent prior to its attachment and compare it to some known value such as a checksum. If the value does not match, you could fail that agent's initialization.
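A rough sketch of the comparison step, assuming the expected SHA-256 digest of the legitimate agent jar is known (the constant and path are hypothetical placeholders):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    // Compares the digest of a candidate agent jar against a known-good value.
    // EXPECTED_SHA256 is a placeholder that would be baked in at build time.
    public class AgentVerifier {
        private static final String EXPECTED_SHA256 =
                "0000000000000000000000000000000000000000000000000000000000000000";

        public static boolean isTrusted(Path agentJar) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(agentJar));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return EXPECTED_SHA256.equalsIgnoreCase(hex.toString());
        }
    }

As noted above, this only raises the bar: anyone with the privileges to attach an agent in the first place also has the privileges to tamper with the check itself.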
Related
For a few days I have been stuck on a (for me) quite challenging problem.
In my current project we have a big SOA-based architecture. Our goal is to monitor and log all incoming requests, the invoked services, the invoked DAOs, and their results. For certain reasons we can't use aspects, so our idea is to connect directly to the JVM and observe what's going on.
In our research we found Byteman and Byte Buddy, which both use the JVM Tool Interface (JVMTI) to connect to the VM and inject code.
Looking closer at Byteman, we discovered that we would have to specify the Byteman operation for each affected class, which in our case is simply impossible.
Would there be a better, more efficient way to log all incoming requests, the invoked services, the invoked DAOs, and their results? Should we write our own agent that connects to JVMTI? What would you guys recommend?
I think having to work out each specific service method call that way can quickly become overwhelming. Wouldn't it be simpler and smarter to use an APM (application performance monitoring) tool?
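If you do decide to write your own agent, note that Java agents typically hook in through java.lang.instrument rather than JVMTI directly. A minimal skeleton that merely logs which classes from your service/DAO packages are loaded might look like this sketch (the package prefix is a placeholder; real call logging would additionally rewrite bytecode, e.g. with Byte Buddy or ASM):

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    // Registered via -javaagent:logging-agent.jar (with Premain-Class set in the
    // manifest). "com/example/" is a placeholder for your own packages.
    public class LoggingAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    if (className != null && className.startsWith("com/example/")) {
                        System.out.println("Loaded: " + className);
                    }
                    return null; // null means "leave the class unchanged"
                }
            });
        }
    }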
Are there any recommendations, best practices, or good articles on providing integration hooks?
Let's say I'm developing a web-based ordering system. Eventually I'd like my client to be able to write some code, package it into a jar, drop it into the classpath, and have it change the way the software behaves.
For example, if an order comes in, the code
1. may send an email or sms
2. may write some additional data into the database
3. may change data in the database, or decide that the order should not be saved into the database (cancel the data save)
Point 3 is quite dangerous since it interferes too much with data integrity, but if we want integration to be that flexible, is it doable?
Options so far
1. provide hooks for specific actions, e.g. if this and that occurs, call this method; the client writes the implementation for that method. This is too rigid, though.
2. a mechanism similar to servlet filters: code runs before the actual action executes and code runs after. I'm not quite sure how this could be designed, though.
We're using Struts2 if that matters.
This integration must be able to detect a "state change", not just the "end state" after the core action executes.
For example, if an order changes state from In Progress to Paid, it should do something, but if it changes from Draft to Paid, it should not do anything. The core action in this case would be loading the order object from the database, changing the state to Paid, and saving it again (or doing an SQL update).
Many options, including:
Workflow tool
AOP
Messaging
DB-layer hooks
The easiest (for me at the time) was a message-based approach. I did a sort-of ad-hoc thing using Struts 2 interceptors, but a cleaner approach would use Spring and/or JMS.
As long as the relevant information is contained in the message, it's pretty much completely open-ended. Having a system accessible via services etc. means the messages can tap back into the main app in ways you haven't anticipated.
If you want this to work without system restarts, another option would be to implement handlers in a dynamic language (e.g., Groovy). Functionality can be stored in a DB. Using a Spring factory makes this pretty fun and reduces some of the complexity of a message-based approach.
One issue with a synchronous approach, however, is that if a handler deadlocks or takes a long time, it can impact that thread at the least, or the system as a whole under some circumstances.
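Whatever delivery mechanism you choose, the key to the state-change requirement is to publish both the old and the new state in the event. A minimal sketch, with all names hypothetical and not tied to Struts2, Spring, or JMS (each type would live in its own file):

    // Carries both the old and the new state so handlers can react to specific
    // transitions (e.g. In Progress -> Paid but not Draft -> Paid).
    public class OrderStateChangedEvent {
        public final long orderId;
        public final String oldState;
        public final String newState;

        public OrderStateChangedEvent(long orderId, String oldState, String newState) {
            this.orderId = orderId;
            this.oldState = oldState;
            this.newState = newState;
        }
    }

    // Client-supplied hook, discovered from the jar on the classpath
    // (e.g. via java.util.ServiceLoader or a Spring factory).
    public interface OrderHook {
        void onStateChanged(OrderStateChangedEvent event);
    }

    // Example handler a client might drop in.
    public class PaidNotificationHook implements OrderHook {
        @Override
        public void onStateChanged(OrderStateChangedEvent e) {
            if ("In Progress".equals(e.oldState) && "Paid".equals(e.newState)) {
                // send an email/SMS, write extra rows, etc.
            }
        }
    }

The core action then only has to fire the event after changing the state; whether handlers run synchronously, via JMS, or as Groovy scripts loaded from the database becomes an implementation detail.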
I have a Java app (in fact it is Grails) and I need to execute an external program. Preferably I want my app to be self-contained, i.e. the external scripts/programs should be part of the war file. This external script/program also needs to produce some files.
I guess my question is whether there are best practices for doing this sort of thing so that the final product is not too flaky with respect to app permissions and whatnot.
One of the things you need to ensure is that only one thread executes an instance of your program at a time, so you need some locking and synchronization there.
Imagine a scenario where multiple users/requests/threads try to execute the same program with different input; that would be a disaster. So you either need to lock the program while one execution is running and make the others wait, or you need to create a new instance every time you want to run the program. You should be very careful about this.
Also, you want to clean up after the program runs, especially if it produces any output.
You need to be careful that the user cannot pass malicious commands to your system and try to hijack other applications.
Overall, you have to be careful about security and about correctness (the locking scheme I mentioned first).
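A rough sketch of those points using ProcessBuilder; the program path is a placeholder and assumes the tool has been unpacked from the war to a known location. It serializes executions with a lock, gives each run its own working directory, enforces a timeout, and cleans up afterwards:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Comparator;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.stream.Stream;

    public class ExternalToolRunner {
        private static final ReentrantLock LOCK = new ReentrantLock();

        public static int run(String input) throws Exception {
            LOCK.lock();                              // only one execution at a time
            Path workDir = Files.createTempDirectory("tool-run-");
            try {
                Process p = new ProcessBuilder("/opt/app/bin/external-tool", input)
                        .directory(workDir.toFile())
                        .redirectErrorStream(true)
                        .redirectOutput(workDir.resolve("output.log").toFile())
                        .start();
                if (!p.waitFor(60, TimeUnit.SECONDS)) {   // don't hang the request
                    p.destroyForcibly();
                    throw new IllegalStateException("external tool timed out");
                }
                return p.exitValue();
            } finally {
                try (Stream<Path> files = Files.walk(workDir)) {  // clean up outputs
                    files.sorted(Comparator.reverseOrder())
                         .forEach(f -> f.toFile().delete());
                }
                LOCK.unlock();
            }
        }
    }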
Security - ensure that your app does not allow the execution of arbitrary (user-supplied) code on the host system. Think SQL-injection-style attacks. If you need to pass around data, I suggest inserting it into a database first and then passing the primary key to your external process; this will help avoid buffer-overflow-type situations.
Robustness - can this program fail, take a long time, or have other unknown side effects? Isolate your main web app from this program by executing it from a different thread, or even a different process.
Logging - if you need to collect logging from this external app, you may want to pass in a session id (or equivalent) so you can trace any errors back to web sessions.
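Put together, the command line might then carry nothing but identifiers, so no user-supplied text ever reaches the external program directly; the path and flags here are illustrative:

    import java.io.IOException;

    // Only identifiers go on the command line; the external process loads the
    // actual data from the database using the primary key, and the session id
    // lets its log output be correlated with the web session.
    public class ToolInvocation {
        static Process launch(long orderId, String sessionId) throws IOException {
            return new ProcessBuilder(
                    "/opt/app/bin/external-tool",         // placeholder path
                    "--order-id", Long.toString(orderId),
                    "--session-id", sessionId)
                .start();
        }
    }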
You could design a small administrative component that tracks these execution requests. It would be a very useful component, as most projects have a need like this.
The external program should be executed from a service, and the request to that service should itself be asynchronous. On top of this, you can then get feedback and track the status of each request.
I am working on an application in Java on Google App Engine where we have a concept of RPC, but when should we ideally make use of RPC? The same functionality that I am doing with RPC could be implemented even without it. So in what scenario should RPC ideally be used?
You typically use RPC (or similar) when you need to access a service that is external to the current application.
If you are simply trying to call a method that is part of this application (i.e. in this application's address space) then it is unnecessary ... and wasteful to use RPC.
... when should we ideally make use of RPC?
When you need it, and not when you don't need it.
The same functionality that I am doing with RPC could be implemented even without it.
If the same functionality can be implemented without RPC, then it sounds like you don't need it.
So in what scenario should RPC ideally be used?
When it is needed (see above).
A more instructive answer would describe the scenarios where there are good reasons to implement different functions of a "system" in different programs running in different address spaces and (typically) on different machines. Good reasons might include such things as:
insulating one part of a system from another
implementing different parts of a system in different languages
interfacing with legacy systems
interfacing with subsystems provided by third party suppliers; e.g. databases
making use of computing resources of other machines
providing redundancy
and so on.
It sounds like you don't need RPC in your application.
RPC is used whenever you need to access data from resources on the server that are not available on the client. For example web services, databases, etc.
RPC is designed for request/response; i.e. you have a self-contained request to a service and you expect a response (a return value or a success/failure status).
You can use it anywhere you might use a method call, except that the object you are calling is not local to the current process.
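For illustration, with Java RMI (one concrete RPC mechanism) the call site looks almost like a local method call; the interface and host name below are hypothetical:

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // A hypothetical service contract; the implementation lives in another JVM.
    interface QuoteService extends Remote {
        String getQuote(String symbol) throws RemoteException;
    }

    public class RpcClientExample {
        public static void main(String[] args) throws Exception {
            // Look up the remote stub, then call it like an ordinary object.
            QuoteService service =
                    (QuoteService) Naming.lookup("rmi://quotes-host/QuoteService");
            System.out.println(service.getQuote("ACME"));
        }
    }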
I'd like to trace a Java application at runtime to log and later analyze all of its behaviour.
Is there a way to hook into a Java application to get runtime information such as method calls (with parameters and return values) and the state of an object (i.e. its attributes and their values)?
My goal is to get a complete understanding of the application's behaviour and how it deals with the data.
If you need highly customized logging and runtime processing, one alternative to profilers is to use aspects and load-time weaving.
We use AspectJ in this way to capture and log the authentication information for users who call a number of low-level methods for debugging purposes and to undo mistaken changes.
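A minimal sketch of such an aspect, logging each call's arguments and return value; the pointcut package is a placeholder, and the aspect is woven at load time via the aspectjweaver agent and an aop.xml that includes it:

    import java.util.Arrays;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // "com.example.app" is a placeholder for the packages you want to trace.
    @Aspect
    public class CallTracingAspect {

        @Around("execution(* com.example.app..*(..))")
        public Object logCall(ProceedingJoinPoint jp) throws Throwable {
            System.out.println("-> " + jp.getSignature()
                    + " args=" + Arrays.toString(jp.getArgs()));
            Object result = jp.proceed();
            System.out.println("<- " + jp.getSignature() + " returned " + result);
            return result;
        }
    }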
Use a profiler, for example JProfiler or one from this overview of open-source Java profilers. Whenever I had to find deadlocks, for example, these tools were priceless...
NetBeans also ships with a profiler that works well; to use it, see http://profiler.netbeans.org/
Maybe have a look at Glassbox, a troubleshooting agent for Java applications that automatically diagnoses common problems. From Glassbox - Automated monitoring and troubleshooting using AOP:
Glassbox deploys as a war file to your appserver and then uses AspectJ load time weaving to monitor application components and other artifacts, in order to identify problems like excess or failed remote calls, slow queries, too many database queries, thread contention, even what request parameters caused failures. All this without changing the code or the build process.
(...)
Glassbox monitors applications non-invasively by using aspects to track component interactions. We also monitor built-in JMX data, notably on a Java 5 VM we sample thread data (every 100 ms by default). As a request is processed, we summarize noteworthy events such as where time was spent and what parameters were involved in making things slow or fail. We also detect higher-level operations (such as Struts actions or Spring controllers) that we use to report on. Our AJAX Web client then provides summaries of status by operation on the machines being monitored and we generate a more detailed analysis on request. Glassbox allows monitoring clusters of servers: the Web app uses JMX Remote or direct RMI to access data from remote servers. We also provide JMX remote access to the lower-level summary statistics.
It's a nice application, give it a try.