Hi to all Java experts!
I am working on onboarding to a new and shiny process visualization service and I need your help!
My project structure goes like this:
The Service package depends on the Core package, which depends on the Util package. Something like this:
Service
|-|- Core
|-|-|- Util
The Service package has the main method where our code begins. It calls some of the Core methods, which use the Util package to read information from the input.
package com.dummy.service;

public void main(Object input) {
    serviceCore.call(input);
}

package com.dummy.core;

public void call(Object input) {
    String stringInput = util.readFromInput(input);
    // Do stuff
}

package com.dummy.util;

public String readFromInput(Object input) {
    // return stuff;
}
The problem starts when I want to onboard to the visualization service. One requirement is to use a unique transaction Id for each call to the service.
My question is: how do I share the process ID between all of these methods without doing too much refactoring of the code? To see the entire process in the Process Visualization tool I will have to use the same ID across the entire call. My vision is that this will look something like:
package com.dummy.service;

public void main(Object input) {
    processVisualization.signal(PROCESS_ID, "transaction started");
    serviceCore.call(input);
    processVisualization.signal(PROCESS_ID, "transaction ended");
}

package com.dummy.core;

public void call(Object input) {
    processVisualization.signal(PROCESS_ID, "Method call is invoked");
    String stringInput = util.readFromInput(input);
    // Do stuff
}

package com.dummy.util;

public String readFromInput(Object input) {
    processVisualization.signal(PROCESS_ID, "Reading from input");
    // return stuff;
}
I was thinking about the following options, but they are all just abstract ideas that I am not even sure can be implemented. And if they can - then how?
1. Creating a new package that all three packages depend on and that would "hold" the process ID for each call. But how? Should I use a static class in this package? A singleton?
2. I've read this post about ThreadLocal variables: When and how should I use a ThreadLocal variable? but I am not familiar with these and not sure how to implement the idea - should it go into a separate package like I mentioned in 1? (See the sketch after this list.)
3. Changing the method signatures to pass the ID as a parameter. This is, unfortunately, too pricey in terms of time and the danger of a large refactoring.
4. Using file writing - saving the ID in some file that is accessible throughout the process.
5. Constructing a unique ID from the input - I think this could be the perfect solution, but we may receive the same input in separate calls to the service.
6. Accessing the JVM for some unique transaction ID. I know that when we are logging we have the RequestId printed in the log line. This is the pattern we use in our Log4J configuration:
   <pattern>%d{dd MMM yyyy HH:mm:ss,SSS} %highlight{[%p]} %X{RequestId} (%t) %c: %m%n</pattern>
   This RequestId is a variable on the ThreadContext that is created before the job. Is it possible and/or recommended to access this parameter and use it as a unique transaction ID?
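Here is a rough sketch of what idea 2 could look like, assuming a new shared package that Service, Core and Util all depend on (the package and class names are my own invention, not an existing API):

package com.dummy.common;

// Hypothetical shared holder for the per-call process ID.
public final class ProcessContext {

    private static final ThreadLocal<String> PROCESS_ID = new ThreadLocal<>();

    private ProcessContext() {
    }

    public static void start(String processId) {
        PROCESS_ID.set(processId);
    }

    public static String current() {
        return PROCESS_ID.get();
    }

    public static void clear() {
        // Important when threads are pooled, otherwise the ID leaks into the next call.
        PROCESS_ID.remove();
    }
}

The Service package would call ProcessContext.start(UUID.randomUUID().toString()) at the beginning of main (and clear() in a finally block), while Core and Util would read ProcessContext.current() when calling processVisualization.signal(...). Note that this only works as long as the whole call stays on one thread.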
In the end we utilized Log4j's ThreadContext.
It's probably not the best solution, since we are repurposing something that is meant for logging, but this is how we did it:
The process ID is extracted like this:
org.apache.logging.log4j.ThreadContext.get("RequestId");
And initialized on the handler chain (this depends on which service you are using):
ThreadContext.put("RequestId", Objects.toString(job.getId(), (String)null));
This happens for every job that is received.
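Put together, the wiring looks roughly like this (a sketch only; the Job type, getId()/getInput() and serviceCore are placeholders for our actual handler types):

import java.util.Objects;
import org.apache.logging.log4j.ThreadContext;

// Sketch of the handler-chain side; Job and serviceCore are placeholders.
public void handle(Job job) {
    ThreadContext.put("RequestId", Objects.toString(job.getId(), (String) null));
    try {
        serviceCore.call(job.getInput());
    } finally {
        ThreadContext.remove("RequestId"); // avoid leaking the ID into the next job on this thread
    }
}

Anywhere downstream, ThreadContext.get("RequestId") returns the same ID, e.g. processVisualization.signal(ThreadContext.get("RequestId"), "Reading from input").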
Disclaimer: this solution hasn't been fully tested yet, but this is the direction we are going with.
I maintain a DDD/CQRS application.
My question concerns the handling of an item creation through POST (Rest).
CQRS (based on the CQS principle) says that commands should never return a value.
Queries are there for that.
So I wonder how to handle the use case of Item creation.
Here's my current command handler pattern (simplified for the sample; no interfaces, etc.):
@Service
@Transactional
public class CreateItem {

    public void handle(CreateItemCommand command) {
        Customer customer = customerRepository.findById(command.customerId);
        ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
        customer.createItem(generatedItemId, .....);
    }
}
From reading this article, an easy approach would be to declare an output property on the command, populated at the end of the handle method like this:
public void handle(CreateItemCommand command) {
    Customer customer = customerRepository.findById(command.customerId);
    ItemId generatedItemId = itemRepository.nextIdentity(); // generating the GUID
    customer.createItem(generatedItemId, .....);
    command.itemId = generatedItemId; // populating the output property
}
However, I see one drawback with this approach:
- A command, in theory, is meant to be immutable.
This itemId would then be returned by the calling controller (webapp) through the Location header with status 201 or 202 (depending on whether I expect async handling or not).
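For illustration, the controller side of that output-property approach could look roughly like this, assuming Spring MVC (CreateItemRequest, the URL layout and the constructor injection are my own inventions):

import java.net.URI;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ItemController {

    private final CreateItem createItem;

    public ItemController(CreateItem createItem) {
        this.createItem = createItem;
    }

    @PostMapping("/items")
    public ResponseEntity<Void> create(@RequestBody CreateItemRequest request) {
        CreateItemCommand command = new CreateItemCommand(request.getCustomerId() /* , ... */);
        createItem.handle(command);                            // handler populates command.itemId
        URI location = URI.create("/items/" + command.itemId); // relies on ItemId having a sensible toString()
        return ResponseEntity.created(location).build();       // 201 Created with Location header
    }
}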
Another solution would be to let the controller generate the GUID by accessing the repository itself, thus keeping the command immutable:
// in my controller:
ItemId generatedItemId = itemRepository.nextIdentity(); // controller generating the GUID
createItem.handle(command);
// set the Location header here (201/202) containing the URL to the newly created item, using the previous itemId.
Drawback: the controller (adapter layer) accesses the repository directly, which is too low-level IMO.
Since my end client is a JavaScript application, another solution might be to let the JavaScript itself generate a GUID and feed CreateItemCommand with it before sending the whole command to the server.
Advantage: No more issues about potential violation of CQ(R)S guidelines.
Drawback: the validity of the passed ID has to be checked server side, although a unique index on it would prevent an unexpected insertion into the database.
What is the best (or just a good) strategy to handle this?
I am the developer of a CRM application based on the CQRS pattern. I tend to see commands as immutable. The team decided early on, that all IDs are generated on the client to have immutable commands. This is perfectly ok, as we are using UUIDs. So we are quite confident, that the IDs are valid and there are no ID collisions. We went well with that approach up to this point - I can definitely recommend this. In that scenario the client just knows the IDs.
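For illustration, an immutable command carrying a client-generated ID might look roughly like this (the field names are assumptions, not our actual code):

import java.util.UUID;

// Sketch of an immutable create command; all state is set once in the constructor.
public final class CreateItemCommand {

    private final UUID itemId;      // generated by the client before dispatch
    private final UUID customerId;
    private final String name;

    public CreateItemCommand(UUID itemId, UUID customerId, String name) {
        this.itemId = itemId;
        this.customerId = customerId;
        this.name = name;
    }

    public UUID getItemId() {
        return itemId;
    }

    public UUID getCustomerId() {
        return customerId;
    }

    public String getName() {
        return name;
    }
}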
Sometimes it happens though - especially in manual testing - that a create command is dispatched twice with the same ID. In that case the addition of events to the event store fails (we use event sourcing) with a duplicate key exception. The exception is passed to the controller. In fact we do return results from command executions via a callback, even though it's just "everything ok" most of the time - so no exception thrown. Command validation is also done this way. We do this using a command bus concept.
I would recommend taking a look at the Axon framework. We use it, it provides the common building blocks, and it just works. Maybe you can get some inspiration from it!
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. those accessible from MContext) would be really useful here; a straightforward Map<String,String> sounds ideal for my simple requirements. But how can I get at them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
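For the persisted-state option, a minimal sketch could look like this, assuming an MApplication instance is available for injection at the point where you need it (the key name is just an example):

import org.eclipse.e4.ui.model.application.MApplication;

// Sketch of reading/writing a value in the application's persisted state map.
public class ConfigStore {

    private static final String KEY = "business.config.option"; // illustrative key

    public void save(MApplication app, String value) {
        // Survives restarts unless the app is launched with -clearPersistedState.
        app.getPersistedState().put(KEY, value);
    }

    public String load(MApplication app) {
        return app.getPersistedState().get(KEY);
    }
}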
Update:
To subscribe to the app startup complete event:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
    eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}

private static final class AppStartupCompleteEventHandler implements EventHandler
{
    @Override
    public void handleEvent(final Event event)
    {
        // ... your code here
    }
}
I have looked around and around for this, but I have not been able to find a good answer. I would like to create a system based on Quartz that allows people to schedule their own tasks. I will use a pseudo example.
Let's say my main method for my Quartz program is called quartz.java.
Then I have a file called sweep.java that implements the Quartz Job interface.
So in my quartz.java, I schedule my sweep.java to run every hour. I run quartz.java, and it works fine. GREAT. However, now I want to add a dust.java to the Quartz scheduler, and since this is a production service, I don't want to have to stop my quartz.java, add in my dust.java, and recompile and run quartz.java again. That downtime would be unacceptable.
Does anyone have any ideas on how I could accomplish this? It seems impossible, because how could you ever feed another Java file into the program without recompiling, linking, etc.?
I hope that this example is clear. Please let me know if I need to clarify any part of it.
Partial answer: it is possible to compile, and then instantiate, a class programmatically.
Here are links to example code:
how to compile from a String;
CompilerOutput;
CompilerOutputDirectory.
The extracted class is grabbed in the third source file (see method getGeneratedClass, which returns a Class<?> object).
HOWEVER: keep in mind that doing this is potentially dangerous. One problem, which can be quite serious if you are not careful, is that when you dynamically instantiate a class, its static initialization blocks are executed, and these can potentially wreak havoc on your application. So, in addition, you'll have to create an appropriate SecurityContext.
In the code above, I actually only ever get the Class<?> object and never instantiate it in any way, so no code is executed. But your usage scenario is quite different.
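For illustration only, here is a bare-bones version of that idea using the JDK's javax.tools compiler API (the DustJob source, the temp-directory approach and the reflective call are my own simplifications, and none of the security concerns above are addressed):

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class DynamicCompileSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical job source; in a real system this would come from the user.
        String source =
                "public class DustJob {\n" +
                "    public void run() { System.out.println(\"dusting...\"); }\n" +
                "}\n";

        Path dir = Files.createTempDirectory("dynamic-jobs");
        Path sourceFile = dir.resolve("DustJob.java");
        Files.write(sourceFile, source.getBytes(StandardCharsets.UTF_8));

        // Requires a JDK at runtime; getSystemJavaCompiler() returns null on a plain JRE.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null || compiler.run(null, null, null, sourceFile.toString()) != 0) {
            throw new IllegalStateException("Compilation failed");
        }

        // Load the freshly compiled class and invoke it reflectively.
        try (URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            Class<?> jobClass = loader.loadClass("DustJob");
            Object job = jobClass.getDeclaredConstructor().newInstance();
            jobClass.getMethod("run").invoke(job);
        }
    }
}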
I have not tried any of these, but they are worth trying.
1) Consider using the Quartz Camel endpoint.
If my understanding is right, Apache Camel lets you create Camel routes on the fly.
You just need to deploy the camel-context.xml into a container, taking into consideration that the required classes must already be available on the container's classpath.
2) Quartz lets you create a job declaratively, i.e. with XML configuration of the job and trigger.
You can find more information here.
3) Now this one requires some effort ;-)
Create an interface with a method that you will execute as part of the job. Let's say it has a method called executeThisAsPartOfJob:
public interface MyDynamicJob
{
    public void executeThisAsPartOfJob();
}
Create your implementations of the job interface.
public class EmailJob implements MyDynamicJob
{
    @Override
    public void executeThisAsPartOfJob()
    {
        System.out.println("Sending Email");
    }
}
Now in your main scheduler engine, use the Observer pattern to store/initiate the job dynamically.
Something like,
HashMap<String, MyDynamicJob> jobs = new HashMap<>();

// Call this method to add a job dynamically.
// If you add a job after the scheduler engine has started, find a way here to reiterate over this map without shutting down the scheduler :-).
public void addJob(String someJobName, MyDynamicJob job)
{
    jobs.put(someJobName, job);
}
public void initiateScheduler()
{
    // Iterate over the jobs map to get all registered jobs.
    // Create JobDetail instances dynamically for each job entry. Add your custom job instance to the job data map.
    JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
            .withIdentity("FirstJob", "First Group").build();
    JobDataMap jobDataMap = jd1.getJobDataMap();
    jobDataMap.put("dynamicjob", jobs.get("dynamicjob1"));
}
public class GenericJob implements Job {

    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        System.out.println("Executing job");
        JobDataMap jdm = arg0.getJobDetail().getJobDataMap();
        MyDynamicJob mdj = (MyDynamicJob) jdm.get("dynamicjob");
        // Now execute your custom job method here.
        mdj.executeThisAsPartOfJob();
        System.out.println("Job Execution complete");
    }
}
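To actually run this, the wrapper JobDetail still has to be registered with a scheduler and a trigger. A sketch of that wiring (the trigger identity and the hourly schedule are just examples, not part of the answer above):

import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerBootstrap {

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        // Build the generic wrapper job and hand it the dynamic implementation via the JobDataMap.
        JobDetail jd1 = JobBuilder.newJob(GenericJob.class)
                .withIdentity("FirstJob", "First Group").build();
        jd1.getJobDataMap().put("dynamicjob", new EmailJob());

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("FirstTrigger", "First Group")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInHours(1)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(jd1, trigger);
    }
}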
I have a log statement in which I always use this.getClass().getSimpleName() as the first parameter.
I would like to put this in some sort of macro constant and use that in all my log statements.
But I learned that Java has no such simple mechanism unlike say C++.
What is the best way to achieve this sort of functionality in Java?
My example log statement (from Android) is as follows:
Log.v(this.getClass().getSimpleName(),"Starting LocIden service...");
Java doesn't have macros but you can make your code much shorter:
Log.v(this, "Starting LocIden service...");
And in the Log class:
public static void v(Object object, String s)
{
    _v(object.getClass().getSimpleName(), s);
}
Another approach could be to inspect the call stack.
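A rough sketch of the stack-inspection idea (the helper class is my own, and the exact stack index can vary between VMs, so treat it as illustrative only):

import android.util.Log;

// Hypothetical helper that derives the log tag from the caller's class name.
public final class CallerLog {

    private CallerLog() {
    }

    public static void v(String message) {
        // [0] = Thread.getStackTrace, [1] = this method, [2] = the caller (index may vary by VM).
        StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
        String className = caller.getClassName();
        String simpleName = className.substring(className.lastIndexOf('.') + 1);
        Log.v(simpleName, message);
    }
}

Note that stack inspection is relatively expensive, so this is best kept out of hot paths.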
Karthik, most logging tools allow you to specify the format of the output, and one of the parameters is the class name, which uses the method Mark mentioned (stack inspection).
For example, in log4j the parameter is %C to reference a class name.
Another approach is to follow what Android suggests for its logging functionality.
Log.v(TAG, "Message");
where TAG is a private static final string in your class.
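Applied to the question's example, that would look something like this (the class name LocIdenService is an assumption taken from the log message):

import android.util.Log;

public class LocIdenService {

    // Computed once; avoids calling getClass().getSimpleName() at every log statement.
    private static final String TAG = LocIdenService.class.getSimpleName();

    public void start() {
        Log.v(TAG, "Starting LocIden service...");
    }
}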
Use a proper logging framework (e.g. slf4j). Each class that logs has its own logger, so there's no need to pass the class name to the log method call.
Logger logger = LoggerFactory.getLogger(this.getClass());
logger.debug("Starting service");
//...
logger.debug("Service started");
My spaghetti monster consumes XML from several different SOAP services, and the URL for each service is hardcoded into the application. I'm in the process of undoing this hardcoding, and storing the URLs in a properties file.
In terms of reading the properties file, I'd like to encompass that logic in a Singleton that can be referenced as needed.
Change this:
accountLookupURL ="http://prodServer:8080/accountLookupService";
To this:
accountLookupURL = urlLister.getURL("accountLookup");
The Singleton would be contained within the urlLister.
I've tended to shy away from the Singleton pattern, only because I've not had to use it, previously. Am I on the right track, here?
Thanks!
IVR Avenger
You haven't said why you need only one of whatever it is which will be getting the URL. If that just involves reading a properties file, I don't think you do need only one. Seems to me that having two threads read the same properties file at the same time isn't a problem at all.
Unless you were thinking of having some object which only reads the properties file once and then caches the contents for future use. But this is a web application, right? So the way to deal with that is to read in the properties when the application starts up, and store them in the application context. There's only one application context, so there's your "only one" object.
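A minimal sketch of that idea, assuming this really is a servlet-based web application (the listener name, file path and attribute key are my own inventions):

import java.io.InputStream;
import java.util.Properties;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Loads the URL properties once at startup and stores them in the application context.
public class UrlPropertiesListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Properties urls = new Properties();
        try (InputStream in = sce.getServletContext()
                .getResourceAsStream("/WEB-INF/serviceUrls.properties")) {
            urls.load(in);
        } catch (Exception e) {
            throw new IllegalStateException("Could not load service URL properties", e);
        }
        sce.getServletContext().setAttribute("serviceUrls", urls);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}

Register the listener in web.xml (or with @WebListener), and any servlet can then read the map back with getServletContext().getAttribute("serviceUrls").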
As an alternative, did you consider using something like Apache Commons Configuration (or maybe another configuration framework)?
Singletons are appropriate for this scenario, BUT you have to make sure you're doing the singleton right.
So, for example, what Bozhno suggests is not a singleton, it's an ugly concoction of nasty statics that's not mockable, not easily testable, not injectable, and generally comes back to bite you in the ass.
An acceptable singleton is just your average class, with the one notable exception that it is guaranteed, either by itself or by some external factory/framework (e.g. Spring IoC), to exist in only one instance. If you go with the first approach, you do something like:
public class MyUberSingletonClass {

    private MyUberSingletonClass() {
        // ..do your constructor stuff, note it's private
    }

    private static MyUberSingletonClass instance = null;

    public static synchronized MyUberSingletonClass instance() {
        if (instance == null) {
            instance = new MyUberSingletonClass();
        }
        return instance;
    }

    public String getUberUsefulStuff() {
        return "42";
    }
}
That's acceptable if you don't really feel the need for a factory otherwise, and aren't using any IoC container in your app (good idea to think about using one though). Note the difference from Bozhno's example: this is a good vanilla class where the only static is an instance var and a method to return it. Also note the synchronized keyword required for lazy-initialization.
update: Pascal recommends this very cool post about a better way to lazy-init singletons in the comments below: http://crazybob.org/2007/01/lazy-loading-singletons.html
Based on your suggestions, and the fact that I don't think I have as much access to this application as I'd hoped (a lot of it is abstracted away in compiled code), here's the solution I've cooked up. This is, of course, a stub, and needs to be fleshed out with better exception handling and the like.
public class WebServiceURLs {

    private static class WebServiceURLsHolder
    {
        public static WebServiceURLs webServiceURLs = new WebServiceURLs();
    }

    private Properties webServiceURLs;

    public WebServiceURLs()
    {
        try
        {
            Properties newURLProperties = new Properties();
            InputStreamReader inputStream = new InputStreamReader(
                    FileLoader.class.getClassLoader().getResourceAsStream("../../config/URLs.properties") );
            newURLProperties.load(inputStream);
            webServiceURLs = newURLProperties;
        }
        catch (Exception e)
        {
            webServiceURLs = null;
        }
    }

    public String getURLFromKey(String urlKey)
    {
        if (webServiceURLs == null)
            return null;
        else
            return webServiceURLs.getProperty(urlKey);
    }

    public static WebServiceURLs getInstance()
    {
        return WebServiceURLsHolder.webServiceURLs;
    }
}
Is this a good effort as my "first" Singleton?
Thanks,
IVR Avenger
To restate the obvious, Singleton is to be used when all client code should talk to a single instance of the class. So, use a Singleton IFF you are certain that you would not want to load multiple properties files at once. Personally, I would want to be able to have that functionality (loading multiple properties files).
Singletons are mutable statics and therefore evil. (Assuming a reasonably useful definition of "singleton".)
Any code that uses the static (a transitive relationship) has assumptions about pretty much everything else (in this case, a web server and the internet). Mutable statics are bad design, and bad design makes many aspects go rotten (dependencies, understandability, testing, security, etc.).
As an example, the only thing stopping late versions of JUnit 3 being used in a sandbox was loading a configuration file in one static initialiser. If it had used Parameterisation from Above, there would have been no issue.