I'm looking into the Axon framework and I'm having a hard time with the automatic persistence of the command state.
I've looked at the documentation regarding the command model repository and from my understanding the state of the command model for the standard repository should be auto-persisted provided I have the correct dependencies. This sentiment is also present in another blog/tutorial I've looked at (you might need to scroll down a bit to the Repository section).
The problem is that although I've added the axon-mongo dependency, the command state is not being persisted automatically. I've tried to configure the relevant Repository beans as per the docs, but it doesn't seem to have worked either. I'm not even sure whether this is required, given that (from my understanding of the docs) you would do this mainly if you want to query the command state.
While I understand that I can create my own repository and save the entities myself (similar to this tutorial), I'd rather not, given that Axon seems to provide this out of the box.
Am I missing something here?
NOTE: My Mongo setup seems to be correct since I've managed to persist my events in MongoDB as per the documentation.
UPDATE
As per Steven's comment (and subsequent comments), I decided to try and implement a state-stored aggregate; however, I found an issue with the (de)serialization of the aggregate. I've sent my Aggregate to Steven and he has confirmed that it is simple enough that it should be (de)serialized by XStream. I have also tried to serialize my aggregate using a standalone instance of XStream and it worked, which led me to believe that this is more of an Axon issue than an XStream issue. I also tried to use the Jackson and Java (de)serializers (since they are the other options provided by Axon) and found similar problems. I have concluded that this is an Axon bug and I have stopped trying to solve the issue.
From your question it is not immediately clear if you are aware of the possible Command Model storage mechanisms you can choose from.
So firstly, as @Mzzl points out in his comment, you can view the Command Model state from two angles:
Event Sourced
State-stored
By default, Axon Framework will set up your Aggregate with an EventSourcingRepository behind it. This means that if an Aggregate (i.e. your Command Model) is required to handle a new Command, the Aggregate will first be loaded by retrieving a stream of all the events it has published.
Second, it will call all the @EventSourcingHandler annotated methods on your Aggregate implementation to recreate the state of the Command Model.
Lastly, once all the Events which are part of the Aggregate's Event Stream have been handled, the Command will be provided to the @CommandHandler annotated method.
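As a rough illustration (GiftCard, IssueCardCommand and CardIssuedEvent below are made-up names, not from your project, and the imports assume Axon 4), an event-sourced Aggregate typically looks like this:

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.eventsourcing.EventSourcingHandler;
    import org.axonframework.modelling.command.AggregateIdentifier;

    import static org.axonframework.modelling.command.AggregateLifecycle.apply;

    public class GiftCard {

        @AggregateIdentifier
        private String id;

        protected GiftCard() {
            // no-arg constructor required by Axon to instantiate the Aggregate
        }

        @CommandHandler
        public GiftCard(IssueCardCommand command) {
            // decision making only; state changes belong in the event handler
            apply(new CardIssuedEvent(command.getId()));
        }

        @EventSourcingHandler
        public void on(CardIssuedEvent event) {
            // invoked when the event is first applied and again on every replay
            this.id = event.getId();
        }
    }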
The state-stored approach is obviously a little different, as this means the entirety of your Aggregate will be stored in a repository.
Note, though, that the State-Stored approach is only supported through the GenericJpaRepository class. Thus, storing the Aggregate in its entirety in MongoDB is not an option.
If you are looking for an Event Sourcing approach for your Aggregate, the events can be sourced from any EventStore implementation covered by the framework.
That means you can choose JPA, JDBC, MongoDB or Axon Server as the means of storing your events and retrieving the stream of events needed to recreate your Command Model.
Configuration wise, there are a couple of approaches to this.
If you are using the Configuration API provided by Axon directly, you can use:
AggregateConfigurer#defaultConfiguration(Class<A>) for an Event Sourced approach
AggregateConfigurer#jpaMappedConfiguration(Class<A>) for a State-Stored approach
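For instance, a minimal sketch (MyAggregate stands in for your own Command Model; note that the JPA variant also needs an EntityManagerProvider configured):

    import org.axonframework.config.AggregateConfigurer;
    import org.axonframework.config.Configurer;
    import org.axonframework.config.DefaultConfigurer;

    public class AxonConfig {

        public static Configurer eventSourced() {
            // Event Sourced: state is rebuilt from the Aggregate's event stream
            return DefaultConfigurer.defaultConfiguration()
                    .configureAggregate(AggregateConfigurer.defaultConfiguration(MyAggregate.class));
        }

        public static Configurer stateStored() {
            // State-Stored: the entire Aggregate is persisted through JPA
            return DefaultConfigurer.defaultConfiguration()
                    .configureAggregate(AggregateConfigurer.jpaMappedConfiguration(MyAggregate.class));
        }
    }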
If you are in a Spring Boot environment with your application, it's a little simpler to switch between event sourced and state-stored.
Simply having the @Entity annotation on your Aggregate implementation is sufficient for the framework to infer that you want to store the Aggregate as is.
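A minimal sketch of what that looks like (GiftCard and its field are again illustrative):

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.axonframework.modelling.command.AggregateIdentifier;
    import org.axonframework.spring.stereotype.Aggregate;

    @Aggregate // Axon's Spring stereotype, registering this class as an Aggregate
    @Entity    // the JPA annotation signals that the Aggregate should be state-stored
    public class GiftCard {

        @Id
        @AggregateIdentifier
        private String id;

        protected GiftCard() {
            // no-arg constructor required by both JPA and Axon
        }

        // command and event handlers as usual...
    }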
Hope this sheds some light on the situation, @The__Malteser!
Update
Based on the comments, it's clear that the XStreamSerializer, which the framework uses by default to de-/serialize objects, is incapable of serializing your Aggregate instance in a State-Stored fashion.
Based on the exception you're receiving, Cannot marshal the XStream instance in action, I did some searching/digging. I have a hunch that XStream is by default not capable of serializing non-static inner classes.
However, as we're not sure what the implementation is of your Aggregate, it's hard to deduce whether that's the problem at hand. Would you be able to share your implementation with us here so that we can better deduce whether an inner class is the problem?
Related
I wonder if there is a Java framework to journalize operations done on objects and then save them in a database.
In fact, I'm working on an application where a particular object undergoes many operations, each one changing its logic (many controls may be applied to the object depending on the user).
Now, I would like to trace the controls or operations done on this object and store them in new tables serving just for statistics. I think that this could be implemented without modifying the whole existing code of the application. I mean, it could be seen as a vertical layer...
I have already seen the description of Hibernate interceptors, but I'm not sure they can meet my needs.
I would also like to mention that I'm working with Spring Core and Hibernate.
Does anyone have an idea about a Java framework or an API that meets my needs?
Thanks in advance.
I'm sure Hibernate Interceptors can be helpful for you. However, your entities might have to go through a small change: because interceptors fire when saving every entity, you have to let the interceptor know which ones you are not interested in saving, for example by adding custom annotations to them.
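A rough sketch of that idea (the @Journaled marker annotation is something you would define yourself; it is not a Hibernate API):

    import java.io.Serializable;

    import org.hibernate.EmptyInterceptor;
    import org.hibernate.type.Type;

    public class JournalingInterceptor extends EmptyInterceptor {

        @Override
        public boolean onSave(Object entity, Serializable id, Object[] state,
                              String[] propertyNames, Type[] types) {
            // only journal entities carrying your own marker annotation
            if (entity.getClass().isAnnotationPresent(Journaled.class)) {
                // record the operation in your statistics tables here
            }
            return false; // we did not modify the entity's state
        }
    }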
Another way of doing this is with Spring AOP: you can log the work without touching any of your code, but for this to happen you need to already be using Spring in your environment.
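Something along these lines (the pointcut expression is illustrative; point it at your own service package):

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterReturning;
    import org.aspectj.lang.annotation.Aspect;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class OperationJournalAspect {

        // fires after every successful method call in the (hypothetical) service layer
        @AfterReturning("execution(* com.example.service..*(..))")
        public void journalOperation(JoinPoint joinPoint) {
            // persist joinPoint.getSignature() and joinPoint.getArgs()
            // into your statistics tables here
        }
    }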
You could also use traditional Servlet filters to do this.
There is also the concept of Hibernate event listeners, which is worth looking up.
I want to transfer Hibernate objects to the frontend with GWT-RPC. Of course I cannot transfer the annotated class because the annotations cannot be compiled to JavaScript, so I did the Hibernate mapping purely in the ".hbm.xml". This worked fine for very simple objects, but as soon as I add more complex things like a one-to-many relationship realized with e.g. a set, the compiler complains about serialization issues with the set (though the objects in the set are serializable as well).
I guess it doesn't work because Hibernate creates some kind of special set that cannot be interpreted by GWT?
Is there any way to get around this, or do I need another approach to get my objects to the frontend?
Edit: It seems that my approach is not possible with RPC because Hibernate changes the objects (see the answer from thanos). There is a newer approach from Google to transfer objects to the frontend: RequestFactory. It looks really good and I will try it now.
Edit2: RequestFactory works perfectly and is much more convenient than RPC!
This is a quote from the GWT documentation. It says that Hibernate changes the object from its original form in order to make it persistent.
What this means for GWT RPC is that by the time the object is ready to be transferred over the wire, it actually isn't the same object that the compiler thought was going to be transferred, so when trying to deserialize, the GWT RPC mechanism no longer knows what the type is and refuses to deserialize it.
Unfortunately the only way to implement the solution is by making DTOs and their appropriate converters.
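For illustration, a DTO and its converter might look like this (Person is a made-up Hibernate entity; the field names are equally illustrative):

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // A plain DTO: no Hibernate types in sight, so GWT-RPC can serialize it
    public class PersonDto implements Serializable {
        public Long id;
        public String name;
        public List<String> phoneNumbers = new ArrayList<String>();
    }

    // Server-side converter (its own file in practice): copies fields and replaces
    // Hibernate's PersistentSet with a plain ArrayList the GWT compiler knows about
    class PersonConverter {
        static PersonDto toDto(Person entity) {
            PersonDto dto = new PersonDto();
            dto.id = entity.getId();
            dto.name = entity.getName();
            dto.phoneNumbers = new ArrayList<String>(entity.getPhoneNumbers());
            return dto;
        }
    }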
Using Gilead is a cleaner approach (no need for all this DTO code), but DTOs are more lightweight and thus produce less traffic over the wire.
Anyhow, there is also Dozer, which will generate the DTOs for you, so there will not be much need for you to actually write the code.
Either way, as mchq08 said, the link he provided will answer many of your questions.
I would also make another suggestion: separate the projects. Create a new one as a model for your application and include the jar in the GWT project. In this way your GWT project will be almost entirely the GUI, and the jar library can be reused for other projects too.
When I created my RPC to Hibernate I used this example as a framework. I would recommend downloading their source code and reading the section called "Integration Strategies", since I felt the "Basic" section did not justify DTOs. One thing this tutorial does not cover as well is the receiving and sending part from the web page (which is compiled to JS), which is why I recommend downloading their source code and looking at how they send/receive each of the DTOs.
Post the stack trace and some code that you believe will be useful to solving this error.
Google's GWT & Hibernate
Reading this (and the source code) can take some time but really helps in understanding their logic.
I used the following approach: for each Hibernate entity class I had a client replica without any Hibernate stuff, plus a mechanism for copying data between the client and server classes.
This worked, but I believe the current GWT version should work with Hibernate-annotated classes.
On a client project, I use Moo (which I wrote) to translate Hibernate-enhanced domain objects into DTOs relatively painlessly.
I am in the process of creating a UI configuration tool for my pet project. One aspect of this tool lets the end user DEFINE his orchestration. I then need to save this orchestration definition into a database. There will be an executable version of this definition in a running system. The executable version is created dynamically on demand.
The idea is to separate the DEFINITION from the EXECUTABLE version, so that I have the flexibility to choose the runtime version among BPMN, JPDL, or a POJO-based workflow solution (BeanFlow).
Limitation: I can't use the BPMN editors that come with frameworks like jBPM, Activiti etc. as I want to use my own UI that is specific to my domain.
I need suggestions on HOW to PERSIST the definition.
Should I use rdbms tables? If so, is there a db schema I can borrow that is close to orchestration concepts?
Should I serialize my definition to BPMN/JPDL XML instance document?
Are there any other simple formats that I can use?
By "orchestration" I'm assuming you mean a finite state machine. Where the current state dictates what transitions can be followed to other states. The representation of states and transitions as edges and vertices often produces a directed acyclic graph, however there are times when the graph will cycle (e.g. draft -- submit for approval --> pending approval -- reject --> draft).
In practice, separating the definition from execution calls for a persistence format that can easily accommodate customization. As your system evolves you will find a number of unanticipated edge cases whose solution should not require altering a persistence schema, only code. This implies XML or a NoSQL solution - something whose schema is easily changed or non existent.
Now, having written my own XML definition for this purpose (for uninteresting reasons I'll exclude), my suggestion is using JPDL (or BPMN). The reason is that their definitions likely incorporate whatever you're considering now or will consider in the future, and they enable customization, such as hanging arbitrary data or behavior off them at a given point. You also get the advantage of tools already built (not just UI) for dealing with cycle detection and ensuring there is a path to completion, for example.
Some of the interesting features I know JPDL possesses are an ability to help merge forked processes, timed tasks (including those that repeat periodically), and facilities for sending notifications. This last item, notification, bears some further exposition. One of the things I've found with my own system is the need for sending out configurable email whose content is based on the data flowing through. These existing engines make that relatively easy by providing a way to plug variables into text that's then dynamically evaluated at run time before transmission. They also provide bridges between the engine and whatever user store you have, for the purpose of sending notifications to groups of people, tasking them, and enforcing security policy.
Finally, depending on the scope of your system, you will probably still be using a database as well. What I suggest is storing off the XML and data being orchestrated into the database in a serialized format. Then, if the data is being altered as it travels through the execution, write out serializations of the data - and perhaps workflow if it is also changed - into a history/audit log table as well.
I would NOT use RDBMS tables, or if you do, store the definitions as text blobs. Trying to make records for the definition is a bad idea because it's much more inflexible and difficult to change your definition over time. Many people would use different approaches, but I'd use JSON or YAML and avoid XML. The motivation for that is to make it as simple as possible. Trying to use XML, especially a formalized specific format of XML, is going to make you spend much more time meeting an exact specification that doesn't actually do anything to help what you're trying to accomplish. JSON and YAML are both very easy to work with from a code perspective. YAML is more easily readable by humans and easier to edit, and isn't as tricky with punctuation and escaping as JSON. JSON is more widely used and is smaller than YAML. JSON also has a binary counterpart, BSON, if document size is a concern.
Once you have an importer/exporter that goes to/from your internal objects to your data format, then persisting using RDBMS, or other mechanisms, will be straightforward. You could even use CouchDB, which could offer other benefits to your application and may be a great fit.
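As a sketch of that importer/exporter idea using JSON via Jackson (WorkflowDefinition is a stand-in for your own definition model):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class DefinitionStore {

        private final ObjectMapper mapper = new ObjectMapper();

        // export: turn the in-memory definition into JSON text for a clob/text column
        public String export(WorkflowDefinition definition) throws Exception {
            return mapper.writeValueAsString(definition);
        }

        // import: rebuild the in-memory definition from the stored JSON
        public WorkflowDefinition load(String json) throws Exception {
            return mapper.readValue(json, WorkflowDefinition.class);
        }
    }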
Very good question! Here is my two cents:
RDBMS: if you do this you will be able to query the workflow instances, for example which tokens are at 'node X'?
Storing XML as a clob: simplicity is the strength of this solution, but you can't really query the definitions, only fetch them by id
NOSQL: there are a lot of different solutions for different problems. MongoDB is a popular solution, it provides document oriented persistence.
How about a simple serialization of the composed UI using, for example, XStream, storing the serialized bits in the database as a binary column? Then, when the user logs in, get the associated data, deserialize it, initialize if required, and display.
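A minimal sketch of that round trip (composedUi stands in for whatever object graph describes your UI):

    import java.nio.charset.StandardCharsets;

    import com.thoughtworks.xstream.XStream;

    public class UiStore {

        private final XStream xstream = new XStream();

        // serialize the composed UI into bytes for a binary column
        public byte[] serialize(Object composedUi) {
            return xstream.toXML(composedUi).getBytes(StandardCharsets.UTF_8);
        }

        // on login: load the blob, deserialize, initialize if required, display
        public Object deserialize(byte[] blob) {
            return xstream.fromXML(new String(blob, StandardCharsets.UTF_8));
        }
    }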
When I look at Java frameworks like Hibernate, JPA, or Spring, I usually have the possibility to make my configuration via an xml-file or put annotations directly in my classes.
I am consistently asking myself which way I should go.
When I use annotations, I have the class and its configuration together, but with XML I get a bigger picture of my system because I can see all the class configurations.
Annotations also get compiled in some way, I guess, and should be quicker than parsing the XML, but on the other hand, if I want to change my configuration I have to recompile it instead of just changing the XML file (which might be even more handy for production environments on the customer side).
So, when looking at my points, I tend to go the XML way. But when looking at forums or tutorials, usually annotations are used.
What are your pros and cons?
A good rule of thumb: anything you can see yourself wanting to change without a full redeploy (e.g. tweaking performance parameters) should really go in something "soft-configurable" such as an XML file. Anything which is never realistically going to change - or only in the sort of situation where you'll be having to change the code anyway - can reasonably be in an annotation.
Ignore any ideas of performance differences between annotations and XML - unless your configuration is absolutely massive the difference will be insignificant.
Concentrate on flexibility and readability.
If you're writing an API, then a word of warning: Annotations can leak out into your public interface which can be unbelievably infuriating.
I'm currently working with APIs where the API vendor has peppered his implementation with Spring MBean annotations, which suddenly means I have a dependency upon the Spring libraries, despite the possibility I might not need to use Spring at all :(
(Of course, if your API was an extension to Spring itself, this might be a valid assumption.)
I think the decision comes down to 'lifecycle', and the impedance mismatch between lifecycles.
Lifecycle: every piece of data, whether it's source code, a database row, a compiled class, or an object, has a lifecycle associated with it. When does it come into existence and when is it garbage collected?
Suppose I put Hibernate annotations on a Java class. Seems like a reasonable idea, especially if I am creating a new database from scratch and am confident that only this one application will ever connect to it - the lifecycles of my classes, the database schema and the ORM mapping are naturally in sync.
Now suppose I want to use that same class in an API and give it to some third party to consume. The Hibernate annotations leak into my API. This happens because the lifecycle of that class and the database are not the same thing. So we end up using mapping tools to translate between layers of beans in a system.
I try to think about lifecycles, and to avoid annotations that can cause lifecycle mismatches. Some annotations are relatively harmless in this respect, and some are a hidden danger.
Examples of bad annotations: ORM mapping, database configuration, hard coded config for items that may vary between deployment environments, validations that may vary depending on context.
Examples of harmless annotations: REST endpoint definitions, JSON/XML serialization, validations that always apply.
I'm hoping to find out what tools folks use to synchronize data between databases. I'm looking for a JDBC solution that can be used as a command-line tool.
There used to be a tool called Sync4J that used the SyncML framework but this seems to have fallen by the wayside.
I have heard that the Data Replication Service provided by Db4O is really good. It allows you to use Hibernate to back onto an RDBMS; I don't think it supports JDBC though (http://www.db4o.com/about/productinformation/drs/Default.aspx?AspxAutoDetectCookieSupport=1).
There is an open source project called Daffodil, but I haven't investigated it at all. (https://daffodilreplicator.dev.java.net/)
The one I am currently considering using is called SymmetricDS (http://symmetricds.sourceforge.net/)
There are others; they each do it slightly differently. Some use triggers, some poll, some use intercepting JDBC drivers. You need to decide what technical limitations you are under to determine which one you really want to use.
Wikipedia provides a nice overview of different techniques (http://en.wikipedia.org/wiki/Multi-master_replication) and also provides a link to another alternative DBReplicator (http://dbreplicator.org/).
If you already have a model and DAO layer for your codebase, you can just create your own sync framework; it isn't hard.
Copying data is as simple as the following (a rough JDBC sketch appears after the list):
read an object from database A
remove database metadata (uuid, etc)
insert into database B
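Here is that sketch (the URLs, table and column names are all illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TableCopier {

        public void copy(String jdbcUrlA, String jdbcUrlB) throws Exception {
            try (Connection a = DriverManager.getConnection(jdbcUrlA);
                 Connection b = DriverManager.getConnection(jdbcUrlB);
                 Statement read = a.createStatement();
                 // select only the business columns, dropping metadata (uuid etc.)
                 ResultSet rs = read.executeQuery("SELECT name, value FROM table_in_a");
                 PreparedStatement insert = b.prepareStatement(
                         "INSERT INTO table_in_b (name, value) VALUES (?, ?)")) {
                while (rs.next()) {
                    insert.setString(1, rs.getString("name"));
                    insert.setString(2, rs.getString("value"));
                    insert.addBatch();
                }
                insert.executeBatch();
            }
        }
    }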
Syncing requires some level of knowledge about what has already been synced. You can either do it at runtime, by getting a list of uuids from TableInA and TableInB and working out which entries are new, or you can have a table of items that need to be synced (populated by a trigger upon insert/update in TableInA) and run from that. Your tool can be a TimerTask so the databases are kept synced at the time granularity you desire.
However, there is probably some tool out there that does it all without any of this implementation faff, and each implementation would differ based on business needs anyway. In addition, at the database level there will be replication tools.
True synchronization requires some data that I hope your database schema has (you can read the SyncML doc to see how they proceed). Sync4J won't help you much; it's really high-level and XML-oriented. If you don't foresee any conflicts (which means really easy synchronization), you could try a lightweight ETL like Enhydra Octopus.
I'm primarily using Oracle at the moment, and the most full-featured route I've come across is Red Gate's Data Compare:
http://www.red-gate.com/products/oracle-development/data-compare-for-oracle/
This old blog gives a good summary of the solution routes available:
http://www.novell.com/coolsolutions/feature/17995.html
The JDBC-specific offerings I've come across have been very basic. The solution mentioned by Aidos seems the most feature-complete if you want to go down the publish-subscribe route:
http://symmetricds.codehaus.org/
Hope this helps.