I want to transfer Hibernate objects to the frontend with GWT-RPC. Of course I cannot transfer the annotated class, because the annotations cannot be compiled to JavaScript, so I did the Hibernate mapping purely in the ".hbm.xml" file. This worked fine for very simple objects. But as soon as I add something more complex, like a one-to-many relationship realized with e.g. a Set, the compiler complains about serialization issues with the Set (even though the objects inside the Set are serializable as well).
I guess it doesn't work because Hibernate creates some kind of special Set that cannot be interpreted by GWT?
Is there any way to get around this, or do I need another approach to get my objects to the frontend?
Edit: It seems that my approach is not possible with RPC, because Hibernate changes the objects (see the answer from thanos). There is a newer approach from Google for transferring objects to the frontend: RequestFactory. It looks really good and I will try it now.
Edit 2: RequestFactory works perfectly and is much more convenient than RPC!
This is a quote from the GWT documentation. It says that Hibernate changes the object from its original form in order to make it persistent.
What this means for GWT RPC is that by the time the object is ready to be transferred over the wire, it actually isn't the same object that the compiler thought was going to be transferred, so when trying to deserialize, the GWT RPC mechanism no longer knows what the type is and refuses to deserialize it.
Unfortunately the only way to implement the solution is by making DTOs and their appropriate converters.
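As a rough illustration (Person and Address are hypothetical entities), the DTO mirrors only the serializable state of the entity, and a server-side converter copies the data across:

```java
import java.io.Serializable;
import java.util.HashSet;

// Plain, GWT-serializable DTO: no Hibernate types or proxies inside
public class PersonDTO implements Serializable {
    public Long id;
    public String name;
    public HashSet<AddressDTO> addresses = new HashSet<AddressDTO>();
}

// Server-side converter: copies entity state into the DTO, replacing
// Hibernate's PersistentSet with a plain HashSet along the way
class PersonConverter {
    static PersonDTO toDto(Person entity) {
        PersonDTO dto = new PersonDTO();
        dto.id = entity.getId();
        dto.name = entity.getName();
        for (Address a : entity.getAddresses()) {
            dto.addresses.add(AddressConverter.toDto(a));
        }
        return dto;
    }
}
```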
Using Gilead is a cleaner approach (no need for all this DTO code), but DTOs are more lightweight and thus produce less traffic over the wire.
Anyhow, there is also Dozer, which will generate the DTOs for you, so there will not be much need for you to actually write the code.
Either way, as mchq08 said, the link he provided will answer many of your questions.
I would also make another suggestion: separate the projects. Create a new one as the model of your application and include it as a jar in the GWT project. This way your GWT project will be almost entirely GUI, and the jar library can be reused in other projects too.
When I created my RPC-to-Hibernate layer I used this example as a framework. I would recommend downloading their source code and reading the section called "Integration Strategies", since I felt the "Basic" section did not do DTOs justice. One thing this tutorial does not cover as well is the receiving and sending part on the web page (which is compiled to JS), which is why I recommend downloading their source code and looking at how they send and receive the DTOs.
Post the stack trace and some code that you believe will be useful to solving this error.
Google's GWT & Hibernate
Reading this (and the source code) can take some time, but it really helps in understanding their logic.
I used the following approach: for each Hibernate entity class I had a client-side replica without any Hibernate stuff, plus a mechanism for copying data between the client and server classes.
This worked, but I believe current GWT versions should work with Hibernate-annotated classes...
On a client project, I use Moo (which I wrote) to translate Hibernate-enhanced domain objects into DTOs relatively painlessly.
I'm looking into the Axon framework and I'm having a hard time with the automatic persistence of the command state.
I've looked at the documentation regarding the command model repository and from my understanding the state of the command model for the standard repository should be auto-persisted provided I have the correct dependencies. This sentiment is also present in another blog/tutorial I've looked at (you might need to scroll down a bit to the Repository section).
The problem is that although I've added the axon-mongo dependency, the command state is not being persisted automatically. I've tried to configure the relevant Repository beans as per the docs, but that doesn't seem to have worked either. I'm not even sure whether this is required, given that (from my understanding of the docs) you would do this mainly if you want to query the command state.
While I understand that I can create my own repository and save the entities myself (similar to this tutorial), I'd rather not, given that Axon seems to provide this out of the box.
Am I missing something here?
NOTE: My Mongo setup seems to be correct since I've managed to persist my events in MongoDB as per the documentation.
UPDATE
As per Steven's comment (and subsequent comments), I decided to try and implement a state-stored aggregate; however, I found an issue with the (de)serialization of the aggregate. I've sent my Aggregate to Steven and he has confirmed that it is simple enough that it should be (de)serialized by XStream. I have also tried to serialize my aggregate using a standalone instance of XStream and it worked, which led me to believe that this is more of an Axon issue than an XStream issue. I also tried the Jackson and Java (de)serializers (since they are the other options provided by Axon) and found similar problems. I have concluded that this is an Axon bug and I have stopped trying to solve the issue.
From your question it is not immediately clear if you are aware of the possible Command Model storage mechanisms you can choose from.
So firstly, as @Mzzl points out in his comment, you can view the Command Model state from two angles:
Event Sourced
State-stored
By default, Axon Framework sets up your Aggregate with an EventSourcingRepository behind it. This means that if an Aggregate (e.g. your Command Model) is required to handle a new Command, the Aggregate is loaded by retrieving the stream of all the events it has published.
Second, it will call all the @EventSourcingHandler annotated methods on your Aggregate implementation to recreate the state of the Command Model.
Lastly, once all the Events in the Aggregate's Event Stream have been handled, the command is handed to the @CommandHandler annotated method.
The state-stored approach is obviously a little different, as this means the entirety of your Aggregate will be stored in a repository.
Note, though, that the state-stored approach is only supported through the GenericJpaRepository class. Thus, storing the Aggregate in its entirety in MongoDB is not an option.
If you are looking for an Event Sourcing approach for your Aggregate, the events can be sourced from any EventStore implementation covered by the framework.
That means you can choose JPA, JDBC, MongoDB or Axon Server as the means of storing your events and retrieving the stream of events to recreate your Command Model.
Configuration wise, there are a couple of approaches to this.
If you are using the Configuration API provided by Axon directly, you can use:
AggregateConfigurer#defaultConfiguration(Class<A>) for an Event Sourced approach
AggregateConfigurer#jpaMappedConfiguration(Class<A>) for a State-Stored approach
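A minimal sketch of each (assuming Axon 4's Configuration API; GiftCard is a hypothetical aggregate):

```java
import org.axonframework.config.AggregateConfigurer;
import org.axonframework.config.Configurer;
import org.axonframework.config.DefaultConfigurer;

// Event-sourced: the aggregate is rebuilt from its event stream on every load
AggregateConfigurer<GiftCard> eventSourced =
        AggregateConfigurer.defaultConfiguration(GiftCard.class);

// State-stored: the aggregate is persisted as a whole through JPA
AggregateConfigurer<GiftCard> stateStored =
        AggregateConfigurer.jpaMappedConfiguration(GiftCard.class);

Configurer configurer = DefaultConfigurer.defaultConfiguration()
        .configureAggregate(eventSourced); // or stateStored
```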
If your application runs in a Spring Boot environment, switching between event-sourced and state-stored is a little simpler.
Simply having the @Entity annotation on your Aggregate implementation is sufficient for the framework to note that you want to store the Aggregate as-is.
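A minimal sketch of what that could look like (assuming Axon 4 with the Spring Boot starter; GiftCard and IssueCardCommand are hypothetical names):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate // picked up by Axon's Spring Boot auto-configuration
@Entity    // signals that the aggregate state should be stored through JPA
public class GiftCard {

    @Id
    @AggregateIdentifier
    private String cardId;

    private int remainingValue;

    protected GiftCard() {
        // no-arg constructor required by JPA
    }

    @CommandHandler
    public GiftCard(IssueCardCommand command) { // hypothetical command class
        this.cardId = command.getCardId();
        this.remainingValue = command.getAmount();
    }
}
```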
Hope this sheds some light on the situation, @The__Malteser!
Update
Based on the comments, it's clear that the XStreamSerializer, which the framework uses by default to de-/serialize objects, is incapable of serializing your Aggregate instance in a state-stored fashion.
Based on the exception you're receiving, Cannot marshal the XStream instance in action, I did some searching/digging. I have a hunch that XStream is, by default, not capable of serializing non-static inner classes.
However, as we're not sure what the implementation is of your Aggregate, it's hard to deduce whether that's the problem at hand. Would you be able to share your implementation with us here so that we can better deduce whether an inner class is the problem?
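For reference, this is the kind of construct the hunch is about (purely illustrative):

```java
public class OrderAggregate {

    // Problematic: a non-static inner class carries an implicit reference
    // to its enclosing OrderAggregate, which a reflection-based serializer
    // like XStream may fail to marshal
    public class LineItem {
        String sku;
    }

    // Safer: a static nested class has no hidden outer reference
    public static class StandaloneLineItem {
        String sku;
    }
}
```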
Does anyone have any strategies or examples of cross-framework libraries?
I am working on a project with an android app, a java server and a Java desktop client, which all use different frameworks. I need to refactor some core business logic into a separate library that can be used across all of these to ensure consistent behavior, but the field annotations are killing me.
The problem is that I am using Room in the Android app (which requires the @PrimaryKey annotation on the primary-key field of a database entity) and JPA in the server and JavaFX client (which require @Id).
Given this level of difficulty with the models, we initially copy-pasted the fields (without annotations) between projects whenever they changed. However, the business logic needs to make use of the models and accommodate each platform's specific ORM, HTTP client and JSON serializer. (I know that it is technically possible to get Gson, Apache HttpClient and Hibernate to run on all of these platforms, but actually pursuing any of those solutions created too many nightmares of its own.)
As far as I can tell, there isn't a nice solution to this. Fortunately, the same @Inject annotation is used in Dagger 2 and CDI/CDI-SE, so I have created some interfaces that each platform/framework will implement; a sketch follows.
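To illustrate the idea (all names here are hypothetical): the shared library defines framework-free abstractions, and the business logic depends only on those plus javax.inject, which both Dagger 2 and CDI understand.

```java
import javax.inject.Inject;

// Defined once in the shared library; no framework imports needed
interface OrderStore {
    void save(Order order);
    Order findById(long id);
}

// Core business logic lives here, once, and is reused on every platform
class OrderService {
    private final OrderStore store;

    @Inject // honored by both Dagger 2 and CDI
    OrderService(OrderStore store) {
        this.store = store;
    }

    void placeOrder(Order order) {
        // shared business rules would go here
        store.save(order);
    }
}
```

Each platform then supplies its own OrderStore implementation (a Room DAO on Android, a JPA repository on the server).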
Does anybody have any examples or case studies I could look at which might help me arrive at a solution?
(I realize this question doesn't include any code samples, but it's more of a general programming strategy question.)
Disclaimer: I am the architect of JDX for Java and JDXA for Android ORMs.
You may consider using JDX for Java and JDXA for Android ORM frameworks to share the common object model, the core business logic code, and the data integration code across Java server, Java desktop, and Android platforms.
JDX and JDXA don't use annotations to define the mapping; instead, they use an external text file to define the mapping specification, based on a simple ORM grammar. So you can use the same mapping specification for your common object model across different platforms. Also, the APIs of JDX and JDXA are similar.
So, you just need to use the appropriate JDX(A) ORM library for your target platform and an appropriate JDBC driver for your target database without needing to change your object model or business logic.
We have a large offline process that updates the model I designed inside of Play Framework. I think it makes sense to keep this code as a stand-alone project -- but I would like it to be able to use the JPA Model designed inside Play.
I'm wondering if there's a good way to handle this -- a way to reference the JPA Model independently of Play Framework (inside another vanilla Java project).
Another option is to create an API that the external process calls, which is what I've done so far, but it introduces a lot of unnecessary network latency.
Any pointers on how to accomplish this?
Passing around a Play-specific JPA entity (i.e. one that extends Model) is probably not a good idea; you'd be introducing a dependency on the Play jars where they are not required.
As I see it you have two viable options:
1. Create the object as a POJO and use a Hibernate XML config (for Play versions below 2.0) to define the mapping to the database. You can keep the POJO and the config entirely separate, i.e. keep the config in the classpath of your Play app.
2. Pass your object around in a serialized form, e.g. XML or JSON.
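For option 1, the mapping lives entirely outside the POJO. A minimal sketch of such a mapping file (class and column names are hypothetical):

```xml
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <!-- Maps the plain POJO com.example.model.Customer; the class itself
         carries no Play or Hibernate imports -->
    <class name="com.example.model.Customer" table="customer">
        <id name="id" column="id">
            <generator class="native"/>
        </id>
        <property name="name" column="name"/>
    </class>
</hibernate-mapping>
```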
I'm having a problem. I would like to create a Document object, and I would like it to have a user property of type com.google.appengine.api.users.User (on GAE's docs site, they say we should use this object instead of an email address or anything else, because this object will probably be enhanced to be unique). But the object can't be compiled by GWT, because I don't have the source for it.
How can I resolve the problem?
I was searching for documentation about DTOs, but I realized that maybe that's not the best pattern to use here.
What do you recommend?
Thanks very much for your help!
Regards,
Bálint Kriván
To avoid DTOs for objects with com.google.appengine.api.users.User inside, you can probably use the work from
http://www.resmarksystems.com/code/
He has built wrappers for the core GAE data types (Key, Text, ShortBlob, Blob, Link, User). I've tested it with datastore.Text and it worked well.
There is a lot of debate about whether you should reuse objects from the server on the client. However, such reuse rarely works out well in real applications, so I generally recommend creating pure Java objects that you copy your data into for sending to the client. This lets you tailor the data to exactly what the client needs and avoids pitfalls where you accidentally send sensitive information over the wire.
So in this case, I would recommend creating a separate object to send over the wire. BTW, if you have the AppEngine SDK for Java (http://code.google.com/appengine/downloads.html), it includes a demo application I wrote (sticky) that demonstrates this technique.
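For the User case specifically, that could look something like this (UserDTO and UserConverter are hypothetical names):

```java
import java.io.Serializable;

// Shared, client-visible class: kept free of GAE imports so GWT can compile it
public class UserDTO implements Serializable {
    public String email;
    public String nickname;
}

// Server-side only converter: this side may touch the real GAE User class
class UserConverter {
    static UserDTO toDto(com.google.appengine.api.users.User user) {
        UserDTO dto = new UserDTO();
        dto.email = user.getEmail();
        dto.nickname = user.getNickname();
        return dto;
    }
}
```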
This question also addresses the issue:
It links to a semi-workable solution for automatically making your persistent objects GWT-RPC compatible.
I had the same question. Your answer is interesting, but I am always sad to have to copy the data twice... Plus, when your DAO fetches the data, you have to iterate over all the results to copy them into the pure Java objects, don't you? That seems like a heavy operation. What's your opinion on these points?
Currently, the only way I know of doing RPC for POJOs in Java is the very complex EJB/JBoss route.
Is there any better way to provide similar functionality with a thinner layer (within or without a Java EE container), using RMI or something else that can serialize and send full-blown objects over the wire?
I'm not currently interested in HTTP/JSON serialization BTW.
EDIT: For clarification: I'm trying to replace an old EJB 2.1/JBoss 4 solution with something easier to manage at the container level. I need complete control over the database (I'm planning to use iBATIS, which would let me use fairly complex SQL very easily), but the only things I want to go over the wire are:
Invocation of lookup/data modification methods (automagic serialization goes here).
Transparent session control (authentication/authorization). I still have to figure out how to accomplish this.
Both items have to work as a whole, of course. No access should be granted to users without credentials.
Because I'm not very fond of writing webapps, I plan to build a GUI (Swing or SWT) that would only manage POJOs, do some reporting, and invoke methods on the container. I want the serialization to be as easy as possible.
As is nearly always the case, Spring comes to the rescue. From the reference documentation, you will want to read Chapter 17. Remoting and web services using Spring.
There are several methods to choose from. The beauty of Spring is that all your interfaces and implementations are vanilla POJOs. The wiring into RMI or whatever is handled by Spring. You can:
1. Export services using RMI: probably the simplest approach.
2. Use HTTP invoker: if remote access is an issue, this may play better with firewalls etc. than pure RMI.
3. Use Web Services, in which case I would favour JAX-WS over JAX-RPC.
Spring has the additional benefit in that it can do the wiring for both the server and the client, easily and transparently.
Personally I would choose either (2) or (3). HTTP is network-friendly and easy to deploy in a web container, and Jetty's long-lived connections give you the option of (effectively) server push over HTTP.
All of these methods allow complex objects to be sent across the wire, but they are subtly different in this regard. You need to consider whether your server and client will be distributed separately, and whether it's an issue that changing the interface means redistributing the class files. Alternatively, you can use a customized serialization solution (even XML) to avoid this, but that has issues of its own.
Using a Web container will allow you to easily plug-in Spring Security, which can be a bit daunting at first just because there are so many options. Also, HttpSession can be used to provide state information between requests.
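As a concrete sketch of option (2), HTTP invoker needs little more than an exporter on the server and a proxy on the client. OrderService and the URL are hypothetical, and the two beans would live in the server's and client's @Configuration classes respectively; the exporter's bean name doubles as its URL mapping:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;

// Server side: expose an existing vanilla POJO service over HTTP
@Bean(name = "/OrderService")
public HttpInvokerServiceExporter orderServiceExporter(OrderService service) {
    HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
    exporter.setService(service);
    exporter.setServiceInterface(OrderService.class);
    return exporter;
}

// Client side: a proxy that looks and feels like a local OrderService
@Bean
public HttpInvokerProxyFactoryBean orderService() {
    HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
    proxy.setServiceUrl("http://example.com/app/OrderService");
    proxy.setServiceInterface(OrderService.class);
    return proxy;
}
```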
Simple RPC is exactly what RMI was built for. If you define a remote interface and make the exchanged objects serializable, you can call methods on one app from another app.
If you only need value objects, just ensure the POJOs implement Serializable and write the objects across sockets using ObjectOutputStream; on the receiving end, read them with ObjectInputStream. The receiving end has to have a compatible version of the POJO (see serialVersionUID).
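A bare-bones sketch of that approach, with no container at all:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

public class PojoWire {

    // Sender: write any Serializable POJO to an open socket
    public static void send(Socket socket, Serializable pojo) throws Exception {
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        out.writeObject(pojo);
        out.flush();
    }

    // Receiver: a compatible version of the POJO's class must be on
    // this side's classpath (see serialVersionUID)
    public static Object receive(Socket socket) throws Exception {
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        return in.readObject();
    }
}
```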
Hessian and Burlap 'protocol-ize' this: http://hessian.caucho.com/ and http://www.caucho.com/resin-3.0/protocols/burlap.xtp
You could try XStream (http://x-stream.github.io/) over REST. It is easy to apply to a pre-existing set of POJOs.
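For what it's worth, the XStream part is only a couple of lines (Invoice is a hypothetical POJO):

```java
import com.thoughtworks.xstream.XStream;

XStream xstream = new XStream();
// Serialize a POJO to XML, e.g. for a REST response body
String xml = xstream.toXML(new Invoice(42, "ACME"));
// ...and deserialize it back on the other side
Invoice copy = (Invoice) xstream.fromXML(xml);
```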
Can you give some further information on what you're trying to achieve, given that you're not interested in REST/JSON?