EclipseLink Moxy minimum libraries required for unmarshalling - java

I have been working with EclipseLink for the past couple of days to implement one of our small converter applications. The input for these is usually a single document format type and, now and in the future, a sophisticated metadata XML.
Since we have a schema for this and slight changes are still to be expected in the future, I wanted to give the JAXB approach a try, and I like it very much so far.
However, as I finished the application I noticed that, due to the use of eclipselink.jar, the application is rather large (~10MB) in comparison to similar converters (~1MB).
This is because, for reasons of our technological environment, there is no global classpath for the converter jars; every one of them needs to be self-sufficient.
This means that I copy every required jar into one big jar using Ant.
I am not quite fond of this approach myself, but so far I can only suspect that some different approach may or may not be more elegant.
There are some smaller jars containing fragments of the needed classes in the EclipseLink distribution, but I found none that contained
org.eclipse.persistence.jaxb.JAXBContextFactory
(plus the dependencies for this).
It seems to me, though this is a lot of guesswork, that
eclipselink.jar
includes the complete-wellness-package-that-leaves-nothing-to-be-desired, and that is a bit of overkill for me.
Long story short:
Is there a lightweight version of eclipselink.jar that would support unmarshalling an XML for which I generated Java classes in advance? Or am I attempting the impossible?
Thank you in advance
Christian

Instead of using eclipselink.jar, you can use the bundles. Then you will need to include the following:
org.eclipse.persistence.asm.version.jar
org.eclipse.persistence.core.version.jar
org.eclipse.persistence.moxy.version.jar
The total is still larger than other providers, but we're working on fixing that.
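With those three bundles on the classpath, MOXy is selected the usual JAXB way: a jaxb.properties file in the same package as your generated classes that names the EclipseLink factory. A minimal sketch, assuming a hypothetical schema-generated root class Metadata:

// jaxb.properties (one line, not Java code), placed in the same package
// as the generated classes:
// javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class ConverterMain {
    public static void main(String[] args) throws Exception {
        // Metadata stands in for one of your schema-generated classes.
        JAXBContext context = JAXBContext.newInstance(Metadata.class);
        Unmarshaller unmarshaller = context.createUnmarshaller();
        Metadata metadata = (Metadata) unmarshaller.unmarshal(new File("metadata.xml"));
        System.out.println("Unmarshalled: " + metadata);
    }
}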

Related

Spring annotation-based DI vs xml configuration?

Recently in our team we started discussing using Spring annotations in code to define Spring dependencies. Currently we are using context.xml to define our dependencies. Could you give me some pointers on each approach, and on when one is better to use?
Edit: I know this seems a duplicate question to a more-general one, but I am interested in the impacts of annotations vs configuration for dependency injection only, which I believe would have different answers and attitude than the general question.
After reading some related posts here and having further discussion in the team, we came to the following conclusions. I hope they will be useful to others here.
About XML configuration (which we are using up to now), we decided to keep it for dependencies defined by libraries (regardless if being developed by us, or by third parties).
Libraries, by definition, provide a particular functionality and can be used in various scenarios, not necessarily involving DI. Therefore, using annotations in the library projects we develop ourselves would create a dependency from the library to the DI framework (Spring, in our case), making the library unusable in a non-DI context. Having extra dependencies is not considered good practice in our team (and in general, IMHO).
When we are assembling an application, the application context would define the necessary dependencies. This will simplify dependency tracking as the application becomes the central unit of combining all the referenced components, and usually this is indeed where all the wiring up should happen.
XML is also good for us when providing mock implementations for many components, without recompiling the application modules that will use them. This gives us flexibility when testing, and when running in a local or production environment.
In regards to annotations, we decided that we can benefit from using them when the injected components will not vary -- for instance, when only a certain implementation of a component will be used throughout the application.
The annotations will be very useful for small components/applications that will not change or support different implementations of a dependency at once, and that are unlikely to be composed in a different way (for instance using different dependencies for different builds). Simple micro-services would fit in this category.
Small enough components, made up with annotations, can be used right out of the box in different projects, without requiring the respective applications to cover them in their XML configuration. This simplifies the dependency wiring for the application and reduces repetitive setup.
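As a concrete illustration, a minimal sketch of such a self-contained, annotation-configured component (the Greeting* names are invented for the example; each type would live in its own file):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// A dependency with a single, fixed implementation: a good candidate
// for annotation-based wiring.
public interface GreetingService {
    String greetingFor(String name);
}

@Component
class DefaultGreetingService implements GreetingService {
    public String greetingFor(String name) {
        return "Hello, " + name;
    }
}

@Component
class GreetingClient {
    private final GreetingService service;

    // Wired by type; no XML entry is needed once this package is scanned.
    @Autowired
    GreetingClient(GreetingService service) {
        this.service = service;
    }

    String greet(String name) {
        return service.greetingFor(name);
    }
}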
However, we agreed that such components should have the dependencies well described in our technical documentation, so that when assembling the entire application, one can have an idea of these dependencies without scrolling through the code, or even loading the module in the IDE.
A negative side effect of annotation-configured components is that different components could bring clashing transitive dependencies, and again it is up to the final application to resolve the conflicts. When these dependencies are not defined in XML, the conflict resolution approaches become quite limited and stray far from best practices, if they are possible at all.
So, when going with annotations, the component has to be mature enough about what dependencies it is going to use.
In general if our dependencies may vary for different scenarios, or a module can be used with different components, we decided to stick to XML. Clearly, there MUST be a right balance between both approaches, and a clear idea for the usages.
An important update regarding the mixed approach. Recently we had a case with a test framework we created for our QA team, which required dependencies from another project. The framework was designed to use the annotation approach and Spring configuration classes, while the referenced project had some XML contexts that we needed to reference. Unfortunately, the test classes (where we used org.testng with Spring support) could only work with either the XML contexts or the Java configuration classes, not a mix of both.
This situation illustrates a case where mixing the approaches would clash and clearly, one must be discarded. In our case, we migrated the test framework to use spring xml contexts, but other uses could imply the other way around.
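For what it's worth, outside of that test-class limitation the two styles can usually be bridged; a hedged sketch using Spring's @ImportResource (the package and file names are placeholders):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;

// A Java configuration class that pulls a legacy XML context into the
// same container as the annotation-scanned beans. Names are illustrative.
@Configuration
@ComponentScan("com.example.testframework")
@ImportResource("classpath:legacy-context.xml")
public class TestFrameworkConfig {
}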
Some advantages of using XML configuration:
The XML configuration is in one place, instead of being scattered all over the source code as with annotations. Some people may argue that IDEs like STS let you look at all annotation-based configuration in one place, but I never like having dependencies on IDEs.
It takes a little more effort to write the XML config, but it saves a lot of time later when you search for dependencies and try to understand the project.
XML keeps the configuration well organized and simple, and hence easier to understand; it helps relatively inexperienced new team members get up to speed quickly.
It allows you to change the config without needing to recompile and redeploy code, so it is better when it comes to production support.
So, in short, XML configuration takes a little more effort, but it saves you a lot of time and headache later in big projects.
2.5 years later:
We use annotations mostly these days, but the most crucial change is that we create many small projects (instead of one big project). Hence understanding dependencies is not a problem anymore, as each project has its unique purpose and a relatively small codebase.
From my experience, I would prefer (or rather am forced by limitations) to use a combination of XML and annotation-based DI. If I need to inject a Map of elements inside a bean, I have to define a util:map and autowire it. Also, I need to use XML DI to inject a datasource into the sessionFactory if I have multiple datasources, and so on. So a combination of both is required.
I prefer the usage of component-scan to autodetect the services and DAOs. This cuts down a lot of configuration (we cut our configuration files down by around 50% by switching to component-scan). Annotation-based DI supports both byName (@Resource) and byType (@Autowired).
In short, my advice would be to go for a mixture of both. I feel that more annotation support will definitely be on the cards in future Spring releases.
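To illustrate the component-scan setup described above, a minimal sketch (names invented; each type in its own file). @Autowired matches by type, while @Resource matches by bean name:

import javax.annotation.Resource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

// Picked up automatically by <context:component-scan base-package="..."/>.
@Repository("customerDao")
class CustomerDao {
    // ... data access code ...
}

@Service
class CustomerService {

    // byName injection: matches the bean explicitly named "customerDao".
    @Resource(name = "customerDao")
    private CustomerDao byNameDao;

    // byType injection: matches the single bean of type CustomerDao.
    @Autowired
    private CustomerDao byTypeDao;
}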
Take a look at this answer here: Xml configuration versus Annotation based configuration
A short quote directly from there:
Annotations have their use, but they are not the one silver bullet to
kill XML configuration. I recommend mixing the two!
For instance, if using Spring, it is entirely intuitive to use XML for
the dependency injection portion of your application. This gets the
code's dependencies away from the code which will be using it; by
contrast, using some sort of annotation in the code that needs the
dependencies makes the code aware of this automatic configuration.
However, instead of using XML for transactional management, marking a
method as transactional with an annotation makes perfect sense, since
this is information a programmer would probably wish to know.
EDIT: Also, take a look at the answers here: Java Dependency injection: XML or annotations They most probably target the area of your interest much better.
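For example, the transactional style the quote favors looks roughly like this (a sketch; the service name and method are invented):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    // The transaction boundary is visible right where a programmer
    // would look for it, instead of in a separate XML file.
    @Transactional
    public void transfer(long fromAccountId, long toAccountId, long amountCents) {
        // debit one account, credit the other; both happen in one
        // transaction or not at all (repository calls omitted in this sketch)
    }
}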
From my own experience, annotations are better than XML configuration. I think that in any case you can override XML and use annotations. Also, Spring 4 gives us huge support for annotations; we can move things like security from XML to annotations, etc., so instead of 100 lines of XML we have 10 lines of Java code.
Are annotations better than XML for configuring Spring?
The introduction of annotation-based configurations raised the
question of whether this approach is 'better' than XML. The short
answer is it depends. The long answer is that each approach has its
pros and cons, and usually it is up to the developer to decide which
strategy suits them better. Due to the way they are defined,
annotations provide a lot of context in their declaration, leading to
shorter and more concise configuration. However, XML excels at wiring
up components without
touching their source code or recompiling them. Some developers prefer
having the wiring close to the source while others argue that
annotated classes are no longer POJOs and, furthermore, that the
configuration becomes decentralized and harder to control.
No matter the choice, Spring can accommodate both styles and even mix
them together. It's worth pointing out that through its JavaConfig
option, Spring allows annotations to be used in a non-invasive way,
without touching the target components' source code, and that in terms
of tooling, all configuration styles are supported by the Spring Tool
Suite.
My personal opinion is that XML is better, since you have everything in one place and do not need to dig into your packages to search for the class.
We cannot tell which method is better; it depends on your project. We can avoid neither XML nor annotations. One advantage of using XML is that we can understand the project structure just by looking at the XML context files, whereas annotations reduce a lot of meta configuration. So I prefer 30% XML and 70% annotations.
By using XML, you prevent code from being polluted with framework-specific annotations and thus creating an undesired coupling. Keep the framework at the application boundary so you can always replace it should the need arise.
Frameworks come and go, but many applications live for decades. Fortunately, Spring is a non-invasive framework and doesn't bend your architecture. Keeping the configuration in XML will make it even more detached from your application.
Remark: in order to benefit from all this, your application should be well-designed in the first place.

JAR Hell Hacks for Non-OSGi Developers

Edit: After reviewing the replies, the example I used below is a tad misleading. I am looking for the case where I have two 3rd-party jars (not homegrown jars where I have access to the source code) that both depend on different versions of the same jar.
Original:
So I've recently familiarized myself with what OSGi is, and what ("JAR Hell") problems it addresses at its core. And, as intrigued as I am with it (and plan on migrating somewhere down the road), I just don't have it in me to begin learning what it will take to bring my projects over to it.
So, I'm now lamenting: if JAR hell happens to me, how do I solve this sans OSGi?
Obviously, the solution would almost have to involve writing my own ClassLoader, but I'm having a tough time visualizing how that would manifest itself, and more importantly, how that would solve the problem. I did some research and the consensus was that you have to write your own ClassLoader for every JAR you produce, but since I'm already having a tough time seeing that forest through the trees, that statement isn't sinking in with me.
Can someone provide a concrete example of how writing my own ClassLoader would put a band-aid on this gaping wound (I know, I know, the only real solution is OSGi)?
Say I write a new JAR called SuperJar-1.0.jar that does all sorts of amazing stuff. Say my SuperJar-1.0.jar has two other dependencies, Fizz-1.0.jar and Buzz-1.0.jar. Both Fizz and Buzz jars depend on log4j, except Fizz-1.0.jar depends on log4j-1.2.15.jar, whereas Buzz-1.0.jar depends on log4j-1.2.16.jar. Two different versions of the same jar.
How could a ClassLoader-based solution resolve this (in a nutshell)?
If you're asking this question from an "I'm building an app, how do I avoid this" problem rather than a "I need this particular solution" angle, I would strongly prefer the Maven approach - namely, to only resolve a single version of any given dependency. In the case of log4j 1.2.15 -> 1.2.16, this will work fine - you can include only 1.2.16. Since the older version is API compatible (it's just a patch release) it's extremely likely that Fizz 1.0 won't even notice that it's using a newer version than it expected.
You'll find that doing this will probably be way easier to debug issues with (nothing confuses me like having multiple versions of even classes or static fields floating around! Who knows which one you're dealing with!) and doesn't need any clever class loader hacks.
But this is exactly what all the app servers out there have to deal with. Pretend that your Fizz and Buzz are web applications (WARs), and SuperJar is your appserver. SuperJar will arrange a class loader for each web app that "breaks" the normal delegation model, i.e. it will look locally (down) before looking up the hierarchy. Go read about it in any of the app servers' documentation. For example http://download.oracle.com/docs/cd/E19798-01/821-1752/beade/index.html.
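To make the look-down-before-up idea concrete, here is a hedged sketch of a child-first (parent-last) class loader; real app servers add much more (resource handling, security, excluded packages), but the core delegation change looks roughly like this:

import java.net.URL;
import java.net.URLClassLoader;

// A class loader that breaks the normal parent-first delegation:
// it tries its own URLs (e.g. Fizz's bundled log4j-1.2.15.jar) before
// asking the parent, so each "module" sees its own versions.
public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected synchronized Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // Already defined by this loader?
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            if (name.startsWith("java.")) {
                // Core classes must always come from the parent/bootstrap.
                c = super.loadClass(name, resolve);
            } else {
                try {
                    // Look locally (down) first...
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // ...then fall back to the normal parent-first path (up).
                    c = super.loadClass(name, resolve);
                }
            }
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}

SuperJar would then create one such loader per dependency group: one with Fizz-1.0.jar plus log4j-1.2.15.jar on its URLs, another with Buzz-1.0.jar plus log4j-1.2.16.jar, so each sees its own log4j.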
Use log4j-1.2.16. It only contains bugfixes wrt 1.2.15.
If Fizz breaks with 1.2.16, fork and patch it, then submit those patches back to the author of Fizz.
The alternative of creating custom classloaders with special delegation logic is very complex and likely to cause you many problems. I don't see why you would want to do this rather than just use OSGi. Have you considered creating an embedded OSGi framework, so you don't have to convert your whole application?

Compiling huge schema into Java

There are two major tools that provide a way to compile an XSD schema into Java: XMLBeans and JAXB.
The problem is that the XSD schema is really huge: 30MB of XML files. Most of the schema isn't used in my project, so I could comment most of it out, but that is not a good solution.
Currently my project uses XMLBeans, which compiles the schema only after major changes to it. It produces ~60MB of classes and takes ~30 minutes to compile.
Another solution is to use JAXB, which generates ~14MB of code without any need to edit it. But it produces a huge ObjectFactory class, which fails to compile with a "too many constants" error. I can throw that class away and compile the schema without it, but as I understand it, it's a very useful class.
Any ideas how to handle this huge schema?
Could you create a script to extract the portion(s) of the schema you need and integrate that into your build process prior to mapping with XmlBeans or JAXB?
You could probably script this extraction fairly simply and easily in Python, Perl, Awk, etc; or even in XSL if you have expertise there (I've never spent enough contiguous time coding XSL to get proficient, so I'd probably stick to a scripting language, but that's just me).
e.g.:
python extract.py big-schema.xsd >small-schema.xsd
xsd2java <args> small-schema.xsd
...
You might find that a subsequent update by the 3rd-party vendor would invalidate your extraction script, but unless they're making very large changes to the overall schema, you should be able to update the script fairly quickly, and it sounds like those updates should be fairly infrequent.
Incidentally, I'm a little partial to XmlBeans; when we did our own evaluation of XML-Java mapping tools, it seemed to handle constructs like xs:choice, xs:all, and type-substitution better than anything else we tried. But that was several years ago, and could certainly have changed by now. At this point, we're continuing to use it more out of institutional inertia than anything else, so take that recommendation with a dash of salt.
30MB of schema? What on earth is this? I'd be interested to know if it's available as a test case for schema processors.
Data mapping (a la JAXB) works best with small schemas. I've seen people really struggle when the schema gets as large as about 200 element types. You must be dealing with something a couple of orders of magnitude larger here; I would say it's a non-starter.

Castor performance issues

We recently upgraded from Castor 0.9.5.3 to 1.2 and we've noticed a dramatic drop in performance when calling unmarshal on XML. We're unmarshalling to Java classes that were generated by Castor in both cases. For comparison, using identical XML, the unmarshal call used to take about 10-20ms and now takes about 2300ms. Is there something obvious I might be missing in our new Castor setup, maybe in a property file that I missed, or should I start looking at reverting to the old version? Maybe there was something in the Java class file generation that is killing the unmarshal call? I might also consider Castor alternatives if there were a good reason to drop it in favor of something else. We're using Java 1.5 on a WebLogic server.
We had very serious performance issues using Castor 1.0.5 with a .castor.cdr file (a few seconds to unmarshal, whereas it took milliseconds in the past).
It turned out that the generated .castor.cdr file contained old/wrong values (nonexistent types and descriptors). After deleting the offending lines in this file, everything was back to normal.
Hope this helps anyone who has the same problem!
You might want to consider using JiBX instead. It is considerably faster than Castor (9x faster on one project where I made the switch), and cleaner.
EDIT: See also my answer to this related question.
We wound up reverting to Castor version 0.9.5.3, and performance jumped back up after we regenerated the Java classes from the new XSDs. I'm not sure exactly why there's such a big difference in the resulting Java, but the 1.2 classes were about two orders of magnitude slower when unmarshalling.
EDIT: It looks like we could also improve performance by creating ClassDescriptorResolvers/a mapping file and turning off validation, but since we create about 1000 objects with the schema generation process, this isn't really viable from a cost perspective.
I too have this issue: when generating a basic customer/address set of XML, it takes around 3s to generate a document including 74 customers.
Reverting to 1.0.4 (for testing) sees this return to 1.4s,
but hand-rolling the XML produces the output in under 40ms. I know frameworks add some overhead, but there must be something causing this.
Has there been any profiling run on Castor?
I'll go investigate JiBX as suggested by Dan.

Is it worth the effort to move from a hand-crafted Hibernate mapping file to annotations?

I've got a webapp whose original code base was developed with a hand crafted hibernate mapping file. Since then, I've become fairly proficient at 'coding' my hbm.xml file. But all the cool kids are using annotations these days.
So, the question is: Is it worth the effort to refactor my code to use hibernate annotations? Will I gain anything, other than being hip and modern? Will I lose any of the control I have in my existing hand coded mapping file?
A sub-question is, how much effort will it be? I like my databases lean and mean. The mapping covers only a dozen domain objects, including two sets, some subclassing, and about 8 tables.
Thanks, dear SOpedians, in advance for your informed opinions.
"If it ain't broke - don't fix it!"
I'm an old fashioned POJO/POCO kind of guy anyway, but why change to annotations just to be cool? To the best of my knowledge you can do most of the stuff as annotations, but the more complex mappings are sometimes expressed more clearly as XML.
One thing you'll gain from using annotations instead of an external mapping file is that your mapping information will be on classes and fields, which improves maintainability. You add a field, you immediately add the annotation. You remove one, you also remove the annotation. You rename a class or a field, and the annotation is right there, so you can rename the table or column as well. You make changes in class inheritance, and they are taken into account. You don't have to go and edit an external file some time later. This makes the whole thing more efficient and less error-prone.
On the other side, you'll lose the global view your mapping file used to give you.
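For a sense of what the migrated mapping looks like, a minimal sketch of an annotated entity (table and column names invented); the information that used to live in hbm.xml now sits directly on the class:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// The mapping lives with the code: rename a field and its column
// mapping is right there next to it.
@Entity
@Table(name = "DOMAIN_OBJECT")
public class DomainObject {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "NAME", nullable = false, length = 100)
    private String name;

    // getters/setters omitted in this sketch
}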
I've recently done both in a project and found:
I prefer writing annotations to XML (plays well with static typing of Java, auto-complete in IDE, refactoring, etc). I like seeing the stuff all woven together rather than going back and forth between code and XML.
It encodes DB information in your classes. Some people find that gross and unacceptable; I can't say it bothered me. It has to go somewhere, and we're going to rebuild the WAR for a change regardless.
We actually went all the way to JPA annotations but there are definitely cases where the JPA annotations are not enough, so then had to use either Hibernate annotations or config to tweak.
Note that you can actually use both annotations AND hbm files. A hybrid that specifies the O part in annotations and the R part in hbm files might be nice, but it sounds like more trouble than it's worth.
As much as I like to move on to new and potentially better things I need to remember to not mess with things that aren't broken. So if having the hibernate mappings in a separate file is working for you now I wouldn't change it.
I definitely prefer annotations, having used them both. They are much more maintainable and since you aren't dealing with that many classes to re-map, I would say it's worth it. The annotations make refactoring much easier.
All the features are supported both in the XML and in annotations.
You will still be able to override your annotations with XML declarations.
As for the effort, I think it is worth it, as you will be able to see everything in one place and not switch between your code and the XML file (unless, of course, you are using two monitors ;) )
The only thing you'll gain from using annotations
I would probably argue that this is the thing you want to gain from using annotations. Because you don't get compile-time safety with NHibernate, this is the next best thing.
"If it ain't broke - don't fix it!"
@Macka - Thanks, I needed to hear that. And thanks to everyone for your answers.
While I am in the very fortunate position of having an insane amount of professional and creative control over my work, and can bring in just about any technology, library, or tool for just about any reason (barring expensive stuff), including "because all the cool kids are using it"... it does not really make sense to port what amounts to a significant portion of the core of an existing project.
I'll try out Hibernate or JPA annotations with a green-field project some time. Unfortunately, I rarely get new completely independent projects.
