I have the following scenario: I am using a very big external library in my Eclipse RCP application for a specific purpose.
At this point I am not sure whether I might have to replace this library with another one in the future (because it does not provide the necessary functionality, for example). I also have users working with this library from day one, so I would like to encapsulate it, giving me at least a chance of swapping it out later without the users noticing or having to change anything in their code.
Is there a simple way to encapsulate a whole library in some automated fashion?
Unless the part of the library's interface you are actually using is completely trivial, or standardized the way JSF or JAXB are (in which case you don't need encapsulation), this is a completely wasted effort.
I can guarantee that if you ever have to switch to a different library, the encapsulation will prove worthless, because the other library will have different underlying concepts and usage patterns that cannot be made to fit the existing ones.
I don't think that's possible, since the syntax and semantics of the library might be unique to some extent.
Sure, you could create proxies for all the classes and provide those, but that would require considerable work (writing a framework that scans the library), and it wouldn't guarantee that exchanging the library would be easy.
Imagine the replacement provides different methods and even uses different semantics (to some extent). What if methods, fields, etc. are missing in the replacement?
The best way to handle that would be to write an explicit wrapper and make the users use only that wrapper; a sketch follows the example below. This way you can restrict the API to the core concepts that are really needed. Even this might not provide good enough encapsulation, though, depending on what the library actually does.
Example:
For 3D programming you could use OpenGL or Direct3D. Both have somewhat different APIs but use the same core concepts, so you could create a wrapper for them that provides a unified API. That wrapper might then have to convert some data (like turning column-major matrices into row-major ones and vice versa), but since the core concepts are the same, that should be doable.
However, you'd need to stick to the core concepts and couldn't use additional features. For example, Direct3D also provides a more high-level API (D3DX) which isn't provided by OpenGL.
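To make the "explicit wrapper" idea concrete, here is a minimal Java sketch. All names (Renderer, ThirdPartyRenderer, the matrix layout) are hypothetical placeholders, not a real graphics API:

    // The wrapper API exposes only the core concepts; callers never see the backend.
    public interface Renderer {
        void setViewMatrix(float[] columnMajor4x4);
        void drawTriangles(float[] vertices);
    }

    // One adapter per backend; the conversion work (e.g. transposing matrices)
    // lives here, invisible to callers.
    final class ThirdPartyRendererAdapter implements Renderer {
        private final ThirdPartyRenderer delegate = new ThirdPartyRenderer();

        @Override
        public void setViewMatrix(float[] columnMajor4x4) {
            delegate.loadMatrix(transpose(columnMajor4x4)); // this backend wants row-major
        }

        @Override
        public void drawTriangles(float[] vertices) {
            delegate.submit(vertices);
        }

        private static float[] transpose(float[] m) {
            float[] t = new float[16];
            for (int row = 0; row < 4; row++)
                for (int col = 0; col < 4; col++)
                    t[row * 4 + col] = m[col * 4 + row];
            return t;
        }
    }

    // Stub standing in for the real third-party library (illustration only).
    class ThirdPartyRenderer {
        void loadMatrix(float[] rowMajor4x4) { /* ... */ }
        void submit(float[] vertices) { /* ... */ }
    }

Exchanging the library then means writing one new adapter, provided the replacement shares the same core concepts.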
This question may look weird or even pointless.
Recently, I was reviewing some Java code where the developer used methods from the unit-testing library "org.easytesting".
Example: He was using the method "Strings.isNullOrEmpty" of the "Strings" class from this library to check whether some values were null or empty, and was using other classes/methods from it at other places in the code.
I know a library is developed to make our lives easier (basic principles of Java) and can be used anywhere/everywhere, but is there a recommendation about using a unit-test library in production code?
I know using it won't lead to a compatibility issue, because the unit test cases are always executed.
I have searched in many places; maybe I'm just missing the right word to search for.
It could be argued that a unit-test library is just a library, but I don't see it like this.
First, the purpose of a unit-test library is to be used in code that is not production code. This means that certain quality criteria relevant for production code might not be met. For example, if there is a bug in a unit-test library, it is annoying, but it normally does not harm the production code. Likewise, performance, thread safety and so on may not be quite as relevant. I don't want to say that the popular unit-testing frameworks are of bad quality, but the developers of these libraries have every right to take design decisions based on the assumption that their code will not be part of production code.
Secondly, using a library should be done in accordance with the philosophy of the respective library. For example, if you use a certain GUI library, this has implications for the way event handling is done in your application. And unit-testing frameworks come with the assumption that the framework is in control of the executable (via the test runner). Therefore, all functions from such a library may depend on the test runner being set up and running. If some function from the library does not have this dependency, that is an implementation detail which may change with a new version of the library.
Finally, code should communicate intent. That includes includes (pun intended). It was not the intent of the developer to write unit-testing code, but that would be what the inclusion of a unit-testing library would communicate.
Considering that there are other, production-oriented libraries out there which check if a string is empty or null, any use of the testing framework's method should be treated as a strong code smell and identified in code reviews.
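For illustration, a hedged sketch of what the production-oriented alternatives look like; the commons-lang3 dependency is just one example of such a library:

    import org.apache.commons.lang3.StringUtils;

    public class NullOrEmptyCheck {
        public static void main(String[] args) {
            String value = null;

            // Production-oriented alternatives to a test library's Strings.isNullOrEmpty:
            boolean empty1 = StringUtils.isEmpty(value);         // Apache Commons Lang 3
            boolean empty2 = (value == null || value.isEmpty()); // plain JDK, no dependency

            System.out.println(empty1 + " " + empty2); // prints: true true
        }
    }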
In the future, this testing library may introduce a change in other parts which makes running it in production either prohibitively expensive or insecure, as the code running through this empty-or-null check could be leveraged as an attack surface. Worse, if your team wants to pivot away from this testing framework, you now have to change production code, which many teams would be reluctant to do if all they expected to change was test dependencies.
Without looking specifically at test libraries, here's an answer to this more general question:
Should you use the general-programming utility classes that are provided by any framework or library? (For example, should you use the StringUtils/CollectionUtils/etc provided by a web/UI/logging/ORM/... framework).
The arguments by the other answers are still mostly valid even in this more general case. Here are some more:
These utilities have been developed specifically for use by the framework. They likely only contain very specific methods with narrow use cases (those that are actually required by the framework) and nothing more. They might be optimized for specific internal behavior of the framework and not for general purposes.
Framework developers may introduce breaking changes without much thought, since they don't expect many users outside of their framework.
It would be alarming to see imports from e.g. a UI library in your back-end code; it's a code smell.
In modular projects, you wouldn't want to introduce additional dependencies on the framework's utilities (again, a dependency on a UI framework from your back-end modules is a code smell). It would also add a bunch of unnecessary transitive dependencies that may not even be compatible with your other dependencies.
So I would say generally you shouldn't use the utilities of frameworks, except in those places where you are actually working with those frameworks. But even then you should consider using Apache Commons or Guava etc. for consistency.
Now you could also replace the terms UI and back end with test and production in the last two points. Test frameworks are also special in the sense that you usually don't include them as a run-time dependency. So you would need to change the scope of the test dependencies to make them available at run time. You should avoid this for the reasons given in the last point.
Our company has purchased an SDK that can be used for Android apps. The SDK is written in Java and it's very big.
I have to create a wrapper for this SDK so our partners don't need to use the SDK directly, but can call our wrapper functions instead.
I am wondering what the best design pattern for a job like this is. I have been looking at the proxy design pattern, but I'm not sure if it is the correct one.
Many thanks for any suggestions,
The one that comes to mind is the Facade pattern.
From the Wikipedia description:
A facade is an object that provides a simplified interface to a larger
body of code, such as a class library.
The facade pattern is one of the original design patterns that was described in the GoF book. There the description reads:
Provide a unified interface to a set of interfaces in a subsystem.
Facade defines a higher-level interface that makes the subsystem
easier to use.
Seems to fit your use case perfectly.
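As a hedged illustration only (the SDK classes below are invented stand-ins, since the actual SDK is unknown), a facade might look like this:

    // One high-level call that internally performs several SDK steps.
    public final class PaymentFacade {
        private final PaymentSdk sdk = new PaymentSdk(); // the big SDK stays hidden

        public String charge(String account, long amountCents) {
            SdkSession session = sdk.openSession();
            try {
                session.authenticate(account);
                return session.submitCharge(amountCents);
            } finally {
                session.close();
            }
        }
    }

    // Stubs standing in for the purchased SDK (illustration only).
    class PaymentSdk { SdkSession openSession() { return new SdkSession(); } }
    class SdkSession {
        void authenticate(String account) { /* ... */ }
        String submitCharge(long amountCents) { return "receipt-id"; }
        void close() { /* ... */ }
    }

Partners call only PaymentFacade; none of the SDK types appear in its public signatures, so the SDK can change behind it.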
This sounds like the classic case for the facade pattern. You want to design an API for your partners that captures your high-level use cases. The implementation would delegate to this purchased SDK, but without allowing its implementation details to propagate up to your partner's code.
http://en.wikipedia.org/wiki/Facade_pattern
The proxy pattern looks less relevant to this problem. Proxy tends to be a one-for-one mapping to the wrapped API just for infrastructure requirements. The classic example is remote method invocation.
http://en.wikipedia.org/wiki/Proxy_pattern
The Facade pattern sounds like the most appropriate one.
However, choosing the best design pattern is not a guarantee of success. Whether you will succeed really depends on the nature of the SDK that you are attempting to "wrap", and what you are really trying to achieve by wrapping it.
For example, in a past project I made the mistake of using facade-like wrappers to abstract over two different triple-store APIs. I wanted to have the freedom to switch triple-store implementations. It was a bad idea. I essentially ended up mirroring a large subset of the API functionality in the facade. It was a major implementation effort, and the end result was no simpler. In hindsight, it was simply not worth it.
In summary:
If you can make your facade API small and simple, this is a good idea. (The ideal facade is one that simply hides a whole bunch of complexity or configurability that you don't want the clients to use.)
If your facade is large and complicated, and/or entails complicated "mapping" to the actual APIs you are wrapping, you are liable to have a lot of coding to do, and you may end up with something that is objectively worse than what you started with.
I don't think this question can be answered without knowing exactly how your partners are going to use your wrapper. Design an API that's suitable for them and then just delegate the calls to the appropriate (series of) SDK calls.
You might call that a facade if you wish, but I think your task is bigger. I'd say it's a new layer in a layered architecture. In that layer you can use all the patterns you see fit.
I am building an extension of ORMLite to target Android.
What I want to do
I want to reproduce one of the behaviors that Doctrine and Symfony achieve in PHP with models.
In short:
From a yml file, generate a bunch of BaseModel classes with accessors and the things that won't change.
Let the real model inherit from this BaseModel, so that the user's changes persist even if the models are regenerated from the yml (see the sketch after this list).
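A minimal sketch of that layout (sometimes called the "generation gap" pattern); the class and field names are hypothetical:

    // BaseUser.java -- generated from the yml; overwritten on every
    // regeneration, never edited by hand.
    public abstract class BaseUser {
        private String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // User.java -- hand-written subclass; survives regeneration, so all
    // customizations live here.
    public class User extends BaseUser {
        public String displayName() {
            return "@" + getName();
        }
    }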
My question
I was wondering whether it is good practice to try to achieve such an objective on Android, or whether it will be risky in terms of performance (because of the heavy usage of inheritance).
If you think it is clumsy: how can I allow the user to change the .yml file and regenerate the models without starting from scratch and rebuilding the customized aspects of their model?
I know this can be done by some "trick", but I really would like not to reinvent the wheel.
EDIT
Sorry, I forgot to add: I am using python to do this.
Thanks
It's the correct, and probably only, Java way. In Java all calls are virtual anyway unless you use final all over the place, and doing that would mean you couldn't even use interfaces. So most calls will be virtual dispatch whatever you do; inheritance does not incur any other significant penalty.
Besides, Android devices are generally so powerful that trying to squeeze out tiny bits of performance at the cost of readability and maintainability is almost certainly not needed. In fact, most Android devices are almost as powerful as web servers that do the same things in much slower PHP and still manage thousands of users, while the Android device serves one.
I'm currently working on a project which needs to persist any kind of object (over whose implementation we have no control) so that these objects can be recovered afterwards.
We can't implement an ORM because we can't restrict the users of our library at development time.
Our first alternative was to serialize it with the Java default serialization but we had a lot of trouble recovering the objects when the users started to pass different versions of the same object (attributes changed types, names, ...).
We tried the XMLEncoder class (which transforms an object into XML), but we found that it lacks functionality (it doesn't support Enums, for example).
Finally, we also tried JAXB, but it forces our users to annotate their classes.
Any good alternative?
It's 2011, and in a commercial grade REST web services project we use the following serializers to offer clients a variety of media types:
XStream (for XML but not for JSON)
Jackson (for JSON)
Kryo (a fast, compact binary serialization format)
Smile (a binary format that comes with Jackson 1.6 and later).
Java Object Serialization.
We experimented with other serializers recently:
SimpleXML seems solid, runs at 2x the speed of XStream, but requires a bit too much configuration for our situation.
YamlBeans had a couple of bugs.
SnakeYAML had a minor bug relating to dates.
Jackson JSON, Kryo, and Jackson Smile were all significantly faster than good old Java Object Serialization, by about 3x to 4.5x. XStream is on the slow side. But these are all solid choices at this point. We'll keep monitoring the other three.
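For reference, a minimal Jackson round trip (shown here with the current com.fasterxml package names; the 1.x releases from the era of this answer lived under org.codehaus.jackson):

    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JacksonRoundTrip {
        public static class Point {
            public int x;
            public int y;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            Point p = new Point();
            p.x = 1;
            p.y = 2;

            String json = mapper.writeValueAsString(p);       // {"x":1,"y":2}
            Point back = mapper.readValue(json, Point.class); // and back again
            System.out.println(json + " -> " + back.x + "," + back.y);
        }
    }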
http://x-stream.github.io/ is nice, please take a look at it! Very convenient
of which implementation we don't have any control
The solution is don't do this. If you don't have control of a type's implementation you shouldn't be serialising it. End of story. Java serialisation provides serialVersionUID specifically for managing serialisation incompatibilities between different versions of a type. If you don't control the implementation you cannot be sure that IDs are being changed correctly when a developer changes a class.
Take the simple example of a 'Point'. It can be represented by either a cartesian or a polar coordinate system. It would be cost-prohibitive for you to build a system that could cope dynamically with these sorts of representational changes - it really has to be the developer of the class who designs the serialisation.
In short it's your design that's wrong - not the technology.
The easiest thing for you to do is still to use serialization, IMO, but put more thought into the serialized form of the classes (which you really ought to do anyway). For instance:
Explicitly define the serialVersionUID.
Define your own serialized form where appropriate.
The serialized form is part of the class' API and careful thought should be put into its design.
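A minimal sketch of both points, using a hypothetical Point class: the explicit serialVersionUID guards compatibility, and writeObject/readObject decouple the serialized form from the in-memory fields:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class Point implements Serializable {
        // Explicit version ID: lets the class evolve without breaking old streams.
        private static final long serialVersionUID = 1L;

        private transient double x; // not written automatically...
        private transient double y;

        public Point(double x, double y) { this.x = x; this.y = y; }

        // ...because we define the serialized form ourselves, decoupled
        // from the in-memory representation.
        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            out.writeDouble(x);
            out.writeDouble(y);
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            x = in.readDouble();
            y = in.readDouble();
        }
    }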
I won't go into a lot of detail, since pretty much everything I have said comes from Effective Java. Instead, I'll refer you to it, specifically the chapters about serialization. It warns you about all the problems you're running into and provides proper solutions:
http://www.amazon.com/Effective-Java-2nd-Joshua-Bloch/dp/0321356683
With that said, if you're still considering a non-serialization approach, here are a couple:
XML marshalling
As many have pointed out, this is an option, but I think you'll still run into the same problems with backward compatibility. However, with XML marshalling you'll hopefully catch these right away, since some frameworks may do some checks for you during initialization.
Conversion to/from YAML
This is an idea I have been toying with, since I really like the YAML format (at least as a custom toString() format). But really, the only difference for you is that you'd be marshalling to YAML instead of XML. The only benefit is that YAML is slightly more human-readable than XML. The same restrictions apply.
Google came up with a binary protocol, protocol buffers (http://code.google.com/apis/protocolbuffers/). It is faster and has a smaller payload compared to XML, which others have suggested as an alternative.
One of the advantages of protocol buffers is that they can exchange info with C, C++, Python and Java.
Try serializing to JSON with Gson, for example.
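A minimal Gson round trip, for illustration:

    import com.google.gson.Gson;

    public class GsonExample {
        static class Point { int x = 1; int y = 2; }

        public static void main(String[] args) {
            Gson gson = new Gson();
            String json = gson.toJson(new Point());       // {"x":1,"y":2}
            Point p = gson.fromJson(json, Point.class);   // and back again
            System.out.println(json + " -> " + p.x + "," + p.y);
        }
    }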
Also a very fast JDK serialization drop-in replacement:
http://ruedigermoeller.github.io/fast-serialization/
If serialization speed is important to you then there is a comprehensive benchmark of JVM serializers here:
https://github.com/eishay/jvm-serializers/wiki
Personally, I use Fame a lot, since it features interoperability with Smalltalk (both VW and Squeak) and Python. (Disclaimer, I am the main contributor of the Fame project.)
Possibly Castor?
Betwixt is a good library for serializing objects - but it's not going to be an automatic kind of thing. If the number of objects you have to serialize is relatively fixed, this may be a good option for you, but if your 'customer' is going to be throwing new classes at you all the time, it may be more effort than it's worth (Definitely easier than XMLEncoder for all the special cases, though).
Another approach is to require your customer to provide the appropriate .betwixt files for any objects they throw at you (that effectively offloads the responsibility to them).
Long and short: serialization is hard; there is no completely brain-dead approach to it. Java serialization is as close to a brain-dead solution as I've ever seen, but as you've found, incorrect use of the serialVersionUID value can break it. Java serialization also requires the marker 'Serializable' interface, so if you can't control your source, you are kind of out of luck on that one.
If the requirement is truly as arduous as you describe, you may have to resort to some sort of BCE (byte code enhancement) on the objects / aspects / whatever. This is getting way outside the realm of a small development project, and into the realm of Hibernate, Casper or an ORM...
SBE is an established library for fast, ByteBuffer-based serialization, and it is capable of versioning. However, it is a bit hard to use, as you need to write lengthy wrapper classes over it.
In light of its shortcomings, I recently made a Java-only serialization library inspired by SBE and the FIX protocol (a common financial-market protocol for exchanging trade/quote messages) that tries to keep the advantages of both while overcoming their weaknesses. You can take a look at https://github.com/iceberglet/anymsg
Another idea: use a cache. Caches provide much better control, scalability and robustness to the application. You still need to serialize, but the management becomes much easier within a caching service framework. A cache can be persisted in memory, on disk, in a database or in an array - or all of these options, with one acting as overflow, standby or fail-over for another. Commons JCS and Ehcache are two Java implementations; the latter is an enterprise solution, free up to 32 GB of storage (disclaimer: I don't work for Ehcache ;-)).
I want to move a legacy Java web application (J2EE) to a scripting language - any scripting language - in order to improve programming efficiency.
What is the easiest way to do this? Are there any automated tools that can convert the bulk of the business logic?
Here's what you have to do.
First, be sure you can walk before you run. Build something simple, possibly tangentially related to your main project.
DO NOT build a piece of the final project and hope it will "evolve" into the final project. This never works out well. Why? You'll make dumb mistakes. But you can't delete or rework them because you're supposed to evolve that mistake into the final project.
Next, pick a framework. What? Second? Yes. Second. Until you actually do something with some scripting languages and frameworks, you have no real useful concept of what you're doing. Once you've built something, you have an informed opinion.
"Wait," you say. "To do step 1 I had to pick a framework." True. Step 1, however, contains decisions you're allowed to revoke. Picking the wrong framework for step 1 has no long-term bad effects. It was just learning.
Third, with your strategic framework, and some experience, break down your existing site into pieces you can build with your new framework. Prioritize those pieces from most important to least important.
DO NOT plan the entire conversion as one massive project. It never works. It makes a big job more complex than necessary.
We'll use Django as the example framework. You'll have templates, view functions, model definitions, URL mapping and other details.
For each build, do the following:
Convert your existing model to a Django model. This won't ever fit your legacy SQL. You'll have to rethink your model, fix old mistakes, correct old bugs that you've always wanted to correct.
Write unit tests.
Build a conversion utility to export old data and import into the new model.
Build Django admin pages to touch and feel the new data.
Pick representative pages and rework them into the appropriate templates. You might make use of some legacy JSP pages. However, don't waste too much time with this. Use the HTML to create Django templates.
Plan your URLs and view functions. Sometimes, these view functions will leverage legacy action classes. Don't "convert". Rewrite from scratch. Use your new language and framework.
The only thing that's worth preserving is the data and the operational concept. Don't try to preserve or convert the code; it will mislead you. You might convert unit tests from JUnit to Python's unittest.
I gave this advice a few months ago. I had to do some coaching and review during the process. The revised site is up and running. No conversion from the old technology; they did the suggested rewrite from scratch. Developer happy. Site works well.
If you already have a large amount of business logic implemented in Java, then I see two possibilities for you.
The first is to use a high level language that runs within the JVM and has a web framework, such as Groovy/Grails or JRuby and Rails. This allows you to directly leverage all of the business logic you've implemented in Java without having to re-architect the entire site. You should be able to take advantage of the framework's improved productivity with respect to the web development and still leverage your existing business logic.
An alternative approach is to turn your business logic layer into a set of services available over a standard RPC mechanism - REST, SOAP, XML-RPC or some other simple XML (or YAML or JSON) over HTTP protocol (see also DWR) - so that the front end can make these RPC calls to your business logic; a sketch follows below.
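As a rough, hypothetical sketch of that second approach (OrderService and its method are invented stand-ins, and the JSON is hand-built only to keep the example short):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Wraps an existing Java business-logic call as a JSON-over-HTTP endpoint
    // that any scripting front end can consume.
    public class OrderStatusServlet extends HttpServlet {
        private final OrderService orders = new OrderService(); // existing logic

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String id = req.getParameter("id");
            resp.setContentType("application/json");
            resp.getWriter().write(
                "{\"id\":\"" + id + "\",\"status\":\"" + orders.statusOf(id) + "\"}");
        }
    }

    // Stub standing in for the legacy business logic (illustration only).
    class OrderService {
        String statusOf(String id) { return "SHIPPED"; }
    }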
The first approach, using a high-level language on the JVM, probably requires less re-architecture than the second.
If your goal is a complete migration off of Java, then either of these approaches allows you to do so in smaller steps - you may find that this kind of hybrid is better than wholesale deprecation - the JVM has a lot of libraries and integrates well with a lot of other systems.
Using an automated tool to "port" the web application will almost certainly guarantee that future programming efficiency will be minimised -- not improved.
A good scripting language can help programming efficiency when used by good programmers who understand good coding practices in that language. Automated tools are usually not designed to output code that is elegant or well-written, only code that works.
You'll only get an improvement in programming efficiency after you've put in the effort to re-implement the web app -- which, due to the time required for the reimplementation, may or may not result in an improvement overall.
A lot of the recommendations being given here assume that you -- and just you -- are doing a full rewrite of the application. This is probably not the case, and it changes the answer quite a bit.
If you've already got J2EE kicking around, the correct answer is Grails. It simply is: you probably already have Hibernate and Spring kicking around, and you're going to want the ability to flip back and forth between your old code and your new code with a minimum amount of pain. That's exactly Groovy's forte, and it is even smoother than JRuby in this regard.
Also, if you've already got a J2EE app kicking around, you've already got Java developers kicking around. In that case, learning Groovy is like falling off a log. With the exception of anonymous inner classes, Groovy is a pure superset of Java, which means you can write Java code, call it Groovy, and be done with it. As you become increasingly comfortable with the niceties of Groovy, you can integrate them into your Java-ish Groovy code. Before too long, you'll be writing very Groovy code without even really having noticed the transition.