Java design question: Is this a good design? - java

I am writing a Java client that communicates with a remote server via HTTP/XML.
The server sends commands to my client in XML, like this:
<command>
<name>C1</name>
<param>
.....
</param>
</command>
There are about 10 or more different commands (C1, C2, ...), each of which has a different set of params.
My client will process the command, then respond to the server with an execution result, which looks like this:
<C1>
<code>200</code>
<desc>xxx</desc>
</C1>
I am only familiar with C and very new to Java and OOP,
so my question is simple: how do I design the following logic gracefully in an OOP way?
1.Convert the XML string to an XML Object
2.Find the corresponding executor based on the 'name' element of the XML, and parse the params
3.Execute the command along with the params
4.Convert the result to an XML Object
5.Convert the XML Object to an XML string
Is this a good design?
1.Define an abstract base class and one sub-class per command, which includes the following methods:
void parseCommand(MyXMLObject obj);
void execute();
MyXMLObject generateResult();
or just a simple method:
MyXMLObject execute(MyXMLObject obj);
and these fields:
String mCommandName;
int mRetCode;
String mRetDesc;
2.Then define a factory class to return an instance of one of the sub-classes based on the XML Object.
3.The logic part code:
MyXMLObject obj = MyXMLUtil.getXMLObject(XMLString);
MyCommand command = MyCommandFactory.getCommand(obj);
MyXMLObject retObj = command.execute();
String retStr = MyXMLUtil.getString(retObj);
...//network operation

Generally speaking, it is a good design (you probably need more interfaces and such, and there are various refinements).
The greater problem is that this is, in many ways, a reinvention of the wheel. There are numerous frameworks that handle mapping from POJOs (java objects) to structured textual formats (such as XML or JSON). On top of that, there are many general implementations of command frameworks. It might be worth investigating available options before you provide your own implementation.
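A minimal, self-contained sketch of the command + factory idea from the question (all names hypothetical; a plain String stands in for MyXMLObject to keep it runnable):

```java
import java.util.HashMap;
import java.util.Map;

// Each command knows how to execute itself and produce a result.
interface Command {
    String execute(String params);
}

class C1Command implements Command {
    @Override
    public String execute(String params) {
        // Real code would parse params and do the actual work.
        return "<C1><code>200</code><desc>ok</desc></C1>";
    }
}

// The factory maps command names to instances; new commands register here.
class CommandFactory {
    private static final Map<String, Command> COMMANDS = new HashMap<>();
    static {
        COMMANDS.put("C1", new C1Command());
    }

    static Command getCommand(String name) {
        Command c = COMMANDS.get(name);
        if (c == null) {
            throw new IllegalArgumentException("Unknown command: " + name);
        }
        return c;
    }
}
```

The dispatch then reduces to `CommandFactory.getCommand(name).execute(params)`; swapping the hard-coded map for reflection or a DI container is a common refinement.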

In principle, yes, that's how things work. But I think you should separate the different concerns.
Basically I'd say you have a service layer that needs to execute commands and return the results. This layer should know nothing about XML, so I'd design the plain Java Object model and then worry about tying it to your XML infrastructure, probably using an existing XML mapping technology like XStream.

As the others said, the design looks pretty good. To make your life easier, you should have a look at Simple XML to parse XML -> Java and to create XML from Java objects.

Related

Confusion regarding protobufs

I have a server that makes frequent calls to microservices (actually AWS Lambda functions written in python) with raw JSON payloads and responses on the order of 5-10 MB. These payloads are gzipped to bring their total size under lambda's 6MB limit.
Currently payloads are serialized to JSON, gzipped, and sent to Lambda. The responses are then gunzipped, and deserialized from JSON back into Java POJOs.
Via profiling we have found that this process of serializing, gzipping, gunzipping, and deserializing accounts for the majority of our server's CPU usage by a large margin. Looking into ways to make serialization more efficient led me to protobufs.
Switching our serialization from JSON to protobufs would certainly make our (de)serialization more efficient, and might also have the added benefit of eliminating the need to gzip to get payloads under 6MB (network latency is not a concern here).
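The serialize-then-gzip round trip being profiled can be sketched with just the JDK (the JSON string below is a stand-in for the real gson output):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

class PayloadCodec {
    // Outbound path: JSON string -> gzipped bytes (to fit under the 6MB limit).
    static byte[] gzip(String json) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Inbound path: gzipped bytes -> JSON string, ready for deserialization.
    static String gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

Every one of these four steps burns CPU on the full 5-10 MB payload, which is why a denser wire format looks attractive.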
The POJOs in question look something like this (Java):
public class InputObject {
... 5-10 metadata fields containing primitives or other simple objects ...
List<Slot> slots; // usually around 2000
}
public class Slot {
public double field1; //20ish fields with a single double
public double[] field2; //10ish double arrays of length 5
public double[][] field3; //1 2x2 matrix of doubles
}
This is super easy with JSON: gson.toJson(inputObj) and you're good to go. Protobufs seem like a whole different beast, requiring you to use the generated classes and litter your code with stuff like
Blah blah = Blah.newBuilder()
        .setFoo(f)
        .setBar(b)
        .build();
Additionally, this results in an immutable object which requires more hoop jumping to update. Just seems like a bad bad thing to put all that transport layer dependent code into the business logic.
I have seen some people recommend writing wrappers around the generated classes so that all the protobuffy-ness doesn't leak into the rest of the codebase and that seemed like a good idea. But then I am not sure how I could serialize the top level InputObject in one go.
Maybe protobufs aren't the right tool for the job here, but it seems like the go-to solution for inter-service communication when you start looking into improving efficiency.
Am I missing something?
With protobuf you can always serialize in one go. There is an example in the Java tutorial online:
https://developers.google.com/protocol-buffers/docs/javatutorial
AddressBook.Builder addressBook = AddressBook.newBuilder();
...
FileOutputStream output = new FileOutputStream(args[0]);
addressBook.build().writeTo(output);
Also, you might want to serialize your proto into a byte array and then encode it in Base64 to carry it through your wrapper:
String yourPayload = BaseEncoding.base64().encode(blah.toByteArray());
There are additional libraries that can help you transform existing JSON into a proto, such as JsonFormat:
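The same encoding step can be done with the JDK's built-in java.util.Base64, avoiding the Guava dependency (the byte array here is a stand-in for blah.toByteArray()):

```java
import java.util.Base64;

class Base64Codec {
    // Encode serialized proto bytes as Base64 text for a string-based wrapper.
    static String encode(byte[] protoBytes) {
        return Base64.getEncoder().encodeToString(protoBytes);
    }

    // Decode back to bytes before handing them to YourProto.parseFrom(...).
    static byte[] decode(String payload) {
        return Base64.getDecoder().decode(payload);
    }
}
```

Note that Base64 inflates the payload by roughly a third, which matters if you are flirting with the 6MB limit.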
https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/util/JsonFormat
And the usage is straightforward as well:
To serialize as JSON:
JsonFormat.printer().print(yourProto);
To build a proto from JSON:
JsonFormat.parser().merge(yourJsonString, yourProtoBuilder);
No need to iterate through each element of the object.
Let me know if this answers your question!

Call Java Function in XSLT2

I have the Java method ...
public static Object parseXMLtoXLSX(File xmlFile, String path)
So I want to call the method from XSLT.
I understand that I have to introduce the class in my XSLT file, e.g. like this:
<xsl:stylesheet version="2.0" xmlns:trans="pathToMyJavaClass">
But how can I call the method?
Is this the right way?
<xsl:value-of select="trans:parseXMLtoXLSX($xmlFile,$path)" />
But how can I store the Java File object that I get back from the method in a variable?
Edit: I can't show the < > in this question...
The calling conventions from XSLT to other languages depend entirely on which XSLT processor you are using, so you need to provide this information.
If you're using XSLT 2.0 under Java then it's likely that the processor you are using is Saxon, in which case the calling conventions are documented at http://saxonica.com/documentation/index.html#!extensibility/functions
In cases where you're handling objects (like a Java java.util.File) that have no equivalent in the XDM data model used by XSLT, the calling conventions can be quite complicated. It's simpler if you organize things so that you only need to pass simple values like strings and integers. For example, write another method in Java that accepts a String (containing a file name) rather than a File.
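Following that advice, a sketch of a string-only wrapper around the original method (class and path handling are hypothetical; the stylesheet would call trans:parseXMLtoXLSX with two strings and receive the result path back as a string):

```java
import java.io.File;

class XlsxBridge {
    // String-in/string-out wrapper so the XSLT never has to touch
    // java.io.File; the result path comes back as a plain string.
    static String parseXMLtoXLSX(String xmlPath, String outputDir) {
        File xmlFile = new File(xmlPath);
        // Real code would delegate to the existing method:
        //   parseXMLtoXLSX(xmlFile, outputDir);
        // and return something the stylesheet can store in a variable.
        return new File(outputDir, xmlFile.getName() + ".xlsx").getPath();
    }
}
```

With this shape, `<xsl:variable name="result" select="trans:parseXMLtoXLSX('in.xml', 'out')"/>` binds an ordinary string, sidestepping the XDM-to-Java object mapping entirely.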

Can I generate .thrift files from existing Java/Scala interfaces and data types?

Is there an easy way to take existing Java/Scala datatypes and API interfaces and produce corresponding .thrift files? Having Thrift generate server data structures is excessively invasive, as it has these consequences:
I cannot annotate my data structures (e.g. for XML, JSON, Hibernate persistence, ...)
this pattern conflicts with other serialization frameworks that want to own, or require modification of, my source files.
As a result, it seems like Thrift forces itself into being the exclusive persistence format for my server -- unless, that is, I create a data-marshalling wrapper around Thrift or the other persistence frameworks that deal with these data structures (Hibernate, Jackson, Scala BeanProperty, ...). However, this defeats the purpose of an automated data-marshalling tool such as Thrift and leads straight to the error-prone world of having to maintain identical-but-separate interfaces and data structures (= a waste of talented engineer time and energy).
I'm totally happy with Thrift auto-generating client code. However, I (strongly) feel that I need the freedom to edit the data structures my server deals with in the APIs.
You can use Swift.
To make a long story short; annotate your classes and interfaces (structs and services in Thrift parlance). Then you can either run Swift's client/server code or you can use the swift2thrift generator to produce equivalent IDL and use the Thrift compiler to generate clients (the latter is what I recommend for what you're describing).
Once that is done, to create a TProcessor that you can use in a TServlet with normal TProtocol/TTransport objects, do something like this in your servlet's init():
protected void addProcessor(String name, Object svc) {
    ThriftCodecManager codecManager = new ThriftCodecManager(
            new CompilerThriftCodecFactory(false)
    );
    List<ThriftEventHandler> eventList = Collections.emptyList();
    ThriftServiceProcessor proc = new ThriftServiceProcessor(codecManager, eventList, svc);
    this.processors.put(name, proc);
    this.multiplex.registerProcessor(name, NiftyProcessorAdapters.processorToTProcessor(proc));
}
The multiplex instance variable in this example is an instance of TMultiplexedProcessor from libthrift.jar.
Then just do this in your doPost():
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    getServletContext().log("entering doPost()");
    TTransport inTransport = null;
    TTransport outTransport = null;
    try {
        InputStream in = request.getInputStream();
        OutputStream out = response.getOutputStream();
        TTransport transport = new TIOStreamTransport(in, out);
        inTransport = transport;
        outTransport = transport;
        TProtocol inProtocol = getInProtocolFactory().getProtocol(inTransport);
        TProtocol outProtocol = getOutProtocolFactory().getProtocol(outTransport);
        if (multiplex.process(inProtocol, outProtocol)) {
            out.flush();
        } else {
            throw new ServletException("multiplex.process() returned false");
        }
    } catch (TException te) {
        throw new ServletException(te);
    } finally {
        if (inTransport != null) {
            inTransport.close();
        }
        if (outTransport != null) {
            outTransport.close();
        }
    }
}
FYI - TJSONProtocol doesn't work with versions of Swift prior to 0.14, so at this time you'll need to build from source if you need to use that.
Also... Swift forces your structs to be marked final... JPA spec says entities can't be final... seems to work ok with Eclipselink anyhow but YMMV
Since you mention Java: For a project of ours I implemented an Xtext-based solution which generates source code and Thrift IDL files from a project-specific DSL. Since Xtext/Xtend is based on Java, it may at least be worth a look to see whether that solution could fit your needs. However, I have a slight feeling that it may be overkill in your situation.
As a result, its seems like thrift forces itself into being the exclusive persistence format for my server [...]
That's not Thrift's fault, and it could be any format.
However, I (strongly) feel that I need the freedom to edit the data structures my server deals with in the APIs.
I fully agree. Especially in such cases it is recommended to separate serialization from internal data structures, and to look at serialization as what it really is: just one way to manipulate data.1) If you have more than one serialization format, you will always end up with a lot of stuff being implemented multiple times in a similar way. This effect is more or less unavoidable, providing us with the opportunity to do it in whatever way is best for the project.
The simple serialization strategy works fine with one format. It becomes cumbersome with two or three formats, and finally turns into a real PITA with more than three. 2) As always, the maintainable and extendable solution comes at the cost of some added complexity. 3)
You see, although you may pick one of the formats as the "favourite master data format", technically you don't have to.
(1) A surprising percentage of programming beginners' tutorials and books fail to bring that point across properly. (2) This was exactly the use case I solved by means of a DSL, as mentioned at the beginning. (3) Yes, I know what YAGNI means. And I know how to estimate the expectation value of risks.
No. Thrift is only supposed to be used for messages between client and server, not for persistence inside the server: how would you query it, for example?

Forcing devs to explicitly define keys for configuration data

We are working on a project with multiple developers, and currently the retrieval of values from a configuration file is somewhat "wild west":
Everybody uses some string to retrieve a value from the Config object
Those keys are spread across multiple classes and packages
Sometimes they are not even declared as constants
Naming of the keys is inconsistent and the config file (.properties) looks messy
I would like to sort that out and force everyone to explicitly define their configuration keys. Ideally in one place to streamline how config keys actually look.
I was thinking of using an enum as the key, turning my retrieval method
getConfigValue(String key)
into something like
getConfigValue(ConfigKey key)
NOTE: I am using this approach since the Preferences API seems a bit overkill to me plus I would actually like to have the configuration in a simple file.
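The enum-keyed version could be sketched like this (all names hypothetical):

```java
import java.util.Properties;

class Config {
    // One central place where every key is declared and named consistently.
    enum ConfigKey {
        DB_URL("db.url"),
        CACHE_SIZE("cache.size");

        final String propertyName;
        ConfigKey(String propertyName) { this.propertyName = propertyName; }
    }

    private final Properties properties;

    Config(Properties properties) { this.properties = properties; }

    // Callers can no longer pass arbitrary strings; the compiler
    // rejects any key that isn't declared in the enum.
    String getConfigValue(ConfigKey key) {
        return properties.getProperty(key.propertyName);
    }
}
```

Adding a key now means adding one enum constant, so the set of keys is discoverable in a single file.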
What are the cons of this approach?
First off, FWIW, I think it's a good idea. But you did specifically ask what the "cons" are, so:
The biggest "con" is that it ties any class that needs to use configuration data to the ConfigKey class. Adding a config key used to mean adding a string to the code you were working on; now it means adding to the enum and to the code you were working on. This is (marginally) more work.
You're probably not markedly increasing inter-dependence otherwise, since I assume the class that getConfigValue is part of is the one on which you'd define the enum.
The other downside to consolidation is if you have multiple projects on different parts of the same code base. When you develop, you have to deal with delivery dependencies, which can be a PITA.
Say Project A and Project B are scheduled to be released in that order. Suddenly political forces change in the 9th hour and you have to deliver B before A. Do you repackage the config to deal with it? Can your QA cycles deal with repackaging, or does it force a reset in their timeline?
Typical release issues, but just one more thing you have to manage.
From your question, it is clear that you intend to write a wrapper class for the raw Java Properties API, with the intention that your wrapper class provides a better API. I think that is a good approach, but I'd like to suggest some things that I think will improve your wrapper API.
My first suggested improvement is that an operation that retrieves a configuration value should take two parameters rather than one, and be implemented as shown in the following pseudocode:
class Configuration {
    public String getString(String namespace, String localName) {
        return properties.getProperty(namespace + "." + localName);
    }
}
You can then encourage each developer to define a string constant value to denote the namespace for whatever class/module/component they are developing. As long as each developer (somehow) chooses a different string constant for their namespace, you will avoid accidental name clashes and promote a somewhat organised collection of property names.
My second suggested improvement is that your wrapper class should provide type-safe access to property values. For example, provide getString(), but also provide methods with names such as getInt(), getBoolean(), getDouble() and getStringList(). The int/boolean/double variants should retrieve the property value as a string, attempt to parse it into the appropriate type, and throw a descriptive error message if that fails. The getStringList() method should retrieve the property value as a string and then split it into a list of strings based on using, say, a comma as a separator. Doing this will provide a consistent way for developers to get a list value.
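A sketch of such type-safe accessors on top of Properties (class and error messages are hypothetical):

```java
import java.util.Properties;

class TypedConfig {
    private final Properties properties;

    TypedConfig(Properties properties) { this.properties = properties; }

    String getString(String namespace, String localName) {
        String key = namespace + "." + localName;
        String value = properties.getProperty(key);
        if (value == null) {
            throw new IllegalArgumentException("Missing property: " + key);
        }
        return value;
    }

    // Parse as int, failing with a message that names the offending key.
    int getInt(String namespace, String localName) {
        String value = getString(namespace, localName);
        try {
            return Integer.parseInt(value.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Property " + namespace + "."
                    + localName + " is not an int: \"" + value + "\"");
        }
    }

    boolean getBoolean(String namespace, String localName) {
        return Boolean.parseBoolean(getString(namespace, localName).trim());
    }
}
```

The descriptive failure messages are the point: a typo in the file surfaces as "Property cache.size is not an int" rather than a bare NumberFormatException deep in unrelated code.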
My third suggested improvement is that your wrapper class should provide some additional methods such as:
int getDurationMilliseconds(String namespace, String localName);
int getDurationSeconds(String namespace, String localName);
int getMemorySizeBytes(String namespace, String localName);
int getMemorySizeKB(String namespace, String localName);
int getMemorySizeMB(String namespace, String localName);
Here are some examples of their intended use:
cacheSize = cfg.getMemorySizeBytes(MY_NAMESPACE, "cache_size");
timeout = cfg.getDurationMilliseconds(MY_NAMESPACE, "cache_timeout");
The getMemorySizeBytes() method should convert string values such as "2048 bytes" or "32MB" into the appropriate number of bytes, and getMemorySizeKB() does something similar but returns the specified size in terms of KB rather than bytes. Likewise, the getDuration<units>() methods should be able to handle string values like "500 milliseconds", "2.5 minutes", "3 hours" and "infinite" (which is converted into, say, -1).
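A rough sketch of the getMemorySizeBytes() parsing idea (the accepted suffixes are assumptions; a real version would support more units and fractional values):

```java
import java.util.Locale;

class SizeParser {
    // Convert values like "2048 bytes" or "32MB" into a byte count.
    static long parseMemorySizeBytes(String value) {
        String v = value.trim().toLowerCase(Locale.ROOT).replace(" ", "");
        if (v.endsWith("bytes")) {
            return Long.parseLong(v.substring(0, v.length() - 5));
        }
        if (v.endsWith("kb")) {
            return Long.parseLong(v.substring(0, v.length() - 2)) * 1024L;
        }
        if (v.endsWith("mb")) {
            return Long.parseLong(v.substring(0, v.length() - 2)) * 1024L * 1024L;
        }
        throw new IllegalArgumentException("Unrecognized size: " + value);
    }
}
```

getMemorySizeKB() and friends would just divide by the appropriate factor after parsing.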
Some people may think that the above suggestions have nothing to do with the question that was asked. Actually, they do, but in a sneaky sort of way. The above suggestions will result in a configuration API that developers will find to be much easier to use than the "raw" Java Properties API. They will use it to obtain that ease-of-use benefit. But using the API will have the side effect of forcing the developers to adopt a namespace convention, which will help to solve the problem that you are interested in addressing.
Or to look at it another way, the main con of the approach described in the question is that it offers a win-lose situation: you win (by imposing a property-naming convention on developers), but developers lose because they swap the familiar Java Properties API for another API that doesn't offer them any benefits. In contrast, the improvements I have suggested are intended to provide a win-win situation.

Generating ActionScript value objects from an xsd schema

Are there any tools available for transforming types defined in an XSD schema (which may or may not include other XSD files) into ActionScript value objects? I've been googling this for a while but can't seem to find any tools, and I'm pondering whether writing such a tool would save us more time right now than simply coding our value objects by hand.
Another possibility I've been considering is using a tool such as XMLBeans to transform the types defined by the schema into Java classes and then converting those classes to ActionScript. However, I've come to realize that there are about a gazillion Java -> AS3 converters out there and the general consensus seems to be that they sort of work, i.e., I have no idea which tool is a good fit.
Any thoughts?
For Java -> AS generation, check out GAS3 from the Granite Data Services project:
http://www.graniteds.org/confluence/display/DOC/2.+Gas3+Code+Generator
This is the kind of thing you can write yourself too, especially if you leverage a tool like Ant and write a custom Task to handle it. In fact, I worked on this last year and open-sourced it:
https://github.com/cliffmeyers/Java2As
I don't have any kind of translator either. What I do is have an XML object wrapped by an ActionScript object. Then you have a getter/setter for each value that converts XML -> whatever and whatever -> XML. You still have to write the getters/setters, but you can have a macro/snippet handle that work for you.
So for XML like:
<person>
<name>Bob</name>
...
</person>
Then we have an XML Object Wrapper class and extend it. Normally:
class XMLObjectWrapper
{
    var _XMLObject:XML;

    function set XMLObject(xml:XML):void
    {
        _XMLObject = xml;
    }

    function get XMLObject():XML
    {
        return _XMLObject;
    }
}

class person extends XMLObjectWrapper
{
    function set name(value:String):void
    {
        _XMLObject.name = value;
    }

    function get name():String
    {
        return _XMLObject.name;
    }
}
