I've taken the plunge and used Guice for my latest project. Overall impressions are good, but I've hit an issue that I can't quite get my head around.
Background: It's a Java 6 application that accepts commands over a network, parses those commands, and then uses them to modify some internal data structures. It's a simulator for some hardware our company manufactures. The changes I make to the internal data structures match the effect the commands have on the real hardware, so subsequent queries of the data structures should reflect the hardware state based on previously run commands.
The issue I've encountered is that the command objects need to access those internal data structures. Those structures are being created by Guice because they vary depending on the actual instance of the hardware being emulated. The command objects are not being created by Guice because they're essentially dumb objects: they accept a text string, parse it, and invoke a method on the data structure.
The only way I can get this all to work is to have those command objects be created by Guice and pass in the data structures via injection. It feels really clunky and totally bloats the constructor of the data objects.
What have I missed here?
Dependency injection works best for wiring services. It can be used to inject value objects, but this can be a bit awkward, especially if those objects are mutable.
That said, you can use Providers and @Provides methods to bind objects that you create yourself.
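For instance, a module could bind the hardware-specific structures with a @Provides method. A minimal sketch, where SimulatorModule, HardwareModel, and DataStructures are placeholder names for whatever your simulator actually uses:
import com.google.inject.AbstractModule;
import com.google.inject.Provides;
import com.google.inject.Singleton;

public class SimulatorModule extends AbstractModule {
    @Override
    protected void configure() {
        // service-style bindings go here
    }

    // Guice calls this whenever DataStructures is needed; you construct the
    // object yourself, so it can vary with the emulated hardware instance.
    @Provides
    @Singleton
    DataStructures provideDataStructures(HardwareModel model) {
        return new DataStructures(model);
    }
}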
Assuming that responding to a command is not that different from responding to an HTTP request, I think you're on the right path.
A commonly used pattern in HTTP applications is to wrap the application logic in short-lived objects that have both request parameters and some backends injected. You then instantiate such an object and call a simple, parameterless method that does all the magic.
Maybe scopes could inspire you somehow? Look into the documentation and some code examples for the technical details. In code it looks more or less like this:
import com.google.inject.Injector;
import com.google.inject.Key;

class MyRobot {
    Scope myScope; // a custom scope that supports seeding, e.g. the SimpleScope example from the Guice wiki
    Injector i;

    public void doCommand(Command c) {
        myScope.seed(Key.get(Command.class), c);
        i.getInstance(Handler.class).doSomething();
    }
}
import com.google.inject.Inject;

class Handler {
    private final Command c;
    private final Hardware h;

    @Inject
    public Handler(Command c, Hardware h) {
        this.c = c;
        this.h = h;
    }

    public boolean doSomething() {
        h.doCommand(c);
        // or c.modifyState(h) if you want c to access internals of h
        return true;
    }
}
Some people frown upon this solution, but I've seen this in code relying heavily on Guice in the past in at least two different projects.
Granted, you'll inject a few value objects in the constructors, but if you think of them not as value objects but rather as parameters of the class that change its behaviour, it all makes sense.
At the moment Room is working well for DB-to-UI integration:
Dao for DB operations
Repository for interacting with the Daos and caching data into memory
ViewModel to abstract the Repository and link to UI lifecycle
However, another scenario comes up which I am having a hard time understanding how to properly implement Room usage.
I have a network API that is purely static and constructed as a reflection of the server's REST architecture.
There is a parser method that walks through the URL structure, translates it to the existing API via reflection, and invokes any final method that it finds.
In this API each REST operation is represented by a method in the class matching the equivalent REST naming structure, i.e.:
/contacts in REST equates to class Contacts.java in the API
POST, GET, DELETE in REST equate to methods in the respective class
example:
public class Contacts {
    public static void POST() {
        // operations are conducted here
    }
}
Here is my difficulty: how should I integrate Room inside that POST method correctly/properly?
At the moment I have a makeshift solution, which is to instantiate the repository I need to insert data into and consume it. But this is a one-off every time the method is invoked, since there is absolutely no lifecycle here, nor is there a way to have one granular enough to be worth having in place (I don't know how long I will need a repository inside the API, to justify having it cached for X amount of time).
Example of what I currently have working:
public class Contacts {
    public static void POST(Context context, List<Object> list) {
        new ContactRepository(context).addContacts(list);
    }
}
Alternatively using it as a singleton:
public class Contacts {
    public static void POST(Context context, List<Object> list) {
        ContactRepository.getInstance(context).addContacts(list);
    }
}
Everything works well with view-related Room interaction, given the lifecycle that exists there, but in this case I have no idea how to do this properly. These aren't just situations where a view might call a network request (then I'd just use NetworkBoundResource or similar); this can also be server-sent data the app never requested, such as updates to app-related data, like a user starting a conversation with you. The app has no way of knowing about that, so it comes from the server first.
How can this be properly implemented? I have not found any guide for this scenario and I am afraid I might be doing this incorrectly.
EDIT: This project is not in Kotlin, as per the tags used and the examples provided; as such, please provide solutions that do not depend on migrating to Kotlin to use its coroutines or similar Kotlin features.
It seems like using a singleton pattern, as I was already doing, is the way to go. There appears to be no documentation available for a simple scenario such as this one, so this is basically a guessing game. Whether it is bad practice or carries any memory-leak risks I have no idea because, again, there is just no documentation for this.
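For what it's worth, a minimal sketch of what that singleton could look like (ContactRepository, ContactDao, AppDatabase, and Contact are placeholder names; Room rejects main-thread writes by default, hence the background executor, and holding only the application context avoids leaking an Activity):
import android.content.Context;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContactRepository {
    private static volatile ContactRepository instance;

    private final ContactDao contactDao;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    private ContactRepository(Context context) {
        // Keep only the application context so the singleton cannot leak an Activity.
        AppDatabase db = AppDatabase.getInstance(context.getApplicationContext());
        contactDao = db.contactDao();
    }

    public static ContactRepository getInstance(Context context) {
        if (instance == null) {
            synchronized (ContactRepository.class) {
                if (instance == null) {
                    instance = new ContactRepository(context);
                }
            }
        }
        return instance;
    }

    public void addContacts(List<Contact> contacts) {
        // Run the insert off the main thread, as Room requires.
        executor.execute(() -> contactDao.insertAll(contacts));
    }
}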
I started playing with PySpark to do some data processing. It was interesting to me that I could do something like
rdd.map(lambda x : (x['somekey'], 1)).reduceByKey(lambda x,y: x+y).count()
And it would send the logic in these functions over potentially numerous machines to execute in parallel.
Now, coming from a Java background, if I wanted to send an object containing some methods to another machine, that machine would need to know the class definition of the object I'm streaming over the network. Recently Java introduced functional interfaces, which let the compiler accept a lambda as an implementation of the interface (i.e. MyInterface impl = () -> System.out.println("Stuff");),
where MyInterface would have just one method, doStuff().
However, if I wanted to send such a function over the wire, the destination machine would need to know the implementation (impl itself) in order to call its doStuff() method.
My question boils down to: how does Spark, written in Scala, actually send functionality to other machines? I have a couple of hunches:
The driver streams class definitions to other machines, and those machines dynamically load them with a class loader. Then the driver streams the objects and the machines know what they are, and can execute on them.
Spark has a set of methods defined on all machines (core libraries) which are all that are needed for anything I could pass it. That is, my passed function is converted into one or more function calls on the core library. (Seems unlikely since the lambda can be just about anything, including instantiating other objects inside)
Thanks!
Edit: Spark is written in Scala, but I was interested in hearing how this might be approached in Java (where a function cannot exist unless it's in a class, thus changing the class definition, which needs to be updated on worker nodes).
Edit 2:
This is the problem in Java, in case of confusion:
import java.io.ObjectOutputStream;
import java.net.Socket;

public class Playground
{
    private static interface DoesThings
    {
        public void doThing();
    }

    public void func() throws Exception {
        Socket s = new Socket("addr", 1234);
        ObjectOutputStream oos = new ObjectOutputStream(s.getOutputStream());
        oos.writeObject("Hello!"); // works just fine, you're just sending a string
        oos.writeObject((DoesThings) () -> System.out.println("Hey, im doing a thing!!")); // sends the object, but error on the other machine
        DoesThings dt = (DoesThings) () -> System.out.println("Hey, im doing a thing!!");
        System.out.println(dt.getClass());
    }
}
The System.out.println(dt.getClass()) call prints:
"class JohnLibs.Playground$$Lambda$1/23237446"
Now, assume that the interface definition wasn't in the same file, but in a shared file both machines had. This driver program, func(), still essentially creates a new class which implements DoesThings.
As you can see, the destination machine is not going to know what JohnLibs.Playground$$Lambda$1/23237446 is, even though it knows what DoesThings is. It all comes down to this: you can't pass a function without it being bound to a class. In Python you could just send a string with the definition and then execute that string (since it's interpreted). Perhaps that's what Spark does, since it uses Scala instead of Java (if Scala can have functions outside of classes).
Java bytecode, which is, of course, what both Java and Scala are compiled to, was created specifically to be platform independent. So, if you have a classfile, you can move it to any other machine, regardless of "silicon" architecture, and provided it has a JVM of at least that version, it will run. James Gosling and his team did this deliberately to allow code to move between machines right from the very start, and it was easy to demonstrate in Java 0.98 (the first version I played with).
When the JVM tries to load a class, it uses an instance of a ClassLoader. Classloaders encompass two things: the ability to fetch the binary of a bytecode file, and the ability to load the code (verify its integrity, convert it into an in-memory instance of java.lang.Class, and make it available to other code in the system). In Java 1, you mostly had to write your own classloader if you wanted to take control of how the bytes were loaded, although there was a Sun-specific AppletClassLoader, which was written to load classfiles over HTTP rather than from the file system.
A little later, in Java 1.2, the "how to fetch the bytes of the classfile" part was separated out into URLClassLoader. That can use any supported protocol to load classes. Indeed, the protocol support mechanism was, and is, extensible via pluggable protocol handlers. So now you can load classes from anywhere, without the risk of making mistakes in the harder part, which is how you verify and install the class.
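For illustration, a minimal sketch of loading a class over HTTP with URLClassLoader (the URL and class name here are made up):
import java.net.URL;
import java.net.URLClassLoader;

public class RemoteLoadDemo {
    public static void main(String[] args) throws Exception {
        // Any protocol with a registered handler works: file, http, jar, ...
        URLClassLoader loader = new URLClassLoader(
                new URL[] { new URL("http://example.com/classes/") });
        Class<?> remote = loader.loadClass("com.example.RemoteTask");
        Object task = remote.getDeclaredConstructor().newInstance();
        System.out.println("Loaded " + task.getClass() + " from the remote codebase");
    }
}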
Along with that, Java's RMI mechanism allows a serialized object (the class name, along with the "state" part of an object) to be wrapped in a MarshalledObject. This adds "where this class may be loaded from", represented as a URL. RMI automates the conversion of real objects in memory to MarshalledObjects and also ships them around on the network. If a JVM receives a marshalled object for which it already has the class definition, it always uses that class definition (for security). If not, however, then provided a bunch of criteria are met (security, and just plain working-correctly, criteria), the classfile may be loaded from that remote server, allowing a JVM to load classes for which it has never seen the definitions. (Obviously, the code for such systems must typically be written against ubiquitous interfaces; if not, there's going to be a lot of reflection going on!)
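A sketch of that round trip with java.rmi.MarshalledObject (SomeTask is a placeholder type; in real RMI the wrapped object travels over the network, and the codebase annotation comes from the java.rmi.server.codebase property on the sender):
import java.io.Serializable;
import java.rmi.MarshalledObject;

public class MarshalDemo {
    // A stand-in for any serializable type the receiver may or may not know.
    static class SomeTask implements Serializable {
        void run() { System.out.println("doing a thing"); }
    }

    public static void main(String[] args) throws Exception {
        // The serialized form records the codebase URL alongside the object's bytes.
        MarshalledObject<SomeTask> wrapped = new MarshalledObject<>(new SomeTask());
        // ...in real use, 'wrapped' is what gets shipped to the other JVM...
        SomeTask copy = wrapped.get(); // re-creates the object, loading the class remotely if unknown
        copy.run();
    }
}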
Now, I don't know (indeed, I found your question while trying to determine the same thing) whether Spark uses the RMI infrastructure. I do know that Hadoop does not, seemingly because the authors wanted to create their own system (which is fun and educational, of course) rather than use a flexible, configurable, extensively-tested (including security-tested!) system.
However, all that has to happen to make this work in general are the steps I outlined for RMI. The requirements are essentially:
1) Objects can be serialized into some byte-sequence format understood by all participants.
2) When objects are sent across the wire, the receiving end must have some way to obtain the classfile that defines them. This can be a) pre-installation, b) RMI's approach of "here's where to find this", or c) the sending system ships the jar. Any of these can work.
3) Security should probably be maintained. In RMI, this requirement was rather "in your face", but I don't see it in Spark, so they either hid the configuration or perhaps just fixed what it can do.
Anyway, that's not really an answer, since I described the principles, with a specific example, rather than the actual specific answer to your question. I'd still like to find that!
When you submit a spark application to the cluster, your code is deployed to all worker nodes, so your class and function definitions exist on all nodes.
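To connect this to the Java snippet in the question: once your classes are on every node, the remaining step is making the lambda itself serializable, which only requires a serializable target interface (Spark's own Java function interfaces extend java.io.Serializable for exactly this reason). A local sketch, with the network replaced by a byte array:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableLambdaDemo {
    interface DoesThings extends Serializable { // a serializable target type fixes writeObject
        void doThing();
    }

    public static void main(String[] args) throws Exception {
        DoesThings dt = (DoesThings) () -> System.out.println("Hey, I'm doing a thing!!");

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
            oos.writeObject(dt); // works: the lambda is serialized via SerializedLambda
        }
        // Caveat: deserialization still needs the capturing class
        // (SerializableLambdaDemo) on the receiving JVM's classpath,
        // which is exactly why Spark ships your application jar to the workers.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            ((DoesThings) ois.readObject()).doThing();
        }
    }
}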
I have a library with several packages,
let's say
package a;
package b;
inside package a I have the public a_class
inside package b I have the public b_class
a_class uses b_class.
I need to generate a library from this, but I do not want the client to see b_class.
The only solution I know of is to flatten my beautifully understandable packages into a single package and to use default (package) access for b_class.
Is there another way to do so? Maybe using interfaces or some form of design pattern?
If you reject moving the code to a separate, controlled server, all you can do is hinder the client programmer when trying to use your APIs. Let's begin by applying good practices to your design:
Leave your packages organized as they are now.
For every class you want to "hide":
Make it non-public.
Extract its public API to a new, public interface:
public interface MyInterface {...}
Create a public factory class to get an object of that interface type.
public class MyFactory
{
    public static MyInterface createObject() {
        return new MyImplementation(); // MyImplementation stands for your hidden, non-public class
    }
}
So far, you have your packages loosely coupled, and the implementation classes are now non-public (as good practices preach, and as you already said). Still, they are available through the interfaces and factories.
So, how can you avoid "stranger" clients executing your private APIs? What comes next is a creative, slightly complicated, yet valid solution, based on hindering the client programmers:
Modify your factory classes: add a new parameter to every factory method:
public class MyFactory
{
    public static MyInterface createObject(Macguffin parameter) {
        // validate 'parameter' (see below), then...
        return new MyImplementation(); // the hidden, non-public class
    }
}
So, what is Macguffin? It is a new interface you must define in your application, with at least one method:
public interface Macguffin
{
    public String dummyMethod();
}
But do not provide any usable implementation of this interface. In every place your code needs to provide a Macguffin object, create it through an anonymous class:
MyFactory.createObject(new Macguffin() {
    public String dummyMethod() {
        return "x";
    }
});
Or, even more advanced, through a dynamic proxy object, so no ".class" file of this implementation would be found even if the client programmer dares to decompile the code.
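A sketch of that dynamic-proxy variant, reusing the Macguffin and MyFactory names from above:
import java.lang.reflect.Proxy;

public class MacguffinClient {
    public static void main(String[] args) {
        // The implementation class is generated at runtime, so there is no
        // Macguffin ".class" file for the client programmer to decompile.
        Macguffin token = (Macguffin) Proxy.newProxyInstance(
                Macguffin.class.getClassLoader(),
                new Class<?>[] { Macguffin.class },
                (proxy, method, methodArgs) -> "x"); // dummyMethod() returns "x"

        MyInterface obj = MyFactory.createObject(token);
    }
}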
What do you get from this? Basically, it dissuades the programmer from using a factory which requires an unknown, undocumented, incomprehensible object. The factory classes should just take care not to receive a null object, and to invoke the dummy method and check that the return value is not null either (or, if you want a higher security level, add an undocumented secret-key rule).
So this solution relies upon a subtle obfuscation of your API to discourage the client programmer from using it directly. The more obscure the names of the Macguffin interface and its methods, the better.
I need to generate a library from this, but I do not want the client to see b_class. The only solution I know of is to flatten my beautifully understandable packages into a single package and to use default package access for b_class. Is there another way to do so?
Yes, make b_class package-private (default access) and instantiate it via reflection for use in a_class.
Since you know the full class name, reflectively load the class:
Class<?> clz = Class.forName("b.b_class");
Find the constructor you want to invoke:
Constructor<?> con = clz.getDeclaredConstructor();
Allow yourself to invoke the constructor by making it accessible:
con.setAccessible(true);
Invoke the constructor to obtain your b_class instance:
Object o = con.newInstance();
Hurrah, now you have an instance of b_class. However, you can't call b_class's methods on an instance of Object, so you have two options:
Use reflection to invoke b_class's methods (not much fun, but easy enough and may be ok if you only have a few methods with few parameters).
Have b_class implement an interface that you don't mind the client seeing and cast your instance of b_class to that interface (reading between the lines I suspect you may already have such an interface?).
You'll definitely want to go with option 2 to minimise your pain, unless it gets you back to square one again (polluting the namespace with types you don't want to expose the client to). A combined sketch of option 2 follows.
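Putting the pieces together (BInterface is a hypothetical public interface that b_class implements; only the instantiation is reflective, and every later call is an ordinary method call):
import java.lang.reflect.Constructor;

public class a_class {
    private final BInterface helper;

    public a_class() throws ReflectiveOperationException {
        Class<?> clz = Class.forName("b.b_class");
        Constructor<?> con = clz.getDeclaredConstructor();
        con.setAccessible(true); // bypass the package-private access check
        helper = (BInterface) con.newInstance(); // cast to the public interface
    }

    public void doWork() {
        helper.someMethod(); // hypothetical method on BInterface; no reflection needed here
    }
}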
For full disclosure, two notes:
1) There is a (small) overhead to using reflection vs direct instantiation and invocation. If you cast to an interface you'll only pay the cost of reflection on the instantiation. In any case it likely isn't a problem unless you make hundreds of thousands of invocations in a tight loop.
2) There is nothing to stop a determined client from finding out the class name and doing the same thing, but if I understand your motivation correctly, you just want to expose a clean API, so this isn't really a worry.
When using Kotlin, you can use the internal modifier for your library classes.
If I understand correctly, you are asking about publishing your library for third-party usage without disclosing part of your source? If that's the case, you can use ProGuard, which can obfuscate your library. By default everything is obfuscated, unless you specify the things you want to keep from being obfuscated.
If you want to distribute [part of] your code without the client being able to access it at all, that means that the client won't be able to execute it either. :-O
Thus, you just have one option: put the sensitive part of your code on a public server and distribute a proxy to access it, so that your code is kept and executed on your server, and the client is still able to execute it through the proxy, but without accessing it directly.
You might use a servlet, a web service, an RMI object, or a simple TCP server, depending on the complexity of your code.
This is the safest approach I can think of, but it also has a price to pay: in addition to complicating your system, it introduces a network delay for each remote operation, which might be a big deal depending on the performance requirements. Also, you should secure the server itself, to avoid intrusions. This could be a good solution if you already have a server that you could take advantage of.
In one of the projects I'm working on, we have different systems.
Since those systems should evolve independently, we have a number of CommunicationLib classes to handle communication between those systems.
CommunicationLib objects are not used inside any system, but only in communication between systems.
Since many functionalities require data retrieval, I am often forced to create "local" system objects that are equal to the CommLib objects. I use a Converter utility class to convert from such objects to CommLib objects.
The code might look like this:
public static CommLibObjX objXToCommLib(objX p) {
    CommLibObjX b = new CommLibObjX();
    b.setAddressName(p.getAddressName());
    b.setCityId(p.getCityId());
    b.setCountryId(p.getCountryId());
    b.setFieldx(p.getFieldx());
    b.setFieldy(p.getFieldy());
    [...]
    return b;
}
Is there a way to generate such code automatically, using Eclipse or other tools? Some fields might have different names, but I would like to generate a draft of the converter method and edit it manually.
Try Apache commons-beanutils:
BeanUtils.copyProperties(b, p);
It copies property values from the origin bean (the second argument) to the destination bean (the first argument) for all cases where the property names are the same.
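A sketch of the converter built on it; note that in commons-beanutils the destination bean comes first, and any fields whose names differ still need manual mapping:
import org.apache.commons.beanutils.BeanUtils;

public class ObjXConverter {
    public static CommLibObjX objXToCommLib(objX p) {
        CommLibObjX b = new CommLibObjX();
        try {
            // Copies every property whose name matches in both beans.
            BeanUtils.copyProperties(b, p);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Property copy failed", e);
        }
        // Differently named fields are still mapped by hand, e.g.:
        // b.setFieldx(p.getSomeOtherField());
        return b;
    }
}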
If you feel the need to have source code automatically generated, you are probably doing something wrong. I think you need to reexamine the design of the communication between your two "systems". How do these "systems" communicate?
If they are on different computers or in different processes, design a wire protocol for them to use, rather than serializing objects.
If they are classes used together, design better entity classes, which are suitable for them both.
My original question was quite incorrect. I have classes (not POJOs) which have shortcut methods for the business-logic classes, to give the consumer of my API the ability to use it like:
Connector connector = new ConnectorImpl();
Entity entity = new Entity(connector);
entity.createProperty("propertyName", propertyValue);
entity.close();
Instead of:
Connector connector = new ConnectorImpl();
Entity entity = new Entity();
connector.createEntityProperty(entity, "propertyName", propertyValue);
connector.closeEntity(entity);
Is it good practice to create such shortcut methods?
Old question
At the moment I am developing a small framework and have a pretty nice separation of the business logic into different classes (connectors, authentication tokens, etc.), but one thing still bothers me. I have methods which manipulate POJOs, like this:
public class BusinessLogicImpl implements BusinessLogic {
    public void closeEntity(Entity entity) {
        // business logic
    }
}
And POJO entities which also have a close method:
public class Entity {
    public void close() {
        businessLogic.closeEntity(this);
    }
}
Is it good practice to provide two ways to do the same thing? Or is it better to just remove all "proxy" methods from the POJOs for clarity's sake?
You should remove the methods from the "POJOs"... They aren't really POJOs if you encapsulate functionality like this. The reason for this comes from SOA design principles, which basically say you want loose coupling between the different layers of your application.
If you are familiar with inversion-of-control containers, like Google Guice or the Spring Framework, this separation is a requirement. For instance, let's say you have a CreditCard POJO, a CreditCardProcessor service, and a DebugCreditCardProcessor service that doesn't actually charge the CC money (for testing).
@Inject
private CardProcessor processor;

...

CreditCard card = new CreditCard(...params...);
processor.process(card);
In my example, I am relying on an IoC container to provide me with a CardProcessor. Whether this is the debug one or the real one, I don't really care, and neither does the CreditCard object. The one that is provided is decided by your application configuration.
If you had coupling between the processor and the credit card, where you could say card.process(), you would always have to pass the processor into the card constructor. CreditCards can be used for other things besides processing, however. Perhaps you just want to load a CreditCard from the database and get the expiration date; it shouldn't need a processor for this simple operation.
You may argue: "The credit card could get the processor from a static factory." While true, singletons are widely regarded as an anti-pattern, since they require keeping global state in your application.
Keeping your business logic separate from your data model is always a good thing to do to reduce the coupling required. Loose coupling makes testing easier, and it makes your code easier to read.
I do not see your case as "two methods", because the logic of the implementation is kept in businessLogic. It would be akin to asking whether it is a good idea that java.lang.System has both a getProperties() method and a getProperty(String) method; the latter is not so much a different method as a shortcut to the former.
But, in general, no, it is not good practice. Mainly because:
a) if the way to do that thing changes in the future, you need to remember that you have to touch two implementations.
b) when reading your code, other programmers will wonder whether the two methods are different.
Also, it does not fit very well with assigning responsibilities to a specific class for a given task, which is one of the tenets of OOP.
Of course, all absolute rules may have a special case where some consideration (mainly performance) may suggest breaking the rule. Think about whether you win something by doing so, and document it heavily.