I think this is more of a design question than a direct coding issue.
I want to implement a WebSocket service that serves an updated dataset, fetched from a foreign http:// resource, to its clients.
But I want to have the data available before the first client connects, so the @OnOpen annotation won't do.
In the HttpServlet world I would simply do:
@Override
public void init() throws ...
I could not figure out a suitable way of doing this with JSR-356 alone.
I tried a custom ServerEndpointConfig.Configurator, but that does not seem to give me access to anything similar to init() from HttpServlet.
So my question is:
Letting the WebSocket endpoint class extend HttpServlet gives me access to the init() method, and I can place my initialization logic there.
Is that a suitable way to solve my problem, or have I missed something in JSR-356 that does the job elegantly, without importing mostly unused servlet packages?
Thank you very much!
A class annotated with @ServerEndpoint("/myEndPoint") is instantiated each time a new connection is opened via @OnOpen. It is not a static class or a singleton (e.g. it does not behave like a Spring @Service).
I have a similar problem to yours: I need to make a WebSocket the observer of a Spring web service (don't ask, I agree with you that the architecture is the problem). To make it an observer I have to register it with the observable class, but because the WebSocket has no initialization hook there is no clear spot to add the observer; adding it in the @OnOpen method would register it again on every new connection.
The only solution I found is a workaround. A WebSocket class usually has a static Set of the peers connected to it; you need something similar for your initialization. Either use a static block or a static flag in the constructor. In my case I solved it with:
private static boolean observerFlag = false;
private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

public MyWebSocket() {
    // 'observable' is the Observable instance this socket registers with (declared elsewhere)
    if (!observerFlag) {
        observable.addObserver(this);
        observerFlag = true;
    }
}
And to remove the observer:
@OnClose
public void onClose(Session peer) {
    peers.remove(peer);
    if (peers.isEmpty()) {
        observable.deleteObserver(this);
        observerFlag = false;
    }
}
I repeat that this is a workaround; I think there is a more elegant solution.
I am using DynamoDB from Amazon Web Services as my database. The client provided by AWS uses HTTP to make requests to the database. This code will run on a server that accepts requests from users and forwards them to DynamoDB. I have a few questions about how to design this.
Since this is a server accepting many requests I am using the async client http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/AmazonDynamoDBAsyncClient.html instead of the sync one, because I don't want every request to block; instead I will wait for a future to return (better performance). Is it best to make this client static?
public class Connection {
    AmazonDynamoDBAsyncClient client;
    static DynamoDB dynamoDB;

    public Connection() {
        client = new AmazonDynamoDBAsyncClient(new ProfileCredentialsProvider());
        dynamoDB = null;
    }

    public void setConnection(String endpoint) {
        client.setEndpoint(endpoint);
        dynamoDB = new DynamoDB(client);
    }

    public DynamoDB getConnection() {
        return dynamoDB;
    }
}
Then to call this static variable from main:
public class Main {
    Connection c;
    DynamoDB con;

    public Main() throws Exception {
        try {
            c = new Connection();
            c.setConnection("http://dynamodbserver:8000");
            con = c.getConnection();
            // Do stuff with the connection now
        } catch (Exception e) {
            System.err.println("Program failed:");
            System.err.println(e.getMessage());
        }
    }
}
Is this a good approach? What will happen if two users request the static variable at the same time? (I am using a framework called Vert.x, so this program will run on a single thread, but there will be multiple instances of this program.)
You should not make the connection a static member. Also, the way you are setting the endpoint of your connection is:
not thread safe
may lead to race conditions
The endpoint should be set up at the time of AmazonDynamoDBAsyncClient construction, and that async client should then be used in the DynamoDB construction. Please also refer to the documentation.
Why don't you use the SDK provided by AWS for DynamoDB? It will take care of connection management for you in a thread-safe manner.
On a side note, if you still want to roll out your own solution for connection management, I would recommend that you use a dependency injection framework. I would highly recommend google-guice.
Here is sample code for DI through Guice.
public class DynamoDBProvider implements Provider<DynamoDB> {

    // Notice that the endpoint is set at the time of client construction and the
    // get() method provides an instance of DynamoDB.
    // In another configuration class, we define that DynamoDB will be
    // served under singleton scope, so you will have a single instance.

    private final AmazonDynamoDBAsyncClient asyncClient;

    @Inject
    public DynamoDBProvider(ProfileCredentialsProvider creds,
                            @Named("Endpoint") String endpoint) {
        this.asyncClient = new AmazonDynamoDBAsyncClient(creds);
        // The endpoint is configuration, so it is also injected into the provider.
        asyncClient.setEndpoint(endpoint);
    }

    @Override
    public DynamoDB get() {
        return new DynamoDB(asyncClient);
    }
}
This is how you ensure your connection instance is served as singleton.
public class DynamoDBModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(DynamoDB.class).toProvider(DynamoDBProvider.class).in(Singleton.class);
    }
}
Learning DI through Guice or any other framework will require some effort, but it will go a long way in making your code maintainable and unit testable. Please be aware that to get the benefits of DI you will have to refactor your project so that all dependencies are injected.
static would work since it is shared by all Connection instances, but you can further improve the code by applying the singleton design pattern to the Connection class, so that only one Connection instance is created and used to serve all requests.
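As a rough sketch of that singleton idea (the class name ConnectionHolder is just an illustration, not from the question; the endpoint is the one used above):
public final class ConnectionHolder {

    // Single eagerly created instance shared by all callers.
    private static final ConnectionHolder INSTANCE = new ConnectionHolder("http://dynamodbserver:8000");

    private final AmazonDynamoDBAsyncClient client;
    private final DynamoDB dynamoDB;

    private ConnectionHolder(String endpoint) {
        client = new AmazonDynamoDBAsyncClient(new ProfileCredentialsProvider());
        client.setEndpoint(endpoint); // endpoint fixed at construction time
        dynamoDB = new DynamoDB(client);
    }

    public static ConnectionHolder getInstance() {
        return INSTANCE;
    }

    public DynamoDB getConnection() {
        return dynamoDB;
    }
}
Callers would then use ConnectionHolder.getInstance().getConnection() instead of constructing a new Connection per request.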
I just started using dagger 2 and have not used any other dependency injection framework before. Now I'm stuck with a cyclic dependency and I don't know how to solve it properly. Consider the following example in a server application, which uses the Reactor pattern with Java NIO:
I have a Handler object attached to a selection key that is executed when new information arrives on the network:
class Handler implements Runnable {
    Server server;
    Client client;

    public void run() {
        // static factory method that eventually calls a method on server, passing in 'client' as argument
        Command.parse(input).execute(server, client);
    }

    public void send(String string) {
        // enqueue string and inform reactor about interest in sending
    }
}
The Client class holds some state about the connected client. All connected Clients are managed in the Server class.
class Client {
    Handler h;

    public void send(String response) {
        h.send(response);
    }
}
When new input arrives, the Handler creates the Command objects, executes them on the server, and the server eventually will respond to the client.
So what I'm doing right now is creating a Client object manually in Handler, passing in a this reference, in order to be able to send the response:
client = new Client(this);
So my question now is: is something wrong with this design? Is it possible to decouple Client and Handler? Or should I just live with this and not use dependency injection everywhere?
I appreciate your suggestions
If you want the client to be able to send a message back through the handler, the following can break your cycle:
// lives in a common package both classes access
public interface ResponseClass {
    public void sendSomeMessage(String message);
}

// Handler could also implement ResponseClass directly, but I prefer using an additional class for flexibility.
public class Handler {

    public void whenYouCreateClient() {
        Client client = new Client(new HandlerWrapper(this));
    }

    public static class HandlerWrapper implements ResponseClass {
        private final Handler handler;

        public HandlerWrapper(Handler handler) {
            this.handler = handler;
        }

        public void sendSomeMessage(String message) {
            handler.send(message);
        }
    }

    public void send(String string) {
        // enqueue string and inform reactor about interest in sending
    }
}
public class Client {
    ResponseClass rc; // naming should be improved :)

    public void sendMessage(String message) {
        rc.sendSomeMessage(message);
    }
}
At runtime your classes are still tied together, but as far as your design is concerned, your Client is only attached to a generic ResponseClass.
you can have a hierarchy like:
common <-- client <-- handler
where handler knows of client and common
and client only knows of common. (assuming you put the interface in the common package)
instead of
client <--> handler
I deliberately used sendSomeMessage to highlight that it's a different method you call on the wrapper/interface, but of course you can name them however you like.
One remark: I haven't used Dagger 2, so I cannot say for sure that this can be done with that product, but this is how I would decouple this sort of cyclic dependency.
I realized that what I was really trying to solve was not breaking the dependency between Client and Handler, but using dependency injection instead of the new operator.
The solution I was looking for: inject a ClientFactory into the constructor of Handler and use clientFactory.create(this) to create the Client object instead. The brilliant library AutoFactory allows you to create such a factory with a simple @AutoFactory annotation. The constructor of the generated factory class is automatically annotated with @Inject.
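A minimal sketch of how that could look, based on the classes from the question (the exact wiring is my assumption; ClientFactory is the class AutoFactory generates for Client):
import com.google.auto.factory.AutoFactory;
import javax.inject.Inject;

// Client.java: AutoFactory generates a ClientFactory with a create(Handler) method.
@AutoFactory
public class Client {
    private final Handler handler;

    Client(Handler handler) {
        this.handler = handler;
    }

    public void send(String response) {
        handler.send(response);
    }
}

// Handler.java: the generated ClientFactory is injected; the Client is created with 'this'.
public class Handler implements Runnable {
    private final ClientFactory clientFactory;
    private Client client;

    @Inject
    public Handler(ClientFactory clientFactory) {
        this.clientFactory = clientFactory;
    }

    @Override
    public void run() {
        if (client == null) {
            client = clientFactory.create(this); // replaces new Client(this)
        }
        // handle input, eventually responding via client
    }

    public void send(String string) {
        // enqueue string and inform reactor about interest in sending
    }
}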
I am trying to implement a WebSocket through a servlet. My app server is Tomcat 7.
I could find examples where the WebSocketServlet class is used, but this class is deprecated and removed in Tomcat 8.
I see another alternative, which is to annotate the class with the following:
@ServerEndpoint(value = "/websocket/test")
I need help in understanding:
How will I use this annotation in servlets? Are servlets irrelevant in the case of WebSockets?
If I create a normal class with the above annotation, and other annotations like @OnOpen, @OnClose etc., do I need to put an entry for that class in web.xml? Or is web.xml irrelevant too?
Any hello-world link will also be very helpful.
Thank you.
============Edited====================
I have tried the chat example found in this link
http://svn.apache.org/viewvc/tomcat/trunk/webapps/examples/WEB-INF/classes/websocket/
But when I try to invoke the socket through JavaScript, the events are not reaching my server at all.
Finally I figured this out, so I am answering here for others to refer to.
1) How will I use this annotation in servlets? Are servlets irrelevant in the case of WebSockets?
Apparently yes, we don't need servlets for WebSockets.
2) If I create a normal class with the above annotation, and other annotations like @OnOpen, @OnClose etc., do I need to put an entry for that class in web.xml? Or is web.xml irrelevant too?
No entry is needed in web.xml either.
Following is sample server-side code.
@ServerEndpoint(value = "/echo")
public class Echo {

    @OnOpen
    public void start(Session session) {
        //TODO
    }

    @OnClose
    public void end() {
        //TODO
    }

    @OnMessage
    public void incoming(String message) {
        //TODO
    }

    @OnError
    public void onError(Throwable t) throws Throwable {
        //TODO
    }
}
For the client, you can use JavaScript if you have an HTML5-compatible browser.
Alternatively, you can write Java clients using the Tyrus library. Refer here
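For reference, a rough sketch of a standard JSR-356 Java client (which Tyrus implements); the ws:// URL and the EchoClient class name are placeholders, not from the original question:
@ClientEndpoint
public class EchoClient {

    @OnMessage
    public void onMessage(String message) {
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        // Connect to the annotated server endpoint and send a text message.
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        Session session = container.connectToServer(EchoClient.class,
                URI.create("ws://localhost:8080/myapp/echo"));
        session.getBasicRemote().sendText("hello");
        Thread.sleep(5000); // keep the JVM alive long enough to receive the echo
        session.close();
    }
}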
I have, hopefully, a trivial problem. I wrote a super short 'program' for Apache Camel to read the context XML and then do as it is told:
public class CamelBridge {
    public static void main(String[] args) throws Exception {
        ApplicationContext context = new FileSystemXmlApplicationContext("camelContext.xml");
    }
}
I bridge between two JMS queues. The program works, but only right after I start it; then it stops sending messages. If I restart it, it sends them all again. Is there something obviously wrong that I am missing here?
Edit:
I have updated my Main, but it does not help:
public class Bridge {
    private Main main;

    public static void main(String[] args) throws Exception {
        Bridge bridge = new Bridge();
        bridge.boot();
    }

    public void boot() throws Exception {
        main = new Main();
        main.enableHangupSupport();
        main.setApplicationContextUri("camelContext.xml");
        main.run();
    }
}
Edit 2
I think I found the issue (not the solution). After enabling tracing, I found the error message, which reads:
jms cannot find object in dispatcher with id --some id--
And after some more digging I found that this is connected to clientLeasePeriod in the remoting file. Any idea if it is possible to fix this kind of problem on the Camel side?
You have to prevent the JVM from exiting.
Check this example: http://camel.apache.org/running-camel-standalone-and-have-it-keep-running.html
Provided your app contains only the Main class and the XML file that configures Camel's context, the context will be destroyed when main() returns (so your routes are destroyed as well), even if a different context runs the JMS implementation on the same JVM. Sergey's link should help you.
If you just want to make it work to test things, add while(true) as the last line of your main. Note this is not the best approach :).
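As an illustration only, a minimal sketch of that quick-and-dirty approach applied to the original CamelBridge main:
public class CamelBridge {
    public static void main(String[] args) throws Exception {
        ApplicationContext context = new FileSystemXmlApplicationContext("camelContext.xml");
        // Keep the JVM alive so the Camel context (and its routes) is not destroyed.
        while (true) {
            Thread.sleep(1000);
        }
    }
}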
I realised that the problem was with the server on which the program was installed. The server thought that it was on a public network rather than a private network (Windows Server 2012). After changing the network to private, the process worked correctly.
Note: Camel did not give any errors about this, so it can be difficult to spot.
The question might seem stupid/trivial, and it might be, but I simply cannot understand how to achieve my goal. (Sorry if the title is misleading, I couldn't think of a better one.)
I have a webpage on an App Engine server which uses GWT. I have client code and server code. The client code can call RPC methods without any problem (my problem has nothing to do with the "gwt-client" at all).
I have the following classes:
// MyClassService.java - client package
@RemoteServiceRelativePath("myService")
public interface MyClassService extends RemoteService {
    public void doSomething();
}

// MyClassServiceAsync.java - client package
public interface MyClassServiceAsync {
    public void doSomething(AsyncCallback<Void> callback);
}

// MyClassServiceImpl.java - server package
public class MyClassServiceImpl extends RemoteServiceServlet implements MyClassService {
    @Override
    public void doSomething() {
        // does something
    }
}
A scenario and what I want to do:
I've got a remote client, in other words a client who is not connecting through the page via the "GWT interface"; it is a client who simply makes GET and POST requests to a path on the server (from elsewhere). This remote client is not "using" GWT at all. The client connects through an HttpServlet; inside this servlet I want to reuse the RPC mechanics so that I don't have to rewrite the interfaces, which live on the client side and use client-dependent code (the implementation is already server-side).
To reuse the existing methods on the server side I could create an instance of MyClassServiceImpl.java and just use those. BUT, as you can see above, they are implemented as synchronous methods, since GWT-RPC automatically makes the calls asynchronous when using GWT-RPC.
How would I go about reusing MyClassServiceImpl on the server side and also making the calls asynchronous?
Also, if I'm wrong with the approach I'm taking, please suggest some other solution. For example, one solution might be for the remote client to communicate directly with the RemoteServiceServlet instead of connecting through an HttpServlet I create, but I don't know if that's possible (and if it is, please tell me how)!
Thank you!
EDIT (thanks to some answers below I got some insight and will try to improve my question):
The server-side implementation of the methods is SYNCHRONOUS, meaning they will block until results are returned. When invoking these methods from the gwt-client code, they are 'automatically' made ASYNCHRONOUS; one can call them by doing the following:
MyClassServiceAsync service = (MyClassServiceAsync) GWT.create(MyClassService.class);
ServiceDefTarget serviceDef = (ServiceDefTarget) service;
serviceDef.setServiceEntryPoint(GWT.getModuleBaseURL() + "myService");

service.doSomething(new AsyncCallback<Void>() {
    @Override
    public void onSuccess(Void result) {
        // do something when we know the server has finished doing stuff
    }

    @Override
    public void onFailure(Throwable caught) {
    }
});
As you can see from the above code, the doSomething method can take an AsyncCallback without even having an implementation for it. This is what I wanted on the server side, so that I didn't have to use threads or create a new implementation for "async usage". Sorry if I was unclear!
1) Any client can call MyClassServiceImpl.doSomething() with the current configuration. MyClassServiceImpl is a servlet and is properly exposed. In order to achieve communication this way, the client must be able to "speak" the GWT dialect for data transport. Google may provide libraries implementing this; I haven't used any, so I cannot make suggestions.
An example, proof-of-concept setup: check the network communications with Firebug to get an idea of what is going on, then try calling the service with curl.
2) If you do not want to use the GWT dialect, you can easily expose the same service as REST (JSON) or a web service (SOAP). There are plenty of libraries, e.g. for the REST case RESTEasy and Jersey. You do not mention any server-side frameworks (Spring? Guice? CDI?), so the example will be simplistic.
I'd suggest implementing your business method in a class independent of the transport method:
public class MyBusinessLogic {
    public void doSomething() {
        ...
    }
}
Then, the transport implementations use this business logic class, adding only transport-specific stuff (e.g. annotations):
GWT:
public class MyClassServiceImpl extends RemoteServiceServlet implements MyClassService {
    @Override
    public void doSomething() {
        MyBusinessLogic bean = ... // get it from IoC, new, whatever
        bean.doSomething();
    }
}
JAX-RS:
#Path("myService")
public class MyResource {
#GET
public void doSomething() {
MyBusinessLogic bean = ... // get it from IoC, new, whatever
bean.doSomething();
}
}
So the transport endpoints are just shells for the real functionality, implemented in one place, the class MyBusinessLogic.
Is this a real example? Your method takes no arguments and returns no data.
Anyhow, you can create a new servlet and invoke it via a normal HTTP request. The servlet then just invokes the target method:
public class MyNewServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) {
        MyBusinessLogic bean = ... // get it from IoC, new, whatever
        bean.doSomething();
    }
}