I am working on a web application that will run on a Jetty server. I want to be able to put an object into a HashMap from the application itself and then read back the same object using a client. I created a singleton instance of the HashMap in both the application and the client, thinking it would be the same instance, but after printing both objects to the console I can see that they are two separate instances and thus do not contain the same data.
Does anyone know how to make the HashMap accessible from both the app and the client? Looking at this post, "Singleton is not really a singleton", I see that I should be using org.eclipse.jetty.webapp.WebAppContext to set the parent loader, but I am not exactly sure how to approach this. Here is what I have so far.
Server:
public static void main(String[] args) throws Exception {
Server server = new Server(8888);
ServletHandler handler = new ServletHandler();
handler.addServletWithMapping(App.class, "/S3Mock");//Set the servlet to run.
server.setHandler(handler);
server.start();
server.join();
}
App:
@SuppressWarnings("serial")
public class App extends HttpServlet {
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
S3Controller s3cont = new S3Controller();
Map<String, Object> appMap = Storage.getInstance();
s3cont.putObject(appMap, "aws.amazon.com/buckets/mybucket", new Object());
s3cont.putObject(appMap, "aws.amazon.com/buckets/mybucket2", new Object());
response.setContentType("text/html");
response.setStatus(HttpServletResponse.SC_OK);
response.getWriter().println("<h2>AWS S3</h2>");
}
}
Client:
public static void main(String[] args) {
Client client = Client.create();
//Get
WebResource webResource = client.resource("http://localhost:8888/S3Mock");
ClientResponse response = webResource.accept(MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON)
.get(ClientResponse.class);
if (response.getStatus() != 200) {
throw new RuntimeException("Failed : HTTP error code : "
+ response.getStatus());
}
S3Controller s3cont = new S3Controller();
Map<String, Object> m = Storage.getInstance();
Object b = s3cont.getObject(m, "aws.amazon.com/buckets/mybucket2");
if(b != null)
System.out.println(b.toString());
else
System.out.println("Object not found");
}
Storage:
public class Storage {
private static Map<String, Object> theMap = new HashMap<String, Object>();
private Storage() {}
public static Map<String, Object> getInstance() {
return theMap;
}
}
After firing up the Jetty server, when I run the client the console reads Object not found, which tells me that there are, in fact, two separate HashMap instances.
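For reference, here is roughly how I think the WebAppContext setup mentioned in that post would look, though I have not verified it and the WAR path below is just a placeholder:
public static void main(String[] args) throws Exception {
    Server server = new Server(8888);
    WebAppContext context = new WebAppContext();
    context.setContextPath("/");
    context.setWar("path/to/mywebapp.war"); // placeholder path to the packaged app
    // Load classes from the parent (server) classloader first,
    // so Storage would only be loaded once
    context.setParentLoaderPriority(true);
    server.setHandler(context);
    server.start();
    server.join();
}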
Related
In my Spring Boot project, one of my services depends on an external service (Amazon). I am writing integration tests for the controller classes, so I want to mock a method in the AmazonService class (as it depends on a third-party API). The method is void, takes a single Long argument, and can throw a custom application-specific exception.
The method is as follows:
class AmazonService{
public void deleteMultipleObjects(Long enterpriseId) {
String key = formApplicationLogokey(enterpriseId,null);
List<S3ObjectSummary> objects = getAllObjectSummaryByFolder(key);
List<DeleteObjectsRequest.KeyVersion> keys = new ArrayList<>();
objects.stream().forEach(object->keys.add(new DeleteObjectsRequest.KeyVersion(object.getKey())));
try{
DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(this.bucket).withKeys(keys);
this.s3client.deleteObjects(deleteObjectsRequest);
log.debug("All the Application logos deleted from AWS for the Enterprise id: {}",enterpriseId);
}
catch(AmazonServiceException e){
throw new AppScoreException(AppScoreErrorCode.OBJECT_NOT_DELETED_FROM_AWS);
}
}
}
class Test
{
@Autowired
AmazonServiceImpl amazonService;
@Autowired
EnterpriseService enterpriseService;
@Before
public void init()
{
amazonService = Mockito.mock(AmazonServiceImpl.class);
Mockito.doNothing().when(amazonService).deleteMultipleObjects(isA(Long.class));
}
@Test
public void testDeleteEnterprise(){
setHeaders();
EnterpriseDTO enterpriseDTO = createEnterpriseEntity(null,"testDeleteEnterpriseName3",null,null,null);
String postUrl = TestUrlUtil.createURLWithPort(TestConstants.ADD_ENTERPRISE,port);
HttpEntity<EnterpriseDTO> request1 = new HttpEntity<>(enterpriseDTO,headers);
ResponseEntity<EnterpriseDTO> response1 = restTemplate.postForEntity(postUrl,request1,EnterpriseDTO.class);
assert response1 != null;
Long enterpriseId = Objects.requireNonNull(response1.getBody()).getId();
String url = TestUrlUtil.createURLWithPort(TestConstants.DELETE_ENTERPRISE,port)+File.separator+enterpriseId;
HttpEntity<EnterpriseDTO> request = new HttpEntity<>(null, headers);
ResponseEntity<Object> response = restTemplate.exchange(url,HttpMethod.DELETE,request,Object.class);
Assert.assertEquals(Constants.ENTERPRISE_DELETION_SUCCESS_MESSAGE,response.getBody());
}
}
class EnterpriseResource
{
@DeleteMapping("/enterprises/{enterpriseId}")
public ResponseEntity<Object> deleteEnterprise(@PathVariable Long enterpriseId) {
log.debug("REST request to delete Enterprise : {}", enterpriseId);
enterpriseService.delete(enterpriseId);
return ResponseEntity.badRequest().body(Constants.ENTERPRISE_DELETION_SUCCESS_MESSAGE);
}
}
class EnterpriseServiceImpl
{
@Override
public void delete(Long enterpriseId) {
log.debug("Request to delete Enterprise : {}", enterpriseId);
enterpriseRepository.deleteById(enterpriseId);
amazonService.deleteMultipleObjects(enterpriseId);
}
}
I have tried various approaches to mock this method, but it didn't work; while debugging, control still goes inside the real method. I want the method to do nothing during testing.
I have tried approaches like throw(), doNothing(), spy(), etc.
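One thing I am wondering about is whether I should be using Spring Boot's @MockBean instead of @Autowired plus Mockito.mock, so that the mock actually replaces the bean inside the application context. A rough sketch of what I mean (not verified):
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class Test
{
    // Replaces the real AmazonServiceImpl bean in the application context with a Mockito mock,
    // so EnterpriseServiceImpl gets the mock injected instead of the real service
    @MockBean
    AmazonServiceImpl amazonService;

    @Before
    public void init()
    {
        Mockito.doNothing().when(amazonService).deleteMultipleObjects(Mockito.isA(Long.class));
    }

    // ... existing test methods unchanged
}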
Please help me figure out what is missing here.
Thanks
I need to inform all users when a new record is added to the database.
So I have the following code.
Application.java - here I placed the socket handler method:
public WebSocket<JsonNode> sockHandler() {
return WebSocket.withActor(ResponseActor::props);
}
Then I opened the connection
$(function() {
var WS = window['MozWebSocket'] ? MozWebSocket : WebSocket
var socket = new WS("#routes.Application.sockHandler().webSocketURL(request)")
socket.onmessage = function(event) {
console.log(event);
console.log(event.data);
console.log(event.responseJSON)
}});
My Actor class
public class ResponseActor extends UntypedActor {
private final ActorRef out;
public ResponseActor(ActorRef out) {
this.out = out;
}
public static Props props(ActorRef out) {
return Props.create(ResponseActor.class, out);
}
@Override
public void onReceive(Object response) throws Exception {
out.tell(Json.toJson(response), self());
}
}
And lastly, as I understand it, I need to invoke the Actor from my Response Controller:
public Result addPost() {
Map<String, String[]> request = request().body().asFormUrlEncoded();
Response response = new Response(request);
Map<String, String> validationMap = ResponseValidator.validate(response.responses);
if (validationMap.isEmpty()) {
ResponseDAO.create(response);
ActorRef responseActorRef = Akka.system().actorOf(ResponseActor.props(outRef));
responseActorRef.tell(response, ActorRef.noSender());
return ok();
} else {
return badRequest(Json.toJson(validationMap));
}
}
My question is: what is ActorRef out and where can I get it in my Controller?
Could you please clarify the logic for sending updates to all clients through WebSockets?
I'm working on a similar problem myself, though in Scala, so I'll see if I can assist based on what I've learned so far (I'm having my own problems getting the message to my actor after the socket opens).
Accepting a WebSocket connection with an actor isn't done with the typical request/response model like making a GET request to the server for a page. Instead, you need to use Play's WebSockets API:
import akka.actor.*;
import play.libs.F.*;
import play.mvc.WebSocket;
public static WebSocket<String> socket() {
return WebSocket.withActor(ResponseActor::props);
}
The Play WebSockets documentation should be able to help you from there better than I can:
https://www.playframework.com/documentation/2.4.x/JavaWebSockets
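As for pushing an update to all connected clients, the pattern I've been experimenting with (so treat it as a sketch rather than a verified solution) is to have each per-connection actor subscribe to the ActorSystem's event stream, and then publish the new Response to that stream from the controller instead of creating a fresh actor there. Here, out is the actor Play creates for each client's socket and passes to props; you never construct it yourself.
public class ResponseActor extends UntypedActor {
    private final ActorRef out; // provided by Play, represents this client's socket

    public ResponseActor(ActorRef out) {
        this.out = out;
    }

    public static Props props(ActorRef out) {
        return Props.create(ResponseActor.class, out);
    }

    @Override
    public void preStart() {
        // Every connected client's actor listens for Response objects on the event stream
        getContext().system().eventStream().subscribe(getSelf(), Response.class);
    }

    @Override
    public void onReceive(Object message) throws Exception {
        out.tell(Json.toJson(message), self());
    }
}
In addPost(), after ResponseDAO.create(response), you would then broadcast with Akka.system().eventStream().publish(response); rather than creating a new ResponseActor there.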
I would like to test OkHttp's HTTP/2 support, so I make multiple requests to the same host in an asynchronous style. But I found that it involves multiple connections; since the protocol is h2, it should use just one connection, right?
The code is below.
I'm using OkHttp 2.5.
public class Performance {
private final OkHttpClient client = new OkHttpClient();
private final Dispatcher dispatcher = new Dispatcher();
private final int times = 20;
public Performance(){
dispatcher.setMaxRequestsPerHost(2);
client.setDispatcher(dispatcher);
// Configure the sslContext
// MySSLSocketFactory mySSLSocketFactory = new MySSLSocketFactory();
// client.setSslSocketFactory(mySSLSocketFactory);
// client.setHostnameVerifier(new HostnameVerifier() {
// public boolean verify(String s, SSLSession sslSession) {
// return true;
// }
// });
}
public void run() throws Exception {
for(int i=0; i<times; i++) {
Request request = new Request.Builder()
.url("https://http2bin.org/delay/1")
.build();
client.newCall(request).enqueue(new Callback() {
public void onFailure(Request request, IOException e) {
e.printStackTrace();
}
public void onResponse(Response response) throws IOException {
System.out.println(response.headers().get("OkHttp-Selected-Protocol"));
}
});
}
}
public static void main(String[] args) throws Exception {
Performance performance = new Performance();
performance.run();
}
}
There's a bug in OkHttp where multiple simultaneous requests each create their own socket connection rather than coordinating on a shared connection. This only happens when the connections are created simultaneously. You can work around it by waiting roughly 500 ms before issuing the second request, so that the first connection is fully established and can be reused.
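Applied to the run() method in the question, a rough sketch of that workaround could look like the following (the 500 ms pause is a ballpark figure, not something exact from OkHttp):
public void run() throws Exception {
    Request request = new Request.Builder()
            .url("https://http2bin.org/delay/1")
            .build();
    Callback callback = new Callback() {
        public void onFailure(Request request, IOException e) {
            e.printStackTrace();
        }
        public void onResponse(Response response) throws IOException {
            System.out.println(response.headers().get("OkHttp-Selected-Protocol"));
        }
    };

    // Let the first call establish the HTTP/2 connection...
    client.newCall(request).enqueue(callback);
    // ...and give its handshake time to finish before enqueueing the rest,
    // so the remaining calls can share that connection
    Thread.sleep(500);
    for (int i = 1; i < times; i++) {
        client.newCall(request).enqueue(callback);
    }
}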
I have two Socket.IO servers behind HAProxy, and for stickiness I am inserting a cookie at HAProxy to identify the server the response came from. The problem is that when I do so using the code below, my connection is not getting upgraded to 'websocket'. I tried to debug the issue and realized that the 'transport' event is only emitted in the setTransport method of the engine.io socket class, and not in the createTransport method (in short, during the probe the 'transport' event is not emitted, so the websocket handshake request goes to a different Socket.IO server than the one where the client is already connected via XHR polling).
I was wondering if there is some other workaround for this issue, or whether the problem is genuine. Please help out. Thanks in advance.
Link to the issue with the code that I have tried:
final StringBuffer buf = new StringBuffer();
socket.io().on(Manager.EVENT_TRANSPORT, new Emitter.Listener() {
@Override
public void call(Object... args) {
final Transport transport = (Transport)args[0];
if(transport.name.equalsIgnoreCase(WebSocket.NAME)){
transport.once(Transport.EVENT_REQUEST_HEADERS, new Emitter.Listener() {
@Override
public void call(Object... args) {
@SuppressWarnings("unchecked")
Map<String, String> headers = (Map<String, String>)args[0];
if(buf.length()!=0){
headers.put("Cookie", buf.toString());
}
}
});
}
if(transport.name.equalsIgnoreCase(PollingXHR.NAME)){
transport.once(Transport.EVENT_RESPONSE_HEADERS,
new Emitter.Listener() {
@Override
public void call(Object... args) {
@SuppressWarnings("unchecked")
Map<String, String> headers = (Map<String, String>) args[0];
String cookie = null;
if (headers.containsKey("Set-Cookie") && headers.get("Set-Cookie").contains("SERVERID")) {
cookie = headers.get("Set-Cookie").split(";")[0];
}
if (cookie != null) {
final String stickyCookie = cookie;
if(buf.length()==0){
buf.append(stickyCookie);
}
transport.on(
Transport.EVENT_REQUEST_HEADERS,
new Emitter.Listener() {
@Override
public void call(Object... args) {
@SuppressWarnings("unchecked")
Map<String, String> headers = (Map<String, String>) args[0];
// set header
headers.put("Cookie", stickyCookie);
}
});
}
}
});
}
}
});
I found a workaround for my problem for now: HAProxy provides another balancing algorithm, 'source', which directs requests coming from the same IP to the same server, since it hashes on the source of the request. That way I don't have to use any of the code above (i.e. I don't have to do anything on the transport event); HAProxy takes care of everything. :)
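For reference, the relevant part of my HAProxy backend now looks roughly like this (server names and addresses here are placeholders, not my real config):
backend socketio_nodes
    balance source                      # hash on the client's source IP so it always lands on the same server
    server node1 10.0.0.1:3000 check
    server node2 10.0.0.2:3000 check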
The Akka documentation says that
... Actors should not block (i.e. passively wait while occupying a Thread) on some external entity, which might be a lock, a network socket, etc. The blocking operations should be done in some special-cased thread which sends messages to the actors which shall act on them.
source http://doc.akka.io/docs/akka/2.0/general/actor-systems.html#Actor_Best_Practices
I have found the following information so far:
I read Sending outbound HTTP request from Akka / Scala and checked the example at https://github.com/dsciamma/fbgl1.
I found the article http://nurkiewicz.blogspot.de/2012/11/non-blocking-io-discovering-akka.html explaining how to use the non-blocking HTTP client https://github.com/AsyncHttpClient/async-http-client with Akka, but it is written in Scala.
How can I write an actor that makes non-blocking HTTP requests?
It must download a remote URL as a file and then send the generated file object to the master actor. The master actor then sends it to a parser actor to parse the file...
In the last response, Koray is using the wrong reference for the sender; the correct way to do it is:
public class ReduceActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof URI) {
URI url = (URI) message;
AsyncHttpClient asyncHttpClient = new AsyncHttpClient();
final ActorRef sender = getSender();
asyncHttpClient.prepareGet(url.toURL().toString()).execute(new AsyncCompletionHandler<Response>() {
@Override
public Response onCompleted(Response response) throws Exception {
File f = new File("e:/tmp/crawler/" + UUID.randomUUID().toString() + ".html");
// Do something with the Response
// ...
// System.out.println(response1.getStatusLine());
FileOutputStream fao = new FileOutputStream(f);
IOUtils.copy(response.getResponseBodyAsStream(), fao);
System.out.println("File downloaded " + f);
sender.tell(new WordCount(f));
return response;
}
@Override
public void onThrowable(Throwable t) {
// Something wrong happened.
}
});
} else
unhandled(message);
}
}
Check out this other Akka thread: https://stackoverflow.com/a/11899690/575746
I have implemented it in this way.
public class ReduceActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof URI) {
URI url = (URI) message;
AsyncHttpClient asyncHttpClient = new AsyncHttpClient();
asyncHttpClient.prepareGet(url.toURL().toString()).execute(new AsyncCompletionHandler<Response>() {
@Override
public Response onCompleted(Response response) throws Exception {
File f = new File("e:/tmp/crawler/" + UUID.randomUUID().toString() + ".html");
// Do something with the Response
// ...
// System.out.println(response1.getStatusLine());
FileOutputStream fao = new FileOutputStream(f);
IOUtils.copy(response.getResponseBodyAsStream(), fao);
System.out.println("File downloaded " + f);
getSender().tell(new WordCount(f));
return response;
}
@Override
public void onThrowable(Throwable t) {
// Something wrong happened.
}
});
} else
unhandled(message);
}
}