I am working on setting up a Lagom application in production. I have tried contacting Lightbend for ConductR license but haven't heard back in ages. So, now I am looking for an alternative approach. I have multiple questions.
Since the scale of the application is pretty small right now, I think using a static service locator works for me (open to other alternatives). Also, I am using MySQL as my event store instead of the default Cassandra setup (reasons not relevant to this thread).
To suppress Cassandra and Lagom's Service Locator, I have added the following lines to my build.sbt:
lagomCassandraEnabled in ThisBuild := false
lagomServiceLocatorEnabled in ThisBuild := false
I have also added the following piece to the application.conf of my service1-impl module.
lagom.services {
  service1 = "http://0.0.0.0:8080"
}
For the dev environment, I have been able to successfully run my application using sbt runAll in a tmux session. With this configuration, there is no service locator running on the default port 8000, but I can hit service1 individually on port 8080. (Not sure if this is the expected behaviour. Comments?)
I ran sbt dist to create a zip file, then unzipped it and ran the executable inside. Interestingly, the zip was created within the service1-impl folder. So, if I have multiple modules (services?), will sbt dist create an individual zip file for each of the services?
When I run the executable created via sbt dist, it tries to connect to Cassandra and also launches a service locator, ignoring the static service locator configuration that I added. Basically, it looks like it ignores the lines I added to build.sbt. Can anyone explain this?
Lastly, if I were to have 2 services, service1 and service2, and 2 nodes in the cluster, with node 1 running service1 and node 2 running both services, what would my static service locator look like in application.conf? And since each service would have its own application.conf, would I have to copy the same static service locator configuration into every application.conf?
Would it be something like this?
lagom.services {
  service1 = "http://0.0.0.0:8080"
  service1 = "http://1.2.3.4:8080"
  service2 = "http://1.2.3.4:8081"
}
Since each specific actor would be spawned on one of the nodes, how would that work with this service locator configuration?
Also, I don't want to run this in a tmux session in production. What would be the best way to finally run this code in production?
You can get started with ConductR in dev mode immediately, for free, without contacting sales. Instructions are at: https://www.lightbend.com/product/conductr/developer
You do need to register (read: provide a valid email) and accept TnC to access that page. The sandbox is free to use for dev mode today so you can see if ConductR is right for you quickly and easily.
For production, I'm thrilled to say that soon you'll be able to deploy up to 3 nodes in production if you register w/Lightbend.com (same as above) and generate a 'free tier' license key.
Lagom is opinionated about microservices. There's always Akka and Play if those opinions aren't shared by a project. Part of that opinion is that deployment should be easy. Good tools feel 'right' in the hand. You are of course free to deploy the app as you like, but be prepared to produce more polyfill the further from the marked trails you go.
Regarding service lookup, ConductR provides redirection for HTTP service lookups for use with 'withFollowRedirects' on Play WS [1].
Regarding sbt dist, each sub-project service will be a package. You can see this in the Chirper example [2], where sbt dist generates chirp-impl.zip, friend-impl.zip, activity-stream-impl.zip, etc., as defined in Chirper's top-level build.sbt file.
As ConductR is the clean and lighted path, you can reference how it does things in order to better understand how to replace Lagom's deployment polyfill with your own. That's the interface Lagom knows best. Much of ConductR except the core is already OSS, so you can try GitHub if the docs don't cover something.
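If you do go off the marked trail, here is a minimal sketch of a production-mode service locator binding, assuming Lagom's javadsl and its built-in ConfigurationServiceLocator (which resolves services from the same lagom.services block shown above); verify the details against your Lagom version:

import com.google.inject.AbstractModule;
import com.lightbend.lagom.javadsl.api.ServiceLocator;
import com.lightbend.lagom.javadsl.client.ConfigurationServiceLocator;
import play.Configuration;
import play.Environment;

// Enable this module in application.conf via play.modules.enabled
public class ServiceLocatorModule extends AbstractModule {

    private final Environment environment;

    public ServiceLocatorModule(Environment environment, Configuration configuration) {
        this.environment = environment;
    }

    @Override
    protected void configure() {
        // In dev mode Lagom's embedded service locator takes over; only bind
        // the static, config-driven locator when running from a production dist.
        if (environment.isProd()) {
            bind(ServiceLocator.class).to(ConfigurationServiceLocator.class);
        }
    }
}

With a module like this enabled, the lagom.services entries in each service's application.conf are what client lookups resolve against, which also suggests an answer to the copy-the-config question: each service that calls another one needs those entries visible in its own configuration.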
Disclosure: I am a ConductR-ing Lightbender.
[1] http://conductr.lightbend.com/docs/1.1.x/ResolvingServices
[2] git@github.com:lagom/activator-lagom-java-chirper.git
Okay. So I'm taking over a very, very old system that uses Groovy, and this application connects to a third-party web service via its configuration file, with an entry like this (this is for prd):
webService.wsdlUrl = "jar:file:/lib/MyWebService.jar!/META-INF/wsdl/MyServices_live.wsdl"
It has its own set of Groovy configuration files for qa and prd. The qa one looks like this:
webService.wsdlUrl = "jar:file:/lib/MyWebService.jar!/META-INF/wsdl/MyServices_qa.wsdl"
These MyServices_live.wsdl and MyServices_qa.wsdl files contain the path to the actual web service URL (for prd and qa respectively), which is the one that needs to be replaced in both instances.
I already have the new URL. So what did I do? I don't have enough experience doing this, so I had to do some research. Apparently, I can use wsimport against the new web service URL (appending ?wsdl to the end of it) to generate a JAX-WS web service client. I was able to produce a set of Java source files which, I eventually learned, make up the web service client. I thought: problem solved. I just need to compile these Java files and jar the classes, so I'll have an updated MyWebService.jar.
Now I realize that, with what I have done, the WSDL location is embedded in the code:
@WebServiceClient(name = "Services", targetNamespace = "http://blahblah/", wsdlLocation = "https://newhostname.com/Service?wsdl")
With this approach, I would create 2 jars, 1 for prd and 1 for qa, which isn't the most ideal thing to do. We would like to retain the previous capability of controlling where the web service client (inside the jar) points by using a parameter (e.g. just like before, where the WSDL was specified as jar:file:/lib/MyWebService.jar!/META-INF/wsdl/MyServices_live.wsdl).
Again, to highlight the architecture, the application is running in Groovy and it is using a jar file to access some third party services.
I haven't really done any Groovy development, so I would like to avoid touching Groovy code as much as possible.
Would you have any idea how I can go about this problem?
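One hedged sketch of a way out, relying on how wsimport-generated clients normally work (all class and port names below are hypothetical stand-ins for whatever wsimport generated from your WSDL): the generated service class extends javax.xml.ws.Service and includes a constructor that takes the WSDL location at runtime, so the URL can stay in the per-environment config instead of being baked into the jar:

import java.net.URL;
import javax.xml.namespace.QName;

public class WsClientFactory {

    // wsdlUrlFromConfig can be the same style of value the Groovy config
    // already holds, e.g. "jar:file:/lib/MyWebService.jar!/META-INF/wsdl/MyServices_qa.wsdl"
    public static MyServicePort create(String wsdlUrlFromConfig) throws Exception {
        URL wsdlLocation = new URL(wsdlUrlFromConfig);
        QName serviceName = new QName("http://blahblah/", "Services");

        // This constructor is generated on every wsimport service class and
        // overrides the wsdlLocation baked into the @WebServiceClient annotation.
        Services service = new Services(wsdlLocation, serviceName);
        return service.getPort(MyServicePort.class);
    }
}

That way you ship one jar and keep switching prd/qa purely through the existing webService.wsdlUrl configuration entry.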
I have developed a microservice (a Spring Boot REST service, deployed as an executable JAR) to track all activities from third-party projects, as my requirement specifies, and it is working now.
Currently it's working as part of some projects, and now I have updated the service with some additional features.
But I can't move it to the live server without restarting the existing service, since it is deployed as a jar. I'm afraid to restart my service; a restart may lead to losing data from the integrated projects.
What improvements can I make in my architecture to solve my problem?
What about the JRebel plugin? It worked perfectly for me, but unfortunately it's not a free app. As an alternative (I used this approach with Spring MVC; with Spring Boot it could be different), I set up a soft link in the work directory pointing to the compiled path in JBoss (in my case, a directory named target with *.class and *.jar files). For me, the first solution, JRebel, is the most appropriate for you.
Finally got a solution, as commented by @Gimby.
We can do it by deploying multiple instances of the service bound to a service registry. Here I achieved it by using Eureka as the registry service, with Zuul as the proxy.
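For anyone landing here later, a minimal sketch of that setup, assuming Spring Cloud Netflix (the annotations are real; the application structure and names are invented):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Each instance of the tracking service registers itself with Eureka on startup.
@SpringBootApplication
@EnableEurekaClient
public class TrackingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(TrackingServiceApplication.class, args);
    }
}

// Deployed separately: Zuul resolves services through Eureka and routes to
// whichever instances are currently registered, so one instance can be
// restarted while another keeps serving traffic.
@SpringBootApplication
@EnableZuulProxy
class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}

The zero-downtime part comes from running at least two instances of the updated service: take one down, let Eureka deregister it, upgrade it, then repeat for the other.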
My application consumes external third-party web services (I'm successfully using CXF for this). How can I mock these web services using local files to build pre-saved responses (for test purposes)?
More specifically:
I was thinking of using 2 maven projects: dao-ws and dao-ws-mock, both having the same interface.
The first, dao-ws, really calls the web services using CXF, whereas the second, dao-ws-mock, uses local files to build pre-saved responses (used for test purposes).
mvn install builds the webapp project, whereas mvn install -DuseMock builds the webapp project with the dao-ws-mock dependency. Is this the correct way to do it? Is there a better/simpler way to do it?
Depending on the property used, I would produce the same .war but with different behavior. That sounds like bad practice to me (for example, I don't want to push a war with mock dependencies to our internal Nexus). What do you think?
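For what it's worth, the -DuseMock switch itself is straightforward with property-activated profiles; a sketch against the module names from the question (the group id and versions are invented):

<!-- in the webapp's pom.xml -->
<profiles>
  <!-- default: the real CXF-backed DAO (active when -DuseMock is absent) -->
  <profile>
    <id>real-ws</id>
    <activation>
      <property>
        <name>!useMock</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>dao-ws</artifactId>
        <version>1.0</version>
      </dependency>
    </dependencies>
  </profile>

  <!-- mvn install -DuseMock: swap in the mock DAO -->
  <profile>
    <id>mock-ws</id>
    <activation>
      <property>
        <name>useMock</name>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>dao-ws-mock</artifactId>
        <version>1.0</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>

The Nexus concern still stands, though: a war built with -DuseMock contains the mock jar, so it's worth making sure only the default build is the one that gets deployed.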
You could use SoapUI's built-in mock services: http://www.soapui.org/Getting-Started/mock-services.html
You can generate a mock service based on a WSDL, specify default responses, and even create dynamic mocks that return different responses depending on the request.
You can then build your mock services into a .war and deploy them: http://www.soapui.org/Service-Mocking/deploying-mock-services-as-war-files.html (This link shows how to do it in the GUI, but it can be done using maven as well)
You could use Sandbox - mock services are hosted and always available so there is no need to launch another server before running tests (disclaimer: I'm a founder).
You can generate mocks from service specifications (wsdl, Apiary, Swagger) and add dynamic behaviour as needed.
I am writing a small app using Java EE, with Apache Tomcat v7 and Eclipse as the IDE. When I run the project (Run on Server) I get:
http://127.0.0.1:8080/java-web/list
(That's fine)
But I don't know if there is some way to rewrite the [java-web] part just to get:
http://my-local-app.dev/list
I suppose there is some way, like in the Apache HTTP Server, using config files and enabling mod_rewrite.
I'd appreciate your help. Thanks.
In short: All of the pieces you want to change are components of your deployment environment. Unless you have a specific need to override them, it's usually easiest during development to use the URLs that are a little less pretty.
If you do want to alter them, you need to familiarize yourself with what the various parts of an HTTP URL mean. What you have in your test environment is this:
http://    127.0.0.1   :8080   /java-web/list
protocol      host       port        path
You could insert an entry into your hosts file listing my-local-app.dev at 127.0.0.1, but that would not change the port or the path.
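That hosts entry would be a single line, e.g.:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1    my-local-app.dev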
The port is determined when Tomcat starts up and is 8080 by default. The general port for HTTP is 80, but specific permission is required to bind to ports below 1024. On Linux, the authbind package makes this pretty easy; on Windows, the necessary steps will depend on your version and configuration (e.g., if you have a Group Policy).
In Tomcat, Web applications are prefixed with their names in the path; it looks like your (hypothetical?) application is named java-web.war. You can install an application as the "root application", but this requires a little bit more configuration and is generally skipped in development.
All of this can indeed also be done using something like mod_rewrite, but that seems like overkill to have slightly prettier URLs for your dev machine.
If you want your application to respond to my-local-app.dev, you need to purchase the "my-local-app.dev" domain and get Java web hosting running on it.
If your web application is named "java-web" and you do not want the URL to reflect that, you need to tell Tomcat that you want your application deployed at the ROOT location, where the name of the web application is not present in the URL. This is typically done at the deployment stage, but unfortunately there is no standard place to specify this for WAR files, so it is vendor-dependent. For example, GlassFish uses an extra XML file in your deployment.
I believe Tomcat supports this for ROOT.war files. If not, you probably need to set the META-INF/context.xml file. See https://tomcat.apache.org/tomcat-7.0-doc/config/context.html for details on what to put in this file, especially the context path.
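For example, a minimal sketch for Tomcat (the docBase path is invented): a context descriptor dropped into conf/Catalina/localhost/ deploys the application at a path derived from the descriptor's file name, so naming it ROOT.xml maps the app to the root context:

<!-- conf/Catalina/localhost/ROOT.xml -->
<Context docBase="/path/to/java-web.war" />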
I'm reading up on JMX for the first time, and trying to see if it's a feasible solution to a problem we're having in production.
We have an architecture that is constantly hitting a remote web service (managed by a different team on their own servers) and requesting data from it (we also cache results from this service, but it's a sticky problem where caching isn't very effective).
We'd like the ability to dynamically turn logging on/off at one specific point in the code, right before we hit the web service, where we can see the exact URLs/queries we're sending to the service. If we just blindly set a logging level and logged all web service requests, we'd have astronomically large log files.
JMX seems to be the solution, where we control the logging in this section with a managed bean, and then can set that bean's state (setLoggingEnabled(boolean), etc.) remotely via some manager (probably just basic HTML adaptor).
My questions are all deployment-related:
If I write the MBean interface and impl, as well as the agent (which registers the MBeans and the HTML adaptor with the platform MBean server), do I compile, package and deploy those inside my main web application (WAR), or do they have to be compiled into, say, their own JAR and sit on the JVM beside my application?
We have Dev, QA, Demo and Prod environments; is it possible to have one single HTML adaptor pointing to an MBean server which has different MBeans registered to it, one for each environment? It would be nice to have one URL to go to where you can manage beans in different environments.
If the answer to my first question above is that the MBean interface, impl and agent all deploy inside your application, then is it possible to have your JMX-enabled application deployed on one server (say, Demo), but to monitor it from another server?
Thanks in advance!
How you package the MBeans is in great part a matter of portability. Will these specific services have any realistic usefulness outside the scope of this webapp? If not, I would simply declare your webapp "JMX manageable" and build it in. Otherwise, componentize the MBeans, put them in a jar, put the jar in WEB-INF/lib and initialize them using a startup servlet configured in your web.xml.
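A minimal sketch of that startup hook, assuming a plain standard MBean and a ServletContextListener in place of a startup servlet (the bean and object names are made up):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Standard MBean naming convention: the interface must be named
// <implementation class> + "MBean".
public interface WsLoggingMBean {
    boolean isLoggingEnabled();
    void setLoggingEnabled(boolean enabled);
}

class WsLogging implements WsLoggingMBean {
    private volatile boolean loggingEnabled = false;
    public boolean isLoggingEnabled() { return loggingEnabled; }
    public void setLoggingEnabled(boolean enabled) { this.loggingEnabled = enabled; }
}

// Declared as a <listener> in web.xml so it runs at webapp startup.
class JmxBootstrapListener implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new WsLogging(), new ObjectName("myapp:type=WsLogging"));
        } catch (Exception e) {
            throw new IllegalStateException("JMX registration failed", e);
        }
    }
    public void contextDestroyed(ServletContextEvent sce) { }
}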
For the single HTML adaptor, yes it is possible. Think of it as having Dev, QA, Demo and Prod MBeanServers, and then one Master MBeanServer. Your HTML adaptor should render the master. Then you can use the OpenDMK cascading service to register cascades of Dev, QA, Demo and Prod in the master. Now you will see all 5 MBeanServers' beans in the HTML adaptor display.
Does that answer your third question?
JMX is a technology used for remote management of your application, and a situation where, for example, you want to change configuration without a restart is exactly its proper use.
But in your case, I don't see why you would need JMX. For example, if you use Log4j for your logging, you could configure a file watchdog and just change the logging to the lowest possible level, i.e. to DEBUG. This does not require a restart, and IMHO that should have been your initial design in the first place, i.e. working around loggers and levels. Right now, it is not clear what you mean and what happens with setLoggingEnabled.
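That watchdog is one call in Log4j 1.x (the path and interval here are examples):

import org.apache.log4j.PropertyConfigurator;

public class LoggingBootstrap {
    public static void init() {
        // Re-reads log4j.properties every 30 seconds, so flipping a logger
        // to DEBUG in the file takes effect without restarting the JVM.
        PropertyConfigurator.configureAndWatch("/etc/myapp/log4j.properties", 30000L);
    }
}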
In any case, the managed bean is supposed to be deployed with your application, and if you are using Spring you are in luck, since it offers really nice integration with JMX and you can expose your Spring beans as managed beans.
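That Spring integration can be as small as this sketch (the object name is invented; it assumes MBean exporting is switched on, e.g. with @EnableMBeanExport or <context:mbean-export/>):

import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

// Exported to the platform MBean server automatically once MBean
// exporting is enabled in the application context.
@Component
@ManagedResource(objectName = "myapp:type=WsLogging")
public class WsLoggingSwitch {

    private volatile boolean loggingEnabled = false;

    @ManagedAttribute
    public boolean isLoggingEnabled() { return loggingEnabled; }

    @ManagedAttribute
    public void setLoggingEnabled(boolean loggingEnabled) { this.loggingEnabled = loggingEnabled; }
}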
Finally when you connect to your process you will see the managed beans running for that JVM. So I am not sure what exactly you mean with point 2.
Anyway, I hope this helps a little.