I'm going crazy here.
I have a local project set up in IntelliJ that makes many different service calls. All I'm trying to do is route those calls through Fiddler so that I can easily see the headers and bodies of requests and responses.
I've read countless posts saying to set the VM options of the IntelliJ run configuration to the Fiddler defaults, i.e.
-DproxySet=true
-DproxyHost=127.0.0.1
-DproxyPort=8888
Been there, done that; I've put these arguments in so many different VM option places, but nothing seems to work.
The Fiddler config settings are all set to default with the only thing I've changed being setting up HTTPS decryption.
The IntelliJ project is a Spring Framework application built with Maven.
I'm fairly new to the Java/IntelliJ world, but this should be simple. What am I missing? Are there any other settings that would override what I'm trying to do?
Edit: Here is where the code actually makes the service call.
public RxWebTarget getWebTarget(int divisionId) {
    // Look up the client pool for the given division.
    ClientHolder clientHolder = this.clients.get(EnumWarehouse.Division.fromId(divisionId));
    RestClientFactory.JaxRSClientPool bagRestClient = clientHolder.pool;
    RxClient rxClient = bagRestClient.getRxClient(clientHolder.poolName);
    // Build the web target from the pool's host name and base path.
    return rxClient.target(bagRestClient.getHostName()).path(bagRestClient.getBasePath());
}
That call will only work if divisionId is 71 or 72; 76 and 77 aren't supported yet.
The issue lay within the client we were using to actually make the service call. We were using a JAX-RS client. I set up a simple example using an HttpURLConnection and things worked just fine.
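For anyone who lands here, a minimal sketch of that kind of check (the URL below is a placeholder): HttpURLConnection honors the standard JVM proxy properties (http.proxyHost/http.proxyPort, plus the https.* variants), so requests made this way show up in Fiddler, while many JAX-RS client implementations need their proxy configured explicitly instead.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ProxyCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; run with -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888
        URL url = new URL("http://example.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("Status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            in.lines().limit(5).forEach(System.out::println);
        }
    }
}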
I am working on setting up a Lagom application in production. I have tried contacting Lightbend for a ConductR license but haven't heard back in ages. So now I am looking for an alternative approach. I have multiple questions.
Since the scale of the application is pretty small right now, I think using a static service locator works for me for now (open to other alternatives). Also, I am using MySQL as my event store instead of the default Cassandra setup (reasons not relevant to this thread).
To suppress Cassandra and Lagom's service locator, I have added the following line to my build.sbt:
lagomCassandraEnabled in ThisBuild := false
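(For completeness: if I understand the Lagom docs correctly, there is also a companion sbt key for disabling the dev-mode service locator itself; this is an assumption about the standard settings, not something from my build.)
lagomServiceLocatorEnabled in ThisBuild := false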
I have also added the following piece to the application.conf of my service1-impl module.
lagom.services {
  service1 = "http://0.0.0.0:8080"
}
For the dev environment, I have been able to run my application successfully using sbt runAll in a tmux session. With this configuration, there is no service locator running on the default port 8000, but I can hit service1 individually on port 8080. (Not sure if this is the expected behaviour. Comments?)
I ran sbt dist to create a zip file, then unzipped it and ran the executable inside. Interestingly, the zip was created within the service1-impl folder. So, if I have multiple modules (services?), will sbt dist create an individual zip file for each service?
When I run the executable created via sbt dist, it tries to connect to Cassandra and also launches a service locator, ignoring the static service locator configuration that I added. Basically, it looks like it ignores the lines I added to build.sbt. Can anyone explain this?
Lastly, suppose I have two services, service1 and service2, and two nodes in the cluster, with node 1 running service1 and node 2 running both services. What would my static service locator look like in application.conf? And since each service has its own application.conf, would I have to copy the same static service locator configuration into all of them?
Would it be something like this?
lagom.services {
  service1 = "http://0.0.0.0:8080"
  service1 = "http://1.2.3.4:8080"
  service2 = "http://1.2.3.4:8081"
}
Since each specific actor would be spawned on one of the nodes, how would it work with this service locator configuration?
Also, I don't want to run this in a tmux session in production. What would be the best way to finally run this code in production?
You can get started with ConductR in dev mode immediately, for free, without contacting sales. Instructions are at: https://www.lightbend.com/product/conductr/developer
You do need to register (read: provide a valid email) and accept the T&Cs to access that page. The sandbox is free to use for dev mode today, so you can see quickly and easily whether ConductR is right for you.
For production, I'm thrilled to say that soon you'll be able to deploy up to 3 nodes if you register with Lightbend.com (same as above) and generate a 'free tier' license key.
Lagom is opinionated about microservices. There's always Akka and Play if a project doesn't share those opinions. Part of that opinion is that deployment should be easy. Good tools feel 'right' in the hand. You are of course free to deploy the app as you like, but be prepared to write more polyfill the further you stray from the marked trails.
Regarding service lookup, ConductR provides redirection for HTTP service lookups, for use with withFollowRedirects on Play WS [1].
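As a rough illustration, here is a minimal client sketch using Play's Java WS API (where the Scala withFollowRedirects corresponds to setFollowRedirects); the service name and path are placeholders, and this assumes Play 2.5's WS types:
import java.util.concurrent.CompletionStage;
import play.libs.ws.WSClient;
import play.libs.ws.WSResponse;

public class Service1Client {
    private final WSClient ws;

    public Service1Client(WSClient ws) {
        this.ws = ws;
    }

    public CompletionStage<WSResponse> ping() {
        // ConductR answers the service lookup with an HTTP redirect to the
        // service's current address, so the client must follow redirects.
        return ws.url("http://service1/api/ping") // hypothetical lookup URL
                 .setFollowRedirects(true)
                 .get();
    }
}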
Regarding sbt dist, each sub-project service will be its own package. You can see this in the Chirper example [2], where sbt dist generates chirp-impl.zip, friend-impl.zip, activity-stream-impl.zip, etc., as laid out in Chirper's top-level build.sbt file.
Since ConductR is the clean and lighted path, you can reference how it does things in order to better understand how to replace Lagom's deployment polyfill with your own. That's the interface Lagom knows best. Much of ConductR except the core is already OSS, so you can try GitHub if the docs don't cover something.
Disclosure: I am a ConductR-ing Lightbender.
[1] http://conductr.lightbend.com/docs/1.1.x/ResolvingServices
[2] git@github.com:lagom/activator-lagom-java-chirper.git
I have created an application from JHipster's template.
I have changed almost nothing in the project, and it works fine on my local machine, but when I deploy it to my server (Ubuntu, Apache, Tomcat, all latest versions) weird things start to happen.
I have an AJAX call to "/api/account" which on my local machine gets the following JSON in response:
{
  "timestamp": 1440703613150,
  "status": 401,
  "error": "Unauthorized",
  "message": "Access Denied",
  "path": "/api/account"
}
and on the production server (you can check it here) the same call gets JSON WITHOUT the "path" field in it:
{
  "timestamp": 1440703613150,
  "status": 401,
  "error": "Unauthorized",
  "message": "Access Denied"
}
I've been stuck on this for a long time, so please help me if you can :)
As you have an Apache front end, have a look at your mod_proxy_http settings, ProxyPass and ProxyPassReverse.
You should also have a look at your Apache logs.
Or disable Apache and access JHipster directly, so you know if this is caused by Apache or not.
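For reference, a minimal sketch of what such a vhost usually looks like; the host name, port, and paths are placeholders rather than your actual configuration:
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    # Forward everything to the Tomcat/JHipster app and rewrite
    # Location/Content-Location headers on the way back.
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
A missing or mismatched ProxyPassReverse is a common cause of responses being altered between the app and the browser.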
I can't see why this would be an issue.
Your setup is probably different somehow; if you want to have the same setup, the best option is to use the executable JAR with embedded Tomcat rather than deploying to a standalone server.
Are you running the prod profile on both your local machine and the production server?
I just started developing an unmanaged extension for Neo4j server using the GraphAware framework. Everything is fine so far; even the unit tests are working. But I would like to actually debug the extension while running the Neo4j server, from within IntelliJ.
Can anybody give me a hint on how to do that?
Many thanks in advance,
Oliver
PS: This extension is called via a REST interface from a separate web server hosting the actual web application.
You need to enable JVM remote debugging in conf/neo4j-wrapper.conf. Append a new line to this file:
wrapper.java.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
In case you want to debug the startup sequence as well, set suspend=y above.
In your debugger, set up a remote debugging session to localhost:5005 (or myhostname:5005).
For Neo4j 3.0 you have to add this line to conf/neo4j-wrapper.conf instead:
dbms.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
This basically has me at a loss, and has for almost a week. I'm working on part of our company architecture, trying to get REST all set up. There are two methods that are not in the javax.ws.rs package: SEARCH and PATCH. I've created the following interface in our project to implement SEARCH (mostly copy/paste from examples):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.ws.rs.HttpMethod;

@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@HttpMethod("SEARCH")
public @interface SEARCH {
}
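For context, a hypothetical resource method wired to the custom verb; the path, class name, and echo body below are placeholders, not from the original project:
import javax.ws.rs.Consumes;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/widgets")
public class WidgetResource {

    // JAX-RS dispatches "SEARCH /widgets" requests to this method because
    // @SEARCH is meta-annotated with @HttpMethod("SEARCH").
    @SEARCH
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response search(String criteriaJson) {
        // Placeholder implementation: echo the criteria back.
        return Response.ok(criteriaJson).build();
    }
}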
The code using this works flawlessly when called directly. However, the web service that talks to the main service fails every time with 500 Invalid HTTP method: SEARCH. So, to be clear, there are two web-enabled services: the first, which uses the above code, works fine; the second, which is supposed to be nothing but a proxy to the first, fails.
The second service, the one having the problem, runs on Jetty. The servlet doing the proxying is an extension of org.mortbay.servlet.ProxyServlet; the only overrides are on init and proxyHttpUrl, to do a little bit of URL tweaking. I know that the second service doesn't even pass the request on to the first, because I can shut down the first and the second still gives me that error back.
My question is: am I missing configuration pieces to enable "custom" (i.e. not in the javax.ws.rs package) HTTP methods?
First off, that proxy servlet code is very old, from jetty-6 unless I am mistaken. We have released jetty-9 now, and the last three versions of Jetty have come from Eclipse, so the ProxyServlet you ought to be using is org.eclipse.jetty.servlets.ProxyServlet.
Now, from jetty-7 on, we added some customization hooks to that proxy servlet so you can modify much more of the client exchange, and you might need to make use of that to get additional HTTP methods working. It could be that the HTTP client only accepts standard HTTP methods, in which case we would have to fix that up for your use case (open a bug at bugs.eclipse.org under RT/Jetty if that is the case).
I want to configure a self-written JCA 1.6 inbound resource adapter (RA). My big problem is that the RA needs to get access to some (dynamic) configuration data living in the application that uses the RA.
Now I know that this goes against the whole idea of JCA, but unfortunately I cannot change this design as quickly as I'd like to/have to.
The data I need to get to the RA is:
the port it's supposed to listen on,
the license used for the whole application (the feature the RA supplies requires extra licensing)
additional configuration data stored in a db
I've come up with four ideas:
Use asadmin create-resource-adapter-config. Because GlassFish doesn't seem to restart apps that depend on the RA, we need to restart the application after this. While this approach is suitable for the port, it won't fit the other data.
Use administered objects to give my application a means to pass data to the RA. This idea is mentioned here. I guess this does it, but the spec states in chapter 13.4.2.3 that:
"Note, administered objects are not used for setting up asynchronous message deliveries to message endpoints. The ActivationSpec JavaBean is used to hold all the necessary activation information needed for asynchronous message delivery setup."
But I cannot get any dynamic data to the ActivationSpec object (neither through a deployment descriptor nor through annotations). Or did I miss something here? :-)
Use JDBC directly to access the data (also grabbed this idea from here). While this is presumably the best option, it does not work for the licensing data mentioned above, as that is not stored in the db.
The last idea I had was to put a method in the MessageDrivenBean (through my interface) that the RA can call to fetch data. But I just think that is quite a hack, as it couples the RA to the app.
Dear community, what are your thoughts on this one? I'm afraid it's not so easy to find answers to these questions, so I'd be quite happy about opinions!
Thanks and cheers,
Julius
In the ra.xml there is the possibility to define config-properties. In WebSphere these then show up as editable fields in a table of custom properties for the selected resource adapter. I'm working on a similar problem; I also need to pass hostname/port info to an RA. Unfortunately, I haven't figured out how to read the contents of these fields from within the RA.
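For illustration, a sketch of such a declaration in ra.xml (the property name and value are made up for this example):
<config-property>
    <config-property-name>port</config-property-name>
    <config-property-type>java.lang.Integer</config-property-type>
    <config-property-value>8080</config-property-value>
</config-property>
As far as I understand the spec, the container hands the value to the RA through the matching JavaBean property on the ResourceAdapter bean (e.g. a setPort(Integer) setter), which would be the way to read it from within the RA.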
The solution I finally came up with is to use the @ConfigProperty annotation. This means I use option one from my question above.
So my ResourceAdapter class looks like this:
public class Hl7ResourceAdapter implements ResourceAdapter {

    // The container populates this field from a resource-adapter-config
    // or from the ra.xml deployment descriptor.
    @ConfigProperty
    private Integer port = null;

    // Rest of the ResourceAdapter interface omitted here...
    // Use port here to open the socket...
}
The @ConfigProperty fields can now be set through either
a resource-adapter-config
the ra.xml deployment descriptor
Now, in order to reconfigure these settings, I use GlassFish's REST interface to change them programmatically (one could also use the asadmin create-resource-adapter-config command). I circumvent the problem that GlassFish does not restart the application using the resource adapter by simply restarting it myself through REST. (To be precise: I disable the application and then re-enable it, to get around another bug in GlassFish.)
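For illustration, the equivalent asadmin commands might look like this (the resource adapter name, property, and application name are placeholders):
# Set the RA's config property (placeholder names and value).
asadmin create-resource-adapter-config --property port=9876 my-hl7-ra
# Bounce the application that embeds the .rar so it picks up the change.
asadmin disable my-application
asadmin enable my-application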
A few additional notes:
We deploy the resource adapter's .rar file into the .ear of the application using it.
We have a separate application outside glassfish (standalone) that calls the REST interface to do such things as restart the resource adapter application etc. It is obvious that an application cannot restart itself properly.
Hope this helps. kutuzof, will this get you any further?