I am a newbie to JMeter (using 2.11.20151206). I am trying to publish JMS messages to my WebLogic queue. I have already copied the needed jars (weblogic.jar, wlclient.jar, and wljmsclient.jar) to JMeter's lib directory and restarted JMeter many times, but the JMS publishers still do not appear in the list of samplers.
Is there any other jar needed, or anything else that needs to be done?
I got the answer from here! In addition, JMeter also needs javax.jms.jar.
I am working on setting up a Lagom application in production. I have tried contacting Lightbend for a ConductR license but haven't heard back in ages, so now I am looking for an alternative approach. I have multiple questions.
Since the scale of the application is pretty small right now, I think using a static service locator works for me (I'm open to other alternatives). Also, I am using MySQL as my event store instead of the default Cassandra configuration (for reasons not relevant to this thread).
To suppress Cassandra and Lagom's Service Locator, I have added the following lines to my build.sbt:
lagomCassandraEnabled in ThisBuild := false
lagomServiceLocatorEnabled in ThisBuild := false
I have also added the following to my application.conf in the service1-impl module:
lagom.services {
service1 = "http://0.0.0.0:8080"
}
For the dev environment, I have been able to successfully run my application using sbt runAll in a tmux session. With this configuration, there is no service locator running on the default 8000 port but I can individually hit service1 on 8080 port. (Not sure if this is the expected behaviour. Comments?)
I ran sbt dist to create a zip file, then unzipped it and ran the executable inside. Interestingly, the zip was created within the service1-impl folder. So, if I have multiple modules (services?), will sbt dist create an individual zip file for each service?
When I run the executable created via sbt dist, it tries to connect to Cassandra, launches a service locator, and ignores the static service locator configuration that I added. Basically, it looks like it ignores the lines I added to build.sbt. Can anyone explain this?
Lastly, if I were to have two services, service1 and service2, and two nodes in the cluster, with node 1 running service1 and node 2 running both services, how would my static service locator configuration look in application.conf? And since each service would have its own application.conf, would I have to copy the same static service locator configuration into all the application.confs?
Would it be something like this?
lagom.services {
service1 = "http://0.0.0.0:8080"
service1 = "http://1.2.3.4:8080"
service2 = "http://1.2.3.4:8081"
}
Since each specific actor would be spawned on one of the nodes, how would it work with this service locator configuration?
Also, I don't want to run this in a tmux session in production. What would be the best way to finally run this code in production?
You can get started with ConductR in dev mode immediately, for free, without contacting sales. Instructions are at: https://www.lightbend.com/product/conductr/developer
You do need to register (read: provide a valid email) and accept the T&Cs to access that page. The sandbox is free to use for dev mode today, so you can see if ConductR is right for you quickly and easily.
For production, I'm thrilled to say that soon you'll be able to deploy up to 3 nodes in production if you register with Lightbend.com (same as above) and generate a 'free tier' license key.
Lagom is opinionated about microservices. There's always Akka and Play if a project doesn't share those opinions. Part of that opinion is that deployment should be easy. Good tools feel 'right' in the hand. You are of course free to deploy the app as you like, but be prepared to write more of the glue yourself the further you stray from the marked trails.
Regarding service lookup, ConductR provides redirection for HTTP service lookups, for use with 'withFollowRedirects' on Play WS [1].
Regarding sbt dist, each sub-project service will be a package. You can see this in the Chirper example [2], for which sbt dist generates chirp-impl.zip, friend-impl.zip, activity-stream-impl.zip, etc., as defined in Chirper's top-level build.sbt file.
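To illustrate why one zip appears per service, here is a rough sketch of a two-service Lagom build along Chirper's lines. All project and directory names here are made up, not taken from Chirper; the point is only that each `*-impl` sub-project with packaging enabled yields its own distribution zip under its own `target/universal/` directory when you run `sbt dist`:

```scala
// Illustrative top-level build.sbt for a two-service Lagom (Java) build.
// `sbt dist` packages each *-impl project separately, e.g.
// service1-impl/target/universal/service1-impl-<version>.zip
organization in ThisBuild := "com.example"

lazy val `service1-api` = (project in file("service1-api"))
lazy val `service1-impl` = (project in file("service1-impl"))
  .enablePlugins(LagomJava)
  .dependsOn(`service1-api`)

lazy val `service2-api` = (project in file("service2-api"))
lazy val `service2-impl` = (project in file("service2-impl"))
  .enablePlugins(LagomJava)
  .dependsOn(`service2-api`)
```

API-only projects produce no runnable package; only the impl projects with the Lagom plugin enabled do.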
Since ConductR is the clean, well-lit path, you can reference how it does things in order to better understand how to replace Lagom's deployment story with your own. That's the interface Lagom knows best. Much of ConductR except the core is already OSS, so you can try GitHub if the docs don't cover something.
Disclosure: I am a ConductR-ing Lightbender.
[1] http://conductr.lightbend.com/docs/1.1.x/ResolvingServices
[2] git@github.com:lagom/activator-lagom-java-chirper.git
I have written a custom EJB component with a scheduler attached to it. In the scheduled EJB method, I call RabbitMQ methods to dequeue messages. The whole thing works within Eclipse while debugging the individual Java file, but the same code, built and deployed on the WildFly server, throws "Caused by: java.lang.NoClassDefFoundError: com/rabbitmq/client/ConnectionFactory". It seems like a classpath issue, but even adding the dependent jars to the manifest file doesn't help. I am blocked on this issue. Could anyone help me with it?
I converted the project to a dynamic web project and added the RabbitMQ client libraries to the WEB-INF/lib folder. Now when I deploy to the WildFly server, it detects the external assemblies and I am able to create the connection factory. Not sure if this is the right way to solve the issue.
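Bundling the client jars inside the WAR is a legitimate fix: anything under WEB-INF/lib is on the deployment's classpath in WildFly. If the project is built with Maven rather than assembled by hand in Eclipse, a plain compile-scope dependency gets the jar copied into WEB-INF/lib automatically (the version below is just an example, not taken from the question):

```xml
<!-- Declaring the RabbitMQ client in default (compile) scope makes the
     Maven WAR plugin place amqp-client in WEB-INF/lib of the built WAR. -->
<dependency>
  <groupId>com.rabbitmq</groupId>
  <artifactId>amqp-client</artifactId>
  <version>3.6.6</version>
</dependency>
```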
We have two JEE6 apps that we're getting ready to deploy. Part of the app will run in the cloud and produce messages to a JMS queue. The other half will run on our servers and consume the messages from the JMS queue. Both parts run as separate WARs, are deployed to Apache TomEE 1.6 (which is awesome, btw), and are written using the latest JEE and CDI specs.
Message durability is our main concern, but we are willing to assume that the cloud application will have 100% uptime and deal with the exception cases by hand. The local app will be restarted frequently, as we are improving its design and making a lot of changes.
After reading the ActiveMQ documentation, I'm pretty sure what we want is a store-and-forward architecture. What's a little hazy in their documentation is how the properties (http://activemq.apache.org/vm-transport-reference.html) translate into creating this sort of architecture.
The final challenge is the local broker needs to be very fast. Not only will it consume messages off the remote queue, it has several queues it writes and reads locally. Fortunately, any queues produced on the local broker don't need to be consumed anywhere but locally. The messages must be durable though. ...and if I manage to do that, I need to figure out how to run bi-directional SSL!!
The TL;DR is I need two things: example URL configurations to get me started, or advice on what options in ActiveMQ would be better than what I've described above. Thank you!
After 8 hours of grueling experimentation, it turns out this isn't that difficult. It's just not well documented or very clear... and I had IPv6 enabled on one of the hosts, which caused all sorts of problems.
On the "cloud" server, you'll use this
<Resource
id="MyJmsResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(tcp://0.0.0.0:61617,network:static:tcp://ground.server.com:61617)?persistent=false
ServerUrl = vm://localhost
</Resource>
On your "ground" server,
<Resource
id="MyJmsResourceAdapter"
type="ActiveMQResourceAdapter">
BrokerXmlConfig = broker:(tcp://0.0.0.0:61617,network:static:tcp://cloud.server.com:61617)?persistent=false
ServerUrl = vm://localhost
</Resource>
Finally, disable ipv6 in your JAVA_OPTS in Apache TomEE. You can do this by creating a setenv.sh in bin/ and putting the following:
export JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
Now... to figure out SSL. Hope this helps someone!
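For the SSL half, a possible starting point (an untested sketch; the paths and passwords are placeholders, not from my setup): ActiveMQ's `ssl://` transport accepts `needClientAuth=true` for mutual authentication, so the `tcp://` URLs in both BrokerXmlConfig values would become `ssl://...?needClientAuth=true`, and the key material can be supplied as JVM system properties in the same setenv.sh:

```shell
# Hypothetical setenv.sh additions for mutual TLS.
# Keystore holds this broker's certificate; truststore holds the peer's CA.
export JAVA_OPTS="$JAVA_OPTS \
  -Djavax.net.ssl.keyStore=/opt/tomee/conf/broker.ks \
  -Djavax.net.ssl.keyStorePassword=changeit \
  -Djavax.net.ssl.trustStore=/opt/tomee/conf/broker.ts \
  -Djavax.net.ssl.trustStorePassword=changeit"
```

Each side would need the other's certificate (or its CA) in its truststore for the bi-directional handshake to succeed.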
Can anyone provide an example of an Apache Apollo queue producer and consumer in Java?
Earlier I was using Apache ActiveMQ, but now I want to migrate.
There are several examples in the Apollo distribution. The ones you want to look at are located in the following distribution directories:
examples/openwire/java
examples/stomp/java
examples/mqtt/java
examples/amqp/java
If you are using a protocol supported by Apollo, then I don't see any changes required in the producer and consumer if they are already sending messages to ActiveMQ, except for the broker URL if that has changed.
You will need to get the jar files listed at https://people.apache.org/~rgodfrey/qpid-java-amqp-1-0-client-jms.html, plus the javax.jms one. After that it's pretty simple to use the examples that come with Apollo.
I start a listener from the bin folder using:
java -cp example/geronimo-jms_1.1_spec-1.1.jar:example/javax.jms-3.1.2.2.jar:example/qpid-amqp-1-0-client-0.22.jar:example/qpid-amqp-1-0-client-jms-0.22.jar:example/qpid-amqp-1-0-common-0.22.jar:. example.Listener topic://event
and similar for the Producer.
I just want to go 'live' with the setup that is currently working beautifully in testing. I've downloaded the standalone OpenEJB server and put my EJBs in the /apps directory.
The output in the logs suggests the standalone server may not support non-JMS adapters:
Deployment 'SocketMDB' has message listener interface com.example.TCPMessageEndpoint but this MDB container only supports interface javax.jms.MessageListener
Note that the other modules, including the RA itself, seem to start up successfully. The only issue seems to be with creating consumers of non-JMS messages.
What else might I try to look at or configure? Thanks!
In the testing scenario we wrap all the modules we find on the classpath into an EAR and deploy that. To mimic that environment, try putting your RAR and EJBs into an EAR file and dropping that into the apps/ directory. You should get the same results as with the embedded scenario.
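For reference, a minimal application.xml for such an EAR might look like the following. The module file names here are invented placeholders; substitute your actual RAR and EJB jar names:

```xml
<!-- Illustrative Java EE 6 application.xml bundling the resource adapter
     and the EJB jar so both deploy together as one application. -->
<application xmlns="http://java.sun.com/xml/ns/javaee" version="6">
  <display-name>my-app</display-name>
  <module>
    <connector>tcp-connector.rar</connector>
  </module>
  <module>
    <ejb>socket-mdb-ejbs.jar</ejb>
  </module>
</application>
```

Listing the connector module before the EJB module keeps the RA available when the MDB deploys.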
I've ended up just driving an embedded OpenEJB container for further testing. Will try to post new results here when I have them.