I'm trying to use Ice4j, but there are no tutorials for it or anything. I have tried looking at the source code, but everything goes somewhere else and nothing is explained.
I've read the IcePseudoTcp test and I want to implement my own, but the problem is that the test creates both the local and remote agents together and then has them interact with each other. How do I separate the two, so that I have two programs - one that acts as the local controlling agent and the other as the remote agent - and then have the local agent discover the remote agent?
The function Ice.transferRemoteCandidates uses both Agents, but how do I use the first agent to find the other?
addRemoteCandidateToAgent with addLocalCandidateToContentList will help you.
With addLocalCandidateToContentList, you build YOUR local ContentList (the data that needs to be sent to the remote peer/server), which the remote side will then consume the way addRemoteCandidateToAgent does.
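To make this concrete, here is a minimal sketch of what the controlling side might look like. The method names are taken from the ice4j API as I know it, so double-check them against the version you are using, and the signaling channel that carries the candidates to the remote peer is entirely up to you:

```java
import org.ice4j.Transport;
import org.ice4j.ice.Agent;
import org.ice4j.ice.Component;
import org.ice4j.ice.IceMediaStream;
import org.ice4j.ice.LocalCandidate;

public class LocalAgentSketch {
    public static void main(String[] args) throws Exception {
        // Create the local agent and mark it as the controlling side.
        Agent agent = new Agent();
        agent.setControlling(true);

        // One stream with a single UDP component, bound somewhere in 5000-6000.
        IceMediaStream stream = agent.createMediaStream("data");
        agent.createComponent(stream, Transport.UDP, 5000, 5000, 6000);

        // These are the local candidates that go into your "content list":
        // serialize them (e.g. as SDP) and send them to the remote peer over
        // whatever signaling channel you have (socket, HTTP, SIP, ...).
        for (Component component : stream.getComponents()) {
            for (LocalCandidate candidate : component.getLocalCandidates()) {
                System.out.println(candidate);
            }
        }

        // After the remote peer's candidates have been parsed and added to
        // this agent (the addRemoteCandidateToAgent step), start the checks.
        agent.startConnectivityEstablishment();
    }
}
```

The remote program does the same thing with setControlling(false), and the two candidate lists travel over your own signaling channel in opposite directions.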
Look over here: http://stellarbuild.com/blog/article/ice4j-networking-tutorial-part-1
I think that tutorial will explain how to connect the two agents. At least he uses SDP which doesn't need control.
If you want a SIP tutorial perhaps try: http://blog.sharedmemory.fr/en/2014/06/22/gsoc-2014-ice4j-tutorial/
I'm currently writing a Java program that is an interface to another server. The majority of the functions (close to 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server, and then either checking the results myself or adding methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have my JUnit tests (on simple functions that do not interact externally) that run at every build, but I can't find an established way in JUnit to write tests that don't have to run at every build (perhaps only when the functions they cover change?).
Or can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have set off alarm bells for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind; it does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server - a modified version of the server that runs completely in-memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
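To illustrate the adapter idea, here is a minimal sketch (all names are made up for the example, and JUnit 4 is assumed): production code only ever talks to the gateway interface, the real implementation wraps the remote server, and the unit test swaps in an in-memory fake so it can run at every build without touching the server.

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class RecordServiceTest {

    // The adapter the rest of the code depends on; the real implementation
    // would wrap the remote server calls (normally these live in separate files).
    interface ServerGateway {
        void writeRecord(String key, String value);
        String readRecord(String key);
    }

    // In-memory fake of the gateway - no network, safe to run on every build.
    static class FakeServerGateway implements ServerGateway {
        private final Map<String, String> store = new HashMap<>();
        @Override public void writeRecord(String key, String value) { store.put(key, value); }
        @Override public String readRecord(String key) { return store.get(key); }
    }

    @Test
    public void writesAndReadsBack() {
        ServerGateway gateway = new FakeServerGateway();
        gateway.writeRecord("job-1", "done");
        assertEquals("done", gateway.readRecord("job-1"));
    }
}
```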
The second approach can, of course, be employed when you have full access as well, but writing the test-kit is usually better. Those kinds of tests tend to be duplicated across projects and teams, and then when the server changes a whole bunch of people need to fix their tests, whereas if the test-kit is written as part of the server code, it only has to be altered in one place.
I have developed a micro-framework in Java which does the following:
The list of all test cases is kept in an MS Access database, along with test data for the application under test.
I have created multiple classes, each containing multiple methods. Each of these methods represents a test case.
My framework reads the list of test cases marked for execution from Access and dynamically decides which class/method to execute, using reflection.
The framework has methods for sendKeys, click and all the other generic actions. It takes care of reporting in Excel.
All this works fine without any issue.
Now I am looking to run the test cases across multiple machines using Grid. I have read on many sites that we need a framework like TestNG to do this with Grid, but I hope it is possible to integrate Grid into my own framework. The articles and e-books I have read do not explain the coding logic for this.
I will be using only Windows 7 with IE. I don't need cross-browser/OS testing.
I can make any changes to the framework to accomplish this, so please feel free to suggest them.
In the Access DB mentioned above, I will have details about each test case and the machine on which it should run. Currently, users can select the test cases they want to run locally in the Access DB and run them.
How will my methods (test scripts) know which machine they are going to be executed on? What kind of code changes should I make apart from using RemoteWebDriver and capabilities?
Please let me know if you need any more information on my code or have any questions. Also, kindly correct me if any of my understanding of Grid is wrong.
How will my methods know which machine they are going to be executed on? - You just need to know one machine in your Grid setup: the IP of your hub machine. The hub decides which of the nodes registered with it to send the request to, depending on the capabilities you specify when instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
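A rough sketch of what that driver creation might look like with the older Selenium 2/3 API (the hub address below is an assumption; read it from a properties file in your framework):

```java
import java.net.URL;

import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridDriverFactory {

    // Hypothetical hub address; keep this configurable, not hard-coded.
    private static final String HUB_URL = "http://192.168.1.10:4444/wd/hub";

    public static WebDriver createDriver() throws Exception {
        // Ask the hub for an IE session on Windows; the hub routes the
        // request to any registered node that matches these capabilities.
        DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
        caps.setPlatform(Platform.WINDOWS);
        return new RemoteWebDriver(new URL(HUB_URL), caps);
    }
}
```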
The real use of the Grid is parallel remote execution, so how you make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate TestNG. The other option would be to implement the multithreading yourself to trigger your tests in parallel. I would recommend TestNG, based on my experience, since it provides many more capabilities apart from parallelism. You need to take care that each driver instance is specific to its thread and not a global variable, as in the sketch below.
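One common way to keep the driver thread-specific is a ThreadLocal holder, roughly like this (a sketch; adapt it to however your framework creates and disposes of drivers):

```java
import org.openqa.selenium.WebDriver;

public final class DriverHolder {

    // One WebDriver per test thread, so parallel tests never share a browser.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    private DriverHolder() { }

    public static void set(WebDriver driver) {
        DRIVER.set(driver);
    }

    public static WebDriver get() {
        return DRIVER.get();
    }

    // Call from your per-test teardown so sessions don't leak on the nodes.
    public static void quit() {
        WebDriver driver = DRIVER.get();
        if (driver != null) {
            driver.quit();
            DRIVER.remove();
        }
    }
}
```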
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you; it is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started on. Grid provides a REST API to open and interact with browsers, so your test will use this rather than opening a browser locally. Any other non-Selenium code will execute in the context of the original machine, not the machine where the browser has been opened (e.g. file-system access happens where the test runs, not where the browser opens). Any use of static classes and globals in your framework may also cause issues, as each test will access these concurrently. Your code must be thread-safe.
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use. It is the parallel execution which is hard, and frameworks such as TestNG provide this out of the box.
Good luck with your framework.
I have created a web based application using JSP and Servlets and the application uses an SQL Server DB as its backend.
The architecture is like this:
I have all my business logic in a jar file
I have created my views using JSPs and am using servlets to interact with my business logic jar
The jar connects to the database to persist and hydrate information, which is relayed to the JSP by my servlets.
My web application runs on a remote Tomcat server.
Now, I have been given a new requirement. I have to create a command line interface, where I should be able to specify a list of commands and hit Enter (or alternatively, create a set of commands, save it in a .bat file or something, and run it), so that my application performs the necessary actions. Basically, I have to create a command line interface which can be used alongside the GUI I already have (the JSPs).
I am totally new to this. Can anyone throw light on where and how I can start?
Any little help is greatly appreciated.
EDIT
This is what my web application does: the user can see a list of test scripts (written in Selenium WebDriver), choose one or more scripts, choose a host to run them on, and click "Run", and the tests execute on the selected machines.
Now, I want a command line interface which will eliminate the need for the GUI. Let's say I simply want the user to be able to type a command like "execute My_Script_1" and have the script executed.
The test scripts, the selenium drivers, everything reside on the App server.
My command line interface should be able to work on Windows command prompt.
Thank you.
Are you using Spring?
Can you specify what exactly your CLI should do?
You could do what Thomas said.
You could also use a template engine like Velocity to format your output.
Use some kind of JavaCurses-like library to make your output look good.
As for specifying commands:
Think about your business logic and what exactly you are showing to the user.
Remember that a webapp UI is a webapp UI; a console UI is different, and the user expects different behaviour.
So commands like
show goods category="for kids"
will be great.
Also, don't forget about the usual help commands:
yourJarName.jar --help / -h, etc.
If you want to write an application with an interactive mode, think about a help command there as well.
You say you have your business logic in a JAR.
Why not start another project with this JAR as a dependency and build it as an executable JAR?
Then simply use System.in and System.out to interact with the user.
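As a rough sketch (the runScript call is a placeholder for whatever entry point your business-logic JAR actually exposes), the console loop can be as simple as:

```java
import java.util.Scanner;

public class ScriptRunnerCli {

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Type 'execute <scriptName>' or 'quit'.");

        while (in.hasNextLine()) {
            String line = in.nextLine().trim();

            if (line.equalsIgnoreCase("quit")) {
                break;
            } else if (line.startsWith("execute ")) {
                String scriptName = line.substring("execute ".length()).trim();
                System.out.println("Running " + scriptName + "...");
                // businessLogic.runScript(scriptName);  // hypothetical call into your JAR
            } else if (!line.isEmpty()) {
                System.out.println("Unknown command: " + line);
            }
        }
    }
}
```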
EDIT:
So your application is hosted. Do you have an API like REST or SOAP or any other?
Then you can build a client reading a string that the user has written, parsing it and calling the right service in your API.
I see two options:
Create a client-side CLI that generates the same data your server receives. In other words, you don't modify your server code; you create a client-side CLI module (with jQuery, for example) that parses the command lines and sends exactly the same thing your actual GUI sends.
Set up a text area in your web app (decorated as a CLI) that reacts on each Enter key press and sends the line(s) to your server. On your server, you can create a utility class (say CLIParser.java, for instance) and use Args4j to parse the received command, validate it and run it.
Have you looked at Primefaces terminal? http://www.primefaces.org/showcase/ui/misc/terminal.xhtml
Your data structure looks simple enough. Also, you mentioned you designed your application so that the business logic is separated from the front end.
In this case you may consider exposing your business logic as a REST-based web service. It should not be that hard since you have a layered structure in your application.
It looks like you need just a few methods:
list scripts - returns a list of available scripts
list hosts - returns a list of available hosts
run script(scriptName, hostAddress) - runs the script scriptName on the host at address hostAddress, possibly returning the results if your application supports this
All three look like good candidates for GET methods.
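For example, a resource class using plain JAX-RS annotations (which Jersey and RESTEasy, mentioned below, both implement) might look roughly like this; the paths, return values and the scriptService facade are all made up for illustration:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/scripts")
public class ScriptResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String listScripts() {
        // Delegate to your business-logic JAR; hard-coded here for illustration.
        return "[\"My_Script_1\", \"My_Script_2\"]";
    }

    @GET
    @Path("/{name}/run")
    @Produces(MediaType.TEXT_PLAIN)
    public String runScript(@PathParam("name") String name) {
        // scriptService would be a thin facade over your existing JAR.
        // return scriptService.run(name);
        return "Started " + name;
    }
}
```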
You may consider Jersey or Resteasy or another framework.
You can find plenty of tutorials for both of them. Take a look here, for example.
From your command line application you can make calls to your web service in different ways. Because I have mostly worked with the Jersey JAX-RS implementation, I find the Jersey client (the latest stable version) the most convenient. Here you can find a short tutorial on how you can do it from your command line application with the Jersey client. JBoss also has a client API as part of its framework (also a fully certified JAX-RS implementation). You could even decide not to use any client API and do all the work manually with HttpURLConnection, but I would not recommend it. There is no big difference between using a client API and doing all the work manually with HttpURLConnection for a simple service, but you never know when your application will stop being that simple because of new requirements your client could not think of at the beginning.
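With the standard JAX-RS 2.0 client API (which Jersey implements), the call from the command line application could look roughly like this; the URL is an assumption about where the service happens to be deployed:

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class ScriptCliClient {

    public static void main(String[] args) {
        // Hypothetical base URL; point it at wherever your service is deployed.
        Client client = ClientBuilder.newClient();
        try {
            String result = client
                    .target("http://app-server:8080/api/scripts/My_Script_1/run")
                    .request()
                    .get(String.class);
            System.out.println(result);
        } finally {
            client.close();
        }
    }
}
```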
Hope that helps
I'd like to profile network overhead of my RMI-based application. For instance, I'd be interesting to know how many bytes a stub transferred over the network, or how many method calls were done through it. I can't find anything in the RMI API to hook into, though. Is this possible at all?
I am not particularly fond of RMI and have found JSON-based protocols, Thrift and even XML-RPC easier to work with. However, sometimes we don't have a choice.
There is a microbenchmark suite for RMI, as well as object serialization, in the "test" tree of the jdk7/jdk repository, see:
jdk/test/java/rmi/reliability/benchmark
The script:
jdk/test/java/rmi/reliability/scripts/create_benchmark_jars.ksh
shows how to create the two JAR files used in the benchmarking. You can pass command-line parameters to each instance for specific settings such as repetitions per run, etc. (One instance of the JAR will run as the client and the other as the server, which is also configured via a command line parameter.)
I haven't played much with this myself - usually trusting existing benchmarks, for example:
http://daniel.gredler.net/2008/01/07/java-remoting-protocol-benchmarks
...or using tools such as (I haven't looked much at the last two):
JMeter (http://jmeter.apache.org/), Soap-stone (http://soap-stone.sourceforge.net/) or
JVM-serialisers (https://github.com/eishay/jvm-serializers/wiki/)
I have a small test class that I want to run on a particular JVM that's already up and running (basically it's a web application running on Tomcat). The reason I want to do this is that I want to execute a small test class (with the main method and all) within that JVM, so that I get the same environment (loaded and initialized classes) for my test class.
Is it possible to indicate, say through a JVM parameter, that it should not initialize a new VM to execute my class but instead execute it on the remote VM and show me the result here, on my console? So the local JVM would act as a kind of thin proxy?
I am not aware of any tools that would make this possible. I also heard somewhere that the Java 6 JVM comes with an option like this; is that true?
Please help me.
Thanks,
After reading this question and the answers, I decided to roll my own little utility: remoteJunit
It is lightweight and dynamically loads classes from the client to the server JVM. It uses HTTP for communication.
You might want to take a look at BTrace. It allows you to run code in an already started JVM, provided you don't change the state of the variables inside that JVM. With this kind of tracing, you might be able to solve your problem in a different way: not by running extra code in the form of a new class, but by adding safe code to an existing class running inside the JVM.
For instance, you might System.out.println the name of the file whenever there is a call to File.exists.
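A rough sketch of what such a BTrace script could look like, based on the classic com.sun.btrace annotations (check the docs of the BTrace version you use, as package names and utility methods have changed over time):

```java
import static com.sun.btrace.BTraceUtils.println;
import static com.sun.btrace.BTraceUtils.str;

import com.sun.btrace.annotations.BTrace;
import com.sun.btrace.annotations.OnMethod;
import com.sun.btrace.annotations.Self;

// Attach with something like: btrace <tomcat-pid> FileExistsTracer.java
@BTrace
public class FileExistsTracer {

    // Fires whenever the target JVM calls java.io.File.exists().
    @OnMethod(clazz = "java.io.File", method = "exists")
    public static void onExists(@Self java.io.File file) {
        println(str(file)); // prints the path being checked
    }
}
```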
You might find JMX useful. Register an MBean in the server process. Invoke it with visualvm (or jconsole). (tutorial) Never tried it myself, mind.
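A minimal sketch of that idea (the probe name and the code inside runCheck are made up; register it once from your web application's startup code, then invoke it from JConsole or VisualVM):

```java
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DebugProbe implements DebugProbeMBean {

    @Override
    public String runCheck() {
        // Hypothetical test code; it runs inside the server's JVM, with the
        // web application's classes already loaded and initialized.
        return "classloader = " + getClass().getClassLoader();
    }

    // Call once at startup (e.g. from a ServletContextListener).
    public static void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new DebugProbe(), new ObjectName("debug:type=DebugProbe"));
    }
}

// Standard MBean convention: the interface name must be <ClassName>MBean.
interface DebugProbeMBean {
    String runCheck();
}
```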
RMI would also do the magic.
http://java.sun.com/javase/6/docs/technotes/guides/rmi/index.html
Make your web application start an RMI registry and register your service beans there.
Then, in the other JVM, you can run a program that queries the RMI registry started by your web application for the services you want to verify, and you are done.
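A minimal sketch of that setup (the names are made up; the remote interface has to be on the classpath of both the web application and the test program):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class DebugServiceServer implements DebugService {

    @Override
    public String describeEnvironment() {
        // Runs inside the web application's JVM, so it sees the same
        // loaded classes and context as the deployed application.
        return "context classloader = " + Thread.currentThread().getContextClassLoader();
    }

    // Call from the web application's startup code.
    public static void publish() throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);
        DebugService stub =
                (DebugService) UnicastRemoteObject.exportObject(new DebugServiceServer(), 0);
        registry.rebind("debugService", stub);
    }
}

// Shared remote interface.
interface DebugService extends Remote {
    String describeEnvironment() throws RemoteException;
}
```

The other JVM then calls LocateRegistry.getRegistry(host, 1099).lookup("debugService") and invokes the method as if it were local.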
I assume "small test class" is basically some debugging code you want to run to monitor your real application, which is deployed remotely on a Tomcat. If this is the case, you should connect your Eclipse debugger remotely to the Tomcat instance, so you can set a breakpoint at interesting locations and then use the Display view of Eclipse to run any arbitrary code you might need to perform advanced debugging code. As java supports Hot Code Replacement using the debug mechanism, you can also change existing code on the remote side with new code at runtime.