I am writing a simple p2p application using Java 7 and TomP2P. The problem is that peers need to bootstrap to other peers in the same network, and for that to work, the ports have to be set correctly and the broadcast messages need to be sent and received properly.
I would like to know the best setup for testing the application (or any distributed application) using a single machine (since I do not always have multiple machines to experiment with).
At first, I simply tried running two instances of my application in different terminals (and this worked), but as soon as I tested it in a real network with two machines, the peers were no longer able to find each other.
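For reference, the two-terminals test can be reduced to a minimal sketch using only plain java.net (not the TomP2P API), where two sockets on different loopback ports stand in for two peer instances on one machine; the class and method names here are hypothetical:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal loopback "discovery" sketch (plain java.net, NOT TomP2P):
// two sockets on different ports stand in for two peers on one machine.
public class LoopbackDiscovery {

    // Send a hello from one port to the other and return the received payload.
    public static String handshake(int peerAPort, int peerBPort) throws Exception {
        try (DatagramSocket peerA = new DatagramSocket(peerAPort);
             DatagramSocket peerB = new DatagramSocket(peerBPort)) {
            byte[] hello = "HELLO".getBytes(StandardCharsets.UTF_8);
            peerA.send(new DatagramPacket(hello, hello.length,
                    InetAddress.getLoopbackAddress(), peerBPort));

            byte[] buf = new byte[64];
            DatagramPacket received = new DatagramPacket(buf, buf.length);
            peerB.setSoTimeout(2000);  // fail fast if nothing arrives
            peerB.receive(received);
            return new String(received.getData(), 0, received.getLength(),
                    StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("peer B received: " + handshake(4000, 4001));
    }
}
```

If this works but the real-network test fails, the problem is usually in how the broadcast address or the bind interface is chosen, not in the message handling itself.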
Therefore, I am now running Ubuntu 12.04 as the host OS and a virtual machine (VirtualBox) with a Fedora 17 image. For my application to work, the host and the VM need to appear as if they were in the same network, but I could not figure out the right setup for this to work (apparently due to NAT issues).
Does anybody have experience with testing a distributed application on a single system, and can give me some hints about the setup and the virtual machines used?
Thanks in advance,
r0f1
On my Linux server, I run various Java applications under cgroups with the call of cgexec. These applications also communicate to each other under a given port.
However, I am also interested to reduce the bandwidth of this communication to simulate a real network here. For this, mininet seems to be the tool of choice.
Now my question is the following:
How can I run a process under a cgroup AND on a mininet virtual host?
Are there any problems with this configuration/idea, such as side effects?
I am using Spring Boot mail and ActiveMQ to build an email system. I followed this example project. Because our application's QPS is small, one server is enough to handle the requests. In the example project, ActiveMQ, the sender, and the receiver are all on the same server. Is this good practice for a small application? Or should I put ActiveMQ, the sender, and the receiver on three separate machines?
It depends...
The size of the application is irrelevant. It depends more on your requirements for availability, scalability, and data safety.
If you have everything on the same machine, you have a single point of failure: if the machine crashes, you lose everything on it. But this setup is the simplest one (also for maintenance), and the chance that the server will crash is low. Modern machines are able to handle a big load.
If you have a really high load and/or a requirement for guaranteed delivery, you should use multiple systems, with producers that send messages to an ActiveMQ cluster (also distributed over multiple machines). The consumers should likewise run on more than one machine, and load balancers should be used to connect to the machines.
You can also have a setup somewhere in between these two (simple and complex).
If you are able to reproduce all the messages (email messages in your example) and the load is not that high, I would advise you to simply put it all on the same machine.
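To make the tradeoff concrete: with Spring Boot's ActiveMQ support, switching between the single-machine and separate-broker setups is mostly a one-property change. The property name below is the standard Spring Boot one; the broker host names are placeholders:

```properties
# Same-machine setup: Spring Boot starts an embedded, in-JVM broker
# automatically when activemq-broker is on the classpath (vm transport).
spring.activemq.broker-url=vm://localhost?broker.persistent=true

# Separate-broker setup: point sender and receiver at an external
# (possibly clustered) broker instead; the host names are placeholders.
# spring.activemq.broker-url=failover:(tcp://broker1:61616,tcp://broker2:61616)
```

The failover transport shown in the second line is how ActiveMQ clients are usually pointed at a multi-broker cluster later, so the sender and receiver code itself does not need to change.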
The short answer is: it depends. The long answer is: measure it. Using "small application" as the criterion is flawed. You can have both on the same server if your server has all the resources required by your application and the message queue broker, without impacting the performance for end users.
I would suggest running performance tests against your criteria and then deciding on your target environment setup.
The simplest setup is everything on the same box. If this one box has enough CPU and disk space, why not? One (performance) advantage is that nothing needs to go over the network.
If you are concerned about fault-tolerance, replicate that whole setup on a second machine.
Scenario 1: I have a test server that gets OS reinstalls on a frequent basis. Is there any way to add a program to the server that will remain and execute even if the OS is reinstalled? (I know it's a stretch, but had to ask)
Scenario 2: I have another server running ESXi 5.1 (which, I admit, I know nothing about). How (or can) I run a program at the OS level (not inside a VM)? The reason is that I need to get information specific to the server, not the VMs, such as the IP and MAC addresses, which my program gathers with Runtime.exec().
I have a PXE server setup with kickstart files that works great for Linux, but I am not sure if I can do the same with ESX. Has anyone ever tried to PXE boot ESX like this? On Linux, I run my program via crontab, and in the past did it with rc.local. Any suggestions would be appreciated, even a link to potential resources you have had luck with in similar situations.
1) The program must run within an OS, unless the JVM was designed to run without one. I don't believe there is a JVM which will use an OS if one is there but not care if it is not.
You can do this with virtual machines. You can have the OS run/stop/start/reinstall in a virtual machine, while an application is running on the bare machine or in another virtual machine.
2) When your application runs, it is at the OS level. The distinction is largely an illusion. You can get the IP and MAC addresses in normal Java. If you need to get something else, you can use JNA/JNI/JNR.
I have not heard of ESX before.
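The IP/MAC lookup mentioned above can be done with the standard java.net.NetworkInterface API; a minimal sketch (the class name is hypothetical):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Enumerate the host's interfaces and collect "name address [mac]" lines
// using only the standard library (no Runtime.exec() output parsing needed).
public class InterfaceInfo {

    public static List<String> describeInterfaces() throws Exception {
        List<String> lines = new ArrayList<>();
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            byte[] mac = nic.getHardwareAddress();  // null for loopback/virtual NICs
            StringBuilder macText = new StringBuilder();
            if (mac != null) {
                for (byte b : mac) {
                    macText.append(String.format("%02x:", b));
                }
            }
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                lines.add(nic.getName() + " " + addr.getHostAddress()
                        + (macText.length() > 0 ? " mac=" + macText : ""));
            }
        }
        return lines;
    }

    public static void main(String[] args) throws Exception {
        for (String line : describeInterfaces()) {
            System.out.println(line);
        }
    }
}
```

Note that inside a VM this reports the virtual NIC's addresses, which is exactly why scenario 2 asks about running at the hypervisor level.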
I need to determine the list of JVMs running on a remote machine, and once that is done, to connect to each of the JVMs using JMX. I am a newbie and have gone through the following concepts:
1. Using jps and jstat: I read that these commands may not be available in future JDK versions.
2. Using VirtualMachine.list() from the Attach API. The problem with this, though, is that it fetches the list of JVMs only for the local machine. I do not know how to connect to a remote machine and then obtain this list.
Can anyone please suggest how to use either VirtualMachine.list() or any other method to obtain a list of JVMs on a remote machine?
The problem is that all the methods (including the way jconsole works) that I have studied to connect to a remote JVM are focused on a SPECIFIC JVM, where I need to provide the port number (of the JVM process). But I need a list of all the JVMs running. How can I do this? Is it even possible?
One option would be to launch a small Java application on the remote machine and have it run VirtualMachine.list() or similar, and then send back the information or make it accessible using JMX. This application could be running all the time, or you could maybe launch it remotely.
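The local-listing part of such an agent is short; a sketch using the Attach API (tools.jar on the classpath on JDK 7, the jdk.attach module on 9+; the class name is hypothetical):

```java
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;
import java.util.ArrayList;
import java.util.List;

// List the JVMs visible on the local machine via the Attach API.
// A small agent like this could run on each remote host and report
// its results back over a socket or expose them through JMX.
public class LocalJvmLister {

    public static List<String> listJvms() {
        List<String> jvms = new ArrayList<>();
        for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
            jvms.add(vmd.id() + " " + vmd.displayName());  // pid + main class/jar
        }
        return jvms;
    }

    public static void main(String[] args) {
        for (String jvm : listJvms()) {
            System.out.println(jvm);
        }
    }
}
```

The list always contains at least the agent's own JVM, which is a convenient sanity check when wiring this into the remote reporting.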
Some other ideas mentioned here: Get System Information of a Remote Machine (Using Java).
You could add a Java agent or some other common component to each of the remote JVMs and have them "phone home" their JMXServiceURLs to a central JVM clearinghouse. Other than that, I think your only options are derived broadly from monex0's suggestion.
I have a cluster of 32 servers and I need a tool to distribute a Java service, packaged as a Jar file, to each machine and remotely start the service. The cluster consists of Linux (Suse 10) servers with 8 cores per blade. The application is a data grid which uses Oracle Coherence. What is the best tool for doing this?
I asked something similar once, and it seems that the Java Parallel Processing Framework might be what you need:
http://www.jppf.org/
From the web site:
JPPF is an open source Grid Computing platform written in Java that makes it easy to run applications in parallel, and speed up their execution by orders of magnitude. Write once, deploy once, execute everywhere!
Have a look at OpenMOLE: http://www.openmole.org/
This tool enables you to distribute a computing workflow to several kinds of resources: from multicore machines to clusters and computing grids.
It is nicely documented and can be controlled through Groovy code or a GUI.
Distributing a jar on a cluster should be very easy to do with OpenMOLE.
Is your service packaged as an EJB? JBoss does a fairly good job with clustering.
Use BitTorrent. Using peer-to-peer sharing on clusters can really boost your deployment speed.
It depends on which operating system you have and how security is setup on your network.
If you can use NFS or a Windows share, I suggest you put the software on an NFS drive which is visible to all machines. That way you can run them all from one copy.
If you have remote shell or secure remote shell access, you can write a script which runs the same command on each machine, e.g. start on all machines, or stop on all machines.
If you have Windows, you might want to set up a service on each machine. If you have Linux, you might want to add a startup/shutdown script to each machine.
When you have a number of machines, it may be useful to have a tool which monitors that all your services are running, collects the logs and errors in one place and/or allows you to start/stop them from a GUI. There are a number of tools to do this, not sure which is the best these days.
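The monitoring part can start very small: a TCP connect with a timeout tells you whether a service's port is answering. A self-contained sketch (class and method names are hypothetical; a real monitor would loop over all 32 host:port pairs):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal "is the service up?" probe: attempt a TCP connect with a timeout.
public class ServiceProbe {

    public static boolean isUp(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;  // connection refused or timed out -> treat as down
        }
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: open a local listener, then probe it.
        try (ServerSocket listener = new ServerSocket(0)) {
            System.out.println("up=" + isUp("127.0.0.1", listener.getLocalPort(), 1000));
        }
    }
}
```

A probe like this only confirms the port accepts connections; checking that the Coherence service is actually healthy would need an application-level check (e.g. over JMX) on top of it.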