Emulating serial bus with JMS? - java

I'm currently working on a meter-bus project, and my testing environment includes com0com, hub4com, rxtx and a mix of real and virtual devices.
Since I've collected enough data, I want to move away from the serial stuff and go for a purely virtual TCP/IP testing environment.
So far I've written a small broker of my own which works fine for a small/tiny setup, but I'm planning a full-scale test and I don't want to reinvent the wheel. I thought of using JMS here, but I haven't done much Java work in the past 4 years, so I have no clue which provider to choose, or whether JMS is the right choice here at all.
Some numbers I came up with simulating 9600 baud (may not be accurate):
Devices : 100-250
Messages: 17000+ per sec
MsgSize : max. 300 bytes, avg. about 40 bytes
RTT : max. 30 msec
Most providers can handle the message volume, but I'm unsure about the timing constraint. I hope someone can provide some reference information. Please also take into consideration that I can lower the baud rate, which increases the RTT and lowers the message count.
Not meeting the RTT constraint would mimic faulty wiring in my case :)
I'm open to any suggestions, be it design/implementation hints or pointers to existing projects/software that fit this purpose.

As a JMS provider you can use ActiveMQ: http://activemq.apache.org
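If you go that route, a minimal sketch of a device emulator publishing frames over JMS could look like the following; the broker URL, queue name and sample frame are placeholders, not a prescribed setup:

    import javax.jms.BytesMessage;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DeviceEmulator {
        public static void main(String[] args) throws JMSException {
            // tcp://localhost:61616 is ActiveMQ's default transport URL; adjust to your broker.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // One queue per emulated bus segment (the name is a placeholder).
                Destination bus = session.createQueue("mbus.segment.1");

                MessageProducer producer = session.createProducer(bus);
                // Non-persistent delivery keeps latency down for an emulation scenario.
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

                BytesMessage frame = session.createBytesMessage();
                // Example M-Bus short frame (SND_NKE to address 0xFE).
                frame.writeBytes(new byte[] {0x10, 0x40, (byte) 0xFE, 0x3E, 0x16});
                producer.send(frame);
            } finally {
                connection.close();
            }
        }
    }

Whether the 30 msec RTT holds up at 17000+ messages per second is exactly the kind of thing worth measuring against the broker before committing to JMS.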

Related

Netty options for real-time distribution of small messages to a large number of clients?

I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal "go as fast as you can" testing, I found that I could handle 10k clients with no sweat, but now that we are trying to go across the internet, where latency, bandwidth, etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 GB of RAM.
I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), and they help a little, but only a little. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages?
Any insight you could offer would be greatly appreciated.
Usually, if you see OutOfMemory errors, you can take a heap dump, or use tools like jvisualvm and jconsole, to find out which class isn't getting GCed and keeps eating your memory.
2 GB is not big for 64-bit machines nowadays. Try raising it to about 3 or 4 GB and see whether you still hit OOM.
If you find you can handle 10k connections easily on the LAN, try adding a small delay in your Netty handler and check what happens.
You might want to look into a load-balancing approach. Load balancing distributes the workload across a distributed system using both hardware and software. What is suitable for your system depends on several factors, including possible hardware upgrades. Certainly, 2 GB of RAM is fairly small for serving 10k users, and you will need to raise that limit.
You don't say whether the subscription stream is constant or bursty. You also don't say whether there is a minimum number of messages / second the client must support.
Given that I don't know anything about Redis, are any of the following practical?
For the messages you don't care about: if channel.isWritable() == false, discard them immediately (see the sketch after this list). Unfortunately I don't know of a way to cancel messages that are already in Netty's send buffer. You wouldn't be able to cancel messages that have been passed to the TCP send buffer anyway, so it's not really something to rely on.
Slow reception from the subscription down to the rate of the slowest client.
Determine which clients can't keep up (maybe use the write timeout handler) and move them to a separate subscription which can be slowed down. Duplicate the published messages to both subscriptions.
Can you split the messages sent to the clients across different subscriptions? If a client can't keep up, unsubscribe it from the unimportant messages.
If your average send rate is higher than the client can support over time then there isn't really a solution other than negotiating a change in requirements to reduce the maximum allowable throughput.
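For the first point above, a rough sketch against the Netty 4 API is shown below (if you are still on Netty 3, the equivalent hook would be a downstream/write handler). LowPriorityMessage is a placeholder marker type for the traffic you don't mind losing:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;
    import io.netty.util.ReferenceCountUtil;

    public class DropWhenBusyHandler extends ChannelOutboundHandlerAdapter {

        /** Marker for traffic that may be dropped under pressure (placeholder type). */
        public interface LowPriorityMessage {}

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            if (msg instanceof LowPriorityMessage && !ctx.channel().isWritable()) {
                // Outbound buffer is above its high water mark: drop instead of queueing,
                // so slow clients don't make the server accumulate unbounded buffers.
                ReferenceCountUtil.release(msg);
                promise.setSuccess();
                return;
            }
            ctx.write(msg, promise);
        }
    }

This only prevents new low-priority writes from piling up; as noted above, anything already handed to Netty or the TCP stack is out of reach.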

How to improve the performance of a stock data transfer application?

This is a problem I have worked on for several years, but I still don't have a good solution.
My application has two parts:
The first part runs on a server called the "ROOT server". It receives real-time stock data from HKEx (the securities and futures exchange in Hong Kong) and broadcasts it to 5 other "children" servers, appending a timestamp to each data item as it broadcasts.
The second part runs on the "children" servers. They receive the stock data from the ROOT server, parse each item and extract the important information. Finally, they send it in a new text format to the clients. The clients may number in the hundreds to thousands; each can register for certain stocks and receive real-time information about them.
Performance is the most important thing. Over the past several years, I have tried every solution I know to make it faster. "Faster" here means: the first part receives the data and sends it to the children servers as fast as it can, and the children servers receive, parse and send the data to the clients as fast as they can.
For now, with a data rate of 200K from HKEx and 5 children servers, the first application has an average latency of 10 ms per data item. The second part is not easy to test; it depends on the client count.
What I'm using:
OpenSUSE 10
Sun Java 5.0
Mina 2.0
The server hardware:
4-core CPU (I don't know the type)
4G ram
I'm considering how to improve the performance.
Do I need to use a concurrency framework such as Akka?
Try another language, e.g. Scala or C++?
Use a real-time Java system?
Your advice...
Need your help!
Update:
The applications log some important information for analysis, but I haven't found any bottlenecks. HKEx will provide more data next year, and I don't think my application will be fast enough.
One of my customers tested our application against another company's, and ours had no advantage in speed. I just want to find a way to make it faster.
How the first application works
The first application receives the stock data from HKEx and broadcasts it to several other servers. The steps are:
It connects to HKEx
logs in
reads the data. The data is in binary format; each item has a header, a 2-byte integer giving the length of the body, followed by the body, then the next item (see the sketch after these steps).
puts the items into a HashMap in memory. The key is the item's sequence number, the value is the byte array.
logs the sequence number of each received item to disk, using log4j's buffered appender.
a daemon thread reads the data from the HashMap and inserts it into PostgreSQL every minute (this is only used to back up the data).
when clients connect to this server, it accepts them and tries to send all the data in the HashMap from memory. I used Mina's thread pool; the acceptor and senders run in different threads.
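For step 3, the framing described above (a 2-byte length header followed by the body) can be read with something like the sketch below; the big-endian byte order is an assumption, since the feed's actual byte order isn't stated:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    /** Reads length-prefixed items: a 2-byte length header followed by that many body bytes. */
    public class ItemReader {
        private final DataInputStream in;

        public ItemReader(InputStream in) {
            this.in = new DataInputStream(in);
        }

        /** Returns the next item body, or null at end of stream. */
        public byte[] readItem() throws IOException {
            int length;
            try {
                length = in.readUnsignedShort(); // 2-byte header, big-endian (assumption)
            } catch (EOFException eof) {
                return null;
            }
            byte[] body = new byte[length];
            in.readFully(body);
            return body;
        }
    }
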
I think the logic is very simple. With 5 clients, the transfer speed I measured was only 1.5 M/s at most. I wrote the simplest possible socket program in Java and found it could reach 10 M/s.
Actually, I've spent more than a year trying all kinds of solutions on this application just to make it faster. That's why I feel desperate. Do I need to try another language than Java?
About the 10 ms latency
When the application receives a data item from HKEx, it records a timestamp for it. When the root server broadcasts the data to the children servers, it appends that timestamp to the data.
When a child server gets the data, it sends a message to the root server asking for the current timestamp, then compares the two.
So the 10 ms latency covers:
root server got the data ---> the child server got the data
child server sends a request for the root server's timestamp ---> root server receives it
But the second interval is small enough that we can ignore it.
The first thing to do to find performance bottlenecks is to find out where most of the time is spent. A way to determine this is to use a profiler.
There are open-source profilers available, such as Eclipse TPTP (http://www.eclipse.org/tptp/), and commercial ones such as YourKit Java Profiler.
One easy thing to do is to upgrade the JVM to Java SE 6 or Java 7. General JVM performance improved a lot in version 6; see the Java SE 6 Performance White Paper for more details.
If you have checked everything, and found no obvious performance optimizations, you may need to change the architecture to get better performance. This would obviously be most fruitful if you could at least identify where your application is spending time - sounds like there are several major components:
The HKEx server (out of your control)
The network between the Exchange and your system
The "root" server
The network between the "root" and the "child" servers
The "child" servers
The network between "child" servers and the client
The clients
To know where to spend your time, money and energy, I'd at least want to see an analysis of those components, how long each component takes (min, max, avg), and what the specification is of each resource.
Easiest thing to change is hardware - bigger servers, more memory etc., or better bandwidth. Can you see if any of those resources are constrained?
The next thing to look at is changing the communication protocol to be more efficient - how do clients receive the stocks? Can you reduce the data size? 1.5 M/s for only 5 clients sounds like a lot...
Next, you might look at some kind of quality of service solution - provide dedicated hardware for "premium" customers, with reduced resource contention, more servers, more bandwidth - this will probably require changes to the architecture.
Next, you could consider changing the architecture - right now, your clients "pull" data from the client servers. You could, instead, "push" data out - that way, you shave off the polling interval on the client end.
At the very end of the list, I'd consider a different technology stack; Java is a fine programming language, but if absolute performance is a key priority, C/C++ is still faster. Clearly, that's a huge change, and a well-written Java app will be faster than a poorly written C/C++ app (and far more stable).
To trace the source of the delay, I would add timing data to your end-to-end process. You can do this with an external log, or by adding metadata to your messages.
What you want is a timestamp at key stages in your application; 3-5 stages are enough to start with. Normally I would use System.nanoTime() because I am looking for microsecond delays, but in your case System.currentTimeMillis() is likely to be enough, especially if you average over many samples (you will still get about 0.1 ms accuracy on an average, e.g. on Ubuntu).
Compare the timestamps for the same message as it passes through your system and look for the stage with the highest average delay. Once you have found it, try breaking that interval into more stages to zoom in on the problem.
In your situation, I would analyse any stage with an average delay of over 1 ms.
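As a rough sketch of the kind of per-stage stamping meant here (the stage names and count are only examples, and the stamps could just as well be appended to the message itself rather than logged):

    /** Records a timestamp at each processing stage of one message and reports the gaps. */
    public class StageTimer {
        private static final String[] STAGES = {"received", "parsed", "queued", "sent"};
        private final long[] stamps = new long[STAGES.length];

        /** Call with 0 for "received", 1 for "parsed", and so on. */
        public void mark(int stage) {
            stamps[stage] = System.nanoTime();
        }

        /** Logs the delay of each stage relative to the previous one, in microseconds. */
        public void report(long sequence) {
            StringBuilder sb = new StringBuilder("seq=").append(sequence);
            for (int i = 1; i < stamps.length; i++) {
                long micros = (stamps[i] - stamps[i - 1]) / 1000;
                sb.append(' ').append(STAGES[i]).append('=').append(micros).append("us");
            }
            System.out.println(sb); // in practice, write this to the existing log4j appender
        }
    }

Across machines you would switch to System.currentTimeMillis() (or keep comparing against the root server's clock as you already do), since nanoTime() is only meaningful within one JVM.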
If clients only update every minute, there might not be a good technical reason for all this, but you don't want to be seen as slow, with your traders at a disadvantage, even if in reality it makes no difference.

How to get link speed programmatically?

I am working on an app, and it is almost finished except for one thing: I don't know how to get the link speed and place it in the status bar. I am new to Java, so if somebody could help me I would be very grateful.
P.S. Sorry for the bad English.
As the repliers suggest, your question is not very clear. You could be referring to the link connection speed (i.e. up to 54 Mbps with good-signal WiFi, or up to 7.2 Mbps with full-speed HSDPA), which depends on:
The network interface you are using at the time. Some phones allow tethering, which means you can have both WiFi and a mobile data link (GPRS/3G/HSDPA) active at the same time, or switch automatically (if your WiFi connection drops, your phone will switch to the mobile network automatically, if activated).
The connection speed negotiated at the time. This depends on signal quality, carrier network configuration (some have the max. speed limited) and your mobile data contract (exceeding the monthly bandwidth quota normally means falling back to GPRS speed).
In this case I am afraid there are no standard Java API methods for it, but the Android API provides the needed functionality:
For the WiFi link speed, check WifiInfo.getLinkSpeed().
For the mobile data link, I am afraid you can only check TelephonyManager.getNetworkType() to determine the current mobile data link type. You then have to approximate the actual speed from the link type (i.e. up to 128 kbps for GPRS, up to 236.8 kbps for EDGE, up to 2 Mbps for 3G, up to 7.2 Mbps for HSDPA). Take into consideration that this is only an approximation: you could be connecting over HSDPA while your carrier limits the top speed to 2 Mbps.
In the other case, where you mean the current (download/upload) data transfer speed, this is only available at a higher level: you are measuring not the link speed but the speed between your phone and a server, which is determined not only by your link speed but also by many other factors (all the links between your phone and the server, the server itself, etc.). You could just measure "HTTP-level speed", i.e. HTTP data throughput (leaving out the overhead traffic of the data packets), since normally only HTTP connections are supported in every scenario (your carrier could be hiding you behind a proxy that filters out everything but HTTP traffic).
If you are on API level 8 or higher, an interesting class called TrafficStats is also available. It reports the bytes and packets your phone has sent and received over the mobile data link at a low level, which may be just the information you were looking for (combine these measurements with elapsed time and you can easily compute the current/average data link speed).
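Putting the two approaches above together, a small sketch could look like this; the nominal kbps figures simply mirror the rough numbers mentioned above, they are not measured speeds, and the WiFi part needs the ACCESS_WIFI_STATE permission:

    import android.content.Context;
    import android.net.wifi.WifiInfo;
    import android.net.wifi.WifiManager;
    import android.telephony.TelephonyManager;

    public final class LinkSpeedHelper {
        private LinkSpeedHelper() {}

        /** Returns the current WiFi link speed in Mbps, or -1 if WiFi is not connected. */
        public static int wifiLinkSpeedMbps(Context context) {
            WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
            WifiInfo info = wifi.getConnectionInfo();
            return info != null ? info.getLinkSpeed() : -1;
        }

        /** Returns a rough nominal speed for the current mobile network type, in kbps. */
        public static int nominalMobileSpeedKbps(Context context) {
            TelephonyManager tm = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
            switch (tm.getNetworkType()) {
                case TelephonyManager.NETWORK_TYPE_GPRS:  return 128;
                case TelephonyManager.NETWORK_TYPE_EDGE:  return 237;   // ~236.8 kbps
                case TelephonyManager.NETWORK_TYPE_UMTS:  return 2000;  // "3G"
                case TelephonyManager.NETWORK_TYPE_HSDPA: return 7200;
                default: return -1; // unknown or newer network type
            }
        }
    }
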
You cannot tell directly; you have to ask the underlying operating system. On OS X you can parse the output of "/sbin/ifconfig" for the appropriate network port.
You can also write an extension using JNI and query the connection speed from C. That is an option if you don't want to parse the output of another application, but keep in mind that this solution isn't portable.

How often should network traffic/collisions cause SNMP Sets to fail?

My team has a situation where an SNMP SET will fail once every two weeks or so. Since this set happens automatically, we don't necessarily notice it immediately when it fails, and this can result in an inconsistent configuration and associated wailing and gnashing of teeth. The plan is to fix this by having our software automatically retry the SET when it fails.
The problem is, we aren't sure why the failure is happening. My (extremely limited) knowledge of SNMP isn't particularly helpful in diagnosing this problem, so I thought I'd ask Stack Overflow for some advice. We think that every so often a spike in network traffic causes the SET to fail. Since SNMP uses UDP for communication, I would think it would be relatively easy for a command to be drowned out if traffic is high for a short period of time. However, I have no idea how common this is. We have a small network with a single Cisco router and fewer than a dozen SNMP-controlled devices. In addition to the SNMP traffic, some status web pages are being loaded from the various devices. In case it makes a difference, I believe we are using the AdventNet SNMP API version 4.0.4 for Java.
Does it sound reasonable that there will be some SET commands dropped occasionally, or should we be looking for other causes?
SNMP was designed to be unreliable. It uses UDP as its transport protocol. Routers will drop SNMP packets when they've got high priority work to do. So yes, it sounds very reasonable that SET commands are dropped occasionally :)
First upgrade to the newest version of the SNMP library if there is one.
Then you can set up a retry mechanism: verify each SET with a GET. If the check fails, queue the SET for a later attempt. This requires a somewhat elaborate queuing mechanism: a later SET for the same setting should be queued after, or replace, an existing queued SET.
Another option is to synchronize the entire state every hour: GET each setting and, if it differs from the intended value, SET it. Changes that do not go through for over 3 hours can be reported through an alerting system.
There are many more options, but if you average just one failure per week, I'd go with the simplest one: verify the SET with a GET, retry up to 5 times, and if it still fails, send an email.
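A skeleton of that simplest option might look like the following; snmpSet, snmpGet and sendAlertEmail are hypothetical hooks standing in for whatever your AdventNet calls and alerting code actually look like:

    /** Verify-then-retry wrapper around an SNMP SET. The abstract methods are hypothetical hooks. */
    public abstract class VerifiedSetter {

        protected abstract void snmpSet(String oid, String value);
        protected abstract String snmpGet(String oid);
        protected abstract void sendAlertEmail(String message);

        public boolean setWithVerify(String oid, String value) throws InterruptedException {
            final int maxAttempts = 5;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                snmpSet(oid, value);              // the request may be silently dropped (UDP)
                if (value.equals(snmpGet(oid))) { // read the value back to confirm it stuck
                    return true;
                }
                Thread.sleep(1000L * attempt);    // simple linear backoff between attempts
            }
            sendAlertEmail("SNMP SET still failing after " + maxAttempts + " attempts for OID " + oid);
            return false;
        }
    }
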

Load Testing Multithreaded Java Application 1400 TPS Required

I need to write a multi-threaded Java application that will be used to load test an MMS server. A transaction starts when the MMS server indicates to my application that an MMS has arrived; I then need to download the attachment that is part of the MMS from the server, using the protocol the MMS server supports. Successfully downloading the attachment marks the completion of the transaction. Since this is a load-testing application, the expected rate is above 1400 TPS, and I need to specify the hardware requirements for it. I feel I need horizontal scaling along with a load balancer and network connectivity in the Gbps range to download the attachments. If I have 2 boxes, each box has to handle 700 TPS; is it feasible for a multi-threaded Java application deployed on a Solaris box to achieve 700 TPS? Please share your thoughts on the architecture and hardware; it would also be helpful to get suggestions on which Solaris hardware to consider. I have the Solaris T5220 in mind.
Thanks a lot in advance for all your help.
I doubt that you'll need such a big machine. This depends on a lot of different factors though, of which quality of code probably is the most important one.
Regarding network usage, you should really come up with the number of KB an average attachment will have. For 10 KB attachments, 1400 TPS would mean 14,000 KB, or 14 MB, per second. For 1 MB attachments it would be 1.4 GB per second - quite a difference, isn't it?
At 1.4 GB per second you could also run into serious problems storing it anywhere - if storing it is a requirement at all.
The processing itself shouldn't be too much of a problem (but again, depends on a multitude of different factors).
The best thing you can do is grab any free hardware (or virtual machine) you have and run some tests. See what numbers you get and decide where to go from there.
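If you do run such a test, even a throwaway probe like the sketch below will tell you roughly what one box can sustain; downloadAttachment() is a placeholder for the real MMS download step, and the thread count and duration are arbitrary:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class TpsProbe {

        /** Placeholder: real code would fetch the attachment over the MMS server's protocol. */
        static void downloadAttachment() throws InterruptedException {
            Thread.sleep(5);
        }

        public static void main(String[] args) throws InterruptedException {
            final int threads = 64;          // arbitrary; tune to the box under test
            final long testSeconds = 30;
            final AtomicLong completed = new AtomicLong();
            final long end = System.nanoTime() + TimeUnit.SECONDS.toNanos(testSeconds);

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            while (System.nanoTime() < end) {
                                downloadAttachment();
                                completed.incrementAndGet();
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(testSeconds + 10, TimeUnit.SECONDS);
            System.out.println("Achieved TPS: " + completed.get() / testSeconds);
        }
    }
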
