How to get link speed programmatically? - java

I am working on an app and it is almost finished except for one thing: I don't know how to get the link speed and display it in the status bar. I am new to Java, so if somebody could help me I would be very grateful.
P.S. Sorry for my bad English.

As the repliers suggest, your question is not very clear. You could be referring to the link connection speed (e.g. up to 54 Mbps on Wi-Fi with good signal reception, or up to 7.2 Mbps with full-speed HSDPA), which depends on:
The network interface you are using at the time. Some phones allow tethering, which means you can have both Wi-Fi and a mobile data link (GPRS/3G/HSDPA) active at the same time, or switch automatically (if your Wi-Fi connection drops, your phone will fall back to the mobile network if it is enabled).
The connection speed negotiated at the time. This depends on signal quality, carrier network configuration (some carriers cap the maximum speed), and your mobile data contract (exceeding the monthly bandwidth quota usually means being throttled back to GPRS speed).
In this case I am afraid there are no standard Java API methods for it, but the Android API provides the needed functionality:
For the Wi-Fi link speed, check WifiInfo.getLinkSpeed().
For the mobile data link, I am afraid you can only check TelephonyManager.getNetworkType() to determine the current mobile data link type. You would then approximate the actual speed from the link type (e.g. up to 128 kbps for GPRS, up to 236.8 kbps for EDGE, up to 2 Mbps for 3G, up to 7.2 Mbps for HSDPA). Take into consideration that this is only an approximation: you could be connecting over HSDPA while your carrier limits the top speed to 2 Mbps. A sketch of both calls follows below.
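For illustration, a minimal sketch of both calls (WifiInfo.getLinkSpeed() and TelephonyManager.getNetworkType() are the real Android APIs; the helper class and the speed descriptions are just an example, and you need the ACCESS_WIFI_STATE permission in your manifest):

import android.content.Context;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;
import android.telephony.TelephonyManager;

public class LinkSpeedHelper {

    // Returns the negotiated Wi-Fi link speed in Mbps, or -1 if Wi-Fi is not connected.
    public static int getWifiLinkSpeedMbps(Context context) {
        WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        WifiInfo info = (wifi == null) ? null : wifi.getConnectionInfo();
        return (info == null) ? -1 : info.getLinkSpeed(); // WifiInfo.LINK_SPEED_UNITS = Mbps
    }

    // Rough upper-bound description of the mobile data link, based only on the network type.
    public static String describeMobileLink(Context context) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        switch (tm.getNetworkType()) {
            case TelephonyManager.NETWORK_TYPE_GPRS:  return "GPRS, up to ~128 kbps";
            case TelephonyManager.NETWORK_TYPE_EDGE:  return "EDGE, up to ~236.8 kbps";
            case TelephonyManager.NETWORK_TYPE_UMTS:  return "3G, up to ~2 Mbps";
            case TelephonyManager.NETWORK_TYPE_HSDPA: return "HSDPA, up to ~7.2 Mbps";
            default:                                  return "unknown";
        }
    }
}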
In the other case, if you mean the current (download/upload) data transfer speed, this is only available at a higher level: you would actually be measuring not the link speed but the speed between your phone and a server, which is determined not only by your link speed but also by many other factors (all the links between your phone and the server, the server itself, etc.). You could just measure "HTTP-level speed", i.e. HTTP payload throughput (leaving out the overhead of the data packets), since normally only HTTP connections are supported in every scenario (your carrier could be hiding you behind a proxy that filters out everything but HTTP traffic).
If you are using API level 8 or higher, an interesting feature called TrafficStats is also available. It reports, at a low level, the bytes and packets your phone has exchanged over the mobile data link, which may offer just the information you were looking for (combine these measurements with elapsed time and you can easily compute the current/average data link speed).
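For example, a rough sketch (the sampler class is hypothetical, but TrafficStats.getMobileRxBytes()/getMobileTxBytes() are the real API-level-8 calls) that samples the counters on each call and derives an average throughput:

import android.net.TrafficStats;

public class MobileThroughputSampler {

    private long lastBytes = -1;
    private long lastTimeMs = -1;

    // Call periodically; returns the average mobile throughput in bytes/second
    // since the previous call, or -1 on the first call or if unsupported.
    public long sampleBytesPerSecond() {
        long rx = TrafficStats.getMobileRxBytes();
        long tx = TrafficStats.getMobileTxBytes();
        if (rx == TrafficStats.UNSUPPORTED || tx == TrafficStats.UNSUPPORTED) {
            return -1;
        }
        long nowMs = System.currentTimeMillis();
        long totalBytes = rx + tx;
        long result = -1;
        if (lastBytes >= 0 && nowMs > lastTimeMs) {
            result = (totalBytes - lastBytes) * 1000 / (nowMs - lastTimeMs);
        }
        lastBytes = totalBytes;
        lastTimeMs = nowMs;
        return result;
    }
}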

You cannot tell directly from standard Java. You must ask the underlying operating system. On OS X you can parse the output of "/sbin/ifconfig" for the appropriate network port; a rough sketch of that follows below.
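A rough sketch of that approach (the exact ifconfig output format varies by OS and interface, so treat the parsing as illustrative only; on OS X the "media:" line typically carries the negotiated speed, e.g. "1000baseT <full-duplex>"):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class IfconfigLinkSpeed {

    // Runs "/sbin/ifconfig <iface>" and returns the "media:" line, if any.
    public static String mediaLine(String iface) throws IOException {
        Process p = new ProcessBuilder("/sbin/ifconfig", iface).start();
        try (BufferedReader r =
                new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.trim().startsWith("media:")) {
                    return line.trim();
                }
            }
        }
        return null; // no media information found
    }

    public static void main(String[] args) throws IOException {
        System.out.println(mediaLine(args.length > 0 ? args[0] : "en0"));
    }
}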

You can also write an extension using JNI and query the connection speed from C. That is an option if you don't want to parse the output of another program, but keep in mind that this solution isn't portable.

Related

Controlling WIFI\Cellular data update frequency on Android

I am building a long-lived RESTful (well, just client-server) mobile app which should always be connected to the internet on my Android device.
I'd like to configure the polling interval, so that when the app goes into an idle state, instead of pinging the server from the Android device (which runs on battery all day long) every 50 ms, say, it would ping it every second [1000 ms].
I think that after lots of digging I came across something (after looking at a config file I once saw on IBM's documentation pages), namely Java Mission Control (JMC), but I did not find anywhere I could actually configure anything relevant to these parameters (not that I have really understood what JMC is able to configure in general...).
How would you save battery life in such a scenario with constant cellular data/Wi-Fi usage? Maybe praying for mercy would help...
Can I indeed approach it through some Java Mission Control (JMC) configuration?
Java Mission Control doesn't configure Java applications; it just collects data about their behavior. http://www.oracle.com/missioncontrol
I found GCM (Google Cloud Messaging), which allows you to send notifications to different platforms, iOS and Android alike. With push messaging the server takes the initiative and the device only establishes a connection when there is something to deliver, so used this way the messaging API won't kill your battery. A minimal registration sketch follows below.
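For illustration, a minimal registration sketch using the (now legacy) GoogleCloudMessaging client API from Google Play Services; the SENDER_ID value and the helper class are placeholders, and register() is a blocking call that must run off the main thread:

import android.content.Context;
import android.os.AsyncTask;
import com.google.android.gms.gcm.GoogleCloudMessaging;

public class GcmRegistration {

    private static final String SENDER_ID = "1234567890"; // placeholder project number

    // Registers the device with GCM in the background and logs the registration id.
    public static void registerInBackground(final Context context) {
        new AsyncTask<Void, Void, String>() {
            @Override
            protected String doInBackground(Void... params) {
                try {
                    GoogleCloudMessaging gcm = GoogleCloudMessaging.getInstance(context);
                    return gcm.register(SENDER_ID); // blocking network call
                } catch (java.io.IOException e) {
                    return "registration failed: " + e.getMessage();
                }
            }

            @Override
            protected void onPostExecute(String result) {
                android.util.Log.i("GCM", result);
            }
        }.execute();
    }
}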

How to stop hack/DOS attack on web API

My website has been experiencing a denial of service/hack attack for the last week. The attack is hitting our web API with randomly generated invalid API keys in a loop.
I'm not sure if they are trying to guess a key (mathematically impossible, as they are 64-bit keys) or trying to DOS attack the server. The attack is distributed, so I cannot ban all of the IP addresses, as it comes from hundreds of clients.
Judging by the IPs, my guess is that it is an Android app, so someone has malware in an Android app and is using all the installs to attack my server.
The server is Tomcat/Java; currently the web API just responds with 400 to invalid keys and caches IPs that have made several invalid key attempts, but it still needs to do some processing for each bad request.
Any suggestions how to stop the attack? Is there any way to identify the Android app making the request from the HTTP header?
Preventing Brute-Force Attacks:
There is a vast array of tools and strategies available to help you do this, and which to use depends entirely on your server implementation and requirements.
Without using a firewall, IDS, or other network-control tools, you can't really stop a DDOS from, well, denying service to your application. You can, however, modify your application to make a brute-force attack significantly more difficult.
The standard way to do this is by implementing a lockout or a progressive delay. A lockout prevents an IP from making a login request for X minutes if they fail to log in N times. A progressive delay adds a longer and longer delay to processing each bad login request.
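A minimal in-memory sketch of both ideas (the class name and thresholds are made up for illustration; in production you would also want eviction and persistence):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FailureTracker {

    private static final int LOCKOUT_THRESHOLD = 5;              // N failures...
    private static final long LOCKOUT_MILLIS = 10 * 60 * 1000L;  // ...locks the IP for X minutes

    private final ConcurrentMap<String, Integer> failures = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Long> lockedUntil = new ConcurrentHashMap<>();

    public boolean isLockedOut(String ip) {
        Long until = lockedUntil.get(ip);
        return until != null && until > System.currentTimeMillis();
    }

    // Record a bad request; apply a progressive delay and possibly a lockout.
    public void recordFailure(String ip) throws InterruptedException {
        int count = failures.merge(ip, 1, Integer::sum);
        if (count >= LOCKOUT_THRESHOLD) {
            lockedUntil.put(ip, System.currentTimeMillis() + LOCKOUT_MILLIS);
        }
        // Progressive delay: each additional failure makes the response slower.
        Thread.sleep(Math.min(5000, count * 250L));
    }

    public void recordSuccess(String ip) {
        failures.remove(ip);
        lockedUntil.remove(ip);
    }
}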
If you're using Tomcat's authentication system (i.e. you have a <login-config> element in your web.xml), you should look at Tomcat's LockOutRealm, which lets you easily lock out accounts once they make a number of bad authentication attempts.
If you are not using Tomcat's authentication system, then you would have to post more information about what you are using to get more specific information.
Finally, you could simply increase the length of your API keys. 64 bits seems like an insurmountably huge keyspace to search, but it is underweight by modern standards. A number of factors could contribute to making it far less secure than you expect:
A botnet (or other large network) could make tens of thousands of attempts per second, if you have no protections in place.
Depending on how you're generating your keys and gathering entropy, your de facto keyspace might be much smaller.
As your number of valid keys increases, the number of keys that need to be attempted to find a valid one (at least in theory) drops sharply.
Upping the API key length to 128 (or 256, or 512) bits won't cost much, and you'll tremendously increase the search space (and thus the difficulty) of any brute-force attack.
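Generating longer keys is cheap; for example, a 256-bit key from a cryptographically strong RNG (a sketch using only standard JDK classes):

import java.security.SecureRandom;

public class ApiKeyGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final char[] HEX = "0123456789abcdef".toCharArray();

    // Returns a random 256-bit key as a 64-character hex string.
    public static String newKey() {
        byte[] bytes = new byte[32]; // 32 bytes = 256 bits
        RANDOM.nextBytes(bytes);
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(HEX[(b >> 4) & 0xF]).append(HEX[b & 0xF]);
        }
        return sb.toString();
    }
}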
Mitigating DDOS attacks:
To mitigate DDoS attacks, however, you need to do a bit more legwork. DDoS attacks are hard to defend against, and it's especially hard if you don't control the network your server is on.
That being said, there are a few server-side things you can do:
Installing and configuring a web-application firewall, like mod_security, to reject incoming connections that violate rules that you define.
Setting up an IDS system, like Snort, to detect when a DDOS attack is occurring and take the first steps to mitigate it
See @Martin Muller's post for another excellent option, fail2ban
Creating your own Tomcat Valve, as described here, to reject incoming requests by their User-Agent (or any other criterion) as a last line of defense (a sketch follows this list)
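For example, a minimal sketch of such a valve (the class name and the blocked User-Agent value are made up; the valve would be registered in server.xml or in your context configuration):

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// Last-line-of-defense valve: drops requests whose User-Agent matches a
// blocklist before they ever reach the webapp.
public class BlockingValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        String ua = request.getHeader("User-Agent");
        if (ua == null || ua.contains("BadBot")) {
            response.sendError(403); // reject cheaply, skip the rest of the pipeline
            return;
        }
        getNext().invoke(request, response); // pass legitimate traffic on
    }
}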
In the end, however, there is only so much you can do to stop a DDoS attack for free. A server has only so much memory, so many CPU cycles, and so much network bandwidth; with enough incoming connections, even the most efficient firewall won't keep you from going down. You'll be better able to weather DDoS attacks if you invest in a higher-bandwidth internet connection and more servers, deploy your application on Amazon Web Services, or buy one of the many consumer and enterprise DDoS mitigation products (@SDude has some excellent recommendations in his post). None of those options are cheap, quick, or easy, but they're what's available.
Bottom Line:
If you rely on your application code to mitigate a DDoS, you've already lost.
If it's big enough, you just can't stop it alone. You can do all the optimisation you want at the app level, but you'll still go down. In addition to app-level security for prevention (as in FSQ's answer), you should use proven solutions and leave the heavy lifting to professionals (if you are serious about your business). My advice is:
Sign up for CloudFlare or Incapsula. This is day-to-day work for them.
Consider using AWS API Gateway as the second stage for your API requests. You'll get filtering, throttling, security, auto-scaling and HA for your API at Amazon scale. Then you can forward the valid requests to your machines (in or outside Amazon):
Internet --> CloudFlare/Incapsula --> AWS API Gateway --> Your API Server
My 0.02.
PS: I think this question belongs on Security SE.
Here are a couple of ideas. There are a number of other strategies as well, but this should get you started. Also realize that Amazon gets DDoS'd on a frequent basis, and their systems tend to have a few heuristics that harden them (and therefore you) against these attacks, particularly if you are using Elastic Load Balancing, which you should be using anyway.
Use a CDN: they often have ways of detecting and defending against DDoS. Akamai, Fastly, or Amazon's own CloudFront.
Use iptables to blacklist offending IPs. The more tooling you have around this, the faster you can block/unblock.
Use throttling mechanisms to prevent large numbers of requests.
Automatically deny requests that are very large (say, greater than 1-2 MB, unless you have a photo-uploading service or similar) before they reach your application.
Prevent cascading failures by placing a limit on the total number of connections to other components in your system; for example, don't let your database server become overloaded by opening a thousand connections to it. A sketch of the last two points follows this list.
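As a sketch of those last two points (a hypothetical servlet filter; the size limit and concurrency cap are arbitrary):

import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class OverloadGuardFilter implements Filter {

    private static final int MAX_BODY_BYTES = 2 * 1024 * 1024; // reject bodies over ~2 MB
    private final Semaphore inFlight = new Semaphore(200);     // cap concurrent requests

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse http = (HttpServletResponse) res;
        if (req.getContentLength() > MAX_BODY_BYTES) {
            http.sendError(413); // too large: reject before any real processing
            return;
        }
        if (!inFlight.tryAcquire()) {
            http.sendError(503); // shed load instead of letting the failure cascade
            return;
        }
        try {
            chain.doFilter(req, res);
        } finally {
            inFlight.release();
        }
    }
}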
The best way is to block access to your services entirely for those IP addresses that have failed, let's say, 3 times. This takes most of the load off your server, as the attacker gets blocked before Tomcat even has to start a thread for the request.
One of the best tools to achieve this is called fail2ban (http://www.fail2ban.org). It is provided as a package in all major Linux distributions.
What you have to do is basically log the failed attempts to a file and create a custom filter for fail2ban. Darryn van Tonder has a nice example of how to write your own filter on his blog: https://darrynvt.wordpress.com/tag/custom-fail2ban-filters/ A sketch of the logging side is shown below.
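On the application side, that just means writing one parseable line per failed attempt that includes the client IP; for example (a sketch using log4j; the logger name and the exact message/pattern layout are only an example for your fail2ban failregex to match):

import javax.servlet.http.HttpServletRequest;
import org.apache.log4j.Logger;

public class ApiKeyAuditLog {

    private static final Logger SECURITY_LOG = Logger.getLogger("security.apikey");

    // Writes one line per failure, e.g.:
    // 2016-01-12 10:15:03 WARN security.apikey - Invalid API key from 203.0.113.7
    public static void logInvalidKey(HttpServletRequest request) {
        SECURITY_LOG.warn("Invalid API key from " + request.getRemoteAddr());
    }
}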
If the DDoS attack is severe, application-level checks don't work at all. The entire bandwidth is consumed by the DDoS clients and your application-level checks are never even triggered; practically speaking, your web service does not run at all.
If you have to keep your application safe from severe DDoS attacks, you have no option but to rely on third-party tools by paying money. One of the clean-pipe providers (which forward only good traffic) that I can bank on from my past experience is Neustar.
If the DDoS attack on your website is mild, you can implement application-level checks. For example, the configuration below will restrict the maximum number of connections from a single IP, as quoted in Restrict calls from single IP:
# You can change this location
<Directory /home/*/public_html>
    MaxConnPerIP 1
    OnlyIPLimit audio/mpeg video
</Directory>
For more insight into DDoS attacks, visit the Wikipedia article. It provides a list of preventive and responsive tools, including firewalls, switches, routers, IPS-based prevention, DDoS-specific defences,
and finally
Clean pipes (all traffic is passed through a "cleaning center" or "scrubbing center" via various methods such as proxies, tunnels or even direct circuits, which separates out "bad" traffic (DDoS and also other common internet attacks) and only sends good traffic on to the server).
You can find 12 distributors of clean pipes listed there.
For a targeted and highly distributed DoS attack, the only practical solution (other than providing the capacity to soak it up) is to profile the attack, identify the 'tells', and route that traffic to a low-resource handler.
Your question mentions some tells: the requests are invalid (but presumably determining that is too costly), they originate from a specific group of networks, and presumably they occur in bursts.
In your comments you've told us at least one other tell: the User-Agent is null.
Without adding any additional components, you could start by tarpitting the connection: if a request matching the profile comes in, go ahead and validate the key, but then have your code sleep for a second or two. This will reduce the rate of requests from these clients at a small cost.
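A minimal servlet-filter sketch of that tarpit (the matching condition and the delay are placeholders for whatever profile you identify):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class TarpitFilter implements Filter {

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        // Profile from the question: requests with no User-Agent header.
        if (http.getHeader("User-Agent") == null) {
            try {
                Thread.sleep(2000); // tarpit: slow these clients down
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        chain.doFilter(req, res);
    }
}

Note that the sleep still ties up a server thread, so it only trades CPU work for idle waiting; it slows the attackers rather than freeing your capacity.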
Another solution would be to log failures matching the tells and use fail2ban to reconfigure your firewall in real time to drop all packets from the source address for a while.
No, it's unlikely you will be able to identify the app without getting hold of an affected device.

Emulating serial bus with JMS?

I'm currently working on a meter-bus project and my testing environment includes com0com, hub4com, RXTX, and a mix of real and virtual devices.
Since I've collected enough data, I want to move away from the serial stuff and go for a purely virtual TCP/IP testing environment.
So far I've written a small broker of my own which works fine for a small/tiny setup, but I'm planning a full-scale test and I don't want to reinvent the wheel. I thought of using JMS here, but I haven't done much Java work in the past 4 years, so I don't have a clue which provider to choose, or whether JMS is the right choice here at all.
Some numbers I came up with simulating 9600 baud (may not be accurate):
Devices : 100-250
Messages: 17000+ per sec
MsgSize : max. 300 byte , avg. about 40 byte
Rtt: max. 30 msec
Most providers can handle the message volume, but I'm unsure about the timing constraint. I hope someone can provide me with some reference figures. Please also take into consideration that I can lower the baud rate, which increases the RTT and lowers the message count.
Not meeting the RTT constraint would simply mimic faulty wiring in my case :)
I'm open to any suggestions, be they design/implementation hints or pointers to existing projects/software that fit this purpose.
As a provider you can use ActiveMQ: http://activemq.apache.org
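To get a first feel for the numbers, here is a minimal producer sketch against a local broker (it assumes the activemq-all jar on the classpath and a broker at tcp://localhost:61616; the queue name and message sizes are arbitrary, loosely based on the figures in the question):

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MeterBusProducer {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("mbus.device.42"); // arbitrary queue name
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // favor latency over durability

            byte[] frame = new byte[40]; // roughly the average message size from the question
            long start = System.currentTimeMillis();
            for (int i = 0; i < 17000; i++) {
                BytesMessage msg = session.createBytesMessage();
                msg.writeBytes(frame);
                producer.send(msg);
            }
            System.out.println("sent 17000 msgs in "
                    + (System.currentTimeMillis() - start) + " ms");
        } finally {
            connection.close();
        }
    }
}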

How to improve the performance of a stock data transfer application?

This is a question I have worked on for several years, but I still don't have a good solution.
My application has two parts:
The first part runs on a server called the "ROOT server". It receives the real-time stock data from HKEx (the securities and futures exchange in Hong Kong) and broadcasts it to 5 other child servers, appending a timestamp to each data item when broadcasting.
The second part runs on the "child" servers. They receive the stock data from the ROOT server, parse each item and extract the important information. Finally, they send it in a new text format to the clients. The clients may number from hundreds to thousands; they can register for certain stocks and get real-time information about them.
Performance is the most important thing. Over the past several years I have tried every solution I know to make it faster. "Faster" here means: the first part should receive and send the data to the child servers as fast as it can, and the child servers should receive, parse and send the data to the clients as fast as they can.
For now, when the data rate from HKEx is 200K and there are 5 child servers, the first application has about 10 ms of latency per data item on average. The second part is not easy to measure; it depends on the number of clients.
What I'm using:
OpenSUSE 10
Sun Java 5.0
Mina 2.0
The server hardware:
4-core CPU (I don't know the exact model)
4 GB RAM
I'm considering how to improve the performance.
Do I need to use a concurrency framework such as Akka?
Try another language, e.g. Scala? C++?
Use a real-time Java system?
Your advice...
I need your help!
Update:
The applications log some important information for analysis, but I haven't found any bottlenecks. HKEx will provide more data next year, and I don't think my application will be fast enough.
One of my customers has tested our application against another company's, and ours didn't have an advantage in speed. I just want to find a way to make it faster.
How the first application runs
The first application receives the stock data from HKEx and broadcasts it to several other servers. The steps are:
it connects to HKEx;
logs in;
reads the data. The data is in a binary format: each item has a 2-byte integer header giving the length of the body, followed by the body, then the next item (see the parsing sketch after these steps);
puts the items into an in-memory hashmap, keyed by the item's sequence number, with the byte array as the value;
logs the sequence number of each received item to disk, using log4j's buffered appender;
a daemon thread reads the data from the hashmap and inserts it into PostgreSQL every minute (this is just used to back up the data);
when clients connect to this server, it accepts them and tries to send all the data from the in-memory hashmap. I used a thread pool in MINA; the acceptor and the senders run in different threads.
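For reference, a sketch of reading that length-prefixed format (assuming the 2-byte header is an unsigned big-endian length; adjust if the feed uses a different byte order):

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class FrameReader {

    private final DataInputStream in;

    public FrameReader(InputStream stream) {
        this.in = new DataInputStream(stream);
    }

    // Returns the next item body, or null at a clean end of stream.
    // Assumed format: 2-byte unsigned big-endian length, then <length> body bytes.
    public byte[] nextFrame() throws IOException {
        int length;
        try {
            length = in.readUnsignedShort();
        } catch (EOFException eof) {
            return null; // stream ended between items
        }
        byte[] body = new byte[length];
        in.readFully(body); // blocks until the whole body has arrived
        return body;
    }
}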
I think the logic is very simple. However, when there are 5 clients, the transfer speed I measured is only 1.5M/s at most, while a trivial socket program I wrote in Java can reach 10M/s.
Actually, I've spent more than a year trying all kinds of solutions on this application, just to make it faster. That's why I feel desperate. Do I need to try a language other than Java?
About the 10 ms latency
When the application receives a data item from HKEx, I record a timestamp for it. When the ROOT server broadcasts the data to the child servers, it appends that timestamp to the data.
When a child server gets the data, it sends a message to the ROOT server to fetch the current timestamp, and then compares the two.
So, the 10 ms latency covers:
ROOT server got the data ---> child server got the data
child server sends a request for the ROOT server's timestamp ---> ROOT server receives it
But the second interval is so small that we can ignore it.
The first thing to do to find performance bottlenecks is to find out where most of the time is spent. A way to determine this is to use a profiler.
There are open-source profilers available, such as Eclipse TPTP (http://www.eclipse.org/tptp/), and commercial profilers such as YourKit Java Profiler.
One easy thing to do would be to upgrade the JVM to Java SE 6 or Java 7. General JVM performance improved a lot in version 6. See the Java SE 6 Performance White Paper for more details.
If you have checked everything and found no obvious performance optimizations, you may need to change the architecture to get better performance. This would obviously be most fruitful if you could at least identify where your application is spending its time; it sounds like there are several major components:
The HK Ex server (out of your control)
The network between the Exchange and your system
The "root" server
The network between the "root" and the "child" servers
The "child" servers
The network between "child" servers and the client
The clients
To know where to spend your time, money and energy, I'd at least want to see an analysis of those components, how long each component takes (min, max, avg), and what the specification is of each resource.
The easiest thing to change is hardware: bigger servers, more memory, or better bandwidth. Can you see whether any of those resources are constrained?
The next thing to look at is changing the communication protocol to be more efficient: how do clients receive the stocks? Can you reduce the data size? 1.5M/s for only 5 clients sounds like a lot...
Next, you might look at some kind of quality of service solution - provide dedicated hardware for "premium" customers, with reduced resource contention, more servers, more bandwidth - this will probably require changes to the architecture.
Next, you could consider changing the architecture: right now, your clients "pull" data from the child servers. You could, instead, "push" data out; that way, you shave off the polling interval on the client end.
At the very end of the list, I'd consider a different technology stack; Java is a fine programming language, but if absolute performance is a key priority, C/C++ is still faster. Clearly, that's a huge change, and a well-written Java app will be faster than a poorly written C/C++ app (and far more stable).
To trace the source of the delay, I would add timing data to your end-to-end process. You can do this using an external log, or by adding metadata to your messages.
What you want is a timestamp at key stages in your application; 3-5 stages are enough to start with. Normally I would use System.nanoTime() because I am looking for microsecond delays, but in your case System.currentTimeMillis() is likely to be enough, especially if you average over many samples (you will still get 0.1 ms accuracy on an average, with Ubuntu).
Compare the timestamps for the same message as it passes through your system and look for the stage with the highest average delay. Once you have found it, try breaking that interval into more stages to zoom in on the problem.
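A sketch of that kind of per-stage measurement (the class and stage names are placeholders; note that System.nanoTime() is only comparable within a single JVM, so for cross-server comparison you would carry currentTimeMillis() values in the message instead):

import java.util.LinkedHashMap;
import java.util.Map;

public class StageTimer {

    // stage name -> System.nanoTime() when the message reached that stage
    private final Map<String, Long> marks = new LinkedHashMap<String, Long>();

    public void mark(String stage) {
        marks.put(stage, System.nanoTime());
    }

    // Prints the delay between consecutive stages in milliseconds.
    public void report() {
        String previousStage = null;
        long previousTime = 0L;
        for (Map.Entry<String, Long> e : marks.entrySet()) {
            if (previousStage != null) {
                double ms = (e.getValue() - previousTime) / 1000000.0;
                System.out.printf("%s -> %s: %.3f ms%n", previousStage, e.getKey(), ms);
            }
            previousStage = e.getKey();
            previousTime = e.getValue();
        }
    }
}

Call mark() at each stage (e.g. "received-from-hkex", "parsed", "sent-to-child") and average the reported figures over many messages.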
I would analyse any stage which has an average delay of over 1 ms in your situation.
If clients are updating every minute, there might not be a good technical reason to do this, but you don't want to be seen as slow, or your traders to be at a disadvantage, even if in reality it won't make a difference.

Java reliable UDP

Please suggest a Java library that implements reliable UDP. It will be used by a game server to communicate with clients and with other servers.
PS: Maybe you can suggest a technology that would be more productive to work with for such a task (a game server)? But it must work on Linux.
Edit: It's an action game, so it needs to talk to the server as fast as possible.
Edit 2: I found ENet, which was used for an FPS game, but it's C++; will there be overhead if I call it many times a second?
These are the libraries/frameworks I know of that implement something like reliable UDP:
Mobile Reliable UDP (MR-UDP)
MR-UDP aims at providing reliable communication based on UDP from/to mobile nodes (MNs), with least possible overhead. It extends a Reliable UDP (R-UDP) protocol with mobility-tolerating features, such as the ability to handle intermittent connectivity, Firewall/NAT traversal and robustness to switching of IP addresses or network interfaces (e.g. Cellular to WiFi, and vice-versa).
UDT-Java
Java implementation of UDP-based Data Transfer (UDT)
UDT is a reliable UDP based application level data transport protocol for distributed data intensive applications over wide area high-speed networks. UDT uses UDP to transfer bulk data with its own reliability control and congestion control mechanisms. The new protocol can transfer data at a much higher speed than TCP does. UDT is also a highly configurable framework that can accommodate various congestion control algorithms.
JNetRobust
Fast, reliable & non-intrusive message-oriented virtual network protocol for the JVM 1.6+.
It resides between the transport and the application layer.
Characteristics:
reliability of transmitted data
received, unvalidated data is available immediately
its packets are bigger than UDP's packets, but smaller than TCP's packets
no flow control
no congestion control
Disclaimer: I'm the author of JNetRobust, it's new and still in alpha.
There is a Java implementation of the RUDP (Reliable UDP) protocol (RFC 908, RFC 1151):
http://sourceforge.net/projects/rudp/?source=dlp
You may find you don't need reliable messaging for all message types. For example, if you are repeatedly sending the status of things like players, and a few packets are lost it may not even matter.
There are reliable, high-performance UDP-based libraries which support Java. One of these is 29West's LBM. It is not cheap, because it is very hard to get this right. Even with a professional product you may need a dedicated network to minimize UDP loss.
For the purposes of a game, I suggest you use a JMS service like ActiveMQ, which runs wherever you can run Java. You should be able to send 10K messages per second with a few milliseconds of latency.
When people say something must be as fast as possible, this can mean just about anything. For some people 10 ms, 1 ms, 100 us, 10 us or 1 us is acceptable. Some network routers support passing packets with 600 ns latency. The lower the latency, the greater the cost and the greater the impact on the design. Assuming you need more speed than you actually do can impact the design and cost unnecessarily.
You have to be realistic, given that you have a human interface. A human cannot respond faster than about 1/20 of a second, or about 50 ms. If you keep the messaging to less than 5 ms, a human will not be able to tell the difference.
Libjitsi has SCTP over UDP, which breaks everything up into packets like UDP but guarantees reliable delivery, like TCP. See https://github.com/jitsi/libjitsi/blob/master/src/org/jitsi/sctp4j/Sctp.java
UDP is by definition not a reliable service. It does not guarantee a high quality of service. You can, however, use TCP.
