I am building a long-running RESTful (well, really just client-server) mobile app that should stay connected to the internet at all times on my Android device.
I'd like to configure the polling interval, so that when the app goes idle, instead of pinging the server from the mobile Android device (which runs on battery all day long) every 50 ms, say, it would ping it only every second (1000 ms).
After lots of digging (and after looking at some config file I once saw somewhere on IBM's doc pages) I think I came across something relevant: Java Mission Control (JMC). But I could not find where to actually configure anything related to these parameters (not that I really managed to understand what JMC can configure in general...).
How would you save battery life in such a scenario, with constant cellular data/WiFi usage? Maybe praying for mercy would help...
Can I indeed approach it through some Java Mission Control (JMC) configuration?
Java Mission Control doesn't configure Java applications; it just collects data about their behavior. http://www.oracle.com/missioncontrol
I found GCM (Google Cloud Messaging), which lets you send notifications to different platforms, iOS and Android alike. With notifications the server takes the initiative and establishes the connection, so the device does not have to poll. By default the messaging API won't kill your battery.
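A minimal sketch of the receiving side, assuming the Play Services GCM client library is on the classpath (manifest registration of the service and the registration-token handling are omitted; the payload key is a placeholder):

```java
import android.os.Bundle;
import com.google.android.gms.gcm.GcmListenerService;

// Sketch only: the server pushes when something changes, so the app does not
// have to poll every 50 ms and the radio can sleep between messages.
public class MyGcmListenerService extends GcmListenerService {

    @Override
    public void onMessageReceived(String from, Bundle data) {
        String payload = data.getString("message"); // "message" is a placeholder key
        // React to the push here, e.g. trigger a single sync with the server.
    }
}
```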
We have a Cloud Foundry (Java) application running on IBM Bluemix and we are looking for a way to health-check it. We mainly want to monitor memory usage (both the CF instance memory and the JVM heap). We know that Auto-Scaling can do something similar, but we believe it only keeps memory usage for the most recent 2 hours. (Please correct us if we are misunderstanding.) We would prefer to monitor memory usage over at least the most recent 24 hours. Any suggestions or comments would be appreciated. Thank you.
From a platform standpoint, you don't have a lot of options:
You can configure an HTTP-based health check for your app. Instead of just monitoring the port your application is listening to, this will actually send an HTTP request and check that it gets a valid response. If it does not, then your application will get automatically restarted by the platform. This does not keep track of any of the metrics that you listed. It's just purely a check to determine if your application is still alive.
Ex: cf set-health-check my-super-cool-app http --endpoint /health
https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html
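If you go this route, a minimal sketch of such an endpoint could look like the following (a hypothetical plain servlet; the JSON body is only for your own inspection, the platform just checks the status code):

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical /health endpoint: answers 200 and reports current JVM heap usage.
@WebServlet("/health")
public class HealthServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        resp.setContentType("application/json");
        resp.getWriter().write(
                "{\"status\":\"UP\",\"heapUsedMb\":" + usedMb + ",\"heapMaxMb\":" + maxMb + "}");
    }
}
```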
You can connect to the firehose and pull metrics. This will include the container metrics of CPU, RAM & disk usage. The firehose is just a method to obtain this information, though; the whole problem of storage and pretty graphs is one that you'd still need to solve.
The firehose plugin is the best example of doing this: https://github.com/cloudfoundry-community/firehose-plugin
Beyond the platform itself, you might want to look at an APM (application performance monitoring) tool. I'm not going to list examples here; there are many which you can find with a quick Internet search. Some even integrate nicely with the Java buildpack; those are listed here.
This is probably the solution you want as it will give you all sorts of metrics, including the ones you mentioned above.
Hope that helps!
I have C applications that will run on multiple machines at different sites.
Now I want to control and monitor these C applications. For that I am thinking about a Java web application using Servlets/JSP.
My idea is that the C applications will connect to the Java web application over TCP. In the web application I plan to implement a manager that communicates with the C applications over TCP. I will start the manager as a separate thread when the web application starts, and the manager will communicate with servlet requests via the context and session. So whenever the user does something in the browser, I want to use the manager's functionality on the server, with ServletContext and Session as the interface.
That is what I am thinking. Is there a better approach, or am I doing anything wrong? Can anyone suggest a better solution?
EDIT
Current workflow: whenever I need to start or stop a C application, I have to SSH into the remote machine with a PuTTY terminal, type long commands, and start or stop it. Whenever there is an issue, I have to scroll through long log files. There are a couple of other things, such as the live status of what the application is doing/processing every second, that I can't always write to a log file.
So I find this workflow difficult, and things like live status I can't monitor at all.
Now I want a web application as an interface to it. I can modify my C applications and implement the web application from scratch.
New workflow to implement: I want to start/stop the C applications from a web page. I want to view logs and live status reports/live graphs on the web page (monitoring what each C application is doing). I also want to monitor machine status on the web page.
I am thinking of building the web interface in Java using JSP/Servlets.
So I will modify my C applications so they can communicate with the web application.
Question:
I just need guidelines/best practices for building this new workflow.
EDIT 2
Sorry for the confusion between controller and manager; both are the same thing.
My thoughts:
The system will consist of C applications running at different sites, a Java controller and a Java web app running side by side in a Tomcat server, and a DB.
1) The C applications will connect to the controller over TCP, so the controller is the server and the C applications are the clients.
2) The C applications will be multithreaded: they will receive tasks from the controller and spawn a new thread for each task. When the controller says to stop a task, the C application will stop that task's thread. Additionally, the C applications will send work progress (logs) to the controller every second.
3) The controller receives task commands from the web application (both run in the same Tomcat server, in the same JVM instance), and the web application receives commands from the user over HTTP.
4) The controller will insert the work progress (logs) it receives every second from the C applications into the DB for later analysis (I need to consider whether it is a good idea to insert logs into a MySQL RDBMS; it may mean a lot of inserts, maybe 100 or 1000 every second, forever). The web application may also request the last 5 minutes of logs from the controller and send them to the user over HTTP. If the user is monitoring logs, the web application will have to retrieve logs from the controller every second and send them to the user over HTTP.
5) A user monitoring C application tasks will see progress in a graph, updated every second, plus text lines of logs for info/error events that may occasionally happen in the C applications.
6) There will be one C application per machine, which will execute any task the user sends from the web browser. The C application will run as a service on the machine: it will start on machine startup, connect to the server, and stay connected forever. It can sit idle if there are no tasks to perform.
It is a valid approach. I believe sockets are how most distributed systems communicate, and more often than not even different services on the same box communicate that way. I also believe that what you are suggesting for the Java web service is very typical and will work well (it will probably grow in complexity beyond what you are currently thinking, but the architecture you describe is a good start).
If your C services are made to also run independently of the management system, then you might want to reverse it and have the management system connect to the services (unless your firewall prevents it).
You will certainly want a small, well-defined protocol. If you are sending lots of fields, you could even make everything you send JSON or XML, since those already have parsers that validate the format.
Be careful about security! On the C side, ensure that you won't get any buffer overflows, and if you parse the information yourself, be strict about throwing away (and logging!) data that doesn't look right. On the Java side, buffer overruns aren't as much of a problem, but be sure to log packets that don't fit your protocol exactly, to detect both bugs and intrusions.
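For example, on the Java side the strict parsing/validation could look roughly like this (a sketch only, assuming newline-delimited JSON and the org.json library; the field names are placeholders):

```java
import org.json.JSONException;
import org.json.JSONObject;

// Sketch: parse one incoming line strictly and throw away (and log) anything suspicious.
public final class MessageValidator {

    private static final int MAX_LINE_LENGTH = 4096; // arbitrary sanity limit

    /** Returns the parsed message, or null if the line must be discarded. */
    public static JSONObject validate(String line) {
        if (line == null || line.isEmpty() || line.length() > MAX_LINE_LENGTH) {
            log("Dropping empty or oversized message");
            return null;
        }
        try {
            JSONObject msg = new JSONObject(line);
            if (!msg.has("type") || !msg.has("machineId")) { // placeholder required fields
                log("Dropping message without required fields: " + line);
                return null;
            }
            return msg;
        } catch (JSONException e) {
            log("Dropping malformed JSON: " + line);
            return null;
        }
    }

    private static void log(String message) {
        System.err.println(message); // replace with your real logger
    }
}
```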
Another solution that you might consider: since your systems already share a database, you could send commands and responses through the DB (assuming the commands/responses are not happening too often). We don't do this exactly, but we share a variable table in which we place name/value pairs indicating different aspects of our systems' performance and configuration (it's two-way). This is probably not optimal, but it has been amazingly flexible, since it allows us to reconfigure our system at runtime (the values are cached locally in each service and re-read/updated every 30 seconds).
I might be able to give you more info if I knew more specifics about what you expect to do: for instance, how often will your browser update its fields, what kind of command signals or data requests will be sent, and what kind of data do you expect back? You certainly don't have to post that here, but you must consider it; I suggest mocking up your browser page to start.
edits based on comments:
Sounds good, just a couple comments:
2) Any good database should be able to handle that volume of data for logging, but you may want to use a good cache on top of your DB.
5) You will probably want a web framework to render the graph and manage updates. There are a lot and most can do what you are saying pretty easily, but trying to do it all yourself without a framework of some sort might be tough. I only say this because you didn't mention it.
6) Be sure you can handle dropped connections and reconnecting. When you are testing, pull the plug on your server (at least the network cable) and leave it out for 10 minutes, then make sure that when you plug it back in you get the results you expect (should the client automatically reconnect? Should it hold onto the logs or throw them away? How long will it hold onto logs?).
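On that point, a minimal sketch of a reconnect loop with capped exponential backoff (names are placeholders; the same idea applies whichever side ends up playing the client):

```java
import java.io.IOException;
import java.net.Socket;

// Sketch: keep trying to reconnect, backing off up to one minute between attempts.
public class ReconnectingClient {

    public void run(String host, int port) throws InterruptedException {
        long backoffMs = 1000;
        while (!Thread.currentThread().isInterrupted()) {
            try (Socket socket = new Socket(host, port)) {
                backoffMs = 1000;            // reset the backoff after a successful connect
                handleConnection(socket);    // blocks until the connection drops
            } catch (IOException e) {
                // Server unreachable or connection lost: fall through and retry.
            }
            Thread.sleep(backoffMs);
            backoffMs = Math.min(backoffMs * 2, 60_000);
        }
    }

    private void handleConnection(Socket socket) throws IOException {
        // Speak your protocol here; decide what happens to logs buffered while offline.
    }
}
```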
You may want to build in a way to "reboot" your C services. Since they are started as a service, simply sending a command that tells them to terminate/exit will generally work, since the system will restart them. You may also want a little monitoring loop that restarts them under certain criteria (like they haven't gotten a command from the server for n minutes). This can come in handy when you're in California at 10am trying to work with a C service in Australia at 2am.
Also, consider that an attacker can insert himself between your client and server. If you are using an SSL socket you should be okay, but if it's a raw socket you must be VERY careful.
Correction:
You may have problems putting that many records into a MySQL database. If it is not indexed and you minimize queries against it, you may be okay. You can achieve this by keeping the last 5 minutes of all your logs in memory, so you don't have to index your database, and by grouping inserts or having a very well-tuned cache.
A better approach might be to forgo the database and just use flat log files, pre-filtered to what a single user might want to see. If the user asks for the last 5 minutes of "WARN" and "DEBUG" messages from a machine, you could just read the log file from that machine into memory, skipping all but the warn/debug messages, and display those. This has its own problems but should be more scalable than an indexed database. It would also allow you to zip up older data (that a user won't want to query against any more) for a 70-90% saving in disk space.
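A rough sketch of that filtering step, assuming each log line starts with an epoch-millisecond timestamp followed by the level (the format is made up; adapt the parsing to whatever your C apps actually write):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: stream one machine's log file and keep only WARN/DEBUG lines from the last 5 minutes.
public class LogTailFilter {

    public static List<String> lastFiveMinutes(String path) throws IOException {
        long cutoff = System.currentTimeMillis() - 5 * 60 * 1000;
        try (Stream<String> lines = Files.lines(Paths.get(path))) {
            return lines.filter(line -> {
                String[] parts = line.split(" ", 3); // "<millis> <LEVEL> <message>"
                if (parts.length < 3) {
                    return false; // not in the expected format, skip it
                }
                long timestamp;
                try {
                    timestamp = Long.parseLong(parts[0]);
                } catch (NumberFormatException e) {
                    return false;
                }
                String level = parts[1];
                return timestamp >= cutoff && ("WARN".equals(level) || "DEBUG".equals(level));
            }).collect(Collectors.toList());
        }
    }
}
```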
Here are my recommendations on your current design, given that you haven't defined a specific scope for this project:
Define a protocol for communication between your C apps and your monitor app. You probably don't need the same info from all the C apps in the same format, or some metrics matter more for some C apps than for others. I would recommend using plain JSON for this and defining a minimum schema to fulfill, so that the C side produces the data and the Java side consumes and validates it.
Use a database to store the results of monitoring your C apps. The generic option would be an RDBMS, probably open source like MySQL or PostgreSQL, or, if you (or your company) can get the licenses, SQL Server, Oracle or another one. This is for the case where you need to maintain a history of the results; you can clear the data periodically.
You probably want/need the latest monitoring results available in some sort of cache (because here performance is critical), so you may use an in-memory store like Hazelcast or Redis, or just a simple cache like EhCache or Infinispan. Storing the data in an external component is better than storing it in the plain ServletContext, because these technologies are built for concurrent, multithreaded access, which is not the primary use case for ServletContext but seems necessary for the monitor.
Separate the monitor that receives the data from the C apps from the web app. If the monitor fails or takes too long to perform some operation, the web application will still be available, without the overhead of receiving and managing the data from the C apps. On the other hand, if the web app becomes slow (due to problems in its implementation, or something that should be discovered using a profiler), you can restart it, and the monitor will keep gathering the data from the C apps and storing it in your data source.
For the threads in the monitor app, since it seems it will be based on Java, use ExecutorService rather than creating and managing the threads manually.
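Something along these lines (a sketch of the monitor's accept loop; the class name and pool size are placeholders to tune):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the ExecutorService owns the connection-handler threads instead of you
// creating and managing them manually.
public class MonitorServer {

    private final ExecutorService pool = Executors.newFixedThreadPool(50);

    public void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (!pool.isShutdown()) {
                Socket client = server.accept();
                pool.submit(() -> handleClient(client)); // one task per connected C app
            }
        }
    }

    private void handleClient(Socket client) {
        // Read the newline-delimited JSON from this C app, validate it and store it.
    }
}
```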
For this part:
User monitoring C application tasks, will see progress in graph, updated every second. Additionally text lines of logs of info/error events that may happen occasionally in C applications
You may use RxJava or another reactive programming model like the Play Framework to read the data continuously from the database (and the cache, if you use one) and update the view (JSP, Facelet, plain HTML or whatever you will use) directly for the users of the web app. If you don't want to use this programming model, then at least use a push technology like Comet or WebSockets. If this part is not that important, then use a simple refresh timer, as explained here: How to reload page every 5 second?
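If you go the WebSocket route, a minimal sketch of the server side, assuming the javax.websocket API that ships with recent Tomcat versions (the endpoint path and class names are placeholders):

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Sketch: browsers connect to /live-status and the monitor pushes every new
// progress/log entry to all of them, instead of the page polling each second.
@ServerEndpoint("/live-status")
public class LiveStatusEndpoint {

    private static final Set<Session> SESSIONS = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    /** Called by the monitor whenever a new progress/log entry arrives. */
    public static void broadcast(String jsonStatus) {
        for (Session session : SESSIONS) {
            if (session.isOpen()) {
                try {
                    session.getBasicRemote().sendText(jsonStatus);
                } catch (IOException e) {
                    SESSIONS.remove(session);
                }
            }
        }
    }
}
```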
For this part:
C applications will be per machine, which will execute any task user sends from web browser
You could reuse the same JSON protocol the C apps already use to talk to the monitor, plus another thread in each C app that translates the requested action and executes it.
We are trying to move one of our web-services (Java) to the cloud from a development server; here are the details:
There is a PHP front-end connecting to a Java-based web-service, which is connected to a MySQL database (all requests to the database are sent from the web-service; the PHP part communicates with the Java back-end only, with no direct connection to the database).
Start Point
Dev Server - CentOS (cPanel), 765MB-1.5GB RAM, 4CPU, Tomcat 7
*the software is running fast, no speed issues, logs show normal CPU and memory usage
Scenario #1
PHP front-end on Elastic Beanstalk and Java web-service with database on Elastic Beanstalk
*the software is about 80% slower, logs show normal CPU and memory usage
Scenario #2
PHP front-end on VPS (same company/location with Jelastic) and Java web-service with database on Jelastic
*the software is about 70% slower, logs show normal CPU and memory usage
Scenario #3
PHP front-end on VPS, Java web-service with database on Elastic Beanstalk and Jelastic (switching)
*the software is about 70-80% slower, logs show normal CPU and memory usage on both cloud environments
What I figured out: no matter where the PHP front-end is located, it loads fast, so there is nothing to look for there.
As soon as the Java back-end is moved from the VPS to the cloud (doesn't matter if Amazon or Jelastic), the whole software slows down extremely. Based on the logs and since we tried with two providers, this doesn't seem like a resource issue.
It cannot be a connection issue since we tried to have the PHP and Java in the same environment (Scenario #1).
It is either the Java web-service slowing down extremely (for an unknown reason, as the logs show low resource usage), or it could be the connection between the Java application and the database (which I doubt, since in the first scenario all three components are on Amazon, in the same environment and location).
Anyone ever had such an issue before? Any ideas? Thank you!
(note, I have zero experience with cloud hosting)
It might be related to specific parameters in configuration files, mostly for DB. Please double check that they are the same in each test.
Also, it is not clear how you measure performance and what "slower" exactly means. And you have not specified the size of the resources on Jelastic and EB. Please double-check that the resources are equal as well.
For a high-performance Java cloud backend, you can try the Jelastic implementation by Elastx; see the performance research that CloudSpectator did on them (they also used the Amazon and Rackspace clouds in the study): http://blog.jelastic.com/wp-content/uploads/2013/09/Elastx-Fueld-by-SolidFire-9-5-13+Jelastic.pdf
Also, I do not know who your current Jelastic provider is, but if you contact them by clicking Help / Contact Support in the Jelastic dashboard, I am sure they will be happy to troubleshoot the issue! If this does not help, please ping me offline.
What you are measuring is CPU and memory. Since both give normal results and your application is communicating over the network, I'd suspect network latency to be the culprit. The next thing to look into would be, for example, disk I/O performance, which can slow down your application like a handbrake.
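One quick way to confirm or rule out the network-latency theory is to time the round trips yourself from inside each environment. A rough sketch (the JDBC URL and credentials are placeholders, and the MySQL driver is assumed to be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: run this from the Java web-service host in each environment and compare the numbers.
public class DbLatencyProbe {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://db-host:3306/mydb"; // placeholder connection details
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            for (int i = 0; i < 10; i++) {
                long start = System.nanoTime();
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) {
                    rs.next();
                }
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("Round trip " + i + ": " + micros + " microseconds");
            }
        }
    }
}
```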
I'm developing a test app with GWT + Java App Engine, and the deployments are very heavy and slow.
I have read about minimizing permutations or parallelizing the GWT compilation, but my internet connection is not very good and I think I'm uploading heavy files to the App Engine server.
How can I optimize this? Can I check where the bottleneck is?
The reason I need several deployments is that I'm using Google APIs through OAuth and I can't set localhost as a callback (or can I?).
I am not entirely sure about your scenario, so I will try to guess your intentions.
For development purposes, you really should be working on the local server; it comes with all the APIs and stubs for things like user login and whatnot, and it is instantaneous. Once you are happy with your local app and it is time to upload, then if the App Engine overlord decides to take its time due to app size/slow connection/service outage/random act of deity, there is little one can do.
Considering that one doesn't deploy every hour, I think your time would be better spent on the app, instead of tuning the upload time.
I am assuming you are already following http://code.google.com/appengine/docs/java/gettingstarted/uploading.html
I personally have dabbled with App Engine, but only the Python version; it may take a few minutes, but once the upload is complete, you are good to go.
Maybe you could get your local machine a DynDNS hostname and make it accessible from the internet?
I think what Bastian meant was as follows (assuming the dev server can actually serve domains; I am not sure about that):
Have your domain host (example.com) maintain an 'A' record pointing to your development machine's IP address [hence when you browse to example.com, your dev machine responds as the server].
This means that if you have set up DNS records to point to ghs.google.com or whatever, you will have to change them (DNS records take a while to propagate, depending on the host).
Once you are happy and you want to test on Google, you still have to 'upload' before you can try it on appspot.com, and of course change the DNS entries again so that example.com works off Google's servers.
Too much work, in my opinion. Better to use the dev server on the local machine.
Have a break while you are uploading. Have a KitKat to kill time :)
I am working on an app and it is almost finished except for one thing: I don't know how to get the link speed and place it in the status bar. I am new to Java, so if somebody could help me I would be very grateful.
P.S. Sorry for bad English.
As the other repliers suggest, your question is not very clear. You could be referring to the link connection speed (i.e. up to 54 Mbps with good WiFi signal reception, or up to 7.2 Mbps with full-speed HSDPA), which depends on:
The network interface you are using at the time. Some phones allow tethering, which means you can have both WiFi and the mobile data link (GPRS/3G/HSDPA) active at the same time, or on automatic switch-over (if your WiFi connection drops, your phone will switch to the mobile network automatically, if that is activated).
The connection speed negotiated at the time, depending on signal quality, carrier network configuration (some carriers cap the maximum speed) and your mobile data contract (exceeding the monthly bandwidth quota normally means defaulting to GPRS speed).
In this case I am afraid there are no standard Java API methods to obtain it, but the Android API provides the needed functionality:
For the WiFi link speed, check WifiInfo.getLinkSpeed().
For the mobile data link, I am afraid you can only check TelephonyManager.getNetworkType() to determine the current mobile data link type. You should then approximate the actual speed from the link type (i.e. for GPRS up to 128 kbps, for EDGE up to 236.8 kbps, for 3G up to 2 Mbps, for HSDPA up to 7.2 Mbps). Take into consideration that this is only an approximation: you could be connecting using HSDPA, but your carrier might limit the top speed to 2 Mbps.
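A rough sketch combining both cases (it assumes the ACCESS_WIFI_STATE permission, and the mobile figures are only the per-network-type estimates mentioned above):

```java
import android.content.Context;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;
import android.telephony.TelephonyManager;

// Sketch: returns a human-readable link speed string you could show in the status bar.
public class LinkSpeedHelper {

    public static String describeLinkSpeed(Context context) {
        WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
        WifiInfo info = wifi.getConnectionInfo();
        if (info != null && info.getNetworkId() != -1) {
            // Negotiated WiFi link speed, reported in WifiInfo.LINK_SPEED_UNITS ("Mbps").
            return info.getLinkSpeed() + " " + WifiInfo.LINK_SPEED_UNITS;
        }

        TelephonyManager tel =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        switch (tel.getNetworkType()) {
            case TelephonyManager.NETWORK_TYPE_GPRS:  return "~128 kbps (GPRS)";
            case TelephonyManager.NETWORK_TYPE_EDGE:  return "~236.8 kbps (EDGE)";
            case TelephonyManager.NETWORK_TYPE_UMTS:  return "~2 Mbps (3G)";
            case TelephonyManager.NETWORK_TYPE_HSDPA: return "~7.2 Mbps (HSDPA)";
            default:                                  return "unknown";
        }
    }
}
```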
In the other case, where you mean the current (download/upload) data transfer speed, this is only available at a high level: you are actually measuring not the link speed but the speed between your phone and a server, which is determined not only by your link speed but also by many other factors (all the links between your phone and the server, the server itself, etc.). You could just measure "HTTP-level speed", meaning the HTTP data speed (leaving out the overhead traffic of the data packets), since normally only HTTP connections are supported in every scenario (your carrier could be hiding you behind a proxy that filters out everything but HTTP traffic).
If you are using API level 8, an interesting feature called TrafficStats is also available. This lets you know, at a low level, the packets and bytes sent/received by your phone over the mobile data link, which may offer just the information you were looking for (use these measurements together with elapsed times and you can easily measure the current/average data link speed).
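For example, a small sketch of that measurement using TrafficStats deltas (the class name is a placeholder; note that getMobileRxBytes() can return -1 on devices that don't support it):

```java
import android.net.TrafficStats;
import android.os.SystemClock;

// Sketch: call sampleMobileRxKbps() periodically to get the average mobile download
// speed since the previous call, in kilobits per second.
public class ThroughputSampler {

    private long lastBytes = TrafficStats.getMobileRxBytes();
    private long lastTime = SystemClock.elapsedRealtime();

    public double sampleMobileRxKbps() {
        long nowBytes = TrafficStats.getMobileRxBytes();
        long nowTime = SystemClock.elapsedRealtime();
        long deltaBytes = nowBytes - lastBytes;
        long deltaMillis = Math.max(1, nowTime - lastTime);
        lastBytes = nowBytes;
        lastTime = nowTime;
        return (deltaBytes * 8.0) / deltaMillis; // bits per millisecond == kbit/s
    }
}
```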
You cannot tell directly. You must ask the underlying operating system. For OS X you can parse the output from "/sbin/ifconfig" on the appropriate network port.
You can also write an extension using JNI and query the connection speed from C. That is just in case you don't want to parse the output of another application, but please keep in mind that this solution isn't portable.