I'm developing a test app with GWT + Java App Engine, and the deploys are heavy and slow.
I have read about minimizing permutations and parallel GWT compilation, but my internet connection is not great and I suspect I am uploading heavy files to the App Engine server.
How can I optimize this? Can I check where the bottleneck is?
The reason I need several deployments is that I'm using Google APIs through OAuth and I can't set localhost as a callback (or can I?).
I am not entirely sure about your scenario, so I will try to guess your intentions.
For development purposes, you really should be working on the local server; it comes with all the APIs and stubs for things like user login, and it is instantaneous. Once you are happy with your local app and it is time to upload, if the App Engine overlord decides to take its time due to app size, a slow connection, a service outage, or a random act of deity, there is little you can do.
Considering that one doesn't deploy every hour, I think your time would be better spent on the app instead of on tuning the upload time.
I am assuming you are already following http://code.google.com/appengine/docs/java/gettingstarted/uploading.html
I personally have only dabbled with the Python version of App Engine; an upload may take a few minutes, but once it is complete, you are good to go.
Maybe you could get your local machine a DynDNS hostname and make it accessible from the internet?
I think what Bastian meant was as follows (assuming the dev server can actually serve domains; I am not sure about that):
Have your domain host (example.com) maintain an 'A' record pointing to your development machine's IP address, so that when you hit example.com, your dev machine responds as the server.
This means that if you have set up DNS records to point to ghs.google.com or similar, you will have to change them (DNS changes take a while to propagate, depending on the host).
Once you are happy and want to test on Google, you still have to upload before you can try it on appspot.com, and of course change the DNS entries again so that example.com is served from Google's servers.
Too much work in my opinion. Better to use the dev server on the local machine.
Have a break while you are uploading. Have a KitKat to kill time :)
I am using Spring Boot mail and ActiveMQ to build an email system. I followed this example project. Because our application's QPS is small, one server is enough to handle the requests. In the example project, ActiveMQ, the sender, and the receiver are all on the same server. Is this good practice for a small application, or should I put ActiveMQ, the sender, and the receiver on three separate machines?
It depends...
The size of the application is irrelevant. It depends more on your requirements for availability, scalability, and data safety.
If you have everything on the same machine, you have a single point of failure: if the machine crashes, you lose everything on it. But this setup is the simplest one (also for maintenance), and the chance that the server will crash is low. Modern machines are able to handle a big load.
If you have a really high load and/or a requirement for guaranteed delivery, you should use multiple systems, with producers that send messages to an ActiveMQ cluster (itself distributed over multiple machines) and consumers on more than one machine as well. Also use load balancers to connect to the machines.
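In the clustered case, the client side usually points at all brokers at once via ActiveMQ's failover transport instead of a single host. A minimal sketch (the broker hostnames are placeholders):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    // Sketch: let the JMS client fail over between the brokers of the cluster.
    // "broker-a" and "broker-b" are placeholder hostnames.
    public class ClusteredJmsConfig {
        public JmsTemplate clusteredJmsTemplate() {
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://broker-a:61616,tcp://broker-b:61616)?randomize=true");
            return new JmsTemplate(factory);
        }
    }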
You can also have a setup somewhere in between these two examples (simple and complex).
If you are able to reproduce all the messages (email messages in your case) and the load is not that high, I would advise you to keep it simple and put everything on the same machine.
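For that simple setup, the sender and the receiver can live in one Spring Boot application next to an embedded broker. A minimal sketch, assuming spring-boot-starter-activemq (and activemq-broker, for the embedded broker) is on the classpath; the queue name and class names are made up for illustration:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.jms.annotation.EnableJms;
    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.stereotype.Component;

    @SpringBootApplication
    @EnableJms
    public class MailQueueApplication {
        public static void main(String[] args) {
            SpringApplication.run(MailQueueApplication.class, args);
        }
    }

    @Component
    class EmailSender {
        private final JmsTemplate jmsTemplate;

        EmailSender(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        void queueEmail(String payload) {
            // Hand the email request to the (in-process) broker.
            jmsTemplate.convertAndSend("email.queue", payload);
        }
    }

    @Component
    class EmailReceiver {
        @JmsListener(destination = "email.queue")
        void onEmail(String payload) {
            // This is where the actual mail sending (JavaMailSender etc.) would happen.
            System.out.println("Sending email for: " + payload);
        }
    }

If this one process ever becomes a bottleneck, the same sender and receiver classes can be split into separate applications that point at an external broker, so the simple setup does not paint you into a corner.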
The short answer is: it depends. The long answer is: measure it. "Small application" is the wrong criterion. You can have both on the same server if that server has all the resources required by your application and the message queue broker, without impacting performance for end users.
I would suggest running performance tests against your own criteria and then deciding on your target environment setup.
The simplest setup is everything on the same box. If this one box has enough CPU and disk space, why not? One (performance) advantage is that nothing needs to go over the network.
If you are concerned about fault-tolerance, replicate that whole setup on a second machine.
We have a Cloud Foundry (Java) application running on IBM Bluemix and we are looking for a way to health-check it. We mainly want to monitor memory usage (both CF instance memory and JVM heap). We know that Auto-Scaling can do something similar, but we believe it only keeps memory usage for the most recent 2 hours. (Please correct us if we are misunderstanding.) We would prefer to monitor memory usage over at least the most recent 24 hours. Any suggestions or comments would be appreciated. Thank you.
From a platform standpoint, you don't have a lot of options:
You can configure an HTTP-based health check for your app. Instead of just monitoring the port your application is listening on, this will actually send an HTTP request and check that it gets a valid response. If it does not, your application is automatically restarted by the platform. This does not keep track of any of the metrics you listed; it is purely a check to determine whether your application is still alive.
Ex: cf set-health-check my-super-cool-app http --endpoint /health
https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html
You can connect to the firehose and pull metrics. This will include the container metrics for CPU, RAM, and disk usage. The firehose is just a method to obtain this information, though; the whole problem of storage and pretty graphs is one that you'd still need to solve.
The firehose plugin is the best example of doing this: https://github.com/cloudfoundry-community/firehose-plugin
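For the HTTP health check above to have something to hit, and to expose the heap numbers you care about, the application needs an endpoint such as /health. A minimal framework-free sketch using only the JDK; the JSON shape and the PORT handling are assumptions, not anything the platform mandates beyond binding to the assigned port:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.net.InetSocketAddress;

    public class HealthEndpoint {
        public static void main(String[] args) throws Exception {
            // Cloud Foundry tells the app which port to bind via the PORT env variable.
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/health", exchange -> {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                byte[] body = String.format(
                        "{\"status\":\"UP\",\"heapUsed\":%d,\"heapMax\":%d}",
                        heap.getUsed(), heap.getMax()).getBytes();
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }

The same endpoint can double as the target of cf set-health-check and as a scrape target for whatever APM or firehose-based tooling ends up storing the 24-hour history.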
Beyond the platform itself, you might want to look at an APM (application performance monitoring) tool. I'm not going to list examples here; there are many which you can find with a quick Internet search. Some even integrate nicely with the Java buildpack; those are listed here.
This is probably the solution you want as it will give you all sorts of metrics, including the ones you mentioned above.
Hope that helps!
I'm looking into synchronizing some devices, and I need a local solution that works without internet.
I have used System.currentTimeMillis(), but it is good only for a single device, as system clocks aren't perfectly in sync even with automatic time updates and the same time zone. So I was wondering what I can use, given that I cannot use an NTP server or the internet.
Is the fastest way to set up a server on a PC, connect it via cable to the router, and query it for its timestamp?
Or to use an Android device as the server and query that?
Is there any other solution? I'm open to anything within Java/C++ and WLAN/Wi-Fi (no internet).
Update:
I have been searching for some hours and found that many (almost everyone) have the same issue without NTP. Giving it more thought, I will just create that server solution on a LAN connection and fetch a timestamp. Still open for ideas though :)
You can implement all sorts of solutions with distributed memory, but it's exhausting, and you'll run into many different issues...
Synchronizing all servers with NTP servers is the way to go.
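If there is no internet and running a full NTP server on the LAN feels like overkill, the timestamp-server idea from the update above can be sketched roughly like this (plain TCP on the local network; the port and class names are made up):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal LAN time server: every connecting device receives the server's
    // current clock value and computes an offset against its own clock.
    public class LanTimeServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9999)) {
                while (true) {
                    try (Socket client = server.accept();
                         DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                        out.writeLong(System.currentTimeMillis());
                    }
                }
            }
        }
    }

    // On each device: estimate the clock offset, compensating for round-trip latency.
    class LanTimeClient {
        static long offsetToServer(String host) throws Exception {
            long before = System.currentTimeMillis();
            try (Socket socket = new Socket(host, 9999);
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                long serverTime = in.readLong();
                long after = System.currentTimeMillis();
                // Assume the reply took half the round trip (the same idea NTP uses).
                return serverTime + (after - before) / 2 - after;
            }
        }
    }

Devices then add the returned offset to their own System.currentTimeMillis() instead of touching the system clock; the error is roughly bounded by half the network round-trip time, which is far better than unsynchronized clocks but still no substitute for real NTP.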
I am building a constantly running RESTful (well, just client-server) mobile app which should always be connected to the internet on my Android device.
I'd like to configure the network threshold so that when the app goes into an idle state, instead of pinging the server from the Android device (which runs on battery all day long) every 50 ms, say, it would ping it every second (1000 ms).
After lots of digging (and after looking at some config file I once saw on one of IBM's documentation pages), I came across Java Mission Control (JMC), but I could not find anywhere to configure anything relevant to these parameters (not that I have fully understood what JMC can configure in general...).
How would you save battery life in such a scenario with constant cellular data/Wi-Fi usage? Maybe praying for mercy can help...
Can I indeed approach it through some Java Mission Control (JMC) configuration?
Java Mission Control doesn't configure Java applications; it just collects data about their behavior. http://www.oracle.com/missioncontrol
I found GCM (Google Cloud Messaging), which allows you to send notifications to different platforms, iOS and Android alike. It lets the server take the initiative and establish the connection, so the app no longer has to poll. By default, the messaging API won't kill your battery.
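If some polling still cannot be avoided, widening the interval once the app goes idle (the 50 ms vs. 1000 ms idea from the question) is plain application code rather than anything JMC can configure. A rough sketch with hypothetical class and method names:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Rough sketch: poll aggressively while the app is in the foreground and
    // back off to a slow interval when it goes idle. All names are made up.
    public class AdaptivePoller {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final AtomicBoolean idle = new AtomicBoolean(false);

        public void start() {
            scheduler.schedule(this::pollOnce, 50, TimeUnit.MILLISECONDS);
        }

        private void pollOnce() {
            pingServer();
            // 50 ms while active, 1000 ms once the app has gone idle.
            long nextDelay = idle.get() ? 1000 : 50;
            scheduler.schedule(this::pollOnce, nextDelay, TimeUnit.MILLISECONDS);
        }

        public void onAppBackgrounded() { idle.set(true); }
        public void onAppForegrounded() { idle.set(false); }

        private void pingServer() {
            // Placeholder for the actual HTTP call to the backend.
        }
    }

Even with this backoff, every ping still wakes the radio, so push messaging as described above remains the battery-friendly option whenever the server can initiate the exchange.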
We are trying to move one of our web services (Java) from a development server to the cloud. Here are the details:
There is a PHP front-end connecting to a Java-based web service that is connected to a MySQL database (all requests to the database are sent from the web service; the PHP part communicates with the Java back-end only and has no direct connection to the database).
Start Point
Dev Server - CentOS (cPanel), 765MB-1.5GB RAM, 4CPU, Tomcat 7
*the software is running fast, no speed issues, logs show normal CPU and memory usage
Scenario #1
PHP front-end on Elastic Beanstalk and Java web-service with database on Elastic Beanstalk
*the software is about 80% slower, logs show normal CPU and memory usage
Scenario #2
PHP front-end on VPS (same company/location with Jelastic) and Java web-service with database on Jelastic
*the software is about 70% slower, logs show normal CPU and memory usage
Scenario #3
PHP front-end on VPS, Java web-service with database on Elastic Beanstalk and Jelastic (switching)
*the software is about 70-80% slower, logs show normal CPU and memory usage on both cloud environments
What I figured out is that no matter where the PHP front-end is located, it loads fast, so there is nothing to look for there.
As soon as the Java back-end is moved from the VPS to the cloud (it doesn't matter whether Amazon or Jelastic), the whole application slows down dramatically. Based on the logs, and since we tried two providers, this doesn't seem like a resource issue.
It cannot be a connection issue either, since we tried having the PHP and Java parts in the same environment (Scenario #1).
So it is either the Java web service itself slowing down dramatically (for an unknown reason, as the logs show low resource usage) or the connection between the Java application and the database (which I doubt, since in the first scenario all three components are on Amazon, in the same environment and location).
Anyone ever had such an issue before? Any ideas? Thank you!
(note, I have zero experience with cloud hosting)
It might be related to specific parameters in the configuration files, mostly for the DB. Please double-check that they are the same in each test.
Also, it is not clear how you measure performance and what "slower" exactly means. And you have not specified the size of the resources on Jelastic and EB; please double-check that the resources are equal as well.
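To make "slower" concrete, a quick timing loop against one representative web-service endpoint, run from the front-end host in each setup, gives directly comparable numbers. A throwaway sketch (the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Throwaway check: measure end-to-end response time of one web-service call
    // from the machine hosting the PHP front-end. The URL is a placeholder.
    public class ResponseTimeCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://backend-host:8080/service/ping");
            for (int i = 0; i < 10; i++) {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.getResponseCode();   // forces the request to complete
                conn.disconnect();
                long millis = (System.nanoTime() - start) / 1_000_000;
                System.out.println("request #" + i + ": " + millis + " ms");
            }
        }
    }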
For high performance Java cloud backend, you can try Jelastic implementation by Elastx - see the performance research that CloudSpectator did on them (they also used Amazon and Rackspace cloud in the study): http://blog.jelastic.com/wp-content/uploads/2013/09/Elastx-Fueld-by-SolidFire-9-5-13+Jelastic.pdf
Also, I do not know who your current Jelastic provider is, but if you contact them by clicking Help / Contact Support in Jelastic dashboard, I am sure that they will be happy to troubleshoot the issue! If this does not help - please ping me offline.
What you are measuring is CPU and memory. Since both give normal results and your application communicates over the network, I'd suspect network latency to be the culprit. The next thing to look into would be, for example, disk I/O performance, which can slow your application down like a handbrake.
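Since the suspicion here and in the other answer falls on the hop between the Java service and MySQL, a throwaway JDBC round-trip timing, run from inside each environment the web service is deployed to, narrows it down quickly. The connection details below are placeholders, and a MySQL JDBC driver is assumed on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Throwaway check: time a trivial query from inside the environment where
    // the Java web-service runs, to see how much of the slowdown is the DB hop.
    // Host, credentials, and database name are placeholders.
    public class DbLatencyCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mysql://db-host:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement()) {
                for (int i = 0; i < 10; i++) {
                    long start = System.nanoTime();
                    stmt.executeQuery("SELECT 1").close();
                    long micros = (System.nanoTime() - start) / 1_000;
                    System.out.println("round trip #" + i + ": " + micros + " µs");
                }
            }
        }
    }

If those round trips are an order of magnitude higher in the cloud environments than on the VPS, the slowdown is the service-to-database path rather than the Java code itself.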