Restart terracotta after adding/modifying domain classes in Grails app - java

We have a Grails app, and we are using Terracotta for caching. We have noticed that every time we add fields to existing domain classes or add new domain classes, the app crashes with an "unexpected end of block of data..." error, and we need to restart Terracotta to get things running again.
The architecture we have is:
- Two servers behind a load balancer, running a grails app instance each
- A separate DB server
- Terracotta running on one of the web servers
Are we missing something there? Is there anything we can do to avoid having these downtimes on every domain modifying deployment?
UPDATE: Seems like a Terracotta issue: http://forums.terracotta.org/forums/posts/list/5065.page
Version 3.5 should fix this issue. Let's just wait and hope!
Thanks,
Iraklis

We use Terracotta for caching as well and have never gotten this error. We have a setup similar to yours, two web servers behind a load balancer, but with the difference that Terracotta runs on a separate set of servers: a cluster where one of the servers is the master. I'm not sure whether that is what makes the difference, but it's at least an idea to try.

Related

What is a better way to change variable in runtime server?

We maintain our server once a week.
Sometimes, the customer asks us to change some settings that are already cached in the server.
My colleague always writes some JSP code to change these settings, which are stored in memory.
Is this a good methodology to use?
If our project is not running in a web container, which tools could help me?
Usually, in my experience, server configuration is not stored only in the server's memory:
What happens after a configuration change if the server is restarted or goes down for some system reason?
What happens if you have more than one instance of the same server (in other words, a cluster of servers)?
So people usually opt for various "externalized configuration" options, ranging from file-based configuration plus a redeploy of the whole cluster on each configuration change, to configuration management servers (like Consul, etcd, etc.). There are also some solutions that came from (and are used in) the Java world: Apache ZooKeeper and Spring Cloud Config Server, to name a few; there are others. In addition, it is sometimes convenient to store the configuration in a database.
Now to your question: if your project is not running in a web container, you don't care that the configuration will "disappear" after a server restart, and you're not running a distributed cluster of servers, then using JSP indeed doesn't seem appropriate here.
Maybe you should take a look at JMX (Java Management Extensions), which has a built-in solution for this, so you could probably get rid of the web container (which your team doesn't seem to use for anything other than the JSP modifications you've described).
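As a minimal sketch of the JMX idea: expose the runtime-changeable settings as a Standard MBean, and any JMX client (jconsole, a management console, code) can change them while the server runs. The class name `Settings` and the `logLevel` attribute are illustrative, not from the question.

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean contract: the interface must be named <ClassName>MBean.
interface SettingsMBean {
    String getLogLevel();
    void setLogLevel(String level);
}

public class Settings implements SettingsMBean {
    private volatile String logLevel = "INFO";
    public String getLogLevel() { return logLevel; }
    public void setLogLevel(String level) { this.logLevel = level; }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=Settings");
        Settings settings = new Settings();
        server.registerMBean(settings, name);

        // Any JMX client (e.g. jconsole) can now change the setting at runtime:
        server.setAttribute(name, new Attribute("LogLevel", "DEBUG"));
        System.out.println(settings.getLogLevel()); // prints DEBUG
    }
}
```

With this in place the setting survives inside the process and is changeable without redeploying; persistence across restarts would still need one of the externalized options above.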
You basically need an in-memory cache. There are multiple solutions in other answers, which include creating your own implementation or using an existing Java library. You can also fetch the data from the database and add a cache over the database layer.
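A hand-rolled cache-aside layer over the database can be very small. This is a generic sketch (the `loader` function stands in for whatever database query you use; all names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal thread-safe cache-aside sketch: a map in front of a slower lookup.
public class SimpleCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for the DB query

    public SimpleCache(Function<K, V> loader) { this.loader = loader; }

    public V get(K key) {
        // Load and memoize on first access; later calls hit memory only.
        return cache.computeIfAbsent(key, loader);
    }

    public void invalidate(K key) { cache.remove(key); } // force a reload
}
```

A real library (Guava Cache, Ehcache, Caffeine) adds what this sketch lacks: size limits, expiry, and statistics.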

Is there a way to offload EclipseLink Cache in Glassfish for EJB to Redis or another outside server for load balancing?

I have an EJB packaged in an EAR and deployed to Glassfish.
Currently we just use Glassfish/Eclipselink for caching.
But our server is starting to come under heavy loads and I want to set it up behind a load balancer on AWS.
The problem is, I don't want my cache to be out of sync for automatically spun up instances. I want the instances to be completely automatic.
I know you can set Glassfish up in a cluster, but as far as I know that isn't automatic. I would have to manage it myself. I want to fully automate everything.
It would be awesome if the Glassfish instances could be completely independent of each other, and I could use Redis or another server like that to offload the cache. That way the cache would be in one place, the Glassfish instances could spin up and down automatically and it would never matter, I wouldn't have to register them with a Glassfish cluster. I could also use the same Redis cache for the front end of the application. Glassfish is running the business layer accessible by API calls. The front end web is running separately. I was going to set up a Redis cache for that also, but if they could both share the same cache, that would be awesome.
Any ideas?
I can only answer at a conceptual level, since I don't know the products involved in detail.
Regardless of whether you add another level of caching, you need to take care of data consistency within your application.
In a cluster setup, a local non-distributed cache is not a problem in itself: cache consistency coordination solves this, e.g. via JMS. You need to explore how to set up consistency coordination across your cluster.
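For EclipseLink specifically, that coordination is configured through persistence properties. A sketch of a JMS-based setup (the `eclipselink.cache.coordination.*` property names come from EclipseLink's cache-coordination support; the JNDI names below are placeholders you would replace with your own):

```xml
<!-- persistence.xml excerpt: JMS-based cache coordination (JNDI names are placeholders) -->
<properties>
  <property name="eclipselink.cache.coordination.protocol" value="jms"/>
  <property name="eclipselink.cache.coordination.jms.topic" value="jms/EclipseLinkTopic"/>
  <property name="eclipselink.cache.coordination.jms.factory" value="jms/EclipseLinkTopicConnectionFactory"/>
</properties>
```

With coordination in place, each Glassfish instance keeps its own cache but invalidation/update messages keep them in sync, so auto-scaled instances don't need to share a single external cache.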

Configuration management server for java enterprise application

We have a Java enterprise application that is supposed to run on a cluster of servers. The application consists of different WARs hosted by web containers running on these servers.
Now we have a lot of different configurations for this application, to name a few:
Relational DB host/port, credentials and so forth
Non Relational DB configurations - stuff like mongo, redis and so forth
Internal lookup configurations (how to obtain a web service in SOA architecture, stuff like that).
Logging-related configuration (log4j.xml)
Connection pooling configurations
Maybe, in the future, internal settings for smart load balancing or multi-tenancy support
Add to this multiple environments (test/staging/production/development and what not), with different hosts/ports for all the aforementioned examples, and we end up with dozens of configuration files.
As I see it, all these things are not directly related to the business layer of the application, but can rather be considered "generic" across applications, at least in the Java enterprise world.
So I'm wondering whether a solution exists for dealing with configuration management of this kind?
Basically I'm looking for the following abilities:
Start my WAR on any of my servers in the cluster, giving it the host/port of this configuration server.
The WAR will "register" itself and "download" all the needed configuration. Of course it will have adapters to apply this configuration.
This way, all my N WARs start in different JVMs across the cluster (it's a share-nothing architecture, so I consider them independent pieces of deployment).
Now, if I want to change some setting, like setting the log level of some logger to DEBUG, I go to the management console UI of this configuration server and apply the change.
Since this management center knows about all the WARs (they registered themselves), it should notify them of the setting change. I want to be able to change settings for one specific WAR or cluster-wide. If one of the web servers hosting the application gets restarted, it will ask for its configuration again and receive it, including the DEBUG level of that logger.
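The register-then-notify flow described above can be sketched in-process to make the contract concrete. This toy `ConfigServer` is entirely hypothetical; a real deployment would put something like ZooKeeper watches or Spring Cloud Config (with a bus for push notifications) behind the same shape:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

// Toy "configuration server": WARs register a listener, download the current
// settings, and get pushed every subsequent change. All names are hypothetical.
public class ConfigServer {
    private final Map<String, String> settings = new ConcurrentHashMap<>();
    private final List<BiConsumer<String, String>> listeners = new CopyOnWriteArrayList<>();

    // A WAR "registers" itself and downloads a snapshot of the settings.
    public Map<String, String> register(BiConsumer<String, String> onChange) {
        listeners.add(onChange);
        return Map.copyOf(settings);
    }

    // The management console applies a change; all registered WARs are notified.
    public void set(String key, String value) {
        settings.put(key, value);
        for (BiConsumer<String, String> listener : listeners) {
            listener.accept(key, value);
        }
    }
}
```

The restart case falls out of the same API: a rebooted WAR simply calls register() again and receives the latest snapshot, DEBUG log level included.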
I'm not looking for solutions based on deployment systems like Puppet, Chef and so forth, since I want to change my settings at runtime as well.
So far I couldn't find any decent ready-made solution for this. Of course I could craft something like that myself, but I don't want to reinvent the wheel, so I'm asking for advice here; any help will be appreciated.
Thanks in advance

Alternative for EJB timer for weblogic over a cluster

Recently I came across a requirement wherein I have to provide a custom jar to applications; this jar would contain threads that periodically query a database and fetch messages (records) for the particular application that uses it. So, for example, if app A uses this jar, the threads in the jar would fetch all messages only for app A.
The database is a shared db between apps.
This works fine for standalone apps, but for apps deployed over a cluster in an enterprise application server (WebLogic in my case) it fails, since all nodes in the cluster run in their own JVM and each one spawns a listener thread for the same app. So there can be conditions wherein two threads run at the same time, fetch the same records, and process them twice. I cannot use synchronization, since that would lead to performance bottlenecks.
I can't use singleton timer EJBs. I have heard about the WebLogic work manager, but there aren't sufficient examples on the net. I am using the Spring core framework.
If any of you could give any suggestions, it would be great.
Thanks.
First of all, please stop thinking in threads when you're dealing with Java EE; it's supposed to provide a higher level of abstraction for a higher-level mindset.
Java EE 7 provides ManagedScheduledExecutorService for exactly this kind of periodic work.
Quartz works great in that scenario: in clustered mode, only one node in your Java EE cluster will execute a given job.
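The key to Quartz clustering is a JDBC job store shared by all nodes; Quartz then uses row locks in that database so each trigger fires on exactly one node. A sketch of the relevant properties (the data source name `myDS` is a placeholder):

```properties
# quartz.properties excerpt: clustered mode over a shared DB-backed job store
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.jobStore.dataSource = myDS
```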

Java Google App Engine inconsistent data lose after restarting dev server

I am using Java GAE. So far, I'm just scaffolding my data objects, and I'm seeing an interesting issue.
The records that I am playing around with are updated properly as long as my dev server is running. The second my dev server gets restarted, I lose all of my changes.
That would be less alarming if I lost all of my records every time, but there was a point when my data did persist through a server restart. I'm worried that I would lose production data if I launched without fixing this potential bug.
Any idea on where I should look?
The datastore is persisted between instances as described here. The Java SDK doesn't have any functionality to clear the datastore for you, so you, or something working on your behalf (e.g., your build process), must be deleting it.
Sounds like a local development environment problem. Check the location of local_db.bin and ensure your build process does not touch the database file. Maybe the restart happens before the data has been persisted? The local development datastore is not as stable as local relational databases; e.g., after upgrading App Engine SDK versions, the old local datastore might not work at all.
How are you starting the dev server? Make sure you're not providing "c" or "clear" as a flag, which erases all the persisted data.
How long does it take before the dev server persists the data to disk? Do you see the log messages when the data is persisted?
