I would like to store some data in static variables and have all the web services deployed on the same JBoss 7 instance read it. I thought a standalone JBoss runs in a single JVM, and since all the services run in the same JVM they should be able to access a static variable.
However, I noticed that I get a NullPointerException when my web service tries to read the data.
This is my storage class:
public enum OneJvmCacheImpl {
    INSTANCE;

    private ConcurrentHashMap<String, Object> values = new ConcurrentHashMap<String, Object>();

    public <T> T get(String key, Class<T> type) {
        return type.cast(values.get(key));
    }
    ...
}
OneJvmCacheImpl.INSTANCE.get(...);
Can you please advise why I cannot access the values from my web service?
Thanks,
V.
If by deployments you mean separate WAR files, the static variables will not be visible to web services in other WAR files, because they are loaded by different classloaders. Each WAR has its own classloader, and hence its own "class instance" of the class. You could perhaps solve it by moving the class in question to a location that is shared among the deployments, but I would suggest solving it differently anyway, either with a database or a distributed cache.
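For context, within a single WAR (one classloader) the enum-singleton pattern from the question does work; here is a self-contained sketch, with a `put` method added purely for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;

// Enum singleton: exactly one INSTANCE per classloader, thread-safe by construction.
public enum OneJvmCache {
    INSTANCE;

    private final ConcurrentHashMap<String, Object> values = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        values.put(key, value);
    }

    public <T> T get(String key, Class<T> type) {
        // Returns null if the key is absent -- the likely source of the NPE
        // when a *different* classloader sees an empty copy of this map.
        return type.cast(values.get(key));
    }
}
```

The failure described in the question appears precisely when a second classloader loads its own copy of this enum class, with its own empty map.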
JBoss definitely won't let you share static variables across different deployments. That would be a huge security issue: what if I deployed a WAR next to yours and started changing your static variables?
You need to persist such values somewhere else, such as a database, memcached or a shared file.
Related
I have a common class in which I have a static map:
private static Map<String, List<Logging>> loggingResponseMap;

public static Map<String, List<Logging>> getLoggingResponseMap() {
    return Logging.loggingResponseMap;
}

public static void setLoggingResponseMap(Map<String, List<Logging>> loggingResponseMapObj) {
    Logging.loggingResponseMap = loggingResponseMapObj;
}
I set a value in this map in one microservice and try to access it in another microservice, but instead of the data I get null in the other microservice.
What could be the reason? Is it possible to access a static map across microservices?
Thanks
No, you will not be able to access any variables of one microservice — not just static variables — from another microservice. This is not specific to Spring Boot; it is the same for any Java program, because each microservice runs in its own JVM with its own memory. You will have to load the data into the variable separately in each service.
If you want to avoid repeating the data-loading code in each microservice, you can move it into a common module and add that module as a dependency of every microservice that needs the data.
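A sketch of that idea — a small shared class (all names here are hypothetical) packaged as a library that every microservice depends on. Note that each JVM still gets its own copy of the data; only the code is shared, not the memory:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical shared utility, added as a dependency to each microservice.
public class LoggingMapLoader {

    // Each microservice calls this at its own startup to populate its own map.
    public static Map<String, List<String>> load() {
        Map<String, List<String>> map = new HashMap<>();
        // In a real service this would read from a database, file, or config server,
        // so that all services load the *same* source data independently.
        map.put("service-a", List.of("INFO", "WARN"));
        return map;
    }
}
```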
I am using the PostContextCreate part of the life cycle in an e4 RCP application to create the back-end "business logic" part of my application. I then inject it into the context using an IEclipseContext. I now have a requirement to persist some business logic configuration options between executions of my application. I have some questions:
It looks like properties (e.g. accessible from MContext) would be really useful here, a straightforward Map<String,String> sounds ideal for my simple requirements, but how can I get them in PostContextCreate?
Will my properties persist if my application is being run with clearPersistedState set to true? (I'm guessing not).
If I turn clearPersistedState off then will it try and persist the other stuff that I injected into the context?
Or am I going about this all wrong? Any suggestions would be welcome. I may just give up and read/write my own properties file.
I think the Map returned by MApplicationElement.getPersistedState() is intended to be used for persistent data. This will be cleared by -clearPersistedState.
The PostContextCreate method of the life cycle is run quite early in the startup and not everything is available at this point. So you might have to wait for the app startup complete event (UIEvents.UILifeCycle.APP_STARTUP_COMPLETE) before accessing the persisted state data.
You can always use the traditional Platform.getStateLocation(bundle) to get a location in the workspace .metadata to store arbitrary data. This is not touched by clearPersistedState.
Update:
To subscribe to the app startup complete:
@PostContextCreate
public void postContextCreate(IEventBroker eventBroker)
{
    eventBroker.subscribe(UIEvents.UILifeCycle.APP_STARTUP_COMPLETE, new AppStartupCompleteEventHandler());
}

private static final class AppStartupCompleteEventHandler implements EventHandler
{
    @Override
    public void handleEvent(final Event event)
    {
        // ... your code here
    }
}
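If you do fall back to reading/writing your own properties file (as the question considers), the stock java.util.Properties API is enough. A minimal sketch, where the file path stands in for whatever location you choose (e.g. the one returned by Platform.getStateLocation):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Minimal persistence of String->String options via a properties file.
public class SettingsStore {
    private final Path file;

    public SettingsStore(Path file) {
        this.file = file;
    }

    public void save(Properties props) throws IOException {
        try (OutputStream out = Files.newOutputStream(file)) {
            props.store(out, "application settings");
        }
    }

    public Properties load() throws IOException {
        Properties props = new Properties();
        if (Files.exists(file)) { // first run: no file yet, return empty set
            try (InputStream in = Files.newInputStream(file)) {
                props.load(in);
            }
        }
        return props;
    }
}
```

Unlike the persisted state map, a file like this is untouched by -clearPersistedState.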
Can anyone suggest a design pattern to dynamically differentiate between memcached instances in Java code?
Previously my application had only one memcached instance, configured this way:
Step-1:
dev.memcached.location=33.10.77.88:11211
dev.memcached.poolsize=5
Step-2:
Then I access that memcached instance in code as follows:
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then I invoke that memcached client in code through the MemcachedInterface() above:
Step-3:
MemcachedInterface.getSoleInstance();
And then I use MemcachedInterface() to get/set data as follows:
MemcachedInterface.set(MEMCACHED_CUSTS, "{}");
resp = MemcachedInterface.gets(MEMCACHED_CUSTS);
My question: if I introduce a new memcached instance into our architecture, its configuration is done as follows:
Step-1:
dev.memcached.location=33.10.77.89:11211
dev.memcached.poolsize=5
So the first memcached instance is at 33.10.77.88:11211 and the second at 33.10.77.89:11211.
Up to this point it's fine, but how do I handle Step-2 and Step-3 in this case, to obtain the right MemcachedInterface dynamically?
1) Should I use one more interface, say MemcachedInterface2(), in Step-2?
Now the actual problem: my application has 4 web servers. Previously all of them wrote to MemcachedInterface(), but once I introduce a second memcached instance, WS1 and WS2 should write to MemcachedInterface() while WS3 and WS4 should write to MemcachedInterface2().
If I add a second interface as described above, that is a code burden: I would have to change every class used by WS3 and WS4 to MemcachedInterface2().
Can anyone suggest an approach with limited code changes?
xmemcached supports consistent hashing, which lets your client choose the right memcached server instance from the pool. You can refer to this answer for a bit more detail: Do client need to worry about multiple memcache servers?
So, if I understood correctly, you'll have to:
use only one memcached client in all your webapps;
since you have your own wrapper class (MemcachedInterface) around the memcached client, add a method to it that can add/remove a server on an existing client. See the user guide (scroll down a little): https://code.google.com/p/xmemcached/wiki/User_Guide#JMX_Support
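To illustrate the idea behind consistent hashing (this is a toy model, not xmemcached's actual implementation): keys and servers are hashed onto a ring, and each key goes to the next server clockwise, so adding a server only remaps a fraction of the keys:

```java
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

// Toy consistent-hash ring; real clients use stronger hash functions and weights.
public class HashRing {
    private static final int VNODES = 100; // virtual nodes per server smooth the distribution
    private final SortedMap<Long, String> ring = new TreeMap<>();

    public void addServer(String address) {
        for (int i = 0; i < VNODES; i++) {
            ring.put(hash(address + "#" + i), address);
        }
    }

    // Pick the first server at or after the key's position on the ring (wrapping around).
    public String serverFor(String key) {
        long h = hash(key);
        SortedMap<Long, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private static long hash(String s) {
        CRC32 crc = new CRC32();
        crc.update(s.getBytes());
        return crc.getValue();
    }
}
```

With a client that does this internally, all four web servers can share one pool of memcached instances instead of being hard-wired to a particular one.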
As far as I can see, you have duplicate code running on different machines as parallel web services, so I recommend the following to differentiate them:
Use a singleton facade service to wrap your memcached client. (I think you are already doing this.)
Use encapsulation: encapsulate your memcached client to decouple it from your code, e.g. interface L2Cache.
Give each server a name in a global variable. Assign those values via the JVM or your own configuration files, e.g. JVM: -Dcom.projectname.servername=server-1
Use this global variable as a parameter to configure your service's getInstance method.
public static L2Cache getCache() {
    if (System.getProperty("com.projectname.servername").equals("server-1"))
        return new L2CacheImpl(SERVER_1_L2_REACHIBILITY_ADDRESSES, POOL_SIZE);
    // ... handle the other servers here
    return null;
}
good luck with your design!
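The same idea can be written with a lookup table instead of an if-chain, so adding a server means adding one map entry rather than editing branching logic. A sketch (server names, addresses, and the class itself are hypothetical):

```java
import java.util.Map;

public class CacheConfig {
    // Hypothetical mapping from server name (set via -Dcom.projectname.servername=...)
    // to the memcached address that group of web servers should use.
    private static final Map<String, String> POOL_BY_SERVER = Map.of(
            "server-1", "33.10.77.88:11211",
            "server-2", "33.10.77.88:11211",
            "server-3", "33.10.77.89:11211",
            "server-4", "33.10.77.89:11211");

    public static String addressesFor(String serverName) {
        String addresses = POOL_BY_SERVER.get(serverName);
        if (addresses == null) {
            throw new IllegalArgumentException("Unknown server: " + serverName);
        }
        return addresses;
    }
}
```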
You should list all memcached server instances, space-separated, in your config.
e.g.
33.10.77.88:11211 33.10.77.89:11211
So, in your code (Step2):
private MemcachedInterface() throws IOException {
    String location = stringParam("memcached.location", "33.10.77.88:11211 33.10.77.89:11211");
    MemcachedClientBuilder builder = new XMemcachedClientBuilder(AddrUtil.getAddresses(location));
}
Then in Step-3 you don't need to change anything, e.g. MemcachedInterface.getSoleInstance();.
You can read more in memcached tutorial article:
Use Memcached for Java enterprise performance, Part 1: Architecture and setup
http://www.javaworld.com/javaworld/jw-04-2012/120418-memcached-for-java-enterprise-performance.html
Use Memcached for Java enterprise performance, Part 2: Database-driven web apps
http://www.javaworld.com/javaworld/jw-05-2012/120515-memcached-for-java-enterprise-performance-2.html
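For illustration, here is a plain-Java approximation of how such a space-separated location string splits into socket addresses (the real parsing is done inside xmemcached's AddrUtil.getAddresses, which is not shown here):

```java
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;

public class AddressParser {
    // Parses "host:port host:port ..." into a list of addresses.
    public static List<InetSocketAddress> parse(String location) {
        List<InetSocketAddress> result = new ArrayList<>();
        for (String entry : location.trim().split("\\s+")) {
            int colon = entry.lastIndexOf(':');
            String host = entry.substring(0, colon);
            int port = Integer.parseInt(entry.substring(colon + 1));
            // createUnresolved avoids a DNS lookup while parsing
            result.add(InetSocketAddress.createUnresolved(host, port));
        }
        return result;
    }
}
```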
I am trying to run Cucumber tests in a JRuby environment. I configured the Cucumber rake task to start an embedded Vert.x application server in another thread, but in the same JVM.
During the application startup, an embedded instance of Neo4j is initialized.
So finally, there are Cucumber, Vert.x and Neo4j all running in the same JVM (tada!).
At the end of some test scenarios, I would like to check whether certain data has been placed in the database. And since the Neo4j docs say...
The EmbeddedGraphDatabase instance can be shared among multiple threads. Note however that you can’t create multiple instances pointing to the same database.
...I try to get the already-initialized Neo4j instance and use it for these checks. To make that happen, I wrote the following factory:
public class ConcurrentGraphDatabaseFactory {

    private static HashMap<String, GraphDatabaseService> databases = new HashMap<String, GraphDatabaseService>();

    public static synchronized GraphDatabaseService getOrCreateDatabase(String path, String autoIndexFields) {
        System.out.println("databases: " + databases.toString());
        if (databases.containsKey(path)) {
            return databases.get(path);
        } else {
            final GraphDatabaseService database = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder(path).
                    setConfig(GraphDatabaseSettings.node_keys_indexable, autoIndexFields).
                    setConfig(GraphDatabaseSettings.node_auto_indexing, GraphDatabaseSetting.TRUE).
                    newGraphDatabase();
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    database.shutdown();
                }
            });
            databases.put(path, database);
            return database;
        }
    }
}
This factory should ensure that only one instance per path is initialized. But when getOrCreateDatabase is called the second time, the internal databases HashMap is still empty. That causes the code to initialize a second Neo4j instance on the same data, which fails with
NativeException: java.lang.IllegalStateException: Unable to lock store
It's all running in the same JVM, but it seems the different threads have separate memory.
What am I doing wrong here?
Are you sure you are running only one single Neo4j instance from all threads? Otherwise, several Neo4j instances will fight over locking the store files. Neo4j is thread-safe, but does not support several embedded instances on the same store. To scale it, you use the High Availability setup; see http://docs.neo4j.org/chunked/snapshot/ha.html
I've spent some time on the problem and finally found a solution.
The verticles in Vert.x create strictly isolated environments. This caused a second copy of my factory (see the code above) to be initialized, and the second factory tried to initialize a second Neo4j instance.
The solution was to move the Neo4j code into a dedicated storage verticle and write test code that accesses that verticle via the event bus.
I am a JSF beginner and have developed a mini application that is working fine. The problem is that when more than one user logs in, the application stops working: only one user can log in and work. What part of the application should I check? The only static variables in my application are the beans' managed names, as follows:
public static final String MANAGED_NAME = "catBean";
public static final String MANAGED_NAME = "appBean";
etc etc.
Do I need to change the static keyword? Or where else could the error be in my application? This is a very basic question, but my knowledge is still limited.
You need to use a session-scoped bean to maintain per-session data.
It seems you are using static fields, which are shared: they are not at object level but associated with the class itself.
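A minimal illustration of that difference — a static field has one copy per class, so every user (object) sees the same value, while an instance field has one copy per object:

```java
public class ScopeDemo {
    static String sharedName;   // one copy per class: every "session" sees the same value
    String perInstanceName;     // one copy per object: each session keeps its own

    public static void main(String[] args) {
        ScopeDemo userA = new ScopeDemo();
        ScopeDemo userB = new ScopeDemo();

        userA.perInstanceName = "Alice";
        userB.perInstanceName = "Bob";
        ScopeDemo.sharedName = "set by A";

        System.out.println(userA.perInstanceName); // Alice
        System.out.println(userB.perInstanceName); // Bob
        System.out.println(ScopeDemo.sharedName);  // set by A -- visible to everyone
    }
}
```

This is why state that belongs to one user must live in a session-scoped bean (one object per session), never in a static field.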