Assume I have two OSGi services. One of them is an in-memory cache of DB records. The other is a set of CRUD operations on these DB records. During a modification I would like to rebuild the existing cache. How can one service force another to be MODIFIED? Something like sending an org.osgi.framework.ServiceEvent.MODIFIED event.
(Please note that this is a simplified example of a real business case, and I don't really expose the cache as a service.)
UPDATE, to make the question more clear: I need exactly the same function that ServiceRegistration#setProperties provides. Unfortunately, a ServiceRegistration should not leave the bounds of its bundle.
Why do you want to solve this using services?
Simply send an event using EventAdmin from the CRUD bundle that says the data has been modified, so the cache can listen for these events and act accordingly. The advantage of the eventing solution is that the CRUD service does not really have to know there is a cache; it just sends the event to whoever is interested.
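A minimal sketch of that approach, assuming Event Admin is available; the topic name "com/example/records/MODIFIED" and the property key are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class RecordCrudService {

    private final EventAdmin eventAdmin; // injected, e.g. via Declarative Services

    public RecordCrudService(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    public void update(long recordId) {
        // ... perform the actual DB modification ...

        // Asynchronously notify whoever is interested; the CRUD bundle
        // neither knows nor cares that one of the listeners is a cache.
        Map<String, Object> props = new HashMap<>();
        props.put("recordId", recordId);
        eventAdmin.postEvent(new Event("com/example/records/MODIFIED", props));
    }
}
```

The cache bundle then registers an EventHandler service with the event.topics property set to that topic and evicts or rebuilds the affected entries in handleEvent().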
Please, please don't try to do this.
Only the provider bundle for the service knows what implementation is behind it... that is why only the provider has access to the registration details.
The cache provider should detect for itself whether the underlying data has changed, and refresh the cache appropriately. No other bundle(s) can do this because they have no idea where the cache provider gets its data, they can only see the public service interface.
I have the following situation: a microservice architecture with an API gateway and multiple downstream services. Some of these services have an independent session, which causes my system to throw an expired-session exception on random service calls.
Since we cannot rewrite these services from scratch, we started by introducing Hazelcast so that all services can share the same session.
The problem now is that when a service writes an object of a class that the other services don't have on their classpath, a deserialization exception is thrown.
To solve this I was thinking that if only the attributes that a service actually accesses were deserialized, I could probably avoid the exception and everything should work fine.
Do you know if this is at all possible with Spring Session, or can you suggest another solution that would allow me to solve the initial problem?
Here is a sample project that reproduces my problem: https://github.com/deathcoder/hazelcast-shared-session
I believe I got what's happening: by default, Spring Session Hazelcast stores session updates locally until the request completes; when the request completes, before returning the response, it sends everything to the cluster using an EntryProcessor. An EntryProcessor requires the object classes to be available on the member that stores the session record, and since the data is distributed, it is possible that some other member stores a session created on another instance. According to what you're saying, not all nodes are identical and they don't all contain all classes, which causes the serialization exception.
What you can do is use the User Code Deployment feature to deploy the missing classes to the other members: https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#member-user-code-deployment-beta
If you're changing the objects that you store in the session, you can set class-cache-mode to OFF so that deployed classes are not cached but sent with each operation.
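For illustration, a member-side sketch of both settings using the programmatic API (the feature was still beta in 3.11, so treat this as a sketch to verify against your version):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.UserCodeDeploymentConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SessionMember {
    public static void main(String[] args) {
        Config config = new Config();
        config.getUserCodeDeploymentConfig()
              .setEnabled(true)
              // OFF: don't cache deployed classes on the member; send them
              // with each operation instead, which is safer when the
              // session classes keep changing.
              .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.OFF);
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}
```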
Please try & let me know if this solves your problem.
I would try to avoid sessions in the API layer in the first place. They scale poorly. And synchronizing sessions is even less scalable.
Try using access tokens instead, for example JWTs. A token should contain enough user-identity information to load the resources necessary to process the transaction (and those resources can be cached).
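A minimal sketch of the idea, using the jjwt library (my choice for illustration; the claim names and secret handling are assumptions):

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenAuth {

    // Shared HMAC secret; in practice this comes from configuration.
    private static final String SECRET = "change-me";

    // Issued by the gateway after login; carries the identity that
    // downstream services need, so no shared session is required.
    public static String issue(String userId, String role) {
        return Jwts.builder()
                .setSubject(userId)
                .claim("role", role)
                .signWith(SignatureAlgorithm.HS256, SECRET)
                .compact();
    }

    // Each downstream service validates the token locally.
    public static Claims verify(String token) {
        return Jwts.parser()
                .setSigningKey(SECRET)
                .parseClaimsJws(token)
                .getBody();
    }
}
```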
As for the other state in the session - microservices are self-contained from the process perspective, so all intermediate results should be persisted to the database. But of course I don't know the details of your particular application, so this is just a general thought.
We have been trying to implement transactions with POP3 and came across the documentation on transaction synchronization in release 4.2.0.RELEASE:
http://docs.spring.io/spring-integration/docs/latest-ga/reference/html/mail.html#mail-tx-sync
But there they iterate through the folder's mails to delete a particular message before committing the transaction. Is there any implicit way to delete the mails by id, or does Spring Integration provide any sync factory to handle the email transaction internally?
Email is not transactional; the cited documentation shows the ability to synchronize some action when a transaction commits. But the action taken on a non-transactional resource is not really transactional.
Since the framework can't anticipate what a user might want to do, it provides nothing other than the hooks to enable such user actions.
The documentation simply shows one such action that might be taken; another might be to move the email to another folder (when using IMAP).
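As a sketch of such a hook in plain Java (assuming you have the javax.mail Message at hand inside the transaction; this is one possible user action, not something the framework does for you):

```java
import javax.mail.Flags;
import javax.mail.Message;
import javax.mail.MessagingException;

import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class MailCleanup {

    // Register from code running inside the transaction; afterCommit()
    // fires only if the transaction actually commits.
    public static void deleteAfterCommit(final Message message) {
        TransactionSynchronizationManager.registerSynchronization(
                new TransactionSynchronizationAdapter() {
                    @Override
                    public void afterCommit() {
                        try {
                            // Mark deleted; the message is expunged when the
                            // folder is closed. The mail store itself is not
                            // transactional, so this cannot be rolled back.
                            message.setFlag(Flags.Flag.DELETED, true);
                        } catch (MessagingException e) {
                            throw new IllegalStateException(e);
                        }
                    }
                });
    }
}
```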
We are running 4 instances of Tomcat in 2 geo-locations behind a DNS-based load balancer. We provide some export services whose tasks require a lot of time to complete, so the user typically submits a request to one of our servers, and that server is responsible for the whole processing.
The problem shows up when the user later requests progress information from a randomly chosen instance.
I was thinking about sharing progress information across all instances, the same way Spring authentication is shared using Redis. I've tried using a Spring session bean with an AOP proxy for sharing the progress information, but it seems to be a bad idea. Some research and debugging showed that the bean is stored in Redis and can be accessed by all instances, but its state is no longer updated. I believe that's because the thread cannot update the session information after the original request has returned to the caller.
Another solution I could think of is to use our MySQL database for storing such information, but I'm afraid of the heavy load caused by continually updating the progress information.
Any ideas to solve my issue?
Thanks in advance,
Michal
OK, I've finally resolved my issue. I was thinking hard about persisting progress information in the session, which was not available when processing an asynchronous task, and I completely overlooked the fact that it is not a good idea :-)
My new solution is as simple as it could be. A unique task id is generated when the user requests task processing and is returned to the client. Progress information is continuously updated in Redis under that task key, so the task id can later be used when the client requests the task state. There is no need to involve the session, because the Redis instances are kept in sync by replication anyway.
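A sketch of that design, assuming Spring Data Redis and an illustrative "task:" key prefix (the TTL just keeps abandoned keys from piling up):

```java
import java.util.UUID;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.StringRedisTemplate;

public class ProgressTracker {

    private final StringRedisTemplate redis;

    public ProgressTracker(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Called when the export request is accepted; the id goes back to the client.
    public String startTask() {
        String taskId = UUID.randomUUID().toString();
        redis.opsForValue().set("task:" + taskId, "0", 1, TimeUnit.HOURS);
        return taskId;
    }

    // Called by the worker thread as the export progresses.
    public void updateProgress(String taskId, int percent) {
        redis.opsForValue().set("task:" + taskId, String.valueOf(percent), 1, TimeUnit.HOURS);
    }

    // Any instance can answer the client's status request.
    public String getProgress(String taskId) {
        return redis.opsForValue().get("task:" + taskId);
    }
}
```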
Thanks everyone for comments!
Regards,
Michal
What is the best practice to validate whether a user can be authenticated against a MarkLogic server (version 7.0.4) using the Java Client API (2.0.4), for a login dialog securing a Spring web application?
With my current approach (see the source code in the gist) I implement an AbstractUserDetailsAuthenticationProvider from Spring Security (the "classical" approach with HTTP sessions) in which I create a MarkLogic DatabaseClient instance, after which a simple query (testQuery, L. 46 in MarkLogicConnections) is executed to see whether a result can be retrieved. From that result it is decided whether the login is granted or not.
I am wondering whether a more elegant solution exists, but I couldn't find anything in the MarkLogic documentation.
You could use that opportunity to retrieve any user-specific data you're storing in the database.
If that isn't desirable, maybe there's no need to verify the user credentials at all? You could let that happen lazily on the first necessary query. And you should be prepared to handle database errors everywhere, in any case.
If you do need a non-lazy verification and don't want any data, that call to suggest() might be more expensive than you'd like. If so, you might consider other options. A call to getErrorFormat ought to be fairly cheap. Opening a transaction and then rolling it back should be cheap too, but it requires the rest-writer or rest-admin role. If nothing else works, you could write an extension that implements a no-op XQuery, probably just ().
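For example, the open-then-rollback option could look roughly like this (a sketch against the pre-4.x Java Client API; which exception a rejected login surfaces as is an assumption to verify):

```java
import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.DatabaseClientFactory.Authentication;
import com.marklogic.client.FailedRequestException;
import com.marklogic.client.Transaction;

public class CredentialCheck {

    // Returns true if MarkLogic accepts the credentials. Opening and rolling
    // back a transaction is cheap, but requires rest-writer or rest-admin.
    public static boolean canAuthenticate(String host, int port,
                                          String user, String password) {
        DatabaseClient client = DatabaseClientFactory.newClient(
                host, port, user, password, Authentication.DIGEST);
        try {
            Transaction tx = client.openTransaction();
            tx.rollback();
            return true;
        } catch (FailedRequestException e) {
            // A 401 from the server means the credentials were rejected.
            return false;
        } finally {
            client.release();
        }
    }
}
```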
I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:
Clients <= HTTP => [ RESTlet <= HTTP => CouchDB ]
I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data.
My idea now is to provide a cache (e.g. an LRU cache via LinkedHashMap, sketched below) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes a password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data.
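For reference, the LinkedHashMap-based LRU cache I have in mind is tiny (wrap it with Collections.synchronizedMap for concurrent use):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU eviction order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cache is full.
        return size() > maxEntries;
    }
}
```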
Currently, I save the requested auth data in the cache and try to authenticate new requests against it. If authentication fails or no entry is available, I dispatch a GET request to my CouchDB storage in order to obtain the current auth data.
So in the worst case, users that have changed their data might still be able to log in with their old credentials. How can I deal with that?
Or what is a good strategy to keep the cache(s) up to date in general?
Thanks in advance.
To me it looks like you've grown far enough to use a "professional" cache solution (e.g. EHCache). All distributed caches support replicating and invalidating data among different nodes, so your problem is already solved.
A distributed in-memory cache like memcached might be what you are looking for. You can configure object age and cache size, and you can also remove specific objects from the cache explicitly (for example when the information becomes stale).
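A sketch with the spymemcached client (the key scheme and the 300-second TTL are assumptions): the TTL bounds how long stale credentials can survive, and the password-change handler deletes the entry so every node misses and re-reads from CouchDB.

```java
import java.io.IOException;
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class AuthCache {

    private final MemcachedClient memcached;

    public AuthCache(String host, int port) throws IOException {
        this.memcached = new MemcachedClient(new InetSocketAddress(host, port));
    }

    // Cache auth data with a short TTL (seconds) to bound staleness.
    public void put(String user, String authData) {
        memcached.set("auth:" + user, 300, authData);
    }

    public String get(String user) {
        return (String) memcached.get("auth:" + user);
    }

    // Call this from the password-change handler so all nodes see the change.
    public void invalidate(String user) {
        memcached.delete("auth:" + user);
    }
}
```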