I'm looking at using PooledConnectionFactory in a Tomcat application, where in a Tomcat POST handler I want to drop a message into a queue to be picked up by a single remote consumer. AMQ pools both Connection and Session objects, and I'm trying to understand when I should use one over the other.
The approach I'm considering is to have a single Connection and set MaximumActiveSessionPerConnection to match my Tomcat threads, and the POST handler would borrow and return Sessions from the connection. Does this sound reasonable, or are there advantages to pooling Connections instead?
If it matters, I'm not using Spring or other web app frameworks, just Tomcat. I'm persisting messages to disk in AMQ.
Both approaches should be functionally equivalent, and the difference in the code to do one vs. the other should be relatively small.
In terms of performance I don't think it will really matter, as your bottleneck will be on the consuming side rather than the producing side, since you have a single consumer and potentially many concurrent producers.
Personally, I would prefer letting the pool do all the work and just writing the application as if it is creating a connection and session every time it sends a message (which would obviously be a huge anti-pattern without a pool).
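To illustrate, here is a rough sketch of that "create everything per send" style against ActiveMQ's PooledConnectionFactory (the broker URL, queue name and session limit are illustrative, not from your setup):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class MessageSender {
    // created once at application startup and shared by all Tomcat threads
    private final PooledConnectionFactory pool;

    public MessageSender() {
        pool = new PooledConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        pool.setMaxConnections(1);                        // a single pooled Connection
        pool.setMaximumActiveSessionPerConnection(200);   // roughly match the Tomcat thread count
    }

    // called from the POST handler
    public void send(String body) throws JMSException {
        Connection connection = pool.createConnection();  // borrowed from the pool, not newly opened
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("work.queue"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);  // messages persisted on the broker
            producer.send(session.createTextMessage(body));
            session.close();                              // returned to the pool, not destroyed
        } finally {
            connection.close();                           // likewise returned to the pool
        }
    }
}
```

Written this way, switching between "one connection, many sessions" and "many connections" is just a matter of pool configuration rather than application code.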
I want to push realtime notifications (a DTO object for the logged-in user) to the client side by continuously querying the database. I am using Server-Sent Events (SSE) to achieve this. However, I am facing a few issues. I am using the EventSource API in JavaScript.
POLLING INSIDE AN INFINITE LOOP
Since my data lies in the database, I constantly need to run queries to fetch the latest entries, using something like executor.execute(() -> { while (true) { emitter.send(data); Thread.sleep(5000); } }) until the user logs out. (a) Querying the database in an infinite loop and (b) creating new ExecutorService objects are causing a JDBC pool exhaustion exception and ultimately freezing the application.
USING SPRING BOOT @Scheduled
This doesn't work either, as I need the logged-in user_id, which I can't get inside a @Scheduled annotated method using SecurityContextHolder.getContext().getAuthentication(), because this cron job is not initiated by the user.
Am I doing something wrong by choosing SSE instead of WebSockets, or is there a way to implement the server side for this particular use case?
Please help/guide me.
If you want to push events to your clients, you'd better have the event concept on your backend as well, rather than polling. If you want to poll your database, you'd better let the client do it. SSE or WebSocket does not matter in that decision.
CDI events may be a suitable solution for your need.
Create an EntityForLoggedInUsersChanged event class.
Inject an Event<EntityForLoggedInUsersChanged> in your services that alter entities related to logged-in users, and fire the event when they do so.
Create a service that @Observes those events, constructs the DTO that you want to push, obtains the channel to the relevant users, and pushes it.
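A rough sketch of that wiring (class, field and method names are illustrative; the lookup of the SSE channel for a given user is assumed to exist elsewhere in your code):

```java
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

// the event carrying just enough information to build the DTO later
public class EntityForLoggedInUsersChanged {
    private final Long userId;
    public EntityForLoggedInUsersChanged(Long userId) { this.userId = userId; }
    public Long getUserId() { return userId; }
}

// the service that alters entities related to logged-in users
class UserEntityService {
    @Inject
    private Event<EntityForLoggedInUsersChanged> changedEvent;

    void updateEntity(Long userId /* , ... */) {
        // ... persist the change ...
        changedEvent.fire(new EntityForLoggedInUsersChanged(userId));
    }
}

// the observer that builds the DTO and pushes it over the user's SSE channel
class NotificationPusher {
    void onChange(@Observes EntityForLoggedInUsersChanged event) {
        // build the DTO, obtain the SSE emitter/channel for event.getUserId(),
        // and push the DTO to that user
    }
}
```

With this in place there is no polling at all: a push happens only when something actually changed.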
Surprisingly, yes, this is a valid pattern.
Polling every 5 seconds might use fewer overall resources than having the database send push notifications (even assuming the DB supports that).
And compared to having the client make, say, an AJAX call every 5 seconds, which requires setting up a DB connection each time, it might also be more efficient (at the expense of keeping the SSE socket open all the time).
Creating new ExecutorService objects is causing JDBC pool exhaustion exception and ultimately freezing the application.
Is the pool exhaustion coming from having just one user, polling every 5 seconds? Or does it come from having lots of users, each keeping a database connection open?
If the latter, make the pool big enough to support the maximum number of simultaneously connected users you want to allow.
If the former, you either have to release the resource after polling, before doing the 5000ms sleep, or open the resource once, outside the loop, and then find a way to just re-run the query inside the infinite loop.
(Sorry, I'm not familiar with ExecutorService or Spring; it may be that it's just too high-level an abstraction around querying the database, and you need to use lower-level functions.)
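For example, with plain JDBC the "release before sleeping" option might look roughly like this (a sketch; the table, query and interval are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class NotificationPoller {
    private final DataSource dataSource;
    private long sinceId;

    public NotificationPoller(DataSource dataSource) { this.dataSource = dataSource; }

    public void pollLoop() throws Exception {
        while (!Thread.currentThread().isInterrupted()) {
            // acquire and release the pooled connection on every iteration,
            // so no connection is held while the thread sleeps
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id, payload FROM notifications WHERE id > ? ORDER BY id")) {
                ps.setLong(1, sinceId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        sinceId = rs.getLong("id");
                        // emitter.send(...) the new row here
                    }
                }
            }                     // connection is back in the pool before the sleep
            Thread.sleep(5000);
        }
    }
}
```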
By the way, SSE vs. WebSockets wouldn't make a difference here. WebSockets give you a more complicated protocol in exchange for a two-way connection instead of a one-way one, but everything else is pretty much the same. I.e. you still have a dedicated socket between the client and your web service, and you still have an infinite loop polling the database.
We are using Spring's CachingConnectionFactory to handle tens of millions of messages per day in production with our application and it works well.
However, we're looking to reduce the number of concurrent connections to Solace until they are needed, as we are sharing our ESB infrastructure with numerous other applications. Is there a lazy extension of this Spring factory that achieves what we need?
The CachingConnectionFactory already does lazy creation of connections, and it's the responsibility of the app to explicitly close unused Sessions to return them to the pool, as outlined in the Spring docs.
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jms/connection/CachingConnectionFactory.html
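For reference, a minimal sketch of that setup (the cache size is illustrative; the underlying factory would be your Solace JMS ConnectionFactory):

```java
import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public class JmsConfig {
    public ConnectionFactory cachingConnectionFactory(ConnectionFactory providerFactory) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(providerFactory);
        // the underlying connection is only opened on first use; sessions that the
        // application closes are returned to this cache rather than destroyed
        ccf.setSessionCacheSize(10);
        return ccf;
    }
}
```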
If this is for message consumers, it is preferable to let the listener container itself handle appropriate caching instead of a CachingConnectionFactory. The DefaultMessageListenerContainer supports dynamic scaling.
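For instance, a rough sketch of a DefaultMessageListenerContainer doing its own consumer caching and scaling (destination, concurrency range and wiring are illustrative; in a Spring application this would normally be declared as a bean so the container's lifecycle is managed for you):

```java
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerConfig {
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);   // plain (non-caching) factory
        container.setDestinationName("orders.queue");
        container.setConcurrency("1-5");                     // scale consumers dynamically with load
        container.setMessageListener((MessageListener) message -> {
            // handle the message
        });
        return container;
    }
}
```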
I think what needs to be clarified here is the definition of "idle" - does it mean no messages are being consumed or produced for the remainder of the application's life? Or just periodic inactivity, which may or may not be predictable in terms of duration of inactivity and/or when it occurs? Moreover, as noted by the previous answer, lazy resource management refers to the creation of connections - not destroying them when "idle" - which could mean any number of things as noted earlier.
In general, message consumers usually will not be able to predict when their connection will be idle as messages can be received at any time. For producers, you may have a better idea when your connection will be idle, although it is generally not worth the overhead of re-creating a new connection for every publish, as would be done using JmsTemplate without CCF / SCF. In either case, the following may assist in resource management:
If the app performs periodic batch-type work and does not need to produce or consume data between runs with long delays in between, it may make sense to conserve resources either by explicitly managing resources (e.g. destroy / re-create CCF) or shutting down and restarting when needed - Spring Cloud Task may fit the bill, or perhaps a cron job.
Minimize the number of @JmsListener callbacks if possible, as each one will translate to a connection. Solace queues support multiple subscriptions as well as wildcards, so it may be possible to consolidate subscriptions onto fewer queues. If ordering is not an issue, a concurrency argument can be passed to @JmsListener to allow for round-robin processing among multiple consumers on the same connection (see the sketch after this list).
Offload connections from the shared Solace broker to a dedicated broker for your application, and set up a VPN bridge or DMR (Dynamic Message Routing) to the shared broker.
The PubSub+ software broker supports a wide range of connection scaling tiers, from 100 to 200K, so you can select a tier that provides sufficient capacity. This can either be done on your dedicated broker or the shared broker, or both, depending on your requirements and constraints.
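As mentioned above, consolidating subscriptions and using a concurrency range might look roughly like this (queue name and range are illustrative, and @EnableJms / listener-factory wiring is assumed):

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ConsolidatedEventListener {

    // one queue (and hence one connection) serving several subscriptions,
    // with 1-4 consumers sharing it round-robin when ordering is not required
    @JmsListener(destination = "app/events.queue", concurrency = "1-4")
    public void onEvent(String payload) {
        // process the event
    }
}
```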
I would like to design a web based application. Required functionality includes sending messages from my system to the remote system. In addition, my EJB system will also respond to messages from the remote system.
Which type of enterprise beans should I use? Should I use stateless session beans, message-driven beans, or both?
As you may know, MDBs are asynchronous, and in my opinion a chat application must be asynchronous as well: why should the CLIENT wait for your response?
And if your application gets millions of message requests, then stateless session beans will not help with performance, so it's better to use MDBs.
Get back to me if you have any concerns.
Message driven beans are ideal for external integrations whereby the connection between two machines may be interrupted for periods of time. By using messages instead of relying on 100% uptime with server-server connections, failure modes can be embraced as a part of the process instead of fought against with workarounds and special cases.
While messages can introduce latency, they can actually achieve higher throughput when combined with queueing systems such as ActiveMQ.
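A minimal message-driven bean sketch (the queue name is illustrative): the container delivers messages asynchronously, so the client never blocks waiting for processing, and messages simply wait on the queue if one side is temporarily down.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/inboundMessages")
})
public class InboundMessageBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // process the incoming message from the remote system
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);  // lets the container redeliver per its policy
        }
    }
}
```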
I'm looking for opinions from you all. I have a web application that needs to record data into another web application's database. I'd prefer not to use HTTP GET requests against the 2nd application because of latency issues. I'm looking for a fast way to save records on the 2nd application, and I came across the idea of "fire and forget". Will JMS suit this scenario? From my understanding, JMS will guarantee message delivery, but guaranteeing that a message is 100% delivered is not important as long as the system can serve as many requests as possible. Let's say I need to make at least 1000 random requests per second to the 2nd application: should I use JMS, HTTP requests, or XMPP instead?
I think you're misunderstanding networking in general. There's positively no reason that an HTTP GET would have to be any slower than anything else, and if HTTP takes advantage of keep-alives it's faster than most options.
JMS isn't a protocol, it's a specification that wraps many other protocols including, possibly, HTTP or XMPP.
In the end, at the levels where Java will operate, there's either UDP or TCP. TCP has more overhead but guarantees delivery (via retransmission) and ordering. UDP offers neither guaranteed delivery nor in-order delivery. If you can deal with UDP's limitations you'll find it "faster", and if you can't, then any lightweight TCP wrapper (of which HTTP is one) is just about the same.
Your requirements seem to be:
one client and one server (inferred from your first sentence),
HTTP is mandatory (inferred from your talking about a web application database),
1000 or more record updates per second, and
individual updates do not need to be acknowledged synchronously (you are willing to use a "fire and forget" approach).
The way I would approach this is to have the client threads queue the updates internally, and implement a client thread that periodically assembles queued updates into one HTTP request and sends it to the server. If necessary, the server can send a response that indicates the status for individual updates.
Batching eliminates the impact of latency on the client, and potentially allows the server to process the updates more efficiently.
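A rough sketch of that batching idea (URL, batch size and interval are illustrative, and the server is assumed to accept a newline-separated batch):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchingSender {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // called by request-handling threads: fire and forget from their point of view
    public void submit(String record) {
        pending.offer(record);
    }

    // run on one dedicated sender thread
    public void sendLoop() throws Exception {
        while (true) {
            Thread.sleep(200);                       // batch window
            List<String> batch = new ArrayList<>();
            pending.drainTo(batch, 1000);            // up to 1000 records per request
            if (batch.isEmpty()) {
                continue;
            }
            byte[] body = String.join("\n", batch).getBytes(StandardCharsets.UTF_8);
            HttpURLConnection con = (HttpURLConnection) new URL("http://second-app/records/batch").openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.getOutputStream().write(body);
            con.getResponseCode();                   // optionally inspect per-record statuses here
            con.disconnect();
        }
    }
}
```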
The big difference between HTTP and JMS or XMPP is that JMS and XMPP allow asynchronous fire and forget messaging (where the client does not really know when and if a message will reach its destination and does not expect a response or an acknowledgment from the receiver). This would allow the first app to respond fast regardless of the second application processing time.
Asynchronous messaging is usually preferred for high-volume distributed messaging where the message consumers are slower than the producers. I can't say if this is exactly your case here.
If you have full control and the two web applications run in the same web container and hence in the same JVM, I would suggest using JNDI to allow both web applications to get access to a common data structure (a list?) which allows concurrent modification, namely to allow application A to add new entries and application B to consume the oldest entries simultaneously.
This is most likely the fastest way possible.
Note that you should keep the information you put in the list to classes found in the JRE, or you will most likely run into ClassCastExceptions. These can be circumvented, but the easiest option is most likely to just transfer strings in the common data structure.
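A rough sketch of the idea (the JNDI name is illustrative, and it is assumed your container is configured to expose a shared queue of Strings under that name):

```java
import java.util.Queue;
import javax.naming.InitialContext;

public class SharedQueueAccess {

    @SuppressWarnings("unchecked")
    private static Queue<String> sharedQueue() throws Exception {
        // both web applications look up the same container-managed structure
        return (Queue<String>) new InitialContext().lookup("shared/recordQueue");
    }

    // application A: add a new entry (a plain String, i.e. a JRE class)
    public static void publish(String record) throws Exception {
        sharedQueue().offer(record);
    }

    // application B: consume the oldest entries
    public static void consumeAll() throws Exception {
        Queue<String> queue = sharedQueue();
        String record;
        while ((record = queue.poll()) != null) {
            // persist the record in application B's database
        }
    }
}
```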
I am building a small website for fun/learning using a fairly standard Web/Service/Data Access layered design.
For the Data Access Layer, what is the best way to handle creating Connection objects to call my SQL stored procedures and why? Bearing in mind I am writing a lot of the code by hand (I know I could be using Hibernate etc to do a lot of this for me)...
1) Should I create one static instance of the Connection and run all my queries through it, or will this cause concurrency problems?
2) Should I create a Connection instance per database call and accept the performance overhead? (I will look into connection pooling at a later date if this is the case)
You should use one Connection per thread. Don't share connections across threads.
Consider using Apache DBCP. This is a free and standard way of configuring database connections and drawing them from a pool. It's the method used by high-performance web servers like Tomcat.
Furthermore, if you're using DBCP, since it's a pool (read: cached), there's little penalty to creating/closing connections frequently.
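A minimal DBCP sketch (the JDBC URL, credentials and pool size are illustrative):

```java
import java.sql.Connection;
import java.sql.SQLException;
import org.apache.commons.dbcp2.BasicDataSource;

public class Database {
    // one shared pool for the whole application
    private static final BasicDataSource DATA_SOURCE = new BasicDataSource();

    static {
        DATA_SOURCE.setUrl("jdbc:mysql://localhost:3306/mydb");
        DATA_SOURCE.setUsername("app");
        DATA_SOURCE.setPassword("secret");
        DATA_SOURCE.setMaxTotal(20);   // maximum pooled connections
    }

    public static Connection getConnection() throws SQLException {
        return DATA_SOURCE.getConnection();  // cheap: borrowed from the pool, not newly created
    }
}
```

Each database call then gets its own connection and closes it when done, which simply returns it to the pool.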
The standard way is to set up a DataSource. All application servers are able to do so via their admin console. The pool is then accessible by its JNDI name (e.g. "jdbc/MyDB").
The data source should, in fact, be a connection pool (and usually is). It caches connections, tests them before passing to the application and does a lot of other important functions.
In your code you:
resolve JNDI name and cast it into DataSource
get a connection from the data source
do your work
close the connection (it goes back to the pool here)
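Roughly, a minimal sketch of those steps (the JNDI name and stored procedure are illustrative):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class RecordDao {
    public void saveRecord(String value) throws Exception {
        // 1. resolve the JNDI name and cast it to DataSource
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
        // 2. get a connection from the pool, 3. do your work,
        // 4. close() returns the connection to the pool (via try-with-resources)
        try (Connection con = ds.getConnection();
             CallableStatement cs = con.prepareCall("{call save_record(?)}")) {
            cs.setString(1, value);
            cs.execute();
        }
    }
}
```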
You can set up the pool yourself (using any freely available pool implementation), but it really doesn't make sense if you're using an application server.
P.S.
Since it's a web application, a good way to make sure you have closed your connection after the request is to use an HttpFilter. You can set one up in web.xml. When the request comes in, acquire the connection and put it into a ThreadLocal. During the request, get the connection from the ThreadLocal, but never close it. After the request, in the filter, close the connection.
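A rough sketch of that filter pattern (names and the JNDI lookup are illustrative):

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.sql.DataSource;

public class ConnectionFilter implements Filter {
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    // during the request: get the connection, but never close it here
    public static Connection current() {
        return CURRENT.get();
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
            try (Connection con = ds.getConnection()) {  // closed (returned to the pool) after the request
                CURRENT.set(con);
                chain.doFilter(req, res);
            }
        } catch (NamingException | SQLException e) {
            throw new ServletException(e);
        } finally {
            CURRENT.remove();
        }
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}
```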