Redisson is spamming Redis - Java

Spring Boot app using Hibernate and Redis as cache.
The problem: Redis receives up to 80K ops/sec while the app gets only 250 rps.
Here is the Spring configuration for Redis (from https://redisson.org/glossary/spring-cache.html):
@Bean
public CacheManager cacheManager(RedissonClient redissonClient) {
    return new RedissonSpringCacheManager(redissonClient) {
        @Override
        public Cache getCache(String cacheName) {
            Cache cache = super.getCache(cacheName);
            // RedisCacheWrapper is our own decorator around the Redisson cache
            return new RedisCacheWrapper(cache);
        }

        @Override
        protected CacheConfig createDefaultConfig() {
            // ttl, maxIdleTime and maxSize are configuration fields of the enclosing class
            CacheConfig cacheConfig = new CacheConfig(ttl, maxIdleTime);
            cacheConfig.setMaxSize(maxSize);
            return cacheConfig;
        }
    };
}
Hibernate caching is just the @Cache annotation on each entity that needs it:
@Cache(region = HIBERNATE_MY_REGION, usage = CacheConcurrencyStrategy.READ_WRITE)
public class MyEntity {
    ...
}
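(For context: wiring the Hibernate second-level cache through Redisson also involves pointing Hibernate at the Redisson region factory. A minimal sketch, assuming Spring Boot's spring.jpa.properties passthrough; the property keys come from the redisson-hibernate docs:)
spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.region.factory_class=org.redisson.hibernate.RedissonRegionFactory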
and we use the following dependencies:
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson-hibernate-53</artifactId>
</dependency>
<dependency>
<groupId>org.redisson</groupId>
<artifactId>redisson</artifactId>
</dependency>
I checked the app locally, opening some pages (no heavy work under the hood, just lookups by id and external id), and profiled the traffic with RedisInsight.
For now I have moved Redis to cluster mode and brought up multiple Redis replica nodes to be able to handle that many ops/sec.

Related

Write to Postgres with apache beam (GCP)

We are using Apache Beam on Google Cloud Platform and implemented a Dataflow streaming job that writes to our Postgres database. However, we noticed that once we started using two JdbcIO.write() statements next to each other, our streaming job starts throwing errors like these:
Operation ongoing in step JdbcIO.WriteVoid/ParDo(Write) for at least 35m00s without outputting or completing in state process
at jdk.internal.misc.Unsafe.park (Native Method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:194)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2081)
at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst (LinkedBlockingDeque.java:581)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject (GenericObjectPool.java:439)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject (GenericObjectPool.java:356)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection (PoolingDataSource.java:134)
at org.apache.beam.sdk.io.jdbc.JdbcIO$WriteVoid$WriteFn.executeBatch (JdbcIO.java:1438)
at org.apache.beam.sdk.io.jdbc.JdbcIO$WriteVoid$WriteFn.processElement (JdbcIO.java:1387)
at org.apache.beam.sdk.io.jdbc.JdbcIO$WriteVoid$WriteFn$DoFnInvoker.invokeProcessElement (Unknown Source)
This only occurs approximately 30 minutes after deployment. The job processes 10,000 elements just fine until those 30-ish minutes have passed. On average the throughput ranges from 50 to 120 elements/second.
The queries are not heavy either: just a simple delete and an insert statement.
We think the connections get stuck and are never released for the other elements (the stack trace shows borrowObject blocking on the pool), but we don't know how to fix it.
Here's the code:
public void writeToPostgres(PCollection<TimestampedValue<KV<String, Duration>>> collection) {
    collection
        .apply(Filter.by(Postgres::filter1))
        .apply(JdbcIO.<TimestampedValue<KV<String, Duration>>>write()
            .withDataSourceProviderFn(JdbcIO.PoolableDataSourceProvider.of(getDataSourceConfiguration()))
            .withStatement("DELETE FROM table1 WHERE field1 = ?::UUID AND field2 = ?")
            .withPreparedStatementSetter((element, statement) -> {
                statement.setString(1, element.getValue().getKey());
                Instant timestamp = element.getTimestamp();
                statement.setTimestamp(2, new Timestamp(timestamp.getMillis()));
            })
            .withBatchSize(1)
            .withRetryStrategy(DEADLOCK_DETECTED_RETRY_STRATEGY));

    collection
        .apply(Filter.by(Postgres::filter2))
        .apply(JdbcIO.<TimestampedValue<KV<String, Duration>>>write()
            .withDataSourceProviderFn(JdbcIO.PoolableDataSourceProvider.of(getDataSourceConfiguration()))
            .withStatement("INSERT INTO table1 (field1, field2) \n" +
                "VALUES (?::UUID, ?) \n" +
                "ON CONFLICT ON CONSTRAINT someconstraint\n" +
                "DO UPDATE SET field2 = excluded.field2")
            .withPreparedStatementSetter((element, statement) -> {
                Instant eventTime = element.getTimestamp();
                statement.setString(1, element.getValue().getKey());
                statement.setTimestamp(2, new Timestamp(eventTime.getMillis()));
            })
            .withBatchSize(1)
            .withRetryStrategy(DEADLOCK_DETECTED_RETRY_STRATEGY));
}
...
private DataSourceConfiguration getDataSourceConfiguration() {
    return DataSourceConfiguration.create(ValueProvider.StaticValueProvider.of("org.postgresql.Driver"), jdbcUrlProvider)
        .withUsername(usernameProvider)
        .withPassword(passwordProvider);
}
How can I fix this?
We were able to find a fix, but I consider it more of a workaround, because we didn't find the root cause within JdbcIO's DataSourceProvider. We basically copied JdbcIO's PoolableDataSourceProvider and used HikariDataSource instead, because it seems to improve performance anyway.
First, we add the HikariCP dependency to our pom file:
<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>5.0.0</version>
</dependency>
Here's what the HikariDataSourceProvider looks like:
public static class HikariDataSourceProvider implements SerializableFunction<Void, DataSource> {
    // one pool per configuration, shared across DoFn instances on the same worker
    private static final ConcurrentHashMap<HikariDataSourceConfig, DataSource> instances = new ConcurrentHashMap<>();
    private final HikariDataSourceConfig config;

    private HikariDataSourceProvider(HikariDataSourceConfig config) {
        this.config = config;
    }

    public static SerializableFunction<Void, DataSource> of(HikariDataSourceConfig hikariDataSourceConfig) {
        return new HikariDataSourceProvider(hikariDataSourceConfig);
    }

    @Override
    public DataSource apply(Void input) {
        return instances.computeIfAbsent(
            config,
            ignored -> {
                HikariDataSource hikariDataSource = new HikariDataSource();
                hikariDataSource.setJdbcUrl(config.getJdbcUrlProvider().get());
                hikariDataSource.setUsername(config.getUsernameProvider().get());
                hikariDataSource.setPassword(config.getPasswordProvider().get());
                hikariDataSource.setAutoCommit(false);
                return hikariDataSource;
            });
    }
}
...
@Data
@Builder
public static class HikariDataSourceConfig implements Serializable {
    private final ValueProvider<String> jdbcUrlProvider;
    private final ValueProvider<String> usernameProvider;
    private final ValueProvider<String> passwordProvider;
}
@Data and @Builder are Lombok annotations.
The PTransform would look something like this:
JdbcIO.<TimestampedValue<KV<String, Duration>>>write()
    .withDataSourceProviderFn(HikariDataSourceProvider.of(getDataSourceConfig()))
    .withStatement("...
We also removed the .withBatchSize(1) line so it doesn't bottleneck the process. We tried removing just that line first, without the HikariDataSource, but that alone did not solve the issue.
The streaming job can now handle the statements and is stable. The error no longer occurs.
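If the pool itself ever becomes a bottleneck, Hikari's sizing knobs can be set in the same factory lambda. A minimal sketch; the values below are illustrative assumptions, not something we actually tuned:
// inside the computeIfAbsent factory above (values are assumptions, tune per workload)
hikariDataSource.setMaximumPoolSize(10);        // cap concurrent connections per worker
hikariDataSource.setConnectionTimeout(30_000);  // ms to wait for a free connection before failing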

Configuring flapdoodle embedded mongo with Mongodb version 4 and replica

I am currently working on a Spring Boot 2.0.3.RELEASE application. I want to configure Flapdoodle embedded MongoDB with MongoDB version 4.0, and I also want to set up a single Mongo instance and create replicas for it.
So far I haven't figured out the process of creating a cluster and replicas using Flapdoodle.
I am using
new MongodConfigBuilder().version(Version.Main.DEVELOPMENT)
    .replication(new Storage(null, null, 0))
    .build();
I have read many questions here related to this configuration, but none of them addresses my problem, e.g.
How to configure two instance mongodb use spring boot and spring data
The Flapdoodle project has an implementation for this, but I am not sure how to use it:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/blob/master/src/main/java/de/flapdoodle/embed/mongo/tests/MongosSystemForTestFactory.java
Is there any way to configure it in my test class before the application starts?
Thanks.
I had to start embedded Mongo with a replica set in web integration tests; I used the code below.
@Configuration
public class MongoConfig {
    public static int mongodPort;
    public static String defaultHost = "localhost";

    static {
        try {
            mongodPort = Network.getFreeServerPort();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Bean
    public IMongodConfig prepareMongodConfig() throws IOException {
        IMongoCmdOptions cmdOptions = new MongoCmdOptionsBuilder()
            .useNoPrealloc(false)
            .useSmallFiles(false)
            .master(false)
            .verbose(false)
            .useNoJournal(false)
            .syncDelay(0)
            .build();
        IMongodConfig mongoConfigConfig = new MongodConfigBuilder()
            .version(Version.Main.PRODUCTION)
            .net(new Net(mongodPort, Network.localhostIsIPv6()))
            // a single-node replica set named "testRepSet" with a 5000 MB oplog
            .replication(new Storage(null, "testRepSet", 5000))
            .configServer(false)
            .cmdOptions(cmdOptions)
            .build();
        return mongoConfigConfig;
    }
}
And before calling my controller I initiated the replica set using the code below:
public class ITtest {

    public void setSystemProperty() {
        System.setProperty("spring.data.mongodb.port", String.valueOf(MongoConfig.mongodPort));
        System.setProperty("spring.data.mongodb.host", MongoConfig.defaultHost);
    }

    public static boolean isReplicaSetRun = false;

    public static void setupMongoReplica() {
        if (!isReplicaSetRun) {
            System.out.println("Starting db on port: " + MongoConfig.mongodPort);
            MongoClient client = new MongoClient(MongoConfig.defaultHost, MongoConfig.mongodPort);
            // initiate the single-node replica set configured above
            client.getDatabase("admin").runCommand(new Document("replSetInitiate", new Document()));
            client.close();
            isReplicaSetRun = true;
        }
    }

    @Test
    @Order(1)
    public void testParallel() {
        setSystemProperty();
        setupMongoReplica();
        // call web controller
    }
}
If you want to run the application itself (not just tests), the replica set can be initiated the same way in an ApplicationListener implementation.
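A minimal sketch of such a listener; the class name is hypothetical, and it reuses the MongoConfig fields from above:
@Component
public class ReplicaSetInitializer implements ApplicationListener<ContextRefreshedEvent> {
    private static boolean initiated = false;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        if (!initiated) {
            MongoClient client = new MongoClient(MongoConfig.defaultHost, MongoConfig.mongodPort);
            // same replSetInitiate command as in the test setup above
            client.getDatabase("admin").runCommand(new Document("replSetInitiate", new Document()));
            client.close();
            initiated = true;
        }
    }
}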

activemq-all "5.15.3" does not work with Spring 5

I am updating Spring from 4.x.x to Spring 5.0.3. The project uses ActiveMQ version 5.15.3. When I try to deploy the application with the newest version of Spring, I get this error:
Caused by: java.lang.NoSuchMethodError: org.springframework.web.servlet.handler.AbstractHandlerMapping.obtainApplicationContext()Lorg/springframework/context/ApplicationContext;
at org.springframework.web.servlet.handler.AbstractHandlerMapping.detectMappedInterceptors(AbstractHandlerMapping.java:269)
at org.springframework.web.servlet.handler.AbstractHandlerMapping.initApplicationContext(AbstractHandlerMapping.java:243)
at org.springframework.web.servlet.handler.SimpleUrlHandlerMapping.initApplicationContext(SimpleUrlHandlerMapping.java:102)
at org.springframework.context.support.ApplicationObjectSupport.initApplicationContext(ApplicationObjectSupport.java:120)
at org.springframework.web.context.support.WebApplicationObjectSupport.initApplicationContext(WebApplicationObjectSupport.java:77)
at org.springframework.context.support.ApplicationObjectSupport.setApplicationContext(ApplicationObjectSupport.java:74)
at org.springframework.context.support.ApplicationContextAwareProcessor.invokeAwareInterfaces(ApplicationContextAwareProcessor.java:121)
at org.springframework.context.support.ApplicationContextAwareProcessor.postProcessBeforeInitialization(ApplicationContextAwareProcessor.java:97)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:409)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1620)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
... 53 more
I noticed that activemq-all bundles Spring version 4.3.9 as a dependency. That version does not have the method obtainApplicationContext in AbstractHandlerMapping, hence the problem. Is there a way to exclude the Spring libraries from the activemq-all bundle?
I thought this was my problem too, but I eventually got my Spring webapp deployed on TomEE to successfully connect to and use ActiveMQ hosted and running inside that Tomcat container.
I'm using Spring 5.0.3.RELEASE and activemq-client 5.15.3. I didn't need everything in the Maven shaded uber-jar activemq-all.
@Configuration
public class MyConfig {

    @Bean
    public SingleConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");
        // See http://activemq.apache.org/objectmessage.html why we set trusted packages
        connectionFactory.setTrustedPackages(new ArrayList<String>(Arrays.asList("com.mydomain", "java.util")));
        return new SingleConnectionFactory(connectionFactory);
    }

    @Bean
    @Scope("prototype")
    public JmsTemplate jmsTemplate() {
        return new JmsTemplate(connectionFactory());
    }

    @Bean
    public Queue myQueue() throws JMSException {
        Connection connection = connectionFactory().createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("message-updates");
        return queue;
    }
}
@Component
public class MyQueueImpl implements MyQueue {

    @Inject
    private JmsTemplate jmsTemplate;

    @Inject
    private Queue myQueue;

    @PostConstruct
    public void init() {
        jmsTemplate.setReceiveTimeout(JmsTemplate.RECEIVE_TIMEOUT_NO_WAIT);
    }

    @Override
    public void enqueue(Widget widget) {
        jmsTemplate.send(myQueue, new MessageCreator() {
            @Override
            public Message createMessage(Session session) throws JMSException {
                return session.createObjectMessage(widget);
            }
        });
    }

    @Override
    public Optional<Widget> dequeue() {
        Optional<Widget> widget = Optional.empty();
        ObjectMessage message = (ObjectMessage) jmsTemplate.receive(myQueue);
        try {
            if (message != null) {
                widget = Optional.ofNullable((Widget) message.getObject());
                message.acknowledge();
            }
        } catch (JMSException e) {
            throw new UncategorizedJmsException(e);
        }
        return widget;
    }
}
Thanks Matthew K above, I found that too. activemq-all packs a version of Spring (currently a 4.x version) inside. There are some non-backwards-compatible changes between that and Spring 5; I came across a new method in one of the other Spring classes myself, which can cause this kind of issue (a NoSuchMethodError in my case).
I had this issue with ActiveMQ 5.15.4 and Spring 5.0.7. In the end I solved it by using the finer-grained jars instead. I had to use all of these: activemq-broker, activemq-client, activemq-pool, activemq-kahadb-store, activemq-spring.
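In pom terms that swap looks roughly like this (versions omitted; artifact names as listed above, all under the org.apache.activemq groupId):
<!-- instead of the activemq-all uber-jar -->
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-client</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-kahadb-store</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-spring</artifactId>
</dependency>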

How to start H2 TCP server on Spring Boot application startup?

I'm able to start the H2 TCP server (database in a file) when running the app as a Spring Boot app by adding the following line to the SpringBootServletInitializer main method:
@SpringBootApplication
public class NatiaApplication extends SpringBootServletInitializer {

    public static void main(String[] args) throws SQLException {
        Server.createTcpServer().start();
        SpringApplication.run(NatiaApplication.class, args);
    }
}
But if I run the WAR file on Tomcat, it doesn't work because the main method is not called. Is there a better, universal way to start the H2 TCP server on application startup, before the beans get initialized? I use Flyway (autoconfigured) and it fails with "Connection refused: connect", probably because the server is not running. Thank you.
This solution works for me. It starts the H2 server whether the app runs as a Spring Boot app or on Tomcat. Creating the H2 server as a bean did not work because the Flyway bean was created earlier and failed with "Connection refused".
@SpringBootApplication
@Log
public class NatiaApplication extends SpringBootServletInitializer {

    public static void main(String[] args) {
        startH2Server();
        SpringApplication.run(NatiaApplication.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        // called when deployed as a WAR on Tomcat, where main() is never invoked
        startH2Server();
        return application.sources(NatiaApplication.class);
    }

    private static void startH2Server() {
        try {
            Server h2Server = Server.createTcpServer().start();
            if (h2Server.isRunning(true)) {
                log.info("H2 server was started and is running.");
            } else {
                throw new RuntimeException("Could not start H2 server.");
            }
        } catch (SQLException e) {
            throw new RuntimeException("Failed to start H2 server: ", e);
        }
    }
}
Yup, straight from the documentation, you can use a bean definition:
<bean id="org.h2.tools.Server"
      class="org.h2.tools.Server"
      factory-method="createTcpServer"
      init-method="start"
      destroy-method="stop">
    <constructor-arg value="-tcp,-tcpAllowOthers,-tcpPort,8043" />
</bean>
There's also a servlet listener option that auto-starts/stops it.
That answers your question, but I think you should probably be using embedded mode instead if the database deploys along with your Spring Boot application. It is MUCH faster and lighter on resources. You simply specify the right URL and the database will start:
jdbc:h2:/usr/share/myDbFolder
(straight out of the cheat sheet).
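With Spring Boot that amounts to one line in application.properties (a sketch; the file path is the example from above):
spring.datasource.url=jdbc:h2:/usr/share/myDbFolder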
There's a caveat that hasn't been considered in the other answers: starting the server is a transient dependency of your DataSource bean. The DataSource only needs a network connection, not a bean relationship, so Spring Boot will not know that the H2 database needs to be fired up before the DataSource is created, and you can end up with a connection exception on application startup.
With the Spring Framework this isn't a problem, as you can put the DB server startup in the root config and the database in a child context. With Spring Boot, AFAIK, there's only a single context.
To get around this, you can give the data source an Optional<Server> dependency. The reason for Optional is that you may not always start the server (configuration parameter), for example when you run against a production DB.
@Bean(destroyMethod = "close")
public DataSource dataSource(Optional<Server> h2Server) throws PropertyVetoException {
    // the Optional<Server> parameter is unused in the body; it only forces Spring
    // to start the H2 server bean (when one is registered) before this DataSource
    HikariDataSource ds = new HikariDataSource();
    ds.setDriverClassName(env.getProperty("db.driver"));
    ds.setJdbcUrl(env.getProperty("db.url"));
    ds.setUsername(env.getProperty("db.user"));
    ds.setPassword(env.getProperty("db.pass"));
    return ds;
}
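For completeness, the matching Server bean can be registered conditionally so that the Optional stays empty in production. A sketch, assuming a db.h2.enabled property and Spring Boot's @ConditionalOnProperty:
@Bean(initMethod = "start", destroyMethod = "stop")
@ConditionalOnProperty("db.h2.enabled")
public Server h2Server() throws SQLException {
    // only created when db.h2.enabled is set, e.g. in the development profile
    return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092");
}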
For WAR packaging you can do this:
public class MyWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        try {
            Server.createTcpServer().start();
        } catch (SQLException e) {
            throw new RuntimeException("Failed to start H2 server", e);
        }
        return new Class[] { NatiaApplication.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
You can do it like this:
@Configuration
public class H2ServerConfiguration {

    @Value("${db.port}")
    private String h2TcpPort;

    /**
     * TCP server so SQL clients can connect to the embedded h2 database.
     *
     * @see Server
     * @throws SQLException if something went wrong during startup of the server.
     * @return h2 db Server
     */
    @Bean
    public Server server() throws SQLException {
        return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", h2TcpPort).start();
    }

    /**
     * @return FlywayMigrationStrategy the strategy for migration.
     */
    @Bean
    @DependsOn("server")
    public FlywayMigrationStrategy flywayMigrationStrategy() {
        return Flyway::migrate;
    }
}

spring bean startup/shutdown order configuration (start h2 db as server)

I'd like to create a configuration/bean to automatically start H2 in my development profile, running as a TCP server. It needs to be started before any DataSource configuration. Can someone tell me how to achieve this?
What I have done is:
#Profile("h2")
#Component
public class H2DbServerConfiguration implements SmartLifecycle {
private static final Logger logger = LoggerFactory.getLogger(H2DbServerConfiguration.class);
private Server server;
#Override
public boolean isAutoStartup() {
return true;
}
#Override
public void stop(Runnable callback) {
stop();
new Thread(callback).start();
}
#Override
public void start() {
logger.debug("############################################");
logger.debug("############################################");
logger.debug("STARTING SERVER");
logger.debug("############################################");
logger.debug("############################################");
try {
server = Server.createTcpServer("-web", "-webAllowOthers", "-webPort", "8082").start();
} catch (SQLException e) {
throw new RuntimeException("Unable to start H2 server", e);
}
}
#Override
public void stop() {
logger.debug("############################################");
logger.debug("############################################");
logger.debug("STOPPING SERVER");
logger.debug("############################################");
logger.debug("############################################");
if (server != null)
if (server.isRunning(true))
server.stop();
}
#Override
public boolean isRunning() {
return server != null ? server.isRunning(true) : false;
}
#Override
public int getPhase() {
return 0;
}
}
but this isn't an option for me, because the component is created after the DataSource (I have a Liquibase setup, so by then it's too late), and the phase is still the same, which means FIFO order, whereas I'd like FILO.
Mixing @Profile and @Component seems to me a bad idea. Profiles are designed to work with @Configuration (documentation).
Do you really need a profile? In my opinion it makes sense if you have several possible configurations, one based on H2, and you want to be able to switch between these configurations (typically at start time by setting a property...).
Managing the H2 server with a bean (documentation) seems correct to me (as suggested by Stefen). Maybe you will prefer annotations... If you want a Spring profile, then you will need a @Configuration class too. It will simply load the H2 server bean (in my opinion it's better to manage the H2 server lifecycle with a bean than with a context/config).
Create your server as a bean:
@Bean(initMethod = "start", destroyMethod = "stop")
Server h2Server() throws Exception {
    return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9192");
}
Now you can configure Spring to create other beans (e.g. the DataSource) after the h2Server bean using @DependsOn:
@DependsOn("h2Server")
@Bean
DataSource dataSource() {
    ...
}
Hi, what about using Spring Boot? It has an auto-configured DataSource, so I don't want to reconfigure it.
You are right: to use the above approach you have to create your own DataSource in order to annotate it with @DependsOn.
But it looks like this is not really necessary. In one of my projects I am creating the h2Server as a bean as described, and I use the DataSource created by Spring, without any @DependsOn. It works perfectly. Just give it a try.
Your solution with SmartLifecycle does not work because it creates the server on ApplicationContext refresh, which happens after all beans (including the DataSource) have been created.
