How to parse complex YAML into a Java object hierarchy (Spring Boot configuration)

I need to define service connection properties which will be used to configure RestTemplates used to make REST endpoint calls to Spring Boot services.
Note: In the future a service discovery mechanism will likely be involved, but I have been instructed NOT to pursue that avenue for the time being.
Service connection properties:
connection scheme - http/https
host name
host port
connection timeout
request timeout
socket timeout
default keep alive
default max connections per route
max total connections
Possible configuration (YAML):
services-config:
  global:
    scheme: http|https
    connectionTimeout: <timeout ms>
    requestTimeout: <timeout ms>
    socketTimeout: <timeout ms>
    defaultKeepAlive: <timeout ms>
    defaultMaxConnPerRoute: <count>
    maxTotalConnections: <count>
  services:
    - id: <ID> [?]
      name: <name> [?]
      host: localhost|<host>
      port: <port>
    - id: <ID> [?]
      name: <name> [?]
      host: localhost|<host>
      port: <port>
    ...
The intent is to place this information into an application.yml file so that it can be accessed within the Spring application.
Is there a way to pull this information in YAML format into a Java object hierarchy like the following:
class : ServicesConfig
  global : GlobalConnectionProperties
  services : Map<String, ServiceConnectionProperties>

class : GlobalConnectionProperties
  scheme : String (maybe an enum instead)
  connectionTimeout : int
  requestTimeout : int
  socketTimeout : int
  defaultKeepAlive : int
  defaultMaxConnPerRoute : int
  maxTotalConnections : int

class : ServiceConnectionProperties
  id : String
  name : String
  host : String
  port : int
UPDATE:
Attempted the following and achieved partial success:
ServiceConfiguration.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServiceConfiguration {

    private ServicesConfig servicesConfig;

    @Autowired
    public ServiceConfiguration(ServicesConfig servicesConfig) {
        this.servicesConfig = servicesConfig;
        System.out.println(servicesConfig);
    }
}
ServicesConfig.java:
import java.util.Map;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;

@ConfigurationProperties(value = "services-config")
@EnableConfigurationProperties(
        value = {GlobalConnectionProperties.class, ServiceConnectionProperties.class})
public class ServicesConfig {

    private GlobalConnectionProperties globalProps;
    private Map<String, ServiceConnectionProperties> services;

    public GlobalConnectionProperties getGlobalProps() {
        return globalProps;
    }

    public void setGlobalProps(GlobalConnectionProperties globalProps) {
        this.globalProps = globalProps;
    }

    public Map<String, ServiceConnectionProperties> getServices() {
        return services;
    }

    public void setServices(Map<String, ServiceConnectionProperties> services) {
        this.services = services;
    }

    @Override
    public String toString() {
        return "ServicesConfig [globalProps=" + globalProps + ", services=" + services + "]";
    }
}
application.yml:
services-config:
  global:
    scheme: http
    connectionTimeout: 30000
    requestTimeout: 30000
    socketTimeout: 60000
    defaultKeepAlive: 20000
    defaultMaxConnPerRoute: 10
    maxTotalConnections: 200
  services:
    - id: foo
      name: foo service
      host: 192.168.56.101
      port: 8090
    - id: bar
      name: bar service
      host: 192.168.56.102
      port: 9010
The GlobalConnectionProperties and ServiceConnectionProperties classes have been omitted as they just contain properties, getter/setter methods and toString().
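For reference, GlobalConnectionProperties looks roughly like this (a sketch with the field names from the YAML above; only one getter/setter pair shown, the rest follow the same pattern):
public class GlobalConnectionProperties {

    private String scheme;
    private int connectionTimeout;
    private int requestTimeout;
    private int socketTimeout;
    private int defaultKeepAlive;
    private int defaultMaxConnPerRoute;
    private int maxTotalConnections;

    public String getScheme() {
        return scheme;
    }

    public void setScheme(String scheme) {
        this.scheme = scheme;
    }

    // ... getters/setters for the remaining fields follow the same pattern ...

    @Override
    public String toString() {
        return "GlobalConnectionProperties [scheme=" + scheme
                + ", connectionTimeout=" + connectionTimeout + ", ...]";
    }
}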
When I run a service, the following appears on the console:
ServicesConfig [globalProps=null, services={0=ServiceConnectionProperties [id=foo, name=foo service, host=192.168.56.101, port=8090], 1=ServiceConnectionProperties [id=bar, name=bar service, host=192.168.56.102, port=9010]}]
I'm actually surprised that the list items were parsed and inserted into the map. I'd like the "id" to be the key instead of the default integer index. The odd thing is that the "global" properties have not been read in at all.
What do I need to correct to get the global object items read in?
Is there a way to key the map entries by the item "id" property?

What do I need to correct to get the global object items read in?
Binding of properties follows JavaBean conventions, so in ServicesConfig you need to rename the field and getter/setter from globalProps to global to match the global: key in the YAML.
The Spring Boot reference documentation on type-safe configuration properties (https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-external-config-typesafe-configuration-properties) has more details.
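In other words:
private GlobalConnectionProperties global;

public GlobalConnectionProperties getGlobal() {
    return global;
}

public void setGlobal(GlobalConnectionProperties global) {
    this.global = global;
}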
Is there a way to key the map entries by the item "id" property?
For the map entries, your YAML syntax is incorrect. See the explicit key syntax using ? and : at https://bitbucket.org/asomov/snakeyaml/wiki/Documentation#markdown-header-type-safe-collections.
Your map would be:
services:
  ? foo
  : { id: foo,
      name: foo service,
      host: 192.168.56.101,
      port: 8090 }
  ? bar
  : { id: bar,
      name: bar service,
      host: 192.168.56.102,
      port: 9010 }
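Note that Spring Boot's relaxed binding should also populate a Map<String, ServiceConnectionProperties> from the more common nested block-mapping form, keyed by the child keys, which avoids the explicit ?/: syntax (a sketch using the values from the question):
services:
  foo:
    id: foo
    name: foo service
    host: 192.168.56.101
    port: 8090
  bar:
    id: bar
    name: bar service
    host: 192.168.56.102
    port: 9010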

Related

How can I reduce the Lambda Cold Start time in my code?

I am developing a REST API for a mobile app. The mobile app is expected to have millions of users and to be used on a daily basis.
I am using AWS Lambda, API Gateway, and Amazon RDS (MySQL) for this. In addition, I am using a CloudFormation file to configure everything.
I noticed that each function here has a cold start time of 3 to 3.8 seconds. This needs to be reduced as much as possible.
HikariCPDataSource
import java.sql.Connection;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
public class HikariCPDataSource {

    private static HikariConfig config = new HikariConfig();
    private static HikariDataSource ds;

    static {
        config.setJdbcUrl("jdbc:mysql://gfgf.ffgfg.us-east-1.rds.amazonaws.com:3306/aaaa");
        config.setUsername("admin");
        config.setPassword("admin123");
        config.addDataSourceProperty("cachePrepStmts", "true");
        config.addDataSourceProperty("prepStmtCacheSize", "250");
        config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
        ds = new HikariDataSource(config);
    }

    public static Connection getConnection() throws SQLException {
        return ds.getConnection();
    }

    private HikariCPDataSource() {}
}
GetAllAccountTypesLambda
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.peresiaapp.beans.AccountingType;
import java.util.ArrayList;
import java.util.List;
import java.sql.*;
public class GetAllAccountTypesLambda {

    ObjectMapper objectMapper = new ObjectMapper();
    static final String QUERY = "SELECT * from accounting_type";
    static Connection conn = null;

    static {
        try {
            conn = HikariCPDataSource.getConnection();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public APIGatewayProxyResponseEvent getAllAccountTypes(APIGatewayProxyResponseEvent request)
            throws JsonProcessingException, ClassNotFoundException {
        List<AccountingType> list = new ArrayList<>();
        try (Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery(QUERY)) {
            // Extract data from the result set
            while (rs.next()) {
                // Create a new bean per row; reusing a single instance would
                // fill the list with references to the last row's data
                AccountingType acc = new AccountingType();
                acc.setIdaccountingType(rs.getInt("idaccounting_Type"));
                acc.setType(rs.getString("type"));
                list.add(acc);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        String writeValueAsString = objectMapper.writeValueAsString(list);
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(writeValueAsString);
    }
}
template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  aaaa-restapi
  Sample SAM Template for aaaa-restapi

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 100

Resources:
  GetAllAccountTypesLambda:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: aaaa-restapi
      Handler: com.peresiaapp.dao.accountingtype.GetAllAccountTypesLambda::getAllAccountTypes
      Runtime: java11
      MemorySize: 1024
      Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /accounttype
            Method: get
      Role: !GetAtt LambdaRole.Arn
      VpcConfig:
        SecurityGroupIds:
          - sg-041f2459dcd921e8e
        SubnetIds:
          - subnet-0381dfdfd
          - subnet-c4ddf54cb
  GetAllRolesLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: aaaa-restapi
      Handler: com.peresiaapp.dao.accountingtype.GetAllRolesLambda::getAllRoles
      Runtime: java11
      MemorySize: 1024
      Environment:
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /roles
            Method: get
      Role: !GetAtt LambdaRole.Arn
      VpcConfig:
        SecurityGroupIds:
          - sg-041f2459dcd921e8e
        SubnetIds:
          - subnet-0381sds2d
          - subnet-c4d5sdsb
  LambdaRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ec2:DescribeNetworkInterfaces
                  - ec2:CreateNetworkInterface
                  - ec2:DeleteNetworkInterface
                  - ec2:DescribeInstances
                  - ec2:AttachNetworkInterface
                Resource: '*'
UPDATE:
A couple of comments suggested provisioned concurrency. I did try it and did not see much of a difference. However, if anyone can explain what the "900 available" figure shown below means, that would be great. I have hundreds of functions; does this mean I would have to spend a huge amount of money if I turned on provisioned concurrency? The figures on the pricing page seem to say otherwise, and that would be okay with me - https://aws.amazon.com/lambda/pricing/
Enable tiered compilation and have it stop at level 1 (see the sketch after this list), so the JVM skips the slower C2 profiling and compilation during startup.
Test various memory settings to find the optimum. CPU is allocated in proportion to memory, and you are most likely CPU-constrained - see AWS Lambda Power Tuning.
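A minimal sketch of the tiered-compilation setting, assuming it is applied through the standard JAVA_TOOL_OPTIONS environment variable in the SAM template's Globals section (the variable and flags are standard JVM/Lambda mechanisms, not part of the original template):
Globals:
  Function:
    Timeout: 100
    Environment:
      Variables:
        # Stop the JIT at tier 1 (C1) to cut JVM startup work on cold start
        JAVA_TOOL_OPTIONS: "-XX:+TieredCompilation -XX:TieredStopAtLevel=1"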

AWS XRay service map components are disconnected

I'm using OpenTelemetry to export trace information for the following application:
A Node.js Kafka producer sends messages to input-topic. It uses kafkajs instrumented with the opentelemetry-instrumentation-kafkajs library. I'm following the AWS OTEL for NodeJS example. Here is my tracer.js:
// Package names per the AWS OTEL JS sample of that era; adjust to your installed versions
const { diag, DiagConsoleLogger, DiagLogLevel, trace } = require('@opentelemetry/api');
const { Resource } = require('@opentelemetry/resources');
const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');
const { CollectorTraceExporter } = require('@opentelemetry/exporter-collector-grpc');
const { AWSXRayIdGenerator } = require('@opentelemetry/id-generator-aws-xray');
const { AWSXRayPropagator } = require('@opentelemetry/propagator-aws-xray');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { KafkaJsInstrumentation } = require('opentelemetry-instrumentation-kafkajs');

module.exports = () => {
  diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.ERROR);
  // create a provider for activating and tracking with AWS IdGenerator
  const attributes = {
    'service.name': 'nodejs-producer',
    'service.namespace': 'axel'
  };
  let resource = new Resource(attributes);
  const tracerConfig = {
    idGenerator: new AWSXRayIdGenerator(),
    plugins: {
      kafkajs: { enabled: false, path: 'opentelemetry-plugin-kafkajs' }
    },
    resource: resource
  };
  const tracerProvider = new NodeTracerProvider(tracerConfig);
  // add OTLP exporter
  const otlpExporter = new CollectorTraceExporter({
    url: (process.env.OTEL_EXPORTER_OTLP_ENDPOINT) ? process.env.OTEL_EXPORTER_OTLP_ENDPOINT : "localhost:55680"
  });
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(otlpExporter));
  tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
  // Register the tracer with X-Ray propagator
  tracerProvider.register({
    propagator: new AWSXRayPropagator()
  });
  registerInstrumentations({
    tracerProvider,
    instrumentations: [new KafkaJsInstrumentation({})],
  });
  // Return a tracer instance
  return trace.getTracer("awsxray-tests");
};
A Java application reads from input-topic and produces to final-topic. It is also instrumented with the AWS OTEL Java agent, and is launched like below:
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_PROPAGATORS=xray
export OTEL_RESOURCE_ATTRIBUTES="service.name=otlp-consumer-producer,service.namespace=axel"
export OTEL_METRICS_EXPORTER=none
java -javaagent:"${PWD}/aws-opentelemetry-agent.jar" -jar "${PWD}/target/otlp-consumer-producer.jar"
I'm using otel/opentelemetry-collector-contrib that has AWS XRay exporter:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  awsxray:
    region: 'eu-central-1'
    max_retries: 10

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awsxray]
I can see from log messages and also from the X-Ray console that traces are being published (with correct parent trace IDs). NodeJS log message:
{
  traceId: '60d0c7d4cfc2d86b2df8624cb4bccead',
  parentId: undefined,
  name: 'input-topic',
  id: '3e289f00c4499ae8',
  kind: 3,
  timestamp: 1624295380734468,
  duration: 3787,
  attributes: {
    'messaging.system': 'kafka',
    'messaging.destination': 'input-topic',
    'messaging.destination_kind': 'topic'
  },
  status: { code: 0 },
  events: []
}
and Java consumer with headers:
Headers([x-amzn-trace-id:Root=1-60d0c7d4-cfc2d86b2df8624cb4bccead;Parent=3e289f00c4499ae8;Sampled=1])
As you can see, the parent and root IDs match each other. However, the service map is constructed in a disconnected way (service map screenshot omitted).
What other configuration am I missing here to get a connected service map?

Issue with custom logback appenders for DropWizard

I'm trying to add a custom log appender to be used in Dropwizard. From my understanding, you add an appender factory like this:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.core.Appender;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.google.cloud.logging.logback.LoggingAppender;
import io.dropwizard.logging.AbstractAppenderFactory;
import io.dropwizard.logging.async.AsyncAppenderFactory;
import io.dropwizard.logging.filter.LevelFilterFactory;
import io.dropwizard.logging.layout.LayoutFactory;
@JsonTypeName("stack-driver-console")
public class StackDriverAppenderFactory extends AbstractAppenderFactory {

    private String appenderName = "CLOUD";
    private boolean includeContextName = true;

    @JsonProperty
    public String getName() {
        return this.appenderName;
    }

    @JsonProperty
    public void setName(String name) {
        this.appenderName = name;
    }

    @Override
    public Appender build(
            LoggerContext loggerContext,
            String s,
            LayoutFactory layoutFactory,
            LevelFilterFactory levelFilterFactory,
            AsyncAppenderFactory asyncAppenderFactory) {
        setNeverBlock(true);
        setTimeZone("utc");
        setLogFormat("%-6level [%d{HH:mm:ss.SSS}] [%t] %logger{5} - %X{code} %msg %n");
        LoggingAppender cloudAppender = new LoggingAppender();
        cloudAppender.addEnhancer("com.google.cloud.logging.TraceLoggingEnhancer");
        cloudAppender.addFilter(levelFilterFactory.build(Level.toLevel(getThreshold())));
        cloudAppender.setFlushLevel(Level.WARN);
        cloudAppender.setLog("application.log");
        cloudAppender.setName(appenderName);
        cloudAppender.setContext(loggerContext);
        cloudAppender.start();
        return wrapAsync(cloudAppender, asyncAppenderFactory);
    }
}
then register it in /main/resources/META-INF/services/io.dropwizard.logging.AppenderFactory:
io.dropwizard.logging.ConsoleAppenderFactory
io.dropwizard.logging.FileAppenderFactory
io.dropwizard.logging.SyslogAppenderFactory
com.example.logger.StackDriverAppenderFactory
Dropwizard version: compile 'io.dropwizard:dropwizard-core:1.3.16'
config.yaml
# Logging settings.
logging:
  level: DEBUG
  appenders:
    - type: stack-driver-console
      threshold: INFO

# use the simple server factory if you only want to run on a single port
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 8081

# the only required property is resourcePackage, for more config options see below
swagger:
  resourcePackage: com.example.resources
When I put a breakpoint in io.dropwizard.configuration.BasicConfigurationFactory:build() at line 127,
final T config = mapper.readValue(new TreeTraversingParser(node), klass);
where the node is defined as:
{
  "logging": {
    "level": "DEBUG",
    "appenders": [
      {
        "type": "stack-driver-console",
        "threshold": "INFO"
      }
    ]
  },
  "server": {
    "applicationConnectors": [
      {
        "type": "http",
        "port": 8080
      }
    ],
    "adminConnectors": [
      {
        "type": "http",
        "port": 8081
      }
    ]
  },
  "swagger": {
    "resourcePackage": "com.example.resources"
  }
}
throws an exception
com.fasterxml.jackson.databind.exc.InvalidTypeIdException: Could not resolve type id 'stack-driver-console' as a subtype of [simple type, class io.dropwizard.logging.AppenderFactory<ch.qos.logback.classic.spi.ILoggingEvent>]: known type ids = [console, file, syslog, tcp, udp] (for POJO property 'appenders')
at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: com.example.App["logging"]->io.dropwizard.logging.DefaultLoggingFactory["appenders"]->java.util.ArrayList[0])
It's not clear to me why Dropwizard is having a hard time identifying the custom type. This used to work; has something changed?
I followed this example as well: https://gist.github.com/ajmath/e9f90c29cd224653c218

InvalidServerSideConfigurationException when creating cache using XML

I'm new to Terracotta. I want to create a clustered server cache but ran into some difficulties with the configuration files.
Here is my tc-config-terracotta.xml file (with which I launch the Terracotta server):
<?xml version="1.0" encoding="UTF-8"?>
<tc-config xmlns="http://www.terracotta.org/config"
           xmlns:ohr="http://www.terracotta.org/config/offheap-resource">
  <servers>
    <server host="localhost" name="clustered">
      <logs>/path/log/terracotta/server-logs</logs>
    </server>
  </servers>
  <plugins>
    <config>
      <ohr:offheap-resources>
        <ohr:resource name="primary-server-resource" unit="MB">128</ohr:resource>
        <ohr:resource name="secondary-server-resource" unit="MB">96</ohr:resource>
      </ohr:offheap-resources>
    </config>
  </plugins>
</tc-config>
I used the ehcache-clustered-3.3.1-kit to launch the server.
$myPrompt/some/dir/with/ehcache/clustered/server/bin>./start-tc-server.sh -f /path/to/conf/tc-config-terracotta.xml
The server starts without any problem:
2017-06-01 11:29:14,052 INFO - New logging session started.
2017-06-01 11:29:14,066 INFO - Terracotta 5.2.2, as of 2017-03-29 at 15:26:20 PDT (Revision 397a456cfe4b8188dfe8b017a5c14346f79c2fcf from UNKNOWN)
2017-06-01 11:29:14,067 INFO - PID is 6114
2017-06-01 11:29:14,697 INFO - Successfully loaded base configuration from file at '/path/to/conf/tc-config-terracotta.xml'.
2017-06-01 11:29:14,757 INFO - Available Max Runtime Memory: 1822MB
2017-06-01 11:29:14,836 INFO - Log file: '/path/log/terracotta/server-logs/terracotta-server.log'.
2017-06-01 11:29:15,112 INFO - Becoming State[ ACTIVE-COORDINATOR ]
2017-06-01 11:29:15,129 INFO - Terracotta Server instance has started up as ACTIVE node on 0:0:0:0:0:0:0:0:9510 successfully, and is now ready for work.
Here is the ehcache-terracotta.xml configuration file:
<ehcache:config xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
                xmlns:terracotta='http://www.ehcache.org/v3/clustered'
                xmlns:ehcache='http://www.ehcache.org/v3'
                xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.3.xsd
                                    http://www.ehcache.org/v3/clustered http://www.ehcache.org/schema/ehcache-clustered-ext-3.3.xsd">

  <ehcache:service>
    <terracotta:cluster>
      <terracotta:connection url="terracotta://localhost:9510/clustered" />
      <terracotta:server-side-config auto-create="true">
        <terracotta:default-resource from="primary-server-resource" />
      </terracotta:server-side-config>
    </terracotta:cluster>
  </ehcache:service>

  <ehcache:cache alias="myTest">
    <ehcache:key-type>java.lang.String</ehcache:key-type>
    <ehcache:value-type>java.lang.String</ehcache:value-type>
    <ehcache:resources>
      <terracotta:clustered-dedicated unit="MB">10</terracotta:clustered-dedicated>
    </ehcache:resources>
    <terracotta:clustered-store consistency="strong" />
  </ehcache:cache>
</ehcache:config>
I have a class to test the configuration:
import java.net.URL;
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.Configuration;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.xml.XmlConfiguration;
public class TestTerracottaCacheManager
{
    private static TestTerracottaCacheManager cacheManager = null;
    private CacheManager cm;
    private Cache<Object, Object> cache;
    private static final String DEFAULT_CACHE_NAME = "myTest";
    private String cacheName;

    public static TestTerracottaCacheManager getInstance()
    {
        if (cacheManager == null)
        {
            cacheManager = new TestTerracottaCacheManager();
        }
        return cacheManager;
    }

    private TestTerracottaCacheManager()
    {
        // 1. Create a cache manager
        final URL url =
            TestTerracottaCacheManager.class.getResource("/ehcache-terracotta.xml");
        System.out.println(url);
        Configuration xmlConfig = new XmlConfiguration(url);
        cm = CacheManagerBuilder.newCacheManager(xmlConfig);
        cm.init();
        intializeCache();
    }

    private void intializeCache()
    {
        // 2. Get the cache called "myTest", declared in ehcache-terracotta.xml
        cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
            Object.class, Object.class);
        if (cache == null)
        {
            throw new NullPointerException();
        }
    }

    public void put(Object key, Object value)
    {
        cache.put(key, value);
    }

    public Object get(String key)
    {
        // 5. Print out the element
        Object ele = cache.get(key);
        return ele;
    }

    public boolean isKeyInCache(Object key)
    {
        return cache.containsKey(key);
    }

    public void closeCache()
    {
        // 7. shut down the cache manager
        cm.close();
    }

    public static void main(String[] args)
    {
        TestTerracottaCacheManager testCache = TestTerracottaCacheManager.getInstance();
        testCache.put("titi", "1");
        System.out.println(testCache.get("titi"));
        testCache.closeCache();
    }

    public String getCacheName()
    {
        return cacheName;
    }

    public void setCacheName(String cacheName)
    {
        this.cacheName = cacheName;
    }
}
I've got an exception. Here is the stack trace:
14:18:38.978 [main] ERROR org.ehcache.core.EhcacheManager - Initialize failed.
Exception in thread "main" org.ehcache.StateTransitionException: Unable to validate cluster tier manager for id clustered
at org.ehcache.core.StatusTransitioner$Transition.failed(StatusTransitioner.java:235)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:587)
at fr.test.cache.TestTerracottaCacheManager.<init>(TestTerracottaCacheManager.java:41)
at fr.test.cache.TestTerracottaCacheManager.getInstance(TestTerracottaCacheManager.java:28)
at fr.test.cache.TestTerracottaCacheManager.main(TestTerracottaCacheManager.java:81)
Caused by: org.ehcache.clustered.client.internal.ClusterTierManagerValidationException: Unable to validate cluster tier manager for id clusteredENS
at org.ehcache.clustered.client.internal.ClusterTierManagerClientEntityFactory.retrieve(ClusterTierManagerClientEntityFactory.java:196)
at org.ehcache.clustered.client.internal.service.DefaultClusteringService.autoCreateEntity(DefaultClusteringService.java:215)
at org.ehcache.clustered.client.internal.service.DefaultClusteringService.start(DefaultClusteringService.java:148)
at org.ehcache.core.internal.service.ServiceLocator.startAllServices(ServiceLocator.java:118)
at org.ehcache.core.EhcacheManager.init(EhcacheManager.java:559)
... 3 more
Caused by: org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException: Default resource not aligned. Client: primary-server-resource Server: null
at org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException.withClientStackTrace(InvalidServerSideConfigurationException.java:43)
at org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException.withClientStackTrace(InvalidServerSideConfigurationException.java:22)
at org.ehcache.clustered.common.internal.messages.ResponseCodec.decode(ResponseCodec.java:197)
at org.ehcache.clustered.common.internal.messages.EhcacheCodec.decodeResponse(EhcacheCodec.java:110)
at org.ehcache.clustered.common.internal.messages.EhcacheCodec.decodeResponse(EhcacheCodec.java:37)
at com.tc.object.EntityClientEndpointImpl$InvocationBuilderImpl$1.getWithTimeout(EntityClientEndpointImpl.java:193)
at com.tc.object.EntityClientEndpointImpl$InvocationBuilderImpl$1.getWithTimeout(EntityClientEndpointImpl.java:175)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.waitFor(SimpleClusterTierManagerClientEntity.java:184)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.invokeInternal(SimpleClusterTierManagerClientEntity.java:148)
at org.ehcache.clustered.client.internal.SimpleClusterTierManagerClientEntity.validate(SimpleClusterTierManagerClientEntity.java:120)
at org.ehcache.clustered.client.internal.ClusterTierManagerClientEntityFactory.retrieve(ClusterTierManagerClientEntityFactory.java:190)
... 7 more
Caused by: org.ehcache.clustered.common.internal.exceptions.InvalidServerSideConfigurationException: Default resource not aligned. Client: primary-server-resource Server: null
at org.ehcache.clustered.server.EhcacheStateServiceImpl.checkConfigurationCompatibility(EhcacheStateServiceImpl.java:207)
at org.ehcache.clustered.server.EhcacheStateServiceImpl.validate(EhcacheStateServiceImpl.java:194)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.validate(ClusterTierManagerActiveEntity.java:253)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invokeLifeCycleOperation(ClusterTierManagerActiveEntity.java:203)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invoke(ClusterTierManagerActiveEntity.java:147)
at org.ehcache.clustered.server.ClusterTierManagerActiveEntity.invoke(ClusterTierManagerActiveEntity.java:57)
at com.tc.objectserver.entity.ManagedEntityImpl.performAction(ManagedEntityImpl.java:741)
at com.tc.objectserver.entity.ManagedEntityImpl.invoke(ManagedEntityImpl.java:488)
at com.tc.objectserver.entity.ManagedEntityImpl.lambda$processInvokeRequest$2(ManagedEntityImpl.java:319)
at com.tc.objectserver.entity.ManagedEntityImpl$SchedulingRunnable.run(ManagedEntityImpl.java:1048)
at com.tc.objectserver.entity.RequestProcessor$EntityRequest.invoke(RequestProcessor.java:170)
at com.tc.objectserver.entity.RequestProcessor$EntityRequest.run(RequestProcessor.java:161)
at com.tc.objectserver.entity.RequestProcessorHandler.handleEvent(RequestProcessorHandler.java:27)
at com.tc.objectserver.entity.RequestProcessorHandler.handleEvent(RequestProcessorHandler.java:23)
at com.tc.async.impl.StageQueueImpl$HandledContext.runWithHandler(StageQueueImpl.java:502)
at com.tc.async.impl.StageImpl$WorkerThread.run(StageImpl.java:192)
I think it's a problem in the XML files, but I'm not sure. Can someone help, please?
Thanks
What the exception tells you is that the configuration of the clustered bits of your cache manager and cache differs between what the cluster knows and what the client asks for.
The most likely explanation is that you ran your client code once with a different config, realised there was an issue or just wanted to change something, and then tried to run the client again without destroying the cache manager on the cluster or restarting the server.
You simply need to restart your server to lose all clustered state, since you want a different setup; see the sketch below.
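For reference, a minimal sketch of that restart, assuming the stock scripts in the kit's server/bin directory (the stop script name is an assumption based on the standard kit layout):
# Stopping the server discards all clustered state
./stop-tc-server.sh
# Start again with the same configuration file
./start-tc-server.sh -f /path/to/conf/tc-config-terracotta.xml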
I've tried to reproduce your issue in my IDE, copying / pasting your 3 files.
I found an error in intializeCache():
cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
Object.class, Object.class);
which triggered:
Exception in thread "main" java.lang.IllegalArgumentException: Cache 'myTest' type is <java.lang.String, java.lang.String>, but you retrieved it with <java.lang.Object, java.lang.Object>
at org.ehcache.core.EhcacheManager.getCache(EhcacheManager.java:162)
at MyXmlClient.intializeCache(MyXmlClient.java:48)
So please make sure that your XML configuration matches your Java code: you used <String, String> in the XML, so use <String, String> in your Java code:
cache = cm.getCache(cacheName == null ? DEFAULT_CACHE_NAME : cacheName,
String.class, String.class);
Everything else worked fine!
INFO --- [8148202b7ba8914] customer.logger.tsa : Connection successfully established to server at 127.0.0.1:9510
INFO --- [ main] org.ehcache.core.EhcacheManager : Cache 'myTest' created in EhcacheManager.
1
INFO --- [ main] org.ehcache.core.EhcacheManager : Cache 'myTest' removed from EhcacheManager.
INFO --- [ main] o.e.c.c.i.s.DefaultClusteringService : Closing connection to cluster terracotta://localhost:9510
The error you gave comes from a mismatch between the offheap resource referenced by your client and the offheap resources configured on the Terracotta server; make sure they match! (Copying/pasting your example, they did!)
@AnthonyDahanne I am using the ehcache-clustered-3.8.1-kit to launch the server. My Spring Boot application automatically picks up my ehcache.xml, so I am not explicitly writing a CacheManager.
<ehcache:config
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xmlns:terracotta='http://www.ehcache.org/v3/clustered'
    xmlns:ehcache='http://www.ehcache.org/v3'
    xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.8.xsd
                        http://www.ehcache.org/v3/clustered http://www.ehcache.org/schema/ehcache-clustered-ext-3.8.xsd">

  <ehcache:service>
    <terracotta:cluster>
      <terracotta:connection url="terracotta://localhost:9410/clustered"/>
      <terracotta:server-side-config auto-create="true">
        <!--<terracotta:default-resource from="default-resource"/>-->
        <terracotta:shared-pool name="shared-pool-expense" unit="MB">100</terracotta:shared-pool>
      </terracotta:server-side-config>
    </terracotta:cluster>
  </ehcache:service>

  <ehcache:cache alias="areaOfCircleCache">
    <ehcache:key-type>java.lang.String</ehcache:key-type>
    <ehcache:value-type>com.db.entity.LogMessage</ehcache:value-type>
    <ehcache:resources>
      <!-- <ehcache:heap unit="entries">100</ehcache:heap>
      <ehcache:offheap unit="MB">10</ehcache:offheap> -->
      <terracotta:clustered-dedicated unit="MB">10</terracotta:clustered-dedicated>
    </ehcache:resources>
  </ehcache:cache>
</ehcache:config>

Reference EJBs outside the Application server

I would like to know if any of you have ever called an EJB remotely. This is my scenario:
I have a single remote interface packaged in its own jar file. Then there is an EJB module (another jar file) that depends on the previous one and implements the interface as a @Stateless session bean.
I have deployed the EJB module in JBoss 5.1.0.GA. When I try calling the EJB from within Eclipse, the returned object is not recognized as being of the interface type. Below are the different pieces of Java code.
The Business interface:
@Remote
public interface RemoteBusinessInterface
{
    public CustomerResponse getCustomerData( final CustomerRequest customerRequest );
}
The implementing class, packaged in its own jar file:
@Stateless
public class RemoteEJBBean implements RemoteBusinessInterface
{
    public CustomerResponse getCustomerData( final CustomerRequest customerRequest )
    {...
And the code calling the remote EJB:
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TestRemoteEjb
{
    public static void main( final String[] args )
    {
        try
        {
            InitialContext initialContext = new InitialContext();
            Object ref = initialContext.lookup( "java:/CustomerServiceBean/remote" );
            System.out.println( ref );
            if ( ref instanceof RemoteBusinessInterface )
            {
                System.out.println( "RemoteBusinessInterface" );
            }
            else
            {
                System.out.println( "Not of type RemoteBusinessInterface" );
            }
        }
        catch ( NamingException e )
        {
            e.printStackTrace();
        }
    }
}
The output reads:
Reference Class Name: Proxy for: com.tchouaffe.remote.interfaces.RemoteBusinessInterface
Type: ProxyFactoryKey
Content: ProxyFactory/remote-ejb-1.0.0-SNAPSHOT/CustomerServiceBean/CustomerServiceBean/remote
Type: EJB Container Name
Content: jboss.j2ee:jar=remote-ejb-1.0.0-SNAPSHOT.jar,name=CustomerServiceBean,service=EJB3
Type: Proxy Factory is Local
Content: false
Type: Remote Business Interface
Content: com.tchouaffe.remote.interfaces.RemoteBusinessInterface
Type: Remoting Host URL
Content: socket://127.0.0.1:3873/
Not of type RemoteBusinessInterface
I have been wondering why the returned object is of a type other than RemoteBusinessInterface.
Thanks for any help.
Edmond
Try to check the following points:
It seems that you are not initializing the InitialContext object with the required environment properties. According to the JBoss 5 documentation, the properties needed are:
Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY,"org.jboss.naming.NamingContextFactory");
env.put(Context.URL_PKG_PREFIXES,"org.jboss.naming:org.jnp.interfaces");
env.put(Context.PROVIDER_URL, "jnp://127.0.0.1:1099");
InitialContext context = new InitialContext(env);
Additionally, check that your client code has the necessary dependencies. The documentation is not clear enough about which jar files are required, but this link can help you identify them. You also need to include the jar with the remote interface. A combined sketch follows.
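Putting it together, a minimal sketch of the standalone client (assumptions: the JBoss client jars are on the classpath, and since the java:/ namespace is only visible inside the server VM, the remote lookup name is assumed to be CustomerServiceBean/remote rather than java:/CustomerServiceBean/remote):
import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TestRemoteEjbWithEnv
{
    public static void main( final String[] args )
    {
        try
        {
            // Environment properties per the JBoss 5 documentation
            Properties env = new Properties();
            env.put( Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.NamingContextFactory" );
            env.put( Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces" );
            env.put( Context.PROVIDER_URL, "jnp://127.0.0.1:1099" );
            InitialContext context = new InitialContext( env );

            // Assumed remote JNDI name; java:/... names the in-VM namespace
            Object ref = context.lookup( "CustomerServiceBean/remote" );
            System.out.println( ref instanceof RemoteBusinessInterface );
        }
        catch ( NamingException e )
        {
            e.printStackTrace();
        }
    }
}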
