Unable to do clustering in Tigase Server on CentOS 6.10 64-bit - java

I am trying to set up clustering on CentOS 6.10.
My JDK/JRE versions:
java version "1.8.0_60"
javac 1.8.0_60
This is my config.tdsl file for IP 192.168.4.109:
admins = [
'admin@imrggn.com'
]
'config-type' = 'default'
debug = [ 'server' ]
'default-virtual-host' = 'imrggn.com'
dataSource () {
default () {
uri = 'jdbc:derby:tigasedb;create=true'
}
}
'audit-log' () {}
http () {
setup () {
'admin-password' = 'tigase'
'admin-user' = 'admin'
}
}
pubsub () {
trusted = [ 'http@{clusterNode}' ]
}
'sess-man' () {
'audit-log' () {}
}
'dns-resolver' {
'tigase-resolver-class' = 'tigase.util.DNSResolverDefault'
'tigase-primary-address' = '192.168.4.109'
'tigase-secondary-address' = '192.168.4.109'
}
stun (class: tigase.stun.StunComponent) {
'stun-primary-ip' = 'mc2.imrggn.com'
'stun-primary-port' = 3478
'stun-secondary-ip' = 'hey-sjain-l'
'stun-secondary-port' = 7001
}
'cluster-mode' = true
'cluster-nodes' = [ 'mc2.imrggn.com', 'mc1.imrggn.com' ]
When I capture traffic in Wireshark with the filter tcp.port==5277 (5277 being the default cluster port), I see no packets at all on either machine.
But lsof -iTCP:5277 shows: java 21155 root 175u IPv6 733910 0t0 TCP *:5277 (LISTEN)
My IP is 192.168.4.109 and my hostname is mc2.imrggn.com.
The other machine's IP is 192.168.4.111 and its hostname is mc1.imrggn.com.
What is wrong?

The problem boils down to your database setup (`uri = 'jdbc:derby:tigasedb;create=true'`): for clustering to work (and make sense) you have to use a database that is available to all cluster nodes (in this case MySQL, PostgreSQL, Microsoft SQL Server, or MongoDB).
The reason is two-fold:
1) If you used a local DerbyDB on each node, all information would be local (i.e. you would have a different list of users on each node).
2) (more importantly) Tigase uses cluster auto-discovery, which relies on the database: each node announces its availability via the shared database.
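For example, a shared MySQL dataSource in config.tdsl could look like the sketch below (the host name, database name, and credentials are placeholders, not values from the question):

```
dataSource () {
    default () {
        uri = 'jdbc:mysql://db.example.com:3306/tigasedb?user=tigase&password=secret'
    }
}
```

With every node pointing at the same database, the auto-discovery records written by one node become visible to the others.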

Related

Fetch the public IP of AWS Task in a cluster using Java AWS SDK

I have created a cluster, resource, and service inside a stack using the Java AWS SDK.
Now I need to fetch the public IP of the task inside the cluster.
I have used CloudFormationClient and am trying to fetch the IP using this client itself.
I have used the following code as of now:
String awsRegion = System.getenv("AWS_DEFAULT_REGION");
Region region = Region.of(awsRegion);
CloudFormationClient cfClient = CloudFormationClient.builder().region(region)
.credentialsProvider(EnvironmentVariableCredentialsProvider.create()).build();
CloudFormationWaiter waiter = cfClient.waiter();
Collection<Capability> capabilities = new ArrayList<Capability>();
capabilities.add(Capability.CAPABILITY_NAMED_IAM);
CreateStackRequest stackRequest = CreateStackRequest.builder()
        .stackName(stackName)
        .templateURL(location)
        .capabilities(capabilities)
        .roleARN(roleARN)
        .onFailure(OnFailure.ROLLBACK)
        .build();
CreateStackResponse cfResponse = cfClient.createStack(stackRequest);
DescribeStacksRequest stacksRequest = DescribeStacksRequest.builder().stackName(stackName).build();
DescribeStacksResponse response = cfClient.describeStacks(stacksRequest);
WaiterResponse<DescribeStacksResponse> waiterResponse = waiter.waitUntilStackCreateComplete(stacksRequest);
waiterResponse.matched().response().ifPresent(System.out::println);
EDITED
Tried the following for ECS, according to the first comment:
AmazonECS ecsClient = AmazonECSClientBuilder.standard()
        .withRegion(System.getenv("AWS_DEFAULT_REGION"))
        .withCredentials(new com.amazonaws.auth.EnvironmentVariableCredentialsProvider())
        .build();
ListTasksRequest request = new ListTasksRequest().withCluster("ClusterName");
ListTasksResult tasks = ecsClient.listTasks(request);
List<String> taskArns = tasks.getTaskArns();
for (String taskArn : taskArns) {
System.out.println(taskArn);
DescribeTasksRequest dtr = new DescribeTasksRequest().withTasks(taskArn);
DescribeTasksResult response = ecsClient.describeTasks(dtr);
}
The taskArn is getting printed: arn:aws:ecs:us-east-1:892372858130:task/ClusterName/2c72e43671ef4cc09f7816469696ee3e
But the last line of the code is giving the following error:
com.amazonaws.services.ecs.model.InvalidParameterException: Invalid identifier: Identifier is for cluster NaraCluster. Your cluster is default (Service: AmazonECS; Status Code: 400; Error Code: InvalidParameterException; Request ID: 45f8abfc-66d4-4808-8810-b6e314bb1b15; Proxy: null)
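The error suggests that describeTasks was issued against the default cluster while the task lives in another one, so the cluster has to be passed explicitly via withCluster. As a hedged sketch (the clusterFromTaskArn helper below is mine, not part of the SDK): new-style task ARNs embed the cluster name, so it can even be recovered from the ARN itself.

```java
public class TaskArnUtil {
    // New-style ECS task ARNs look like arn:aws:ecs:region:account:task/<cluster>/<task-id>;
    // old-style ARNs omit the cluster segment, in which case "default" applies.
    static String clusterFromTaskArn(String taskArn) {
        String[] parts = taskArn.substring(taskArn.indexOf(":task/") + 6).split("/");
        return parts.length == 2 ? parts[0] : "default";
    }

    public static void main(String[] args) {
        String arn = "arn:aws:ecs:us-east-1:892372858130:task/ClusterName/2c72e43671ef4cc09f7816469696ee3e";
        System.out.println(clusterFromTaskArn(arn));
        // The request from the question would then become (v1 SDK):
        // DescribeTasksRequest dtr = new DescribeTasksRequest()
        //         .withCluster(clusterFromTaskArn(arn))
        //         .withTasks(arn);
    }
}
```

Passing .withCluster("ClusterName") directly works just as well when the cluster name is already known.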

How to specify role at node level within Akka cluster?

Given the following application.conf:
akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles = ["testrole1", "testrole2"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
To discern between the roles within an Actor I use :
void register(Member member) {
if (member.hasRole("testrole1")) {
//start actor a1
}
else if (member.hasRole("testrole2")) {
//start actor a2
}
}
(adapted from https://doc.akka.io/docs/akka/current/cluster-usage.html)
To enable roles for a node I configure the akka.cluster.roles array within application.conf, but this appears to be at the cluster level rather than the node level. In other words, it does not seem possible to configure application.conf such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2. Should node details be specified at the level of akka.cluster in application.conf?
Is it required to specify a separate application.conf file for each node?
For example, application.conf for testrole1
akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles = ["testrole1"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
application.conf for testrole2 :
akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles = ["testrole2"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
The difference between each application.conf defined above is the value of akka.cluster.roles is either "testrole1" or "testrole2".
How should application.conf be configured such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2? Should node details be specified at the level of akka.cluster in application.conf ?
Update:
Another option is to pass the role name via an environment variable. I've just noticed this is explicitly stated here: https://doc.akka.io/docs/akka/current/typed/cluster.html - "The node roles are defined in the configuration property named akka.cluster.roles and typically defined in the start script as a system property or environment variable." In this scenario, the same application.conf file is used for all nodes, but each node sets an environment variable. For example, an updated application.conf (note the addition of "ENV_VARIABLE"):
akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles = ["ENV_VARIABLE"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
Cluster startup scripts determine the role for each node via the ENV_VARIABLE parameter. Is this a viable solution?
If you're going to assign different roles to different nodes, those nodes cannot use the same configuration. The easiest way to accomplish this is through n1 having "testrole1" in its akka.cluster.roles list and n2 having "testrole2" in its akka.cluster.roles list.
Everything in akka.cluster config is only configuring that node for participation in the cluster (it's configuring the cluster component on that node). A few of the settings have to be the same across the nodes of a cluster (e.g. the SBR settings), but a setting on n1 doesn't affect a setting on n2.
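One way to realize the environment-variable approach from the update is to override akka.cluster.roles programmatically at startup. In the sketch below, NODE_ROLE is an assumed variable name, and the commented-out lines assume Typesafe Config and Akka on the classpath; only the override string is built in plain Java:

```java
public class RoleFromEnv {
    // Builds a HOCON override placing this node's role into akka.cluster.roles
    static String rolesOverride(String role) {
        return "akka.cluster.roles = [\"" + role + "\"]";
    }

    public static void main(String[] args) {
        String role = System.getenv().getOrDefault("NODE_ROLE", "testrole1");
        String overrides = rolesOverride(role);
        System.out.println(overrides);
        // With Typesafe Config / Akka available, the override is applied like this:
        // Config config = ConfigFactory.parseString(overrides)
        //         .withFallback(ConfigFactory.load());
        // ActorSystem system = ActorSystem.create("ClusterSystem", config);
    }
}
```

Alternatively, since Typesafe Config also reads system properties (including indexed list elements), starting each node with -Dakka.cluster.roles.0=testrole1 should achieve the same without code changes.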

'com.sap.mw.jco.rfc.MiddlewareRFC' JCO.nativeInit(): Could not initialize dynamic link library sapjcorfc

I am getting the following error while running a Java class on a Linux server:
bash-3.2$ Exception in thread "main" java.lang.ExceptionInInitializerError: JCO.classInitialize(): Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'
JCO.nativeInit(): Could not initialize dynamic link library sapjcorfc. Found version "2.1.8 (2006-12-11)" but required version "2.1.10 (2011-05-10)".
Below is the program:
public static void startServers() {
accessLog.info("Start The Server");
JCO.addClientPool("POOL", 3, "250", "username", "password", "EN", "saptest", "01");
IRepository repository = JCO.createRepository("REP", "POOL");
accessLog.info("Repostory Details" + repository);
for (int i = 0; i < serverConnections.length; i++) {
// (Change gateway host, service, and program ID according to your needs)
serverConnections[i] = new MyClass(
"sapqa",//gateway host, often the same as host
"sapgw", //gateway service, generally sapgw+<SYSNR>
"PRGRAM_ID", // corresponds to program ID defined in SM59
true, // or false for non unicode listener
repository);
serverConnections[i].start();
}
}

Elasticsearch 1.3. - Call custom REST endpoint from Java

I am currently building an Elasticsearch plugin which exposes a REST endpoint (starting from this post).
I can call my endpoint with curl like this :
curl -X POST 'http://my-es:9200/lt-dev_terminology_v1/english/_terminology?pretty' -d '{
"segment": "database",
"analyzer": "en_analyzer"
}'
My question is: how can I call the same endpoint from Java, using a transport client? Could you point me to some tutorial?
I suggest that you take a look here. It should be a good starting point for you.
Let me sum up.
Considering the following parameters:
String clustername = "...";
String clientTransportHost = "...";
Integer clientTransportPort = ...;
String clientIndex = "...";
String indexType = "...";
Of course you replace the dots with the settings you wish to use.
Then you define your cluster Settings:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", clustername).build();
You instantiate the TransportClient object:
TransportClient client = new TransportClient(settings);
client.addTransportAddress(new InetSocketTransportAddress(clientTransportHost, clientTransportPort));
You can verify your connection using this method:
private void verifyConnection(TransportClient client) {
ImmutableList<DiscoveryNode> nodes = client.connectedNodes();
if (nodes.isEmpty()) {
throw new ElasticsearchException(
"No nodes available. Verify ES is running!");
} else {
log.info("connected to nodes: " + nodes.toString());
}
}
PS: to use the log.info() method you have to instantiate a Logger.
So now you can use the verification method:
verifyConnection(client);
and once you've done all that, you can, for example, prepare a search:
SearchResponse response = client.prepareSearch(clientIndex)
.setTypes(indexType)
.addFields("...", "...")
.setSearchType(SearchType.DEFAULT)
.execute()
.actionGet();
PS: Tested on Elasticsearch 1.3
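If going through the TransportClient is not a hard requirement, the custom REST endpoint from the question can also be called from Java over plain HTTP, exactly like the curl example. A minimal sketch using only the JDK (host, index, and body values are taken from the question; the network call only runs when an argument is passed):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TerminologyClient {
    // JSON body matching the curl example from the question
    static String body(String segment, String analyzer) {
        return "{\"segment\": \"" + segment + "\", \"analyzer\": \"" + analyzer + "\"}";
    }

    // POSTs the body to the given endpoint and returns the HTTP status code
    static int post(String endpoint, String json) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes("UTF-8"));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        String endpoint = "http://my-es:9200/lt-dev_terminology_v1/english/_terminology?pretty";
        String json = body("database", "en_analyzer");
        System.out.println("POST " + endpoint + " with " + json);
        if (args.length > 0) {             // only hit the network when explicitly asked
            System.out.println("HTTP " + post(endpoint, json));
        }
    }
}
```

Calling the endpoint through the TransportClient itself would instead mean invoking the plugin's transport action, which depends on how the plugin registers it.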

Need to create tql queries

I need to create TQL queries to query sets of data from the UCMDB.
I am having two problems:
1) How can I find relationships which exist between CIs? (I do not have administrative privileges, so I need to do it in code somehow.)
I need this to get the required data.
2) I have created the following query, but I keep getting the IP property value as null.
I checked that IP has an attribute called ip_address.
Code:
import java.util.Collection;
import com.hp.ucmdb.api.*;
import com.hp.ucmdb.api.topology.*;
import com.hp.ucmdb.api.types.TopologyRelation;
public class Main {
public static void main(String[] args)throws Exception {
final String HOST_NAME = "192.168.159.132";
final int PORT = 8080;
UcmdbServiceProvider provider = UcmdbServiceFactory.getServiceProvider(HOST_NAME, PORT);
final String USERNAME = "username";
final String PASSWORD = "password";
Credentials credentials = provider.createCredentials(USERNAME, PASSWORD);
ClientContext clientContext = provider.createClientContext("Test");
UcmdbService ucmdbService = provider.connect(credentials, clientContext);
TopologyQueryService queryService = ucmdbService.getTopologyQueryService();
Topology topology = queryService.executeNamedQuery("Host IP");
Collection<TopologyCI> hosts = topology.getAllCIs();
for (TopologyCI host : hosts) {
for (TopologyRelation relation : host.getOutgoingRelations()) {
System.out.print("Host " + host.getPropertyValue("display_label"));
System.out.println (" has IP " + relation.getEnd2CI().getPropertyValue("ip_address"));
}
}
}
}
In the above query output I get the host names with IP = null.
I have a sample query in Jython which I am unable to figure out; it is for the above code only.
Attaching it for anyone who can understand it.
import sys
UCMDB_API="c:/ucmdb/api/ucmdb-api.jar"
sys.path.append(UCMDB_API)
from com.hp.ucmdb.api import *
# 0) Connection settings
HOST_NAME="192.168.159.132"
PORT=8080
USERNAME="username"
PASSWORD="password"
# 1) Get a Service Provider from the UcmdbServiceFactory
provider = UcmdbServiceFactory.getServiceProvider(HOST_NAME, PORT)
# 2) Setup credentials to log in
credentials = provider.createCredentials(USERNAME, PASSWORD)
# 3) Create a client context
clientContext = provider.createClientContext("TESTING")
# 4) Connect and retrieve a UcmdbService object
ucmdbService = provider.connect(credentials, clientContext)
# 5) Get the TopologyQueryService from the UcmdbService
queryService = ucmdbService.getTopologyQueryService()
# ======= Everything After this is specific to the query =======
# 6) Execute a Named Query and get the Topology
topology = queryService.executeNamedQuery('Host IP')
# 7) Get the hosts
hosts = topology.getAllCIs()
# 8) Print the hosts and IPs
host_ip = {}
for host in hosts:
host_name = host.getPropertyValue("display_label")
if host_name in host_ip.keys():
ips = host_ip[host_name]
else:
ips = {}
host_ip[host_name] = ips
for relation in host.getOutgoingRelations():
ip_address = relation.getEnd2CI().getPropertyValue("display_label")
if ip_address in ips.keys():
pass
else:
ips[ip_address] = ''
print "%s , %s" % (host_name, ip_address)
Please help.
I am unable to understand how to go about this further.
Thank you.
The easiest fix would be to use the display_label property from the IP address CI instead of the ip_address property. The Jython reference code uses display_label for its logic.
I'd be a little concerned about using display_label, though, since the display_label formatting logic could be changed to not display the IP address for IP CIs. Getting the data directly from the ip_address property is a better choice and should work if the TQL is defined to return that data. Check the "Host IP" TQL and ensure that it's configured to return ip_address for IP CIs.
