How to read from Hive HDFS in spark 1.6? - java

I have a few tables in Hive, stored on HDFS. How do I read them into a DataFrame from Spark, and how does HiveContext know where my Hive warehouse is?
My current code is below. It throws an out-of-memory error for some reason, even though these tables are tiny: 30k rows at most, 3-5 columns.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

SparkConf sparkConf = new SparkConf().setAppName("Hive Test").setMaster("local[*]");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
HiveContext sqlContext = new HiveContext(ctx);
// Queries are expressed in HiveQL.
DataFrame results = sqlContext.sql("SELECT * FROM departments");
results.show();
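On the second question: HiveContext locates the metastore and warehouse from a hive-site.xml found on the driver's classpath (usually the one in Spark's conf/ directory). Without it, Spark 1.6 quietly spins up a local embedded Derby metastore, which is what the HiveMetaStore/ObjectStore lines further down hint at. A minimal sketch, continuing from the code above (the thrift URI is a placeholder, not taken from the question):
// Sketch: check which tables this HiveContext can actually see. An empty list
// usually means it created a fresh local metastore instead of using the cluster's.
for (String table : sqlContext.tableNames()) {
    System.out.println(table);
}
// The metastore and warehouse can also be pointed at explicitly. Whether this takes
// effect programmatically depends on when the metastore client is first created,
// so hive-site.xml on the classpath is the more reliable route.
sqlContext.setConf("hive.metastore.uris", "thrift://your-metastore-host:9083"); // placeholder host
sqlContext.setConf("hive.metastore.warehouse.dir", "/user/hive/warehouse");     // warehouse root shown in the listing below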
$ hdfs dfs -ls /user/hive/warehouse
Found 6 items
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:09 /user/hive/warehouse/categories
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:13 /user/hive/warehouse/customers
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:09 /user/hive/warehouse/departments
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:11 /user/hive/warehouse/order_items
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:16 /user/hive/warehouse/orders
drwxrwxrwt - [Omitted] hive 0 2016-10-30 14:09 /user/hive/warehouse/products
16/10/30 14:38:29 INFO ClientWrapper: Inspected Hadoop version: 2.6.0
16/10/30 14:38:29 INFO ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
16/10/30 14:38:30 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/10/30 14:38:30 INFO ObjectStore: ObjectStore, initialize called
16/10/30 14:38:30 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/10/30 14:38:30 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/10/30 14:38:30 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/30 14:38:30 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/10/30 14:38:31 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
Exception in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"

Related

Yaml change not detected, Exception encountered during startup: Invalid yaml: file:/etc/cassandra/cassandra.yaml

I have changed the Cassandra configuration file:
cat /etc/cassandra/cassandra.yaml | grep -n 'seed'
416:seed_provider:
423: # seeds is actually a comma-delimited list of addresses.
425: - seeds:"84.208.89.132,192.168.0.23,192.168.0.25,192.168.0.28"
and also the cluster name:
10:cluster_name: 'Petter Cluster'
I am surprised by what system.log shows:
INFO [main] 2018-01-27 17:20:51,343 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
ERROR [main] 2018-01-27 17:20:51,427 CassandraDaemon.java:706 - Exception encountered during startup: Invalid yaml: file:/etc/cassandra/cassandra.yaml
Error: while parsing a block mapping; expected <block end>, but found FlowEntry; in 'reader', line 425, column 34:
- seeds: "192.168.0.13","192.168.0.23","192.168.0.25"," ...
^
INFO [main] 2018-02-03 20:35:48,528 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
ERROR [main] 2018-02-03 20:35:48,844 CassandraDaemon.java:706 - Exception encountered during startup: Invalid yaml: file:/etc/cassandra/cassandra.yaml
Error: null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config#551bdc27; java.lang.reflect.InvocationTargetException; in 'reader', line 10, column 1:
cluster_name: 'Test Cluster'
^
INFO [main] 2018-02-03 20:39:08,311 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
ERROR [main] 2018-02-03 20:39:08,647 CassandraDaemon.java:706 - Exception encountered during startup: Invalid yaml: file:/etc/cassandra/cassandra.yaml
Error: null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=seed_provider for JavaBean=org.apache.cassandra.config.Config#551bdc27; java.lang.reflect.InvocationTargetException; in 'reader', line 10, column 1:
cluster_name: 'Test Cluster'
How do I fix this? How do I reinitialize the system after the changes?
It seems you have run into an issue with the cluster name; it has to be changed on all the nodes if you want to change it.
Here are the instructions to change the cluster name:
1. Log into cqlsh
2. cqlsh> UPDATE system.local SET cluster_name = 'Petter Cluster' where key='local'; (You need to issue this command on each of the nodes where you would like to change the cluster name.)
system.local gets changed only locally
3. cqlsh> exit;
4. $ nodetool flush system
5. Edit the cluster name in cassandra.yaml to YOUR_CLUSTER_NAME.
6. Restart Cassandra.
Note also that the "Invalid yaml" error itself points at the seeds line: in cassandra.yaml, seeds must be a single comma-delimited string inside one pair of quotes, for example - seeds: "192.168.0.13,192.168.0.23,192.168.0.25", not several separately quoted strings.
Please check this link as well:
https://surbhinosqldba.wordpress.com/2015/07/23/how-to-rename-modify-cassandra-cluster-name/

java.nio.channels.UnresolvedAddressException when Tranquility indexes data to Druid

I am trying Tranquility with Druid 0.11 and Kafka. When Tranquility receives new data, it throws the following exception:
2018-01-12 18:27:34,010 [Curator-ServiceCache-0] INFO c.m.c.s.net.finagle.DiscoResolver - Updating instances for service[firehose:druid:overlord:flow-018-0000-0000] to Set(ServiceInstance{name='firehose:druid:overlord:flow-018-0000-0000', id='ea85b248-0c53-4ec1-94a6-517525f72e31', address='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, sslPort=-1, payload=null, registrationTimeUTC=1515781653895, serviceType=DYNAMIC, uriSpec=null})
Jan 12, 2018 6:27:37 PM com.twitter.finagle.netty3.channel.ChannelStatsHandler exceptionCaught
WARNING: ChannelStatsHandler caught an exception
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779)
at org.jboss.netty.channel.SimpleChannelHandler.connectRequested(SimpleChannelHandler.java:306)
The worker was created by the middleManager:
2018-01-12T18:27:25,704 INFO [WorkerTaskMonitor] io.druid.indexing.worker.WorkerTaskMonitor - Submitting runnable for task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
2018-01-12T18:27:25,719 INFO [WorkerTaskMonitor] io.druid.indexing.worker.WorkerTaskMonitor - Affirmative. Running task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
And Tranquility seems to talk to the overlord fine, judging by the following logs:
2018-01-12T18:27:25,268 INFO [qtp271944754-62] io.druid.indexing.overlord.TaskLockbox - Adding task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] to activeTasks
2018-01-12T18:27:25,272 INFO [TaskQueue-Manager] io.druid.indexing.overlord.TaskQueue - Asking taskRunner to run: index_realtime_flow_2018-01-12T18:00:00.000Z_0_0
2018-01-12T18:27:25,272 INFO [TaskQueue-Manager] io.druid.indexing.overlord.RemoteTaskRunner - Added pending task index_realtime_flow_2018-01-12T18:00:00.000Z_0_0
2018-01-12T18:27:25,279 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - No worker selection strategy set. Using default of [EqualDistributionWorkerSelectStrategy]
2018-01-12T18:27:25,294 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - Coordinator asking Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] to add task[index_realtime_flow_2018-01-12T18:00:00.000Z_0_0]
2018-01-12T18:27:25,334 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.RemoteTaskRunner - Task index_realtime_flow_2018-01-12T18:00:00.000Z_0_0 switched from pending to running (on [druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091])
2018-01-12T18:27:25,336 INFO [rtr-pending-tasks-runner-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] status changed to [RUNNING].
2018-01-12T18:27:25,747 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.RemoteTaskRunner - Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] wrote RUNNING status for task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] on [TaskLocation{host='null', port=-1, tlsPort=-1}]
2018-01-12T18:27:25,829 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.RemoteTaskRunner - Worker[druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local:8091] wrote RUNNING status for task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] on [TaskLocation{host='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, tlsPort=-1}]
2018-01-12T18:27:25,829 INFO [Curator-PathChildrenCache-1] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_realtime_flow_2018-01-12T18:00:00.000Z_0_0] location changed to [TaskLocation{host='druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local', port=8100, tlsPort=-1}].
What's wrong? I have tried a thousand things and nothing solves it.
Thanks a lot.
You have to have all the Druid cluster information set on the servers running Tranquility.
This is because you only get the DNS names of your Druid cluster from ZooKeeper, not the IPs.
For example, on a Linux server, put your cluster's host-to-IP mappings in /etc/hosts.
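A quick way to check whether this is the issue (an assumption based on the UnresolvedAddressException, not something Tranquility provides): from the machine running Tranquility, confirm that the worker hostname reported in the logs actually resolves. If the lookup fails, add the hostname-to-IP mapping to /etc/hosts or DNS as described above.
import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Hostname taken from the task logs above; replace it with whatever your overlord reports.
        String host = "druid-md-deployment-7877777bf7-tmmvh.druid-md-hs.default.svc.cluster.local";
        InetAddress addr = InetAddress.getByName(host); // throws UnknownHostException if it cannot resolve
        System.out.println(host + " -> " + addr.getHostAddress());
    }
}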

Running Dataflow locally causes JVM crash (OOM)

Using the DirectPipelineRunner, I'd like to run my pipeline locally for debugging purposes. I'm using SDK 1.9.0 with Java 8.
My pipeline reads a table from BigQuery, transforms some fields, and writes back to BigQuery.
Running on GCP, i.e. using the DataflowPipelineRunner, works absolutely fine. However, when I use the DirectPipelineRunner it just keeps spitting out the following log messages and does nothing else:
19:45:05,470 21866 [main] INFO com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner - Executing pipeline using the DirectPipelineRunner.
19:45:18,594 34990 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_c88ee6741e434aabbf50e73d4e6733d1-extract found.
19:45:27,344 43740 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_012dca76d75e461480fe75897b5fa7ba-extract found.
19:45:38,150 54546 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_3548a0ee373a417e8e7570ae90aef78d-extract found.
19:45:47,912 64308 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_db0b957250ef41279a639bdc113c5493-extract found.
19:45:56,685 73081 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_3773e0643ec14475aaa140bcf46ea7af-extract found.
19:46:45,958 122354 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_27af9a1163944cb19e520242de98d899-extract found.
19:46:55,766 132162 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_5473e6702b3544118c7da8877c900f7a-extract found.
19:47:04,015 140411 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_40f47d35aa154708a6fc684c8ffb0ba4-extract found.
19:47:11,913 148309 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_6dce34301c97498884d7344b85a1b07e-extract found.
19:47:35,809 172205 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_4f7c26d372974095a24ac58b547c13d6-extract found.
19:47:45,136 181532 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_a7c33e75bfdb41a6990dd66810a0d44a-extract found.
19:47:55,802 192198 [main] INFO com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl - No BigQuery job with job id beam_job_a1d7422ca42a4b1d96205bf8c6dada9d-extract found.
The log message is coming from here:
@VisibleForTesting
public Job getJob(JobReference jobRef, Sleeper sleeper, BackOff backoff)
    throws IOException, InterruptedException {
  String jobId = jobRef.getJobId();
  Exception lastException;
  do {
    try {
      return client.jobs().get(jobRef.getProjectId(), jobId).execute();
    } catch (GoogleJsonResponseException e) {
      if (errorExtractor.itemNotFound(e)) {
        LOG.info("No BigQuery job with job id {} found.", jobId);
        return null;
      }....
Eventually, the JVM runs out of memory:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at java.lang.String.toLowerCase(String.java:2647)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:847)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:472)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:781)
at com.google.api.client.json.JsonParser.parseArray(JsonParser.java:648)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:740)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:472)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:781)
at com.google.api.client.json.JsonParser.parseArray(JsonParser.java:648)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:740)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:472)
at com.google.api.client.json.JsonParser.parseValue(JsonParser.java:781)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:382)
at com.google.api.client.json.JsonParser.parse(JsonParser.java:355)
at com.google.api.client.json.JsonObjectParser.parseAndClose(JsonObjectParser.java:87)
at com.google.api.client.json.JsonObjectParser.parseAndClose(JsonObjectParser.java:81)
at com.google.api.client.http.HttpResponse.parseAs(HttpResponse.java:459)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.dataflow.sdk.util.BigQueryTableRowIterator.executeWithBackOff(BigQueryTableRowIterator.java:497)
at com.google.cloud.dataflow.sdk.util.BigQueryTableRowIterator.advance(BigQueryTableRowIterator.java:180)
at com.google.cloud.dataflow.sdk.util.BigQueryServicesImpl$BigQueryJsonReaderImpl.advance(BigQueryServicesImpl.java:555)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$BigQuerySourceBase$BigQueryReader.advance(BigQueryIO.java:1331)
at com.google.cloud.dataflow.sdk.io.Read$Bounded$1.evaluateReadHelper(Read.java:180)
at com.google.cloud.dataflow.sdk.io.Read$Bounded$1.evaluate(Read.java:168)
at com.google.cloud.dataflow.sdk.io.Read$Bounded$1.evaluate(Read.java:164)
at com.google.cloud.dataflow.sdk.runners.DirectPipelineRunner$Evaluator.visitTransform(DirectPipelineRunner.java:858)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:221)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:217)
at com.google.cloud.dataflow.sdk.runners.TransformTreeNode.visit(TransformTreeNode.java:217)
at com.google.cloud.dataflow.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:103)
The table in BigQuery only has 100 rows (it's just for debugging).
What is the problem here?
I believe the BigQuery message is a red herring; the stack trace of the OOM indicates that data is being read directly from the table rather than via an export job.
The DirectPipelineRunner is not at all optimized for memory utilization; try using the newer InProcessPipelineRunner. Additionally, it may be worth using standard Java heap-profiling tools to see where the memory is being used.
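For reference, switching runners in SDK 1.9.0 can look like the sketch below (assuming the InProcessPipelineRunner class shipped in the 1.x SDK under com.google.cloud.dataflow.sdk.runners.inprocess; the pipeline body itself is omitted):
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.inprocess.InProcessPipelineRunner;

public class LocalDebugMain {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        options.setRunner(InProcessPipelineRunner.class); // in-process runner instead of DirectPipelineRunner

        Pipeline pipeline = Pipeline.create(options);
        // ... attach the BigQuery read, the field transforms, and the BigQuery write here ...
        pipeline.run();
    }
}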

Log4j - SetLevel to Error but INFO / DEBUG Log Still Appear

Using Scala and Spark, I am trying to build a simple app. To avoid too many logs, I have set the log level to Level.ERROR, but it seems all the logs still appear. Here is my code:
import org.apache.log4j.{BasicConfigurator, Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by hduser on 16/08/16.
 */
object ALS_Test {
  def main(command: Array[String]): Unit = {
    Logger.getRootLogger
    Logger.getLogger(this.getClass).setLevel(Level.ERROR)
    Logger.getLogger("org.spark_project").setLevel(Level.ERROR)
    BasicConfigurator.configure()

    val sparkConf = new SparkConf().setAppName("AppName").setMaster("local[4]")
    val sc = new SparkContext(sparkConf)
    println("test 123")
  }
}
The output when running still contains many unwanted logs:
0 [main] INFO org.apache.spark.SparkContext - Running Spark version 2.0.0
163 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation #org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time)
175 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation #org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time)
176 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation #org.apache.hadoop.metrics2.annotation.Metric(always=false, about=, sampleName=Ops, type=DEFAULT, value=[GetGroups], valueName=Time)
177 [main] DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics
555 [main] DEBUG org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
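Judging by the logger names in the output above (org.apache.spark and org.apache.hadoop), the code only changes levels for this.getClass and "org.spark_project", which are not the loggers producing these lines. One hedged fix, using the same log4j 1.x API the Scala code calls (shown here in Java; the helper class name is made up), is to raise the root logger, or those two namespaces, to ERROR before the SparkContext is created:
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public final class QuietLogs {
    private QuietLogs() {}

    // Call this before new SparkContext(...) so the levels are in place when Spark starts logging.
    public static void silence() {
        // BasicConfigurator only attaches a console appender; the root logger stays at DEBUG.
        Logger.getRootLogger().setLevel(Level.ERROR);
        // Or target the namespaces seen in the output above.
        Logger.getLogger("org.apache.spark").setLevel(Level.ERROR);
        Logger.getLogger("org.apache.hadoop").setLevel(Level.ERROR);
    }
}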

Ehcache Jgroups replication using TCP

I am trying to set up a replicated cache using JGroups in Ehcache. I am having problems clustering the cache. I created two projects in Eclipse, each referring to a different ehcache.xml configuration file.
Both configuration files are identical and are given below.
<?xml version="1.0"?>
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
properties="connect=TCP(bind_port=7800):
TCPPING(initial_hosts=localhost[7800],localhost[7801];port_range=10;timeout=3000;
num_initial_members=3):
VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(retransmit_timeout=3000):
pbcast.GMS(join_timeout=50000;print_local_addr=true)"
propertySeparator="::" />
<cache name="sampleCache"
maxElementsInMemory="1000000"
eternal="true"
overflowToDisk="false">
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
properties="replicateAsynchronously=true"/>
</cache>
I am using the following jar files in my classpath.
-ehcache-2.9.0.jar
-ehcache-jgroupsreplication-1.7.jar
-jgroups-3.6.0.Final.jar
-log4j-1.2.16
When I run the programs, project 1 shows:
63511 [main] DEBUG org.jgroups.protocols.pbcast.NAKACK -
[SBSPBWSVM110-42986 setDigest()]
existing digest: []
new digest: SBSPBWSVM110-42986: [0 (0)]
resulting digest: SBSPBWSVM110-42986: [0 (0)]
63511 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-42986: installing view [SBSPBWSVM110-42986|0] (1) [SBSPBWSVM110-42986]
63543 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-42986: created cluster (first member). My view is [SBSPBWSVM110-42986|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
Jan 09, 2015 11:49:51 AM net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider init
INFO:JGroups Replication started for 'EH_CACHE'. JChannel: local_addr=SBSPBWSVM110-42986
cluster_name=EH_CACHE
my_view=[SBSPBWSVM110-42986|0] (1) [SBSPBWSVM110-42986]
state=CONNECTED
discard_own_messages=true
state_transfer_supported=false
When I run the programs, project 2 shows:
63451 [main] DEBUG org.jgroups.protocols.pbcast.NAKACK -
[SBSPBWSVM110-20554 setDigest()]
existing digest: []
new digest: SBSPBWSVM110-20554: [0 (0)]
resulting digest: SBSPBWSVM110-20554: [0 (0)]
63451 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-20554: installing view [SBSPBWSVM110-20554|0] (1) [SBSPBWSVM110-20554]
63452 [main] DEBUG org.jgroups.protocols.pbcast.GMS - SBSPBWSVM110-20554: created cluster (first member). My view is [SBSPBWSVM110-20554|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
Jan 09, 2015 11:49:51 AM net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProvider init
INFO: JGroups Replication started for 'EH_CACHE'. JChannel: local_addr=SBSPBWSVM110-20554
cluster_name=EH_CACHE
my_view=[SBSPBWSVM110-20554|0] (1) [SBSPBWSVM110-20554]
state=CONNECTED
discard_own_messages=true
state_transfer_supported=false
But the replication simply isn't happening. I have done RMI replication with Ehcache following the same approach, so I am assuming nothing is wrong in my Java code.
I am unable to find the issue here. Is my configuration wrong? Please help me with this issue.
The config you use is strange: it's missing some protocols. Can't Ehcache refer to a JGroups config file, e.g. udp.xml?
Also, set bind_addr in TCP, or use -Djgroups.bind_addr=1.2.3.4 where 1.2.3.4 is the network interface.
Then, in TCPPING.initial_hosts, you'll need to list all the members with the bind addresses you used above, e.g. 1.2.3.4[7800],5.6.7.8[7800], etc.
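Once the protocol stack, bind_addr, and initial_hosts are sorted out, a small smoke test along these lines (Ehcache 2.x API; the class and key names are made up for illustration) can show whether replication is actually happening: run one instance per node, putting on the first and reading on the second.
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ReplicationSmokeTest {
    public static void main(String[] args) throws Exception {
        // Loads the ehcache.xml (with the JGroups peer provider) from the classpath.
        CacheManager manager = CacheManager.newInstance();
        Cache cache = manager.getCache("sampleCache");
        if (args.length > 0) {
            cache.put(new Element("greeting", args[0])); // writer node
        } else {
            Element e = cache.get("greeting");           // reader node
            System.out.println(e == null ? "not replicated yet" : e.getObjectValue());
        }
        Thread.sleep(5000); // give asynchronous replication a moment
        manager.shutdown();
    }
}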
