TableNotFoundException when connecting to a remote HBase instance - java

I'm trying to connect to a remote HBase 0.94.8 instance installed on an Ubuntu VM. I'm getting a TableNotFoundException; here is my Java code:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.101");
HTableInterface usersTable = new HTable(config, "users");
Here is the full exception trace:
14/06/24 15:59:48 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: users, row=users,,99999999999999
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:158)
at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
at org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:360)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:103)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:876)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:930)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
at com.heavenize.samples.hbase.UsersTool.main(UsersTool.java:37)
Exception in thread "main" org.apache.hadoop.hbase.TableNotFoundException: users
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:952)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
at com.heavenize.samples.hbase.UsersTool.main(UsersTool.java:37)

You can check whether the table exists before using it. To do that, create an HBaseAdmin instance and then use its isTableAvailable(String tableName) method.
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.101");
HBaseAdmin admin = new HBaseAdmin(config);
if (admin.isTableAvailable("users")) {
    HTableInterface usersTable = new HTable(config, "users");
}
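If the table genuinely doesn't exist yet, you can also create it from the same HBaseAdmin before opening it. A minimal sketch against the 0.94 API; the column family name "info" is only an assumption for illustration:
if (!admin.tableExists("users")) {
    // Hypothetical schema: a single column family named "info".
    HTableDescriptor descriptor = new HTableDescriptor("users");
    descriptor.addFamily(new HColumnDescriptor("info"));
    admin.createTable(descriptor);
}
HTableInterface usersTable = new HTable(config, "users");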
I hope it will help you.

Related

When doing a redeploy of JBoss WAR with Apache Ignite, Failed to marshal custom event: StartRoutineDiscoveryMessage

I am trying to make it possible to redeploy a JBoss 7.1.0 cluster with a WAR that uses Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization.
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter@7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler@2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
Registering this listener a second time was causing that strange error.
I put a try-catch around it that ignores the failure for now and things seem to be OK.
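A cleaner fix than swallowing the exception would be to keep the subscription id that remoteListen returns and deregister the listener before closing Ignite on undeploy. A rough sketch, assuming the same locLsnr, rmtLsnr and cacheConfig objects as above:
// Keep the subscription id returned by remoteListen (a java.util.UUID).
UUID listenerId = ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
        .remoteListen(locLsnr, rmtLsnr,
                EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
// When the application undeploys, unsubscribe before shutting the node down.
ignite.events().stopRemoteListen(listenerId);
ignite.close();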

java.lang.IllegalStateException: A connection to a distributed system already exists in this VM. It has the following configuration:

public class GemfireTest {
public static void main(String[] args) throws NameResolutionException, TypeMismatchException, QueryInvocationTargetException, FunctionDomainException {
ServerLauncher serverLauncher = new ServerLauncher.Builder()
.setMemberName("server1")
.setServerPort(40404)
.set("start-locator", "127.0.0.1[9090]")
.build();
serverLauncher.start();
String queryString = "SELECT * FROM /gemregion";
ClientCache cache = new ClientCacheFactory().create();
QueryService queryService = cache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults)query.execute();
int size = results.size();
System.out.println(size);
}
}
I am trying to run a locator and a server inside my Java application and I am getting the exception below:
Exception in thread "main" java.lang.IllegalStateException: A
connection to a distributed system already exists in this VM. It has
the following configuration: ack-severe-alert-threshold="0"
ack-wait-threshold="15" archive-disk-space-limit="0"
archive-file-size-limit="0" async-distribution-timeout="0"
async-max-queue-size="8" async-queue-timeout="60000"
bind-address="" cache-xml-file="cache.xml"
cluster-configuration-dir="" cluster-ssl-ciphers="any"
cluster-ssl-enabled="false" cluster-ssl-keystore=""
cluster-ssl-keystore-password="" cluster-ssl-keystore-type=""
cluster-ssl-protocols="any"
cluster-ssl-require-authentication="true" cluster-ssl-truststore=""
cluster-ssl-truststore-password="" conflate-events="server"
conserve-sockets="true" delta-propagation="true"
deploy-working-dir="C:\Users\Saranya\IdeaProjects\Gemfire"
disable-auto-reconnect="false" disable-tcp="false"
distributed-system-id="-1" distributed-transactions="false"
durable-client-id="" durable-client-timeout="300"
enable-cluster-configuration="true"
enable-network-partition-detection="true"
enable-time-statistics="false" enforce-unique-host="false"
gateway-ssl-ciphers="any" gateway-ssl-enabled="false"
gateway-ssl-keystore="" gateway-ssl-keystore-password=""
gateway-ssl-keystore-type="" gateway-ssl-protocols="any"
gateway-ssl-require-authentication="true" gateway-ssl-truststore=""
gateway-ssl-truststore-password="" groups=""
http-service-bind-address="" http-service-port="7070"
http-service-ssl-ciphers="any" http-service-ssl-enabled="false"
http-service-ssl-keystore="" http-service-ssl-keystore-password=""
http-service-ssl-keystore-type="" http-service-ssl-protocols="any"
http-service-ssl-require-authentication="false"
http-service-ssl-truststore=""
http-service-ssl-truststore-password="" jmx-manager="false"
jmx-manager-access-file="" jmx-manager-bind-address=""
jmx-manager-hostname-for-clients="" jmx-manager-http-port="7070"
jmx-manager-password-file="" jmx-manager-port="1099"
jmx-manager-ssl-ciphers="any" jmx-manager-ssl-enabled="false"
jmx-manager-ssl-keystore="" jmx-manager-ssl-keystore-password=""
jmx-manager-ssl-keystore-type="" jmx-manager-ssl-protocols="any"
jmx-manager-ssl-require-authentication="true"
jmx-manager-ssl-truststore="" jmx-manager-ssl-truststore-password=""
jmx-manager-start="false" jmx-manager-update-rate="2000"
load-cluster-configuration-from-dir="false" locator-wait-time="0"
locators="127.0.0.1[9090]" (wanted "") lock-memory="false"
log-disk-space-limit="0"
log-file="C:\Users\Saranya\IdeaProjects\Gemfire\server1.log"
(wanted "") log-file-size-limit="0" log-level="config" max-num-reconnect-tries="3" max-wait-time-reconnect="60000"
mcast-address="/239.192.81.1" mcast-flow-control="1048576, 0.25,
5000" mcast-port="0" mcast-recv-buffer-size="1048576"
mcast-send-buffer-size="65535" mcast-ttl="32"
member-timeout="5000" membership-port-range="[1024,65535]"
memcached-bind-address="" memcached-port="0"
memcached-protocol="ASCII" name="server1" (wanted "")
off-heap-memory-size="" redis-bind-address="" redis-password=""
redis-port="0" redundancy-zone="" remote-locators=""
remove-unresponsive-client="false" roles=""
security-client-accessor="" security-client-accessor-pp=""
security-client-auth-init="" security-client-authenticator=""
security-client-dhalgo="" security-log-file=""
security-log-level="config" security-manager=""
security-peer-auth-init="" security-peer-authenticator=""
security-peer-verifymember-timeout="1000" security-post-processor=""
security-shiro-init="" security-udp-dhalgo=""
serializable-object-filter="!" server-bind-address=""
server-ssl-ciphers="any" server-ssl-enabled="false"
server-ssl-keystore="" server-ssl-keystore-password=""
server-ssl-keystore-type="" server-ssl-protocols="any"
server-ssl-require-authentication="true" server-ssl-truststore=""
server-ssl-truststore-password="" socket-buffer-size="32768"
socket-lease-time="60000" ssl-ciphers="any" ssl-cluster-alias=""
ssl-default-alias="" ssl-enabled-components="[]"
ssl-gateway-alias="" ssl-jmx-alias="" ssl-keystore=""
ssl-keystore-password="" ssl-keystore-type="" ssl-locator-alias=""
ssl-protocols="any" ssl-require-authentication="true"
ssl-server-alias="" ssl-truststore="" ssl-truststore-password=""
ssl-truststore-type="" ssl-web-alias=""
ssl-web-require-authentication="false" start-dev-rest-api="false"
start-locator="127.0.0.1[9090]" (wanted "")
statistic-archive-file="" statistic-sample-rate="1000"
statistic-sampling-enabled="true" tcp-port="0"
udp-fragment-size="60000" udp-recv-buffer-size="1048576"
udp-send-buffer-size="65535" use-cluster-configuration="true"
user-command-packages="" validate-serializable-objects="false"
at org.apache.geode.distributed.internal.InternalDistributedSystem.validateSameProperties(InternalDistributedSystem.java:2959)
at org.apache.geode.distributed.DistributedSystem.connect(DistributedSystem.java:199)
at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:243)
at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:214)
at GemfireTest.main(GemfireTest.java:61)
How can I solve this exception?
The error here is pretty self-explanatory: you can’t have more than one connection to a distributed system within a single JVM. In this particular case you’re starting both a server cache (ServerLauncher) and a client cache (ClientCacheFactory) within the same JVM, which is not supported.
To solve the issue, use two different applications or JVMs, one for the server and another one for the client executing the query.
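As a rough sketch of that split (the class names are just placeholders), the server JVM would run only the launcher and the client JVM only the query:
// Server JVM (sketch): starts the locator and the cache server only.
public class ServerApp {
    public static void main(String[] args) {
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .setServerPort(40404)
                .set("start-locator", "127.0.0.1[9090]")
                .build();
        serverLauncher.start();
    }
}
// Client JVM (sketch): connects through the locator and runs the query.
public class ClientApp {
    public static void main(String[] args) throws Exception {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("127.0.0.1", 9090)
                .create();
        QueryService queryService = cache.getQueryService();
        Query query = queryService.newQuery("SELECT * FROM /gemregion");
        SelectResults results = (SelectResults) query.execute();
        System.out.println(results.size());
        cache.close();
    }
}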
Cheers.

Assign memory for individual worker in storm

I need to assign a different amount of memory to each new worker, so I tried setting the memory load for each bolt and spout. I am also using a custom scheduler. Here is my approach to the problem.
MY CODE:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new EmailSpout(), 1).addConfiguration("node", "zoo1").setMemoryLoad(512.0);
builder.setBolt("increment1", new IncrementBolt(), PARALLELISM).shuffleGrouping("spout").addConfiguration("node", "zoo2").setMemoryLoad(2048.0);
builder.setBolt("increment2", new IncrementBolt(), PARALLELISM).shuffleGrouping("increment1").addConfiguration("node", "zoo3").setMemoryLoad(2048.0);
builder.setBolt("increment3", new IncrementBolt(), PARALLELISM).shuffleGrouping("increment2").addConfiguration("node", "zoo4").setMemoryLoad(2048.0);
builder.setBolt("output", new OutputBolt(), 1).globalGrouping("increment2").addConfiguration("node", "zoo1").setMemoryLoad(512.0);
Config conf = new Config();
conf.setDebug(false);
conf.setNumWorkers(4);
StormSubmitter.submitTopologyWithProgressBar("Microbenchmark", conf, builder.createTopology());
MY STORM.YAML:
storm.zookeeper.servers:
- "zoo1"
storm.zookeeper.port: 2181
nimbus.seeds: ["zoo1"]
storm.local.dir: "/home/ubuntu/eranga/storm-data"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
storm.scheduler: "org.apache.storm.scheduler.NodeBasedCustomScheduler"
supervisor.scheduler.meta:
node: "zoo4"
worker.profiler.enabled: true
worker.profiler.childopts: "-XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
worker.profiler.command: "flight.bash"
worker.heartbeat.frequency.secs: 1
worker.childopts: "-Xmx2048m -Xms2048m -Djava.net.preferIPv4Stack=true -Dorg.xml.sax.driver=com.sun.org.apache.xerces.internal.parsers.SAXParser -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl -Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl"
When I submit the topology I get the following error.
ERROR:
Exception in thread "main" java.lang.IllegalArgumentException: Topology will not be able to be successfully scheduled: Config TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB=768.0 < 2048.0 (Largest memory requirement of a component in the topology). Perhaps set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount
at org.apache.storm.StormSubmitter.validateTopologyWorkerMaxHeapSizeMBConfigs(StormSubmitter.java:496)
Any suggestions?
Try using this:
import java.util.List;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.StormTopology;
public class TopologyExecuter {
    public static void submitAll(List<StormTopology> topologies) throws Exception {
        for (StormTopology topology : topologies) {
            Config topologyConf = new Config();
            topologyConf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx512m -Xms256m");
            StormSubmitter.submitTopology("topology name", topologyConf, topology);
        }
    }
}
Did you try following the advice from the error message?
Perhaps set TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB to a larger amount
Try adding this to storm.yaml:
topology.worker.max.heap.size.mb: 2048.0
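Alternatively, the same limit can be raised programmatically when building the topology; a sketch based on the submission code from the question:
Config conf = new Config();
conf.setDebug(false);
conf.setNumWorkers(4);
// Raise the per-worker heap cap so the 2048 MB components can be scheduled.
conf.setTopologyWorkerMaxHeapSize(2048.0);
StormSubmitter.submitTopologyWithProgressBar("Microbenchmark", conf, builder.createTopology());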

drop table not working - com.datastax.driver.core

Dropping a table using the DataStax driver for Cassandra doesn't appear to be working. Creating a table works, but dropping a table does not, and no exception is thrown. 1) Am I doing the drop correctly? 2) Has anyone else seen this behavior?
In the output you can see that the table gets created and apparently dropped, since it is not in the second table listing of the first run. However, when I reconnect (second run) the table is there, resulting in an exception.
import java.util.Collection;
import com.datastax.driver.core.*;
public class Fail {
SimpleStatement createTableCQL = new SimpleStatement("create table test_table(testfield varchar primary key)");
SimpleStatement dropTableCQL = new SimpleStatement("drop table test_table");
Session session = null;
Cluster cluster = null;
public Fail()
{
System.out.println("First Run");
this.run();
System.out.println("Second Run");
this.run();
}
private void run()
{
try
{
cluster = Cluster.builder().addContactPoints("10.48.8.43 10.48.8.47 10.48.8.53")
.withCredentials("394016","394016")
.withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ALL))
.build();
session = cluster.connect("gid394016");
}
catch(Exception e)
{
System.err.println(e.toString());
System.exit(1);
}
//create the table
System.out.println("createTableCQL");
this.session.execute(createTableCQL);
//list tables in the keyspace
System.out.println("Table list:");
Collection<TableMetadata> results1 = cluster.getMetadata().getKeyspace("gid394016").getTables();
for (TableMetadata tm : results1)
{
System.out.println(tm.toString());
}
//drop the table
System.out.println("dropTableCQL");
this.session.execute(dropTableCQL);
//list tables in the keyspace
System.out.println("Table list:");
Collection<TableMetadata> results2 = cluster.getMetadata().getKeyspace("gid394016").getTables();
for (TableMetadata tm : results2)
{
System.out.println(tm.toString());
}
session.close();
cluster.close();
}
public static void main(String[] args) {
new Fail();
}
}
Console output:
First Run
[main] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Table list:
CREATE TABLE gid394016.test_table (testfield text, PRIMARY KEY (testfield)) WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1 AND gc_grace_seconds = 864000 AND bloom_filter_fp_chance = 0.01 AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' } AND comment = '' AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' } AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.LZ4Compressor' } AND default_time_to_live = 0 AND speculative_retry = '99.0PERCENTILE' AND min_index_interval = 128 AND max_index_interval = 2048;
dropTableCQL
Table list:
Second Run
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Exception in thread "main" com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:217)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:54)
at com.bdcauto.cassandrachecks.Fail.run(Fail.java:38)
at com.bdcauto.cassandrachecks.Fail.<init>(Fail.java:17)
at com.bdcauto.cassandrachecks.Fail.main(Fail.java:65)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:130)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:118)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:151)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:175)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:44)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:801)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1014)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:937)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:69)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:230)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:221)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 14 more
You are running this code with the table already present in the database, and that's why you are getting the "already exists" error. Please connect to the database using cqlsh and check that yourself.
Create, alter, and drop table statements are propagated throughout the cluster asynchronously. Even though you receive a response from the coordinator, you still need to wait for schema agreement.
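Until schema agreement is reached, the metadata you read back can be stale. With the Java driver you can check agreement explicitly after each DDL statement; a minimal sketch against the 3.x API:
// Execute the DDL, then wait for all nodes to agree on the new schema.
ResultSet rs = session.execute(dropTableCQL);
if (!rs.getExecutionInfo().isSchemaInAgreement()) {
    // Blocks up to the driver's configured max schema agreement wait time.
    cluster.getMetadata().checkSchemaAgreement();
}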

getting exception: The Network Adapter could not establish the connection in simple jdbc program

I was trying to run a simple JDBC program (using JDK 1.6 and Oracle 10g):
package javaapplication2;
import java.text.*;
import java.sql.*;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.io.BufferedWriter;
/**
* @author animark
*/
public class CallableStatementEx1 {
public CallableStatementEx1(){;}
public static void main(String s[]) throws Exception {
try
{
Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
Connection con=null;
String url = "jdbc:oracle:thin:@//localhost:1521:orcl";
con = DriverManager.getConnection(url,"scott","password");
String query="update emp set HIREDATE=?,ENAME=? where empno=?";
//Step1: Get PreparedStatement
PreparedStatement ps=con.prepareStatement(query);
//Prepare java.sql.Date object
/*
This logic shows how to convert simple String that is in
dd-MM-yyyy format into Date object
*/
SimpleDateFormat sdf=new SimpleDateFormat("dd-MM-yyyy");
java.util.Date d=sdf.parse("26-12-2001");
java.sql.Date newdate=new java.sql.Date(d.getTime());
//Step2: set parameters
ps.setDate(1,newdate);
ps.setString(2,"animark");
ps.setInt(3,7839);
//Step3: execute the query
int i=ps.executeUpdate();
System.out.println("record updated count: "+i);
con.close();
}
catch(Exception e)
{
e.printStackTrace();
}
}//main
}//class
The code compiles properly, but when I try to run it I get the following exception:
java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:414)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at javaapplication2.CallableStatementEx1.main(CallableStatementEx1.java:19)
I've checked the Oracle services and all of them are up and running.
Also, please find the contents of the other configuration files below.
=================================================================================
tnsnames.ora:
# tnsnames.ora Network Configuration File: C:\oraclexe\app\oracle\product\10.2.0\server\BIN\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
====================================================================================
sqlnet.ora
# sqlnet.ora Network Configuration File: C:\oraclexe\app\oracle\product\10.2.0\server\BIN\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.
# This file is actually generated by netca. But if customers choose to
# install "Software Only", this file wont exist and without the native
# authentication, they will not be able to connect to the database on NT.
SQLNET.AUTHENTICATION_SERVICES= (NTS)
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
================================================================================
# listener.ora Network Configuration File: C:\oraclexe\app\oracle\product\10.2.0\server\BIN\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\oraclexe\app\oracle\product\10.2.0\server\BIN)
(PROGRAM = extproc)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
)
)
===============================================================================
I am able to connect to the instance "orcl" using the credentials "scott/password", but when I try to connect using the statement
SQL> connect sys/password@orcl as sysdba
I'm getting the following error..
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor.
I've also run LSNRCTL for orcl and found
LSNRCTL for 32-bit Windows: Version 10.2.0.1.0 - Production on 22-JUL-2012 13:42:30
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
The command completed successfully
Please help me if I'm doing anything wrong here.
Try taking the // out of the connection URL. Instead of
String url = "jdbc:oracle:thin:@//localhost:1521:orcl";
try
String url = "jdbc:oracle:thin:@localhost:1521:orcl";
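For reference, the thin driver accepts two URL forms, and mixing them is what trips the connection up: the SID form has no //, while the service-name form uses // and a / before the service name. A sketch of both, assuming orcl is valid here as both a SID and a service name:
// SID syntax: no //, colon before the SID.
String sidUrl = "jdbc:oracle:thin:@localhost:1521:orcl";
// Service-name syntax: // after the @, slash before the service name.
String serviceUrl = "jdbc:oracle:thin:@//localhost:1521/orcl";
Connection con = DriverManager.getConnection(sidUrl, "scott", "password");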
