So I have this code to connect to Openfire:
XMPPTCPConnectionConfiguration.Builder config = XMPPTCPConnectionConfiguration.builder();
config.setUsernameAndPassword(loginUser, passwordUser);
config.setSecurityMode(ConnectionConfiguration.SecurityMode.disabled);
config.setServiceName(serverAddress);
config.setHost(serverAddress);
config.setPort(5222);
config.setDebuggerEnabled(true);
connection = new XMPPTCPConnection(config.build());
ReconnectionManager.getInstanceFor(connection).enableAutomaticReconnection();
System.out.println("Reconnection enabled : " + ReconnectionManager.getInstanceFor(connection).isAutomaticReconnectEnabled());
ConnectionListener connectionListener = new XMPPConnectionListener();
connection.addConnectionListener(connectionListener);
but when I try to connect I get this error:
org.jivesoftware.smack.XMPPException$StreamErrorException: internal-server-error You can read more about the meaning of this stream error at http://xmpp.org/rfcs/rfc6120.html#streams-error-conditions
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1007)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$300(XMPPTCPConnection.java:948)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:963)
at java.lang.Thread.run(Thread.java:744)
EDIT: Openfire's log:
Warning log:
2016.06.13 11:06:31 org.apache.mina.core.filterchain.DefaultIoFilterChain - Unexpected exception from exceptionCaught handler.
java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
at org.jivesoftware.openfire.roster.Roster.broadcastPresence(Roster.java:628)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.broadcastUpdate(PresenceUpdateHandler.java:309)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:163)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:138)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:202)
at org.jivesoftware.openfire.PresenceRouter.handle(PresenceRouter.java:144)
at org.jivesoftware.openfire.PresenceRouter.route(PresenceRouter.java:80)
at org.jivesoftware.openfire.spi.PacketRouterImpl.route(PacketRouterImpl.java:88)
at org.jivesoftware.openfire.SessionManager$ClientSessionListener.onConnectionClose(SessionManager.java:1267)
at org.jivesoftware.openfire.nio.NIOConnection.notifyCloseListeners(NIOConnection.java:266)
at org.jivesoftware.openfire.nio.NIOConnection.close(NIOConnection.java:248)
at org.jivesoftware.openfire.nio.ConnectionHandler.exceptionCaught(ConnectionHandler.java:162)
I tried to connect to a local Openfire server (Windows) and succeeded, but I fail when I try to connect to an Ubuntu Openfire server.
Any help would be appreciated.
Newer versions of Openfire need Java 8 (or higher).
To be precise: Openfire needs the Oracle JRE 8, NOT OpenJDK.
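For what it's worth, the NoSuchMethodError in the Openfire warn log is the classic symptom of bytecode compiled against Java 8 running on an older JRE: ConcurrentHashMap.keySet() only gained the covariant KeySetView return type in Java 8, and that exact signature is what the error complains about. A minimal standalone illustration (a hypothetical class, not part of Openfire, just showing the call that breaks on a pre-8 runtime):

import java.util.concurrent.ConcurrentHashMap;

public class KeySetViewCheck {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        // Compiled with JDK 8, this call is recorded in the bytecode as
        // keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
        // which does not exist on a Java 7 runtime -> NoSuchMethodError.
        System.out.println("Runtime: " + System.getProperty("java.version"));
        System.out.println("keySet type: " + map.keySet().getClass().getName());
    }
}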
Related
Currently I am running Redis 6 with ACL and mTLS with a C# client just fine. I am trying to update our Java side to also use ACL and mTLS, but I have been running into issues. I am primarily focused on mTLS at the moment and have not been getting anywhere with it. This could well be user error, since I had not used Java for 5-6 years before attempting this, so please advise. I am not sure how to progress from this error, and Google searches have not turned up much. Any help greatly appreciated; again, I have not done Java in a long time, so that most likely is part of the issue.
Trace:
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:295)
at io.lettuce.core.RedisClient.connect(RedisClient.java:214)
at io.lettuce.core.RedisClient.connect(RedisClient.java:199)
at blah blah blah my code....
... 48 more
Caused by: javax.net.ssl.SSLException: SSLEngine closed already
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:834)
at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:797)
at io.netty.handler.ssl.SslHandler.handleUnwrapThrowable(SslHandler.java:1254)
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1230)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1271)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:505)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
... 2 more
Redis Server Debug logs:
1:M 30 Jul 2020 15:23:10.837 - Accepted 10.0.2.2:62023
1:M 30 Jul 2020 15:23:11.024 # Error accepting a client connection: (null)
Java code:
final RedisClient client = RedisClient.create(RedisURI.Builder.redis(hostConfig, portConfig)
        .withSsl(true).withVerifyPeer(false).build().toURI().toString());
if (redisTruststorePath != null && !redisTruststorePath.isEmpty()) {
    SslOptions sslOptions;
    if (redisKeystorePath != null && !redisKeystorePath.isEmpty()) {
        sslOptions = SslOptions.builder()
                .jdkSslProvider()
                .keystore(new File(redisKeystorePath), redisKeystorePass)
                .truststore(new File(redisTruststorePath), redisTruststorePass)
                .build();
    } else {
        sslOptions = SslOptions.builder()
                .jdkSslProvider()
                .truststore(new File(redisTruststorePath), redisTruststorePass)
                .build();
    }
    client.setOptions(ClientOptions.builder().sslOptions(sslOptions).build());
}
client.connect();
Versions:
Lettuce version(s): 6.0.0.M1 (Running on windows locally)
Redis version: 6.0.5 (Running on linux VM locally)
Notes:
The C# client is working fine, so I doubt it's a Redis server issue.
Redis URI (printed in my real code before set): rediss://localhost:6379
Please check your client-side logs.
16797:M 03 Aug 2020 09:11:11.246 # Error accepting a client connection: (null)
The message above happens when Redis wasn't able to continue with the connection phase. It typically occurs in SSL setups when the SSL handshake wasn't completed successfully, e.g. because of a failed certificate validation.
Looking at the code above, the client gets created with:
RedisClient.create(RedisURI.Builder.redis(hostConfig, portConfig)
        .withSsl(true).withVerifyPeer(false).build().toURI().toString());
The RedisURI object gets converted into a string which causes a loss of the verifyPeer flag.
Please change your code to:
RedisClient.create(RedisURI.Builder.redis(hostConfig, portConfig)
        .withSsl(true).withVerifyPeer(false).build());
by removing .toURI().toString().
As @mp911de mentioned, I removed .toURI().toString(), updated to lettuce-core 6.0.0.RC, and started using RESP2 (as suggested here). This resolved my problem. I think the main fix was switching to RESP2, which again was a suggestion from @mp911de. Thank you for the assistance, @mp911de!
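Putting the two suggestions together, the client setup ends up looking roughly like this (a sketch assuming Lettuce 6.x; hostConfig, portConfig and the sslOptions built above are taken from the question's code):

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.protocol.ProtocolVersion;

// Pass the RedisURI object itself so withVerifyPeer(false) is not lost,
// and pin the protocol to RESP2 as suggested above.
RedisURI uri = RedisURI.Builder.redis(hostConfig, portConfig)
        .withSsl(true)
        .withVerifyPeer(false)
        .build();
RedisClient client = RedisClient.create(uri);
client.setOptions(ClientOptions.builder()
        .protocolVersion(ProtocolVersion.RESP2)
        .sslOptions(sslOptions) // built from the truststore/keystore exactly as in the question
        .build());
client.connect();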
I'm trying to establish an Oracle DB connection from an application that runs on an AWS EC2 instance. The Oracle DB is on an on-prem server. The firewall has been opened and I'm able to telnet to the SCAN and VIP hosts of that DB from my EC2 instance. But I'm still getting the exception below:
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:392) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:434) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:687) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:343) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1102) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:320) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
... 239 common frames omitted
Caused by: java.net.UnknownHostException: <<hostname>>: unknown error
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_40-internal]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:907) ~[na:1.8.0_40-internal]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1302) ~[na:1.8.0_40-internal]
at java.net.InetAddress.getAllByName0(InetAddress.java:1255) ~[na:1.8.0_40-internal]
at java.net.InetAddress.getAllByName(InetAddress.java:1171) ~[na:1.8.0_40-internal]
at java.net.InetAddress.getAllByName(InetAddress.java:1105) ~[na:1.8.0_40-internal]
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.net.nt.ConnOption.connect(ConnOption.java:133) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:370) ~[ojdbc6-11.2.0.3.jar:11.2.0.3.0]
JDBC URL:
jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)
(HOST = <<IP address of the host>>)(PORT = 1590)))(LOAD_BALANCE = yes)
(CONNECT_DATA =(SERVICE_NAME = <<example.service.com>>)(FAILOVER_MODE =(TYPE = SELECT)
(METHOD = BASIC))))
Are you using Java 8? If so, then it's probably related to this bug.
Java 7 or 9 would probably give you a more useful error message instead of "unknown error" (e.g. possibly "Name or service not known").
Apart from that, did you try a tnsping from the host you're trying to connect from?
Also, per the Oracle driver documentation quoted below, when using a TNSNAMES entry in the JDBC URL it should look like the following (using the OCI driver):
Note that you can also specify the database by a TNSNAMES entry. You can find the available TNSNAMES entries listed in the file tnsnames.ora on the client computer from which you are connecting. For example, if you want to connect to the database on host myhost as user scott with password tiger that has a TNSNAMES entry of MyHostString, enter:
Connection conn = DriverManager.getConnection
    ("jdbc:oracle:oci8:@MyHostString", "scott", "tiger");
I am using 2 jars:
1. tdgssconfig
2. terajdbc4
Class.forName("com.teradata.jdbc.TeraDriver");
connString = "jdbc:teradata://" + host + "/database=" + db + ",tmode=DEFAULT,charset=UTF8,LOGMECH=TD2";
//DriverManager.registerDriver (new com.teradata.jdbc.TeraDriver());
DriverManager.setLoginTimeout(120);
this.conn = DriverManager.getConnection(connString, user,password);
On my computer, which also has Teradata SQL Assistant installed, the process works and connects to Teradata,
but on the server, which doesn't have the software, the process fails to connect to Teradata. I get the following error:
TERAJDBC4 ERROR [main] com.teradata.jdbc.jdk6.JDK6_SQL_Connection
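The TERAJDBC4 line above is cut off, so it may help to print the whole exception chain to see the underlying cause. A minimal sketch using the same connection string (host, db, user and password as in the snippet above; the exception chaining is standard JDBC, not Teradata-specific):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

try {
    Class.forName("com.teradata.jdbc.TeraDriver");
    String connString = "jdbc:teradata://" + host + "/database=" + db
            + ",tmode=DEFAULT,charset=UTF8,LOGMECH=TD2";
    DriverManager.setLoginTimeout(120);
    Connection conn = DriverManager.getConnection(connString, user, password);
} catch (SQLException e) {
    // Walk the chained SQLExceptions to surface the real root cause
    for (SQLException next = e; next != null; next = next.getNextException()) {
        System.err.println(next.getSQLState() + " / " + next.getErrorCode() + ": " + next.getMessage());
    }
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}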
I'm having a problem establishing a connection to SAP in my Java program.
I'm following the examples that come with the JCo download, but I always get this error:
com.sap.conn.jco.JCoException: (102) RFC_ERROR_COMMUNICATION: Connect to SAP gateway failed
Connection parameters: TYPE=A DEST=ABAP_AS_WITHOUT_POOL ASHOST=xx.xx.x.xx SYSNR=00 PCS=1
LOCATION CPIC (TCP/IP) on local host with Unicode
ERROR partner 'xx.xx.x.xx:3300' not reached
TIME Wed Jul 08 08:18:28 2015
RELEASE 711
COMPONENT NI (network interface)
VERSION 39
RC -10
MODULE nixxi.cpp
LINE 3147
DETAIL NiPConnect2: xx.xx.x.xx:3300
SYSTEM CALL connect
ERRNO 10060
ERRNO TEXT WSAETIMEDOUT: Connection timed out
COUNTER 2
I don't know what it can be; I'm setting the correct connection properties (ashost, user, passwd, sysnr, etc.).
Has anybody else had a problem like this?
This is my connection code:
Properties connectProperties = new Properties();
connectProperties.setProperty(DestinationDataProvider.JCO_ASHOST, "xx.xx.x.xx");
connectProperties.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
connectProperties.setProperty(DestinationDataProvider.JCO_CLIENT, "020");
connectProperties.setProperty(DestinationDataProvider.JCO_USER, "xxxxxx");
connectProperties.setProperty(DestinationDataProvider.JCO_PASSWD, "xxxxxxx");
connectProperties.setProperty(DestinationDataProvider.JCO_LANG, "en");
createDataFile(ABAP_AS, "jcoDestination", connectProperties);
After that I just instantiate the object with those properties and call the connect method, which is written like this:
JCoDestination destination = JCoDestinationManager.getDestination(ABAP_AS);
System.out.println("Attributes:");
System.out.println(destination.getAttributes());
System.out.println();
I'm working in Java, using NetBeans, and sapjco3.jar is added to my libraries.
Do I have to do anything with the DLL file that comes with it?
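For reference, createDataFile in the standard JCo examples is just a small helper that writes the properties to a <destination>.jcoDestination file in the working directory, which is where JCoDestinationManager looks for destinations by default. Roughly (a sketch matching the call above, not SAP's exact example code):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

static void createDataFile(String destinationName, String suffix, Properties properties) {
    File destCfg = new File(destinationName + "." + suffix);
    try (FileOutputStream fos = new FileOutputStream(destCfg, false)) {
        // JCoDestinationManager.getDestination(destinationName) picks this file up
        properties.store(fos, "SAP JCo destination " + destinationName);
    } catch (IOException e) {
        throw new RuntimeException("Unable to create destination file " + destCfg.getName(), e);
    }
}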
I have a Hadoop cluster on an internal network (IP range 192.168.0.0/24), and I want to connect to HBase using the Java library (org.apache.hadoop.hbase.client)
from a Windows 7 development computer on a different network (an external 203.252.x.x address), but I couldn't connect to HBase.
I have a few questions.
Is my code wrong?
Is it possible using the Java library (org.apache.hadoop.hbase.client), or should I use the Thrift protocol? (I don't want to use Thrift.)
Do you have any ideas or comments?
Thank you.
This is my code for connecting to HBase:
public class TestBase {
    public static void main(String[] args) throws MasterNotRunningException, ZooKeeperConnectionException, ServiceException, IOException {
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.master", "203.252.x.x"); // master info
        configuration.set("hbase.master.port", "6000");
        configuration.set("hbase.zookeeper.quorum", "203.252.x.x");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("zookeeper.znode.parent", "/hbase-unsecure");

        HBaseAdmin.checkHBaseAvailable(configuration);

        HTable table = new HTable(configuration, "weatherData");
        Scan scan = new Scan();
        scan.setTimeRange(1L, 1435633313526L);

        ResultScanner scanner = table.getScanner(scan);
        for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
            System.out.println(Bytes.toString(rr.getRow())
                    + " => "
                    + Bytes.toString(rr.getValue(Bytes.toBytes("temp"),
                            Bytes.toBytes("max"))));
        }

        scanner.close();
        table.close();
    }
}
and this is the error output in Eclipse:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1661)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1687)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1904)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isMasterRunning(ConnectionManager.java:932)
at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2409)
at TestBase.main(TestBase.java:28)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1739)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:42561)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1698)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1607)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1633)
... 5 more
Caused by: java.net.UnknownHostException: unknown host: datanode2
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:501)
at org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:325)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1494)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724)
... 10 more
There might be a problem with the HBase master's DNS name mapping to the IP address in hbase.master. Make sure you either have a DNS server set up, or try something similar to what worked on my GNU/Linux machine: configure "/etc/hostname" (to set the name of the HBase master node) and "/etc/hosts" on the machine that tries to connect to the master node.
Hopefully you can set this up on your Windows machine somehow.
Here is a helpful link for the GNU/Linux way:
http://sujee.net/2012/03/08/getting-dns-right-for-hadoop-hbase-clusters/#.XULnEZNKhTZ
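On Windows, the equivalent of /etc/hosts is C:\Windows\System32\drivers\etc\hosts. A hypothetical entry mapping the cluster hostname from the stack trace to an address the client can actually reach would look like:

192.168.0.12   datanode2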
You are unable to reach the nodes of the cluster. Check the firewall and network settings, and make sure the ports are open for connections.
This is the error in your stack trace:
Exception in thread "main" org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.UnknownHostException: unknown host: datanode2
Also, you don't need to specify the HBase cluster properties in your code. Put hbase-site.xml on the classpath of your Java application and just instantiate the connection.
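A sketch of what that last suggestion looks like with the connection-factory API (assuming an HBase 1.x-era client on the classpath together with hbase-site.xml; the table name is taken from the question):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

public class TestBase {
    public static void main(String[] args) throws IOException {
        // hbase-site.xml on the classpath supplies the quorum, ports and znode parent,
        // so nothing needs to be hard-coded here.
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("weatherData"))) {
            // ... run the same Scan as in the question ...
        }
    }
}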