HBase access from Java API Client

I'm having some trouble accessing HBase from the Java API client and I can't figure out what I'm doing wrong.
I'm using HBase 1.1.2 in standalone mode on a VM (10.166.205.41) with RHEL6 and Java 1.7.
Here is my HBase configuration from hbase-site.xml:
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.166.205.41</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>9091</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///usr/local/hbaserootdir/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbaserootdir/zookeeper</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
My regionservers file is defined as follows:
10.166.205.41
The HBase shell client is working fine and I can access the HBase master UI at 10.166.205.41:16010.
Here is my Java API client, running in Eclipse on Windows 7.
pom.xml
<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.1.2</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>1.1.2</version>
    <type>pom</type>
  </dependency>
</dependencies>
Source code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.Logger;

public class InsertData {

    final static Logger logger = Logger.getLogger(InsertData.class);

    public static void main(String[] args) throws IOException {
        Configuration config = HBaseConfiguration.create();
        config.setInt("timeout", 120000);
        config.set("hbase.zookeeper.quorum", "10.166.205.41");
        config.set("hbase.zookeeper.property.clientPort", "9091");
        Connection connection = ConnectionFactory.createConnection(config);
        Table table = connection.getTable(TableName.valueOf("emp"));
        try {
            Get g = new Get(Bytes.toBytes("1"));
            Result result = table.get(g);
            byte[] name = result.getValue(Bytes.toBytes("personal data"), Bytes.toBytes("name"));
            logger.info("name : " + Bytes.toString(name));
        } finally {
            table.close();
            connection.close();
        }
    }
}
During execution, the connection to server 10.166.205.41:41571 fails:
2018-12-14 16:35:19 DEBUG FailedServers:56 - Added failed server with address hlzudd5hdf01.yres.ytech/10.166.205.41:41571 to list caused by org.apache.hbase.thirdparty.io.netty.channel.ConnectTimeoutException: connection timed out: hlzudd5hdf01.yres.ytech/10.166.205.41:41571
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 3,4 replyHeader:: 3,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
2018-12-14 16:35:19 DEBUG ClientCnxn:742 - Got ping response for sessionid: 0x167ad4d815d000b after 38ms
2018-12-14 16:35:19 DEBUG AbstractRpcClient:349 - Not trying to connect to hlzudd5hdf01.yres.ytech/10.166.205.41:41571 this server is in the failed servers list
2018-12-14 16:35:19 DEBUG ClientCnxn:843 - Reading reply sessionid:0x167ad4d815d000b, packet:: clientPath:/hbase/meta-region-server serverPath:/hbase/meta-region-server finished:false header:: 4,4 replyHeader:: 4,4697,0 request:: '/hbase/meta-region-server,F response:: #ffffffff0001a726567696f6e7365727665723a3431353731ffffffa0ffffffe9ffffff80fffffffd5611ffffff8c6a50425546a24a17686c7a7564643568646630312e797265732e797465636810ffffffe3ffffffc4218ffffffceffffff93ffffffb6ffffffeafffffffa2c100183,s{4519,4519,1544800813291,1544800813291,0,0,0,0,77,0,4519}
On the HBase master UI this is the address shown for the region server, and clicking on that link doesn't load the page either.
I packaged my program as a jar file and running it on the VM itself works fine, which makes me think it could be a port access issue.
Running netstat -tanp | grep LISTEN on my RHEL6 VM shows the region server port is listening:
tcp 0 0 10.166.205.41:41571 0.0.0.0:* LISTEN 26322/java
I don't seem to have any firewall running, so I don't know why the connection fails. Maybe it is something else.
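One more check I can run from the Windows side is a bare socket connection to the region server address, to separate a plain networking problem from an HBase one. A minimal sketch, with the host and port copied from the FailedServers line in the log above:
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // times out, like the client log above, if the port is unreachable
            s.connect(new InetSocketAddress("10.166.205.41", 41571), 5000);
            System.out.println("connected");
        }
    }
}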
I'm out of ideas for fixing this issue, so if you could help me that would be much appreciated ^^
Thanks a lot.


How to set encrypt false in Camel Debezium SQL server connector for JDBC connection

I am facing an issue while trying to use the Camel Debezium SQL Server connector. I am trying to capture data changes in a SQL Server db table using the connector and sink them to a message broker. I know the JDBC SQL Server connection has an option to set encrypt to false to prevent this issue, but I can't find a similar option in the Camel Debezium SQL Server connector.
To use Camel Debezium SQL server connector, I was following this documentation:
https://camel.apache.org/components/3.18.x/debezium-sqlserver-component.html#_samples
When I run the app, it shows me the following error:
ERROR io.debezium.embedded.EmbeddedEngine - Error while trying to run connector class 'io.debezium.connector.sqlserver.SqlServerConnector'
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target".
My POM is as follows:
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-parent</artifactId>
      <version>3.18.1-SNAPSHOT</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-main</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-debezium-sqlserver</artifactId>
  </dependency>
  <dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>11.2.0.jre11</version>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jackson</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-kafka</artifactId>
  </dependency>
</dependencies>
I am using:
spring-boot: 2.7.2
SQL Server docker image: mcr.microsoft.com/mssql/server:2022-latest
Kafka image: confluentinc/cp-zookeeper:latest
Can anyone help me to resolve this issue?
When dealing with Debezium connectors, to register a new SQL Server connector we might normally POST a JSON configuration like the following:
curl -H "Content-Type: application/json" -XPOST http://127.0.0.1:8083/connectors --data @- << EOF
{
  "name": "local-hub-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "mssql-2019",
    "database.port": 1433,
    "database.user": "Debezium",
    "database.password": "StrongPassw0rd",
    "database.dbname": "DebeziumTest",
    "database.server.name": "DebeziumTestServer",
    "table.include.list": "dbo.tb_CDCTab1",
    "database.history.kafka.bootstrap.servers": "broker:29092",
    "database.history.kafka.topic": "dbhistory.DebeziumTestServer"
  }
}
EOF
This works fine when the connector is using JDBC versions prior to 10.2, but JDBC Driver 10.2 for SQL Server introduced breaking changes, in particular:
BREAKING CHANGE - Default Encrypt to true
This is generally problematic because, by default, SQL Server is installed with a self-signed X.509 certificate, which doesn't appear in any trust store.
If you're using a new connector container that has JDBC Driver 10.2 for SQL Server (or later) installed you'll need to modify the connector configuration:
Do you not need encryption? Turn it off with encrypt=false in the connection string options.
Do you need encryption? Add trustServerCertificate=true to the connection string options.
We can do this by way of pass-through configuration properties; see Debezium SQL Server connector pass-through database driver configuration properties:
The Debezium connector provides for pass-through configuration of the database driver. Pass-through database properties begin with the prefix database.*. For example, the connector passes properties such as database.foobar=false to the JDBC URL.
To turn off encryption we would POST the following JSON configuration:
curl -H "Content-Type: application/json" -XPOST http://127.0.0.1:8083/connectors --data @- << EOF
{
  "name": "local-hub-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "mssql-2019",
    "database.port": 1433,
    "database.user": "Debezium",
    "database.password": "StrongPassw0rd",
    "database.dbname": "DebeziumTest",
    "database.server.name": "DebeziumTestServer",
    "table.include.list": "dbo.tb_CDCTab1",
    "database.history.kafka.bootstrap.servers": "broker:29092",
    "database.history.kafka.topic": "dbhistory.DebeziumTestServer",
    "database.encrypt": false
  }
}
EOF
To keep encryption and trust SQL Server's self-signed certificate we would POST the following JSON configuration instead:
curl -H "Content-Type: application/json" -XPOST http://127.0.0.1:8083/connectors --data @- << EOF
{
  "name": "local-hub-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "mssql-2019",
    "database.port": 1433,
    "database.user": "Debezium",
    "database.password": "StrongPassw0rd",
    "database.dbname": "DebeziumTest",
    "database.server.name": "DebeziumTestServer",
    "table.include.list": "dbo.tb_CDCTab1",
    "database.history.kafka.bootstrap.servers": "broker:29092",
    "database.history.kafka.topic": "dbhistory.DebeziumTestServer",
    "database.encrypt": true,
    "database.trustServerCertificate": true
  }
}
EOF
If you can't POST configuration changes then perhaps the camel.component.debezium-sqlserver.additional-properties can provide similar functionality.
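With Camel Spring Boot, that could look something like the application.properties entry below. This is a hedged sketch: the bracketed map syntax assumes Spring Boot's usual binding for map-typed options, so verify it against the component docs linked above.
# pass database.encrypt=false through to the Debezium SQL Server connector
camel.component.debezium-sqlserver.additional-properties[database.encrypt]=false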
Finally I was able to solve the issue by downgrading the mssql-jdbc driver to the following version:
<dependency>
  <groupId>com.microsoft.sqlserver</groupId>
  <artifactId>mssql-jdbc</artifactId>
  <version>9.2.1.jre11</version>
</dependency>

Hive error after the command 'show databases;'

Hi, I am a beginner to Hadoop.
I just installed Hive 2.3.7 and set up the metastore with MySQL
according to this tutorial https://www.guru99.com/hive-metastore-configuration-mysql.html
and this one https://ravi-chamarthy.medium.com/apache-hive-configuration-with-mysql-metastore-3ecb9a0df3a1.
Here is my hive-site.xml file:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
    <description>password for connecting to mysql server</description>
  </property>
</configuration>
When I executed schematool -initSchema -dbType mysql,
everything was fine; it initialized the 2.3.0 schema of Hive.
When I started Hive and executed the command show databases; (or any other),
I got these errors:
hive> show databases;
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is
generally unnecessary.
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator()Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator
at org.apache.hadoop.hive.ql.exec.FetchOperator.<init>(FetchOperator.java:108)
at org.apache.hadoop.hive.ql.exec.FetchTask.initialize(FetchTask.java:87)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:541)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Note: I am using MySQL 8.0.22, mysql-connector-java.jar (mysql-connector-java-8.0.22.jar),
Ubuntu 18.04, and Hadoop 3.1.4.
What version of Hadoop are you using? It might have to do with an incompatible version of Hadoop.
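An IllegalAccessError on a Guava method like this usually means two incompatible Guava versions meet on the classpath: Hive 2.3.x bundles an older Guava than Hadoop 3.x ships. One way to confirm the mismatch, as a sketch assuming the default HIVE_HOME/HADOOP_HOME layouts:
# list the Guava jars each side puts on the classpath; differing
# versions would explain the IllegalAccessError above
ls $HIVE_HOME/lib/ | grep guava
ls $HADOOP_HOME/share/hadoop/common/lib/ | grep guava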

Application failed to start

I am starting to learn Spring Boot and have already encountered an error. I tried searching for this error, but I wasn't able to find it. Below are the error description as well as my code for the pom.xml and the main class.
pom.xml
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.5.2.RELEASE</version>
</parent>
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
</dependencies>
<properties>
  <java.version>1.8</java.version>
</properties>
</project>
Main
package io.java.springbootstarter;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CourseApiApp {
    public static void main(String[] args) {
        SpringApplication.run(CourseApiApp.class, args);
    }
}
This was the description for the error:
The Tomcat connector configured to listen on port 8080 failed to start. The port may already be in use or the connector may be misconfigured.
Action:
Verify the connector's configuration, identify and stop any process that's listening on port 8080, or configure this application to listen on another port.
2018-03-21 22:47:48.794 INFO 9412 --- [ main] ationConfigEmbeddedWebApplicationContext : Closing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#f75083: startup date [Wed Mar 21 22:47:46 EDT 2018]; root of context hierarchy
2018-03-21 22:47:48.794 INFO 9412 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
Thank you in advance.
If you are using Linux/macOS, you can try this command:
lsof -i :8080
This will return the process id along with other information; then use the following command to kill the process:
kill -9 your_process_id
This way, you don't need to change the port.
In case the other process is a Java process as well, you could also just run jps, which shows all running Java processes, and kill it accordingly.
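On Windows, a rough equivalent of the above (standard netstat/taskkill usage; the PID is whatever the first command reports) would be:
rem find the PID listening on port 8080, then kill it
netstat -ano | findstr :8080
taskkill /PID your_process_id /F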
Port 8080 is in use, so you should use another port. You can configure it in application.properties by setting server.port.
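For example, a one-line application.properties entry (8081 here is just an arbitrary free port):
# run the embedded server on a different port
server.port=8081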
For me, just restarting my computer worked. As the error message says, some application was already using the specified port.

Hadoop Job hangs at ACCEPTED, with yarn resourcemanager log java.net.UnknownHostException

As described in the title, I deployed a Hadoop v2.6.3 cluster on an internal network with static IPs like 10.0.0.x.
Then I ran an example WordCount program, but the shell just gives this output and hangs:
hadoop jar wc.jar WordCount /user/alex/data/kaggle.sample /user/alex/wc/output
16/04/06 10:44:29 INFO client.RMProxy: Connecting to ResourceManager at master/10.0.0.7:8032
16/04/06 10:44:29 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/04/06 10:44:30 INFO input.FileInputFormat: Total input paths to process : 1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: number of splits:1
16/04/06 10:44:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1459942813464_0002
16/04/06 10:44:30 INFO impl.YarnClientImpl: Submitted application application_1459942813464_0002
16/04/06 10:44:30 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1459942813464_0002/
16/04/06 10:44:30 INFO mapreduce.Job: Running job: job_1459942813464_0002
Then I went to the Hadoop cluster web UI and found that the job status is ACCEPTED, not running. I checked the log file of the YARN ResourceManager, and its last ERROR message is this:
2016-04-06 10:34:42,466 ERROR org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt: Error trying to assign container token and NM token to an allocated container container_1459942813464_0001_02_000001
java.lang.IllegalArgumentException: java.net.UnknownHostException: worker14.alex
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:256)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:896)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:930)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:755)
at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:106)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:842)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:823)
at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: worker14.alex
... 19 more
The Hadoop configuration files are the following:
#core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/</value>
  </property>
</configuration>
#yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/alex/hadoop-2.6.3/tmp/nm.local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/alex/hadoop-2.6.3/log/nm.log</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
#mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>10.0.0.7:10020</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/staging</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/mr-history/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/home/alex/hadoop-2.6.3/tmp/mr-history/done</value>
  </property>
</configuration>
My /etc/hosts file maps IPs to either master or worker1 - worker14.
The slaves file contains master and worker1 - worker14.
It seems that my hostname resolution goes wrong: the host resolves as worker14.alex rather than worker14 (alex is my Linux username).
So what's wrong with my configuration? Do I need to restart all the servers, or just restart some of the services, like service networking restart?
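For what it's worth, a hypothetical /etc/hosts fragment showing how this kind of thing can happen (the IP and names here are illustrative, not taken from my cluster): the resolver returns the first name on the matching line as the canonical hostname.
# problematic: worker14.alex comes first, so it becomes the canonical name
10.0.0.14   worker14.alex worker14
# expected: the short name comes first (or is the only name)
10.0.0.14   worker14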
Were you able to get to a resolution? I'm seeing the exact same issue; I see a Caused by: java.net.UnknownHostException: var exception. – Nishant Kelkar
Check this value in your yarn-site.xml:
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/var/log/hadoop-yarn/apps</value>
If you put "hdfs://" before the path, the error occurs.
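That would also match the Caused by: java.net.UnknownHostException: var in the comment above: once a scheme is present, the first path segment is parsed as the URI's host. Illustratively:
<!-- problematic: "var" is parsed as the host of the URI -->
<value>hdfs://var/log/hadoop-yarn/apps</value>
<!-- works: a plain path is resolved against fs.defaultFS -->
<value>/var/log/hadoop-yarn/apps</value>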

Can't connect to mySql docker container with JDBC

I use the Docker Maven Plugin.
When the integration test starts, I can connect to MySQL in the container from a terminal with this command:
mysql -h 127.0.0.1 -P 32795 -uroot -p
and everything works fine, but when I try to connect to MySQL in my Java app with JDBC using this code:
Class.forName("com.mysql.jdbc.Driver").newInstance();
Connection connection = DriverManager.getConnection(
        "jdbc:mysql://127.0.0.1:" + System.getProperty("mysqlPort") + "/dashboardmanager",
        "root",
        "root");
I get this error:
org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) ~[spring-jdbc-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:615) ~[spring-jdbc-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:866) ~[spring-jdbc-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:927) ~[spring-jdbc-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:937) ~[spring-jdbc-4.2.4.RELEASE.jar:4.2.4.RELEASE]
I tried:
export _JAVA_OPTIONS="-Djava.net.preferIPv4Stack=true"
and
System.setProperty("java.net.preferIPv4Stack", "true");
but nothing changed.
Docker Maven Plugin Conf:
<plugin>
  <groupId>org.jolokia</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>${docker-maven-plugin.version}</version>
  <configuration>
    <images>
      <image>
        <name>mysql:5.7.11</name>
        <run>
          <env>
            <MYSQL_ROOT_PASSWORD>root</MYSQL_ROOT_PASSWORD>
            <MYSQL_DATABASE>dashboardmanager</MYSQL_DATABASE>
          </env>
          <ports>
            <port>mysqlPort:3306</port>
          </ports>
        </run>
      </image>
    </images>
  </configuration>
  <executions>
    <execution>
      <id>start</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The problem was this:
MySQL's startup takes about 40 seconds, so I should wait about 40 seconds and only then try connecting to MySQL. So simple :)
Or I can use these settings in the pom.xml:
<image>
  <name>mysql:5.7.11</name>
  <alias>mysqlContainer</alias>
  <run>
    <env>
      <MYSQL_ROOT_PASSWORD>root</MYSQL_ROOT_PASSWORD>
      <MYSQL_DATABASE>dashboard</MYSQL_DATABASE>
    </env>
    <ports>
      <port>mysqlPort:3306</port>
    </ports>
    <wait>
      <log>.*port: 3306 MySQL Community Server.*</log>
      <time>120000</time>
    </wait>
  </run>
</image>
Make sure the MySQL config file (my.cnf) used in your MySQL container sets:
bind-address = 0.0.0.0
As explained in this answer (for a reverse case, connecting to MySQL running on the host from a Docker container, but the idea is the same here), in bridge mode, setting bind-address to 0.0.0.0 would help validate that MySQL is reachable.
Note: if you use bind-address = 0.0.0.0, your MySQL server will listen for connections on all network interfaces. That means your MySQL server could be reached from the Internet; make sure to set up firewall rules accordingly.
After this test, check "How to connect to mysql running in container from host machine"
By default, root only has access from localhost (127.0.0.1 and ::1); you need to specifically allow access from 192.168.99.1, or from anywhere using '%', in the user setup.
See "Securing the Initial MySQL Accounts".
