Hadoop configuration error at runtime with OpenJDK 11 - java

I am migrating our application to OpenJDK 11, and with this setup my application throws the error below. Please help with this.
Note: with JDK 1.8 the same code and configuration work fine.
Java version: OpenJDK 11
Spring for Apache Hadoop: 2.4.0.RELEASE
application.properties:
spring.hadoop.fsshell.enabled=false
#hadoop security properties
hadoop.config.key=hadoop.security.authentication
hadoop.config.value=Kerberos
#Hive connection properties
hive.datasource.keytab=/config/security/sit.001.keytab
hive.datasource.drivername=org.apache.hive.jdbc.HiveDriver
hive.datasource.username=ssit.001
#hive.datasource.password=password
hive.truststore.file=/config/security/hivetrust.jks
hive.krb5.conf=/config/security/krb5.conf
hive.datasource.url=url
hive.krb5.conf.debug.prop=sun.security.krb5.debug
hive.krb5.conf.isdebug=true
Java changes:
#Value("${hive.datasource.drivername}")
private String driverName;
#Value("${hive.datasource.url}")
private String jdbcUrl;
#Value("${hive.datasource.username}")
private String userId;
#Value("${hive.datasource.keytab}")
private String keytab;
#Value("${hive.krb5.conf}")
private String kerberosConf;
#Value("${hadoop.config.key}")
public String hadoopConfigKey;
#Value("${hadoop.config.value}")
public String hadoopConfigValue;
#Bean(name = "hiveDS")
public DataSource configureHiveDataSource() throws IOException, ClassNotFoundException, SQLException {
Connection con = null;
// System.setProperty("hadoop.home.dir", hadoopHome);
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
System.setProperty("java.security.krb5.conf", kerberosConf);
org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
conf.set(hadoopConfigKey, hadoopConfigValue);
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(userId, keytab);
Class.forName(driverName);
con = DriverManager.getConnection(jdbcUrl);
LOGGER.info("Hive Db Connected");
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName(driverName);
dataSource.setUrl(jdbcUrl);
return dataSource;
}
#Bean(name = "hiveJdbc")
public JdbcTemplate getHiveJdbcTemplate(#Qualifier("hiveDS") DataSource hiveDS) {
return new JdbcTemplate(hiveDS);
}
#Bean(name = "hiveNamedJdbc")
public NamedParameterJdbcTemplate getHiveNamedJdbcTemplate(#Qualifier("hiveDS") DataSource hiveNamedDS) {
return new NamedParameterJdbcTemplate(hiveNamedDS);
}
}
2021-04-28T21:18:18.829+0530 [main] ERROR o.s.d.h.c.c.a.AbstractConfiguredAnnotationBuilder - Failed to perform build. Returning null
java.lang.IllegalArgumentException: Bean name must not be null
at org.springframework.util.Assert.notNull(Assert.java:201)
Error creating bean with name 'hadoopConfiguration' defined in class path resource [org/springframework/data/hadoop/config/annotation/configuration/SpringHadoopConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.hadoop.conf.Configuration]: Factory method 'configuration' threw exception; nested exception is java.lang.NullPointerException

Related

How to test JdbcPagingItemReaderBuilder with JUnit

I'm creating a Spring Batch application, and I'm having trouble creating a JUnit test class for my reader that uses JdbcPagingItemReaderBuilder.
Reader Code:
@Configuration
public class RelatorioReader {
    @Bean("relatorioreader")
    @StepScope
    public ItemReader<Relatorio> relatorioItemReader(
            @Qualifier("dataSource") DataSource dataSource,
            @Value("#{jobParameters[dateParam]}") String dateParam) {
        return new JdbcPagingItemReaderBuilder<Relatorio>()
                .name("relatorioDiario")
                .dataSource(dataSource)
                .selectClause("SELECT * ")
                .fromClause("FROM myTable ")
                .whereClause(" WHERE date = :dateParam")
                .parameterValues(Collections.singletonMap("dateParam", dateParam))
                .sortKeys(Collections.singletonMap("ID", Order.ASCENDING))
                .rowMapper(new RelatorioMapper())
                .build();
    }
}
JUnit code:
@ExtendWith(MockitoExtension.class)
public class RelatorioReaderTest {
    @InjectMocks
    RelatorioReader reader;
    @Mock
    DataSource dataSource;
    @Test
    public void test_itemReader() {
        ItemReader<Relatorio> itemReader = reader.relatorioItemReader(dataSource, "2023-02-16");
        assertNotNull(itemReader);
    }
}
Exception when running JUnit:
java.lang.IllegalArgumentException: Unable to determine PagingQueryProvider type
at org.springframework.batch.item.database.builder.JdbcPagingItemReaderBuilder.determineQueryProvider(JdbcPagingItemReaderBuilder.java:383)
at org.springframework.batch.item.database.builder.JdbcPagingItemReaderBuilder.build(JdbcPagingItemReaderBuilder.java:335)
at com.erico.relatorio.item.reader.RelatorioReader.relatorioItemReader(RelatorioReader.java:34)
at com.erico.relatorio.item.reader.RelatorioReaderTest.test_itemReader(RelatorioReaderTest.java:27)
...
Caused by: org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection: DataSource returned null from getConnection(): dataSource
at ...
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection: DataSource returned null from getConnection(): dataSource
at ...
When you do not specify a paging query provider, the builder will try to determine a suitable one from the meta-data of your data source. Since you are using a mocked DataSource, you need to mock the call to getConnection(). Otherwise, you have to use a stub database for tests (like an embedded H2 or HSQL).
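For the mocking route, a minimal sketch of the stubbing (assuming Mockito; build() only needs the database product name from the connection meta-data to pick a provider, so stubbing getConnection(), getMetaData(), and getDatabaseProductName() is enough; the test method name is illustrative):
@Test
public void test_itemReader_stubbedMetaData() throws Exception {
    // stub just enough of the DataSource for determineQueryProvider()
    // to detect a database type from the connection meta-data
    Connection connection = Mockito.mock(Connection.class);
    DatabaseMetaData metaData = Mockito.mock(DatabaseMetaData.class);
    Mockito.when(dataSource.getConnection()).thenReturn(connection);
    Mockito.when(connection.getMetaData()).thenReturn(metaData);
    Mockito.when(metaData.getDatabaseProductName()).thenReturn("H2");

    ItemReader<Relatorio> itemReader = reader.relatorioItemReader(dataSource, "2023-02-16");
    assertNotNull(itemReader);
}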
If you know what datasource you will be using, the best way is to specify its paging query provider implementation in your builder. Here is an example if you use H2:
@Configuration
public class RelatorioReader {
    @Bean("relatorioreader")
    @StepScope
    public ItemReader<Relatorio> relatorioItemReader(
            @Qualifier("dataSource") DataSource dataSource,
            @Value("#{jobParameters[dateParam]}") String dateParam) {
        return new JdbcPagingItemReaderBuilder<Relatorio>()
                .name("relatorioDiario")
                .dataSource(dataSource)
                .selectClause("SELECT * ")
                .fromClause("FROM myTable ")
                .whereClause(" WHERE date = :dateParam")
                .parameterValues(Collections.singletonMap("dateParam", dateParam))
                .sortKeys(Collections.singletonMap("ID", Order.ASCENDING))
                .rowMapper(new RelatorioMapper())
                .queryProvider(new H2PagingQueryProvider())
                .build();
    }
}
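Note that once a queryProvider is supplied explicitly, build() no longer needs to open a connection to inspect the meta-data, which is also why the mocked DataSource is then sufficient in the unit test.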

Can I run Liquibase database migrations after the app was initialized with Spring Boot?

Context
I am trying to start my Spring app without a database (so the app won't be stopped when no database is available at initialization). I managed to do this with the following settings in application.properties:
# DB should not kill the app
# the app should continue if a SQL init error arises
spring.sql.init.continue-on-error=true
# the Liquibase bean shouldn't be initialized at startup; without this the app crashes anyway
spring.liquibase.enabled=false
spring.jpa.hibernate.ddl-auto=none
Now the only thing I need to do is figure out a way to have the Liquibase migration files executed once the app does make a successful connection to the database. For this task I understood I need to customize the Liquibase bean; the following code shows my progress so far:
@Configuration
public class Config {
    @Value("${postgres.host}")
    private String host;
    @Value("${postgres.port}")
    private Integer port;
    @Value("${postgres.database}")
    private String database;
    @Value("${postgres.user}")
    private String user;
    @Value("${postgres.password}")
    private String password;
    @Value("${spring.liquibase.change-log}")
    private String changelog;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.postgresql.Driver");
        dataSource.setUrl(String.format("jdbc:postgresql://%s:%d/%s", host, port, database));
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        return dataSource;
    }

    @Bean
    public SpringLiquibase liquibase() {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource());
        liquibase.setChangeLog(changelog);
        return liquibase;
    }
}
Preferably, if the database is down, the bean should not be created; and once the database is running / the server establishes a connection to it at some point, the bean would be brought into the context and execute the migration files. I don't know if that is possible, as I am a newbie, but let me know if you have any suggestions.
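One way this could be shaped, as a rough sketch rather than a definitive implementation: retry the migration on a schedule until the database answers. This assumes @EnableScheduling is active and that invoking SpringLiquibase.afterPropertiesSet() manually to trigger the update is acceptable; the class name DeferredLiquibaseRunner is illustrative:
import java.sql.Connection;
import javax.sql.DataSource;
import liquibase.integration.spring.SpringLiquibase;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DeferredLiquibaseRunner {
    private final DataSource dataSource;
    private final String changelog;
    private volatile boolean migrated = false;

    public DeferredLiquibaseRunner(DataSource dataSource,
                                   @Value("${spring.liquibase.change-log}") String changelog) {
        this.dataSource = dataSource;
        this.changelog = changelog;
    }

    // probe the database every 30 seconds until one migration run succeeds
    @Scheduled(fixedDelay = 30000)
    public void tryMigrate() {
        if (migrated) {
            return;
        }
        try (Connection ignored = dataSource.getConnection()) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            liquibase.setChangeLog(changelog);
            liquibase.setResourceLoader(new DefaultResourceLoader());
            liquibase.afterPropertiesSet(); // runs the Liquibase update
            migrated = true;
        } catch (Exception e) {
            // database still unreachable (or migration failed); retry on the next tick
        }
    }
}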

Spring Batch NoClassDefFoundError: oracle/xdb/XMLType

I have a Spring Batch project which connects to an Oracle SQL database and allows exporting/importing some data with xls files.
In my job, I first do a delete in the table before importing the data.
Sometimes the job fails because there are problems in the xls to import.
For example: if I have duplicate lines, I will get a SQLException for the duplicates when the job inserts the lines into the database.
In that case I want to simply not commit anything (especially the delete part):
If the job is successful -> commit
If the job fails -> rollback
So I found that I have to set "autoCommit" to false.
My datasource is loaded at the beginning of my job, so I do:
dataSource.getConnection().setAutoCommit(false);
This instruction works, but when I launch the job, I get this error:
ERROR o.s.batch.core.step.AbstractStep -
Encountered an error executing step step_excel_sheet_1551274910254 in job importExcelJob
org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'scopedTarget.xlsListener'
defined in class path resource [com/adeo/config/ImportExcelConfig.class]:
Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException:
Failed to instantiate [org.springframework.batch.core.StepExecutionListener]:
Factory method 'xlsListener' threw exception; nested exception is
java.lang.NoClassDefFoundError: oracle/xdb/XMLType
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:599)
~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
The job config is:
@Configuration
public class ImportExcelConfig {
    private static final Logger LOG = LoggerFactory.getLogger("ImportExcelConfig");

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Resource(name = "dataSource")
    private DataSource dataSource;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Bean(name = "importExcelJob")
    public Job importExcel(@Qualifier("xlsPartitionerStep") Step xlsPartitionerStep) throws Exception {
        return jobBuilderFactory.get("importExcelJob").start(xlsPartitionerStep).build();
    }

    @Bean(name = "xlsPartitionerStep")
    public Step xlsPartitionerStep(@Qualifier("xlsParserSlaveStep") Step xlsParserSlaveStep, XlsPartitioner xlsPartitioner) {
        return stepBuilderFactory.get("xls_partitioner_step_builder")
                .partitioner(xlsParserSlaveStep)
                .partitioner("xls_partitioner_step_builder", xlsPartitioner)
                .gridSize(3)
                .build();
    }

    @Bean(name = "xlsParserSlaveStep")
    @StepScope
    public Step xlsParserSlaveStep(@Qualifier("step") Step step, XlsSheetPartitioner xlsPartitioner) throws Exception {
        return stepBuilderFactory.get("sheet_partitioner_" + System.currentTimeMillis())
                .partitioner(step)
                .partitioner("sheet_partitioner_" + System.currentTimeMillis(), xlsPartitioner)
                .gridSize(3)
                .build();
    }

    @Bean(name = "step")
    @StepScope
    public Step step(@Qualifier("xlsReader") PoiItemReader xlsReader,
                     @Qualifier("jdbcWriter") ItemWriter jdbcWriter,
                     @Qualifier("xlsListener") StepExecutionListener xlsListener) throws Exception {
        return ((SimpleStepBuilder) stepBuilderFactory
                .get("step_excel_sheet_" + System.currentTimeMillis())
                .<Object, Map>chunk(1000)
                .reader(xlsReader)
                .writer(jdbcWriter)
                .listener(xlsListener)
        ).build();
    }

    @Bean(name = "xlsListener")
    @StepScope
    @DependsOn
    public StepExecutionListener xlsListener() {
        XlsStepExecutionListener listener = new XlsStepExecutionListener();
        listener.setDataSource(dataSource);
        listener.afterPropertiesSet();
        return listener;
    }

    @Bean(name = "jdbcWriter")
    @StepScope
    @DependsOn
    public ItemWriter<Map> jdbcWriter(@Value("#{stepExecutionContext[sheetConfig]}") SheetConfig sheetConfig) throws IOException, ClassNotFoundException {
        JdbcBatchItemWriter<Map> writer = new JdbcBatchItemWriter<>();
        writer.setItemPreparedStatementSetter(preparedStatementSetter());
        String sql = sheetConfig.getSqlInsert().replaceAll("#TABLE#", sheetConfig.getTable());
        LOG.info(sql);
        writer.setSql(sql);
        writer.setDataSource(dataSource);
        writer.afterPropertiesSet();
        return writer;
    }

    @Bean
    @StepScope
    public ItemPreparedStatementSetter preparedStatementSetter() {
        return new ItemPreparedStatementSetter();
    }

    @Bean
    public ItemProcessor testProcessor() {
        return new TestProcessor();
    }

    @Bean(name = "xlsReader")
    @StepScope
    @DependsOn
    public PoiItemReader xlsReader(@Value("#{stepExecutionContext[sheetConfig]}") SheetConfig sheetConfig,
                                   @Value("#{stepExecutionContext[xls]}") File xlsFile) throws IOException {
        PoiItemReader reader = new PoiItemReader();
        reader.setResource(new InputStreamResource(new PushbackInputStream(new FileInputStream(xlsFile))));
        reader.setRowMapper(mapRowMapper());
        reader.setSheet(sheetConfig.getSheetIndex());
        reader.setLinesToSkip(sheetConfig.getLinesToSkip());
        return reader;
    }

    @Bean
    @StepScope
    @DependsOn
    public RowMapper mapRowMapper() throws IOException {
        return new MapRowMapper();
    }
}
The listener is:
public class XlsStepExecutionListener implements StepExecutionListener, InitializingBean {
    private final static Logger LOGGER = LoggerFactory.getLogger(XlsStepExecutionListener.class);

    @Value("#{stepExecutionContext[sheetConfig]}")
    private SheetConfig config;
    @Value("#{jobParameters['isFull']}")
    private boolean isFull;
    @Value("#{stepExecutionContext[supp]}")
    private String supp;

    private DataSource dataSource;

    @Override
    public void afterPropertiesSet() {
        Assert.notNull(dataSource, "dataSource must be provided");
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
        LOGGER.info("Start - Import sheet {}", config.sheetName);
        try {
            dataSource.getConnection().setAutoCommit(false);
        } catch (SQLException e) {
            throw new IllegalStateException("Could not disable auto-commit", e);
        }
        JdbcTemplate jt = new JdbcTemplate(dataSource);
        if (config.sqlDelete != null) {
            // DELETE DATA
            LOGGER.info("beforeStep - PURGE DATA " + config.getSqlDelete().replaceAll("#TABLE#", config.getTable()));
            jt.update(config.getSqlDelete().replaceAll("#TABLE#", config.getTable()), supp);
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        LOGGER.info("End - Import sheet {}", config.sheetName);
        // TODO:
        // if status failed -> rollback, if status success -> commit
        return ExitStatus.COMPLETED;
    }

    public DataSource getDataSource() {
        return dataSource;
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
In the pom.xml, I have the Oracle jar:
<dependency>
    <groupId>com.oracle</groupId>
    <artifactId>ojdbc6</artifactId>
    <version>11.2.0.3</version>
</dependency>
I see that the XMLType class is in another Oracle jar, but I don't understand why I need to add that jar when I am simply changing the auto-commit mode.
Also, I see that the same exception happens for ALL the methods I can call from getConnection().XXXX, so it's not specific to auto-commit.
Thank you
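On the commit/rollback TODO in the listener, one hedged observation: each dataSource.getConnection() call can hand out a different (pooled) connection, so toggling auto-commit on one connection does not affect the connections that the JdbcTemplate or the writer borrow later. A minimal sketch of keeping a single connection for the delete and the final commit/rollback (an illustration under that assumption, not the original code; the chunk writer still uses its own connections, so a true job-level rollback would need a transaction manager):
public class XlsStepExecutionListener implements StepExecutionListener {
    private DataSource dataSource;
    // keep one connection so the DELETE and the final commit/rollback share it
    private Connection connection;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        try {
            connection = dataSource.getConnection();
            connection.setAutoCommit(false);
            // issue the DELETE through this same connection, e.g. with a plain
            // Statement, rather than a JdbcTemplate that borrows another one
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        try {
            if (stepExecution.getStatus() == BatchStatus.COMPLETED) {
                connection.commit();
            } else {
                connection.rollback();
            }
        } catch (SQLException e) {
            throw new IllegalStateException(e);
        } finally {
            try { connection.close(); } catch (SQLException ignored) { }
        }
        return stepExecution.getExitStatus();
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}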

Configure OracleDataSource programmatically in Spring Boot with a default schema

How to configure Oracle DataSource programmatically in Spring Boot with a default schema?
@Bean
public DataSource getDataSource() throws SQLException {
    OracleDataSource d = new OracleDataSource();
    d.setURL(Secrets.get("DB_URL"));
    d.setUser(Secrets.get("DB_USER"));
    d.setPassword(Secrets.get("DB_PASS"));
    // d.setSchema(System.getenv("DB_SCHEMA")); ???
    return d;
}
You can't change the schema in the OracleDataSource or via the connection URL; you need to execute the
ALTER SESSION SET CURRENT_SCHEMA=targetschema;
statement, as explained in this answer. According to Connection Properties Recognized by Oracle JDBC Drivers, there is no driver property for the initial schema.
Full example:
@Bean
public DataSource getDataSource() throws SQLException {
    OracleDataSource oracleDs = new OracleDataSource();
    oracleDs.setURL(Secrets.get("DB_URL"));
    oracleDs.setUser(Secrets.get("DB_USER"));
    oracleDs.setPassword(Secrets.get("DB_PASS"));
    // other Oracle related settings...
    HikariDataSource hikariDs = new HikariDataSource();
    hikariDs.setDataSource(oracleDs);
    hikariDs.setConnectionInitSql("ALTER SESSION SET CURRENT_SCHEMA = MY_SCHEMA");
    return hikariDs;
}
Try adding the SQL execution to the datasource creation method:
@Bean
public DataSource getDataSource() throws SQLException {
    OracleDataSource d = new OracleDataSource();
    d.setURL(Secrets.get("DB_URL"));
    d.setUser(Secrets.get("DB_USER"));
    d.setPassword(Secrets.get("DB_PASS"));
    Resource initSchema = new ClassPathResource("scripts/schema-alter.sql");
    DatabasePopulator databasePopulator = new ResourceDatabasePopulator(initSchema);
    DatabasePopulatorUtils.execute(databasePopulator, d);
    return d;
}
scripts/schema-alter.sql will contain this code:
ALTER SESSION SET CURRENT_SCHEMA=targetschema;
In Spring Boot 2 the desired schema can be set in the application.properties file with the following property:
spring.datasource.hikari.connection-init-sql=ALTER SESSION SET CURRENT_SCHEMA = MY_SCHEMA
HikariCP is the default connection pool in Spring Boot 2. To see all HikariCP settings (including "connectionInitSql") in your log file, also add the following to application.properties:
logging.level.com.zaxxer.hikari=DEBUG
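This works for every connection: HikariCP executes the connection-init SQL each time it creates a new connection, before adding it to the pool, so every pooled session gets its CURRENT_SCHEMA set (unlike a one-off ALTER SESSION run against a single connection, as in the DatabasePopulator approach above).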

Set unique bean name javax.management.InstanceAlreadyExistsException

When I deploy 2 packages with Spring AMQP, I get a JMX error with the below code:
@Bean
public CachingConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(HOST);
    connectionFactory.setBeanName("Test_123");
    return connectionFactory;
}
I get the error: Caused by: javax.management.InstanceAlreadyExistsException: org.springframework.amqp.rabbit.connection:name=connectionFactory,type=CachingConnectionFactory
Full error stack:
https://pastebin.com/CdU3epMz
How can I set a unique name for the connectionFactory?
EDIT:
I also tried placing this configuration in application.properties under src/main/java/resources:
spring.jmx.enabled=false
spring.datasource.jmx-enabled=false
# JMX domain name
spring.jmx.default-domain=ssds
# MBeanServer bean name
spring.jmx.server=apiServer
# Metrics JMX domain name
management.metrics.export.jmx.domain=metccriddcs
# Whether exporting of metrics to JMX is enabled
management.metrics.export.jmx.enabled=false
management.endpoints.jmx.exposure.exclude=*
But I get the same error.
The solution:
// class name is illustrative; the original post elided the declaration
public class UniqueObjectNamingStrategy implements ObjectNamingStrategy {
    @Override
    public ObjectName getObjectName(Object managedBean, String beanKey) throws MalformedObjectNameException {
        Class<?> managedClass = AopUtils.getTargetClass(managedBean);
        String domain = ClassUtils.getPackageName(managedClass);
        Hashtable<String, String> properties = new Hashtable<>();
        properties.put("type", ClassUtils.getShortName(managedClass));
        properties.put("name", "asdsdsd");
        // ensure the application name is included as a property in the object name
        properties.put("app", "api");
        return ObjectNameManager.getInstance(domain, properties);
    }
}
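For the strategy to take effect, it has to be registered with the JMX exporter; a minimal sketch, assuming a manually configured MBeanExporter (the class and bean names are illustrative):
@Configuration
public class JmxConfig {
    @Bean
    public MBeanExporter mbeanExporter() {
        MBeanExporter exporter = new MBeanExporter();
        // name all exported MBeans through the custom strategy above
        exporter.setNamingStrategy(new UniqueObjectNamingStrategy());
        return exporter;
    }
}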
