Not able to parse Protobuf in Java

I have two protobuf files whose contents I have to compare in order to proceed further with the code. For this, I am trying to parse a protobuf file, but somehow I am not able to get the various message types and other information within the .proto file. I have to do all this in Java.
Code snippets:
package com.example.demo;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import com.google.protobuf.DescriptorProtos;
import com.google.protobuf.DescriptorProtos.FileDescriptorProto;
import com.google.protobuf.Descriptors;
import com.google.protobuf.Descriptors.FileDescriptor;
import com.google.protobuf.InvalidProtocolBufferException;
public class TestProto {
public static FileDescriptorProto parseProto(InputStream protoStream)
throws InvalidProtocolBufferException, Descriptors.DescriptorValidationException {
DescriptorProtos.FileDescriptorProto descriptorProto = null;
try {
descriptorProto = FileDescriptorProto.parseFrom(protoStream);
} catch (IOException e) {
e.printStackTrace();
}
return descriptorProto;
}
public static InputStream readProto(File filePath) {
InputStream is = null;
Reader reader = null;
try {
is = new FileInputStream(filePath);
reader = new InputStreamReader(is);
int data = reader.read();
while (data != -1) {
System.out.print((char) data);
data = reader.read();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return is;
}
public static void main(String args[]) {
InputStream protoStream = readProto(new File("D:/PROTOBUF CONVERTER/default.proto"));
Descriptors.FileDescriptor fileDescriptor = null;
DescriptorProtos.FileDescriptorProto fileDescriptorProto = null;
try {
fileDescriptorProto = parseProto(protoStream);
fileDescriptor = FileDescriptor.buildFrom(fileDescriptorProto, new FileDescriptor[] {}, true);
System.out.println("\n*******************");
System.out.println(fileDescriptor.getFullName());
System.out.println(fileDescriptor.getName());
System.out.println(fileDescriptor.getPackage());
System.out.println(fileDescriptor.getClass());
System.out.println(fileDescriptor.getDependencies());
System.out.println(fileDescriptor.toProto());
System.out.println(fileDescriptor.getServices());
System.out.println(fileDescriptor.getMessageTypes());
System.out.println(fileDescriptor.getOptions());
} catch(Exception e) {
e.printStackTrace();
}
}
}
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.3.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.springboot</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>demo</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.xolstice.maven.plugins</groupId>
<artifactId>protobuf-maven-plugin</artifactId>
<version>0.6.1</version>
</dependency>
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>3.5.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-io/commons-io -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
</dependencies>
<build>
<finalName>ProtobufParseDemo</finalName>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<inherited>true</inherited>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
default.proto
syntax = "proto3";
package tutorial;
option java_package = "com.example.tutorial";
option java_outer_classname = "AddressBookProtos";
message Person {
required string name = 1;
required int32 id = 2;
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
message PhoneNumber {
required string number = 1;
optional PhoneType type = 2 [default = HOME];
}
repeated PhoneNumber phones = 4;
}
message AddressBook {
repeated Person people = 1;
}
I can see the proto file data on the console thanks to the line System.out.print((char) data);. However, I am not able to see any output from the sysouts of the FileDescriptor.
I am new to Protocol Buffers.
Questions:
Is what I am trying to do even the right approach, or am I making some mistake?
Is there any other method to do this in Java?
I have seen some answers, like the one here Protocol Buffers: How to parse a .proto file in Java.
It says that the input to the parseFrom method should be of binary type, i.e. a compiled schema. Is there a way in which we can obtain the compiled version of the .proto file in Java code (not on the command line)?
Ok, to be more clear on this, I have to compare two .proto files.
The first would be the one which is already uploaded with the ML model
and
the second would be the one which is to be uploaded for the same ML model.
If there are differences in the input or output message types of the two .proto files, then I have to increment the version number of the model accordingly.
I have found solutions where the proto is converted to a proto descriptor, then converted to a byte array and passed to the parseFrom method. Can't this process of converting .proto to proto.desc be done via Java code?
A point to keep in mind here is that I do not have the proto files in my classpath, and giving the addresses of input and output directories in pom.xml is not possible here, as I have to download the old proto and compare it with the new proto to be uploaded, as mentioned above.
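For illustration, here is a minimal sketch of that conversion driven entirely from Java: it invokes protoc programmatically and parses the resulting descriptor set. Note that this still relies on the protoc binary being installed and on the PATH, just driven from code rather than by hand; the ProtoCompiler class name and the descriptor output path are illustrative, not part of any library:
import java.io.FileInputStream;
import java.io.InputStream;
import com.google.protobuf.DescriptorProtos.FileDescriptorProto;
import com.google.protobuf.DescriptorProtos.FileDescriptorSet;
import com.google.protobuf.Descriptors.FileDescriptor;

public class ProtoCompiler {

    // Shells out to protoc to compile a .proto source file into a binary
    // FileDescriptorSet, then parses that with the protobuf-java API.
    public static FileDescriptor compileAndParse(String protoDir, String protoFile) throws Exception {
        String descPath = protoDir + "/out.desc"; // illustrative output location
        Process protoc = new ProcessBuilder("protoc",
                "--proto_path=" + protoDir,
                "--descriptor_set_out=" + descPath,
                protoFile)
                .inheritIO()
                .start();
        if (protoc.waitFor() != 0) {
            throw new IllegalStateException("protoc failed");
        }
        try (InputStream in = new FileInputStream(descPath)) {
            // A descriptor set wraps one FileDescriptorProto per compiled file.
            FileDescriptorSet set = FileDescriptorSet.parseFrom(in);
            FileDescriptorProto fdp = set.getFile(0); // assumes a single file with no imports
            return FileDescriptor.buildFrom(fdp, new FileDescriptor[] {});
        }
    }
}
With two FileDescriptor objects built this way, getMessageTypes() of the old and new proto can be compared to decide whether to bump the model version.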

Related

Problem with unit tests: JUnit and the Mockito framework

I am learning to write unit tests and test doubles with JUnit and the Mockito framework, but I am not getting the expected result in a specific test with mocks. I do an assertThat that should pass; instead, it returns an error that says Mockito cannot mock this class. It is a class called 'Console' that must print to and collect values from the user's keyboard, but of course, in unit tests this should be mocked to avoid the 'intervening test' antipattern, where the test asks the developer for data; that is, I need to mock a user input. This 'Console' class is like a small facade over the typical Java BufferedReader class.
I pass you the classes involved:
Console:
import java.io.BufferedReader;
import java.io.InputStreamReader;
public class Console {
private BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in));
public String readString(String title) {
String input = null;
boolean ok = false;
do {
this.write(title);
try {
input = this.bufferedReader.readLine();
ok = true;
} catch (Exception ex) {
this.writeError("characte string");
}
} while (!ok);
return input;
}
public int readInt(String title) {
int input = 0;
boolean ok = false;
do {
try {
input = Integer.parseInt(this.readString(title));
ok = true;
} catch (Exception ex) {
this.writeError("integer");
}
} while (!ok);
return input;
}
public char readChar(String title) {
char charValue = ' ';
boolean ok = false;
do {
String input = this.readString(title);
if (input.length() != 1) {
this.writeError("character");
} else {
charValue = input.charAt(0);
ok = true;
}
} while (!ok);
return charValue;
}
public void writeln() {
System.out.println();
}
public void write(String string) {
System.out.print(string);
}
public void writeln(String string) {
System.out.println(string);
}
public void write(char character) {
System.out.print(character);
}
public void writeln(int integer) {
System.out.println(integer);
}
private void writeError(String format) {
System.out.println("FORMAT ERROR! " + "Enter a " + format + " formatted value.");
}
}
ConsoleTest:
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import java.io.BufferedReader;
import java.io.IOException;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.mockito.Mockito.*;
import static org.mockito.MockitoAnnotations.initMocks;
public class ConsoleTest {
@Mock
private BufferedReader bufferedReader;
@InjectMocks
private Console console;
@BeforeEach
public void before(){
initMocks(this);
//this.console = new Console();
}
@Test
public void givenConsoleWhenReadStringThenValue() throws IOException {
String string = "yes";
when(this.bufferedReader.readLine()).thenReturn(string);
assertThat(this.console.readString("title"), is(string));
}
}
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<modules>
</modules>
<artifactId>solution.java.swing.socket.sql</artifactId>
<groupId>usantatecla.tictactoe</groupId>
<version>0.0.1-SNAPSHOT</version>
<packaging>pom</packaging>
<name>${project.groupId}.${project.artifactId}</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.7</maven.compiler.source>
<maven.compiler.target>1.7</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-inline</artifactId>
<version>3.6.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-junit-jupiter</artifactId>
<version>3.6.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest</artifactId>
<version>2.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.1</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.8.3</version>
<executions>
<execution>
<id>default-prepare-agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>default-report</id>
<phase>post-integration-test</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.2</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.2.0</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<version>2.22.2</version>
<dependencies>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-surefire-provider</artifactId>
<version>1.2.0</version>
</dependency>
</dependencies>
<executions>
<execution>
<goals>
<goal>integration-test</goal>
<goal>verify</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Thanks and greetings to the community!
I personally am not a huge fan of Mockito; I prefer to have full control of my classes using an interface and two implementations (one for production and one for test).
So this doesn't respond directly to your question about Mockito, but it allows you to perfectly control the behaviour of your code without the need to use another framework.
You may define a very simple interface:
public interface Reader {
String readLine();
}
Then, you use that interface in your class Console:
public class Console {
private final Reader reader;
public Console(Reader reader) {
this.reader = reader;
}
//Replace all your this.bufferedReader.readLine() by this.reader.readLine();
}
So, in your production code you can use the real implementation with the Buffered reader:
public class ProductionReader implements Reader {
private final BufferedReader bufferedReader = new BufferedReader(...);
@Override
public String readLine() {
try {
return this.bufferedReader.readLine(); // the original was missing this return
} catch (IOException e) {
throw new UncheckedIOException(e); // readLine() throws a checked IOException that the Reader interface does not declare
}
}
}
Console console = new Console(new ProductionReader());
... While in your tests you can use a test implementation:
public class TestReader implements Reader {
@Override
public String readLine() {
return "Yes";
}
}
Console console = new Console(new TestReader());
Note that while in your specific case you may mock the behaviour using Mockito, there are a lot of other cases where you will need a more complex approach, and the above will allow you to have full control and full debuggability of your code without adding any dependency.
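For completeness, with the refactored Console the original test no longer needs Mockito at all; a minimal sketch using the TestReader above:
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

import org.junit.jupiter.api.Test;

public class ConsoleTest {

    @Test
    public void givenConsoleWhenReadStringThenValue() {
        // TestReader (above) always returns "Yes", so no mocking framework is involved.
        Console console = new Console(new TestReader());
        assertThat(console.readString("title"), is("Yes"));
    }
}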

JUnit test fails on maven clean install - loading a file in the test resources folder

I have a project that has a resource file in test folder:
src/test/resources/myfolder/testfile.txt
I have:
@Test
public void test() {
String args[] = { "myfolder/testfile.txt" };
MyClass.load(args);
}
And this is MyClass.java method:
public void load(String filePath) {
ClassLoader classloader = Thread.currentThread().getContextClassLoader();
InputStream inputStream = classloader.getResourceAsStream(filePath);
InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
reader = new BufferedReader(streamReader);
//...
}
If I launch the tests from Eclipse, all the tests go well.
If I launch maven clean install, the test fails with a java.lang.NullPointerException at this line:
InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
What do I have to do?
Thanks
Your testfile.txt resource is in the right place. This should work unless you have custom Maven resource filtering rules, e.g. to exclude .txt files. Check what's in target/test-classes after the failed build.
You could try using the absolute resource path /myfolder/testfile.txt instead and stop using the context ClassLoader:
String path = "/myfolder/testfile.txt";
InputStream inputStream = MyClass.class.getResourceAsStream(path);
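For what it's worth, a null check makes the failure mode clearer, so a missing resource fails with an explicit message instead of the NullPointerException you are seeing (a sketch continuing the snippet above):
if (inputStream == null) {
    // getResourceAsStream returns null when the resource is not on the classpath
    throw new IllegalStateException(path + " not found - check target/test-classes");
}
BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8));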
You can try adding the line below within the build tag in pom.xml.
<directory>src/test/resources</directory>
I have created the same code and it's working for me. Please find the code below; it might help you.
my pom file
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.radhey</groupId>
<artifactId>junitTest</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<java-version>1.8</java-version>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.0</version>
<configuration>
<source>${java-version}</source>
<target>${java-version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Main class
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
public class TestJunit {
public void load(String filePath) {
ClassLoader classloader = Thread.currentThread().getContextClassLoader();
InputStream inputStream = classloader.getResourceAsStream(filePath);
InputStreamReader streamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
BufferedReader reader = new BufferedReader(streamReader);
String strCurrentLine;
try {
while ((strCurrentLine = reader.readLine()) != null) {
System.out.println(strCurrentLine);
}
}catch (Exception e)
{
e.printStackTrace();
}
}
}
and Test class
public class Test {
@org.junit.Test
public void test() {
String args[] = { "test/testfile.txt" };
TestJunit test2 = new TestJunit();
test2.load(args[0]);
}
}
I have also added this code to git
https://github.com/itthought/junitTest

Unable to connect to mongo database using Java, OSGI, Karaf

I've installed the mongo driver in my running Karaf server:
bundle:install -s wrap:mvn:org.mongodb/mongo-java-driver/3.6.3
I'm simply trying to connect to the DB and log the databases I have. Currently I am running an out-of-the-box local instance. Below is the code I wrote to demo this in OSGi/Karaf. I'm using the Maven bundle plugin.
I created a database under the alias osgiDatabase.
I'm running my debugger and the failure happens during the instantiation of MongoClient(), but I don't understand what I could be doing wrong.
This works when I don't use Karaf. The only error I get is Activator start error in bundle.
POM
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.qa</groupId>
<artifactId>board</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>bundle</packaging>
<dependencies>
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongo-java-driver</artifactId>
<version>3.6.3</version>
</dependency>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
<version>6.0.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<extensions>true</extensions>
<configuration>
<instructions>
<Import-Package>com.mongodb, org.osgi.framework</Import-Package>
<Bundle-Activator>Connection.Activator</Bundle-Activator>
<Export-Package>*</Export-Package>
</instructions>
</configuration>
</plugin>
</plugins>
</build>
</project>
DBUtil
package Connection;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import java.util.List;
public class DBUtil {
MongoClient client;
MongoDatabase database;
public DBUtil() {
}
public DBUtil(String databaseName) {
if (client == null) {
client = new MongoClient();
database = client.getDatabase(databaseName);
}
}
/**
* Allows you to reveal all databases under the current connection
*/
public void showDatabases() {
if (client == null) {
throw new NullPointerException();
}
List<String> databases = client.getDatabaseNames();
for (String db : databases) {
System.out.println("The name of the database is: " + db);
}
}
}
Activator
package Connection;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {
public void start(BundleContext bundleContext) throws Exception {
DBUtil util = new DBUtil("osgiDatabase");
// util.showDatabases();
System.out.println("Working");
}
public void stop(BundleContext bundleContext) throws Exception {
System.out.println("Bundle disabled");
}
}
Your Import-Package configuration looks wrong. If you configure it explicitly like this, you switch off the auto-detection of needed packages, so it is very likely you are missing some packages your code needs.
Instead, try to configure only the activator and leave the rest on defaults.
To get better logs, you should use a try/catch in your Activator and log the exception using slf4j. That way you get more information about what is wrong.
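A minimal sketch of that suggestion, assuming the slf4j API is available in your Karaf runtime:
package Connection;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Activator implements BundleActivator {

    private static final Logger LOG = LoggerFactory.getLogger(Activator.class);

    public void start(BundleContext bundleContext) throws Exception {
        try {
            DBUtil util = new DBUtil("osgiDatabase");
            util.showDatabases();
        } catch (Exception e) {
            // Logs the root cause instead of the bare "Activator start error in bundle"
            LOG.error("Failed to connect to MongoDB", e);
            throw e;
        }
    }

    public void stop(BundleContext bundleContext) {
        LOG.info("Bundle disabled");
    }
}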

Functional tests (Jellytools) don't start on netbeans platform

I'm trying to add some functional tests to an existing NetBeans application.
Info about the application: packaged by Maven, uses NetBeans Platform 7.3.1.
I've added the dependencies as described in this article but got an exception:
Running qa.FuncTest
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.067 sec <<< FAILURE! - in qa.FuncTest
org.netbeans.junit.NbModuleSuite$S@67ad77a7(org.netbeans.junit.NbModuleSuite$S) Time elapsed: 0.066 sec <<< ERROR!
java.lang.ClassNotFoundException: org.netbeans.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at org.netbeans.junit.NbModuleSuite$S.runInRuntimeContainer(NbModuleSuite.java:819)
at org.netbeans.junit.NbModuleSuite$S.access$100(NbModuleSuite.java:667)
Does anybody know why it happened? And how to fix it?
Thanks in advance.
UPD dependency section from application/pom.xml
<dependencies>
<dependency>
<groupId>org.netbeans.cluster</groupId>
<artifactId>platform</artifactId>
<version>${software.netbeans.version}</version>
<type>pom</type>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-jdesktop-beansbinding</artifactId>
<version>${software.netbeans.version}</version>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-netbeans-modules-nbjunit</artifactId>
<version>${software.netbeans.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.netbeans.api</groupId>
<artifactId>org-netbeans-modules-jellytools-platform</artifactId>
<version>${software.netbeans.version}</version>
<scope>test</scope>
</dependency>
</dependencies>
UPD1 test class:
package qa;
import junit.framework.Test;
import org.netbeans.jellytools.JellyTestCase;
import org.netbeans.jellytools.OptionsOperator;
import org.netbeans.junit.NbModuleSuite;
import org.openide.windows.TopComponent;
public class FuncTest extends JellyTestCase {
public static Test suite() {
return NbModuleSuite.allModules(FuncTest.class);
}
public FuncTest(String n) {
super(n);
}
public void testWhatever() throws Exception {
TopComponent tc = new TopComponent();
tc.setName("label");
tc.open();
OptionsOperator.invoke().selectMiscellaneous();
Thread.sleep(5000);
System.err.println("OK.");
}
}
I would like to share the results of my investigation. I noticed that when the application was started as usual, I saw this in the output window:
Installation =.../application/target/application/extra
.../application/target/application/java
.../application/target/application/kws
.../application/target/application/platform
but when the application was started via NbJUnit/Jellytools, I saw only:
Installation =.../application/target/application/platform
so I decided to expand these values and investigated the source code. Let's consider a few interesting methods in NbModuleSuite.java:
private static String[] tokenizePath(String path) {
List<String> l = new ArrayList<String>();
StringTokenizer tok = new StringTokenizer(path, ":;", true); // NOI18N
.....
}
static File findPlatform() {
String clusterPath = System.getProperty("cluster.path.final"); // NOI18N
if (clusterPath != null) {
for (String piece : tokenizePath(clusterPath)) {
File d = new File(piece);
if (d.getName().matches("platform\\d*")) {
return d;
}
}
}
String allClusters = System.getProperty("all.clusters"); // #194794
if (allClusters != null) {
File d = new File(allClusters, "platform"); // do not bother with old numbered variants
if (d.isDirectory()) {
return d;
}
}
....
}
static void findClusters(Collection<File> clusters, List<String> regExps) throws IOException {
File plat = findPlatform().getCanonicalFile();
String selectiveClusters = System.getProperty("cluster.path.final"); // NOI18N
Set<File> path;
if (selectiveClusters != null) {
path = new TreeSet<File>();
for (String p : tokenizePath(selectiveClusters)) {
File f = new File(p);
path.add(f.getCanonicalFile());
}
} else {
File parent;
String allClusters = System.getProperty("all.clusters"); // #194794
if (allClusters != null) {
parent = new File(allClusters);
} else {
parent = plat.getParentFile();
}
path = new TreeSet<File>(Arrays.asList(parent.listFiles()));
}
....
}
As you can see, we can set path values in cluster.path.final or all.clusters and use ; or : as delimiters. I spent some time playing with these constants and realised that the settings didn't take effect when set up in pom.xml like this:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${software.maven-surefire-plugin}</version>
<configuration>
<skipTests>false</skipTests>
<systemPropertyVariables>
<branding.token>${brandingToken}</branding.token>
<!--problem part start-->
<property>
<name>cluster.path.final</name>
<value>${project.build.directory}/${brandingToken}/platform:${project.build.directory}/${brandingToken}/java:...etc</value>
</property>
<!--problem part end-->
</systemPropertyVariables>
</configuration>
</plugin>
but this works well:
<properties>
<cluster.path.final>${project.build.directory}/${brandingToken}/platform:${project.build.directory}/${brandingToken}/java:...etc</cluster.path.final>
</properties>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>${software.maven-surefire-plugin}</version>
<configuration>
<skipTests>false</skipTests>
<systemPropertyVariables>
<branding.token>${brandingToken}</branding.token>
<cluster.path.final>${cluster.path.final}</cluster.path.final>
</systemPropertyVariables>
</configuration>
</plugin>
I don't know why this happens, but I would recommend using a Maven properties section to set the systemPropertyVariables of maven-surefire-plugin. Good luck!

Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected

My MapReduce job runs OK when assembled in Eclipse with all possible Hadoop and Hive jars included in the Eclipse project as dependencies. (These are the jars that come with a single-node, local Hadoop installation.)
Yet when trying to run the same program assembled using the Maven project below, I get:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
This exception happens when program is assembled using the following Maven project:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.bigdata.hadoop</groupId>
<artifactId>FieldCounts</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>FieldCounts</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-jobclient</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.hive.hcatalog</groupId>
<artifactId>hcatalog-core</artifactId>
<version>0.12.0</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>16.0.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3.2</version>
<configuration>
<source>${jdk.version}</source>
<target>${jdk.version}</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>attached</goal>
</goals>
<phase>package</phase>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.bigdata.hadoop.FieldCounts</mainClass>
</manifest>
</archive>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
Please advise where and how to find compatible Hadoop jars.
[update_1]
I am running Hadoop 2.2.0.2.0.6.0-101
As I have found here: https://github.com/kevinweil/elephant-bird/issues/247
Hadoop 1.0.3: JobContext is a class
Hadoop 2.0.0: JobContext is an interface
In my pom.xml I have three jars with version 2.2.0
hadoop-hdfs 2.2.0
hadoop-common 2.2.0
hadoop-mapreduce-client-jobclient 2.2.0
hcatalog-core 0.12.0
The only exception is hcatalog-core, whose version is 0.12.0; I could not find a more recent version of this jar, and I need it!
How can I find which of these 4 jars produces java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected?
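One way to check, for what it's worth, is to print which jar the conflicting class was actually loaded from at runtime, using the standard ProtectionDomain API (a sketch):
Class<?> c = org.apache.hadoop.mapreduce.JobContext.class;
// Prints the jar (or directory) from which JobContext was loaded.
System.out.println(c.getProtectionDomain().getCodeSource().getLocation());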
Please give me an idea how to solve this. (The only solution I see is to compile everything from source!)
[/update_1]
Full text of my MapReduce job:
package com.bigdata.hadoop;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.util.*;
import org.apache.hcatalog.mapreduce.*;
import org.apache.hcatalog.data.*;
import org.apache.hcatalog.data.schema.*;
import org.apache.log4j.Logger;
public class FieldCounts extends Configured implements Tool {
public static class Map extends Mapper<WritableComparable, HCatRecord, TableFieldValueKey, IntWritable> {
static Logger logger = Logger.getLogger("com.foo.Bar");
static boolean firstMapRun = true;
static List<String> fieldNameList = new LinkedList<String>();
/**
* Return a list of field names not containing `id` field name
* @param schema
* @return
*/
static List<String> getFieldNames(HCatSchema schema) {
// Filter out `id` name just once
if (firstMapRun) {
firstMapRun = false;
List<String> fieldNames = schema.getFieldNames();
for (String fieldName : fieldNames) {
if (!fieldName.equals("id")) {
fieldNameList.add(fieldName);
}
}
} // if (firstMapRun)
return fieldNameList;
}
@Override
protected void map( WritableComparable key,
HCatRecord hcatRecord,
//org.apache.hadoop.mapreduce.Mapper
//<WritableComparable, HCatRecord, Text, IntWritable>.Context context)
Context context)
throws IOException, InterruptedException {
HCatSchema schema = HCatBaseInputFormat.getTableSchema(context.getConfiguration());
//String schemaTypeStr = schema.getSchemaAsTypeString();
//logger.info("******** schemaTypeStr ********** : "+schemaTypeStr);
//List<String> fieldNames = schema.getFieldNames();
List<String> fieldNames = getFieldNames(schema);
for (String fieldName : fieldNames) {
Object value = hcatRecord.get(fieldName, schema);
String fieldValue = null;
if (null == value) {
fieldValue = "<NULL>";
} else {
fieldValue = value.toString();
}
//String fieldNameValue = fieldName+"."+fieldValue;
//context.write(new Text(fieldNameValue), new IntWritable(1));
TableFieldValueKey fieldKey = new TableFieldValueKey();
fieldKey.fieldName = fieldName;
fieldKey.fieldValue = fieldValue;
context.write(fieldKey, new IntWritable(1));
}
}
}
public static class Reduce extends Reducer<TableFieldValueKey, IntWritable,
WritableComparable, HCatRecord> {
protected void reduce( TableFieldValueKey key,
java.lang.Iterable<IntWritable> values,
Context context)
//org.apache.hadoop.mapreduce.Reducer<Text, IntWritable,
//WritableComparable, HCatRecord>.Context context)
throws IOException, InterruptedException {
Iterator<IntWritable> iter = values.iterator();
int sum = 0;
// Sum up occurrences of the given key
while (iter.hasNext()) {
IntWritable iw = iter.next();
sum = sum + iw.get();
}
HCatRecord record = new DefaultHCatRecord(3);
record.set(0, key.fieldName);
record.set(1, key.fieldValue);
record.set(2, sum);
context.write(null, record);
}
}
public int run(String[] args) throws Exception {
Configuration conf = getConf();
args = new GenericOptionsParser(conf, args).getRemainingArgs();
// To fix Hadoop "META-INFO" (http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file)
conf.set("fs.hdfs.impl",
org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
org.apache.hadoop.fs.LocalFileSystem.class.getName());
// Get the input and output table names as arguments
String inputTableName = args[0];
String outputTableName = args[1];
// Assume the default database
String dbName = null;
Job job = new Job(conf, "FieldCounts");
HCatInputFormat.setInput(job,
InputJobInfo.create(dbName, inputTableName, null));
job.setJarByClass(FieldCounts.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
// An HCatalog record as input
job.setInputFormatClass(HCatInputFormat.class);
// Mapper emits TableFieldValueKey as key and an integer as value
job.setMapOutputKeyClass(TableFieldValueKey.class);
job.setMapOutputValueClass(IntWritable.class);
// Ignore the key for the reducer output; emitting an HCatalog record as
// value
job.setOutputKeyClass(WritableComparable.class);
job.setOutputValueClass(DefaultHCatRecord.class);
job.setOutputFormatClass(HCatOutputFormat.class);
HCatOutputFormat.setOutput(job,
OutputJobInfo.create(dbName, outputTableName, null));
HCatSchema s = HCatOutputFormat.getTableSchema(job);
System.err.println("INFO: output schema explicitly set for writing:"
+ s);
HCatOutputFormat.setSchema(job, s);
return (job.waitForCompletion(true) ? 0 : 1);
}
public static void main(String[] args) throws Exception {
String classpath = System.getProperty("java.class.path");
//System.out.println("*** CLASSPATH: "+classpath);
int exitCode = ToolRunner.run(new FieldCounts(), args);
System.exit(exitCode);
}
}
And class for complex key:
package com.bigdata.hadoop;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
import com.google.common.collect.ComparisonChain;
public class TableFieldValueKey implements WritableComparable<TableFieldValueKey> {
public String fieldName;
public String fieldValue;
public TableFieldValueKey() {} //must have a default constructor
//
public void readFields(DataInput in) throws IOException {
fieldName = in.readUTF();
fieldValue = in.readUTF();
}
public void write(DataOutput out) throws IOException {
out.writeUTF(fieldName);
out.writeUTF(fieldValue);
}
public int compareTo(TableFieldValueKey o) {
return ComparisonChain.start().compare(fieldName, o.fieldName)
.compare(fieldValue, o.fieldValue).result();
}
}
Hadoop went through a huge code refactoring from Hadoop 1.0 to Hadoop 2.0. One side effect is that code compiled against Hadoop 1.0 is not compatible with Hadoop 2.0 and vice versa. However, the source code is mostly compatible, so one just needs to recompile the code against the target Hadoop distribution.
The exception "Found interface X, but class was expected" is very common when you're running code that was compiled for Hadoop 1.0 on Hadoop 2.0, or vice versa.
You can find the correct Hadoop version used in the cluster, then specify that version in the pom.xml file. Build your project with the same version of Hadoop used in the cluster and deploy it.
You need to recompile hcatalog-core to support Hadoop 2.0.0. Currently hcatalog-core only supports Hadoop 1.0.
Obviously, you have a version incompatibility between your Hadoop and Hive versions. You need to upgrade (or downgrade) your Hadoop version or Hive version.
This is due to the incompatibility between Hadoop 1 and Hadoop 2.
Look for entries like this
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>
in your pom.xml.
These define the hadoop version to use. Change them or remove them as per your requirements.
I ran into this problem too.
I was trying to use HCatMultipleInputs with hive-hcatalog-core-0.13.0.jar. We are using Hadoop 2.5.1.
The following code change helped me fix the issue:
//JobContext ctx = new JobContext(conf,jobContext.getJobID());
JobContext ctx = new Job(conf);
