I have created a simple SikuliX script in IntelliJ, but when I attempt to execute it, it throws the following exception:
Exception in thread "main" java.lang.ExceptionInInitializerError
Currently I'm using Java 11, with the following Maven dependency:
<dependency>
<groupId>com.sikulix</groupId>
<artifactId>sikulixapi</artifactId>
<version>1.1.0</version>
</dependency>
My Script:
public class Test {
    public static void main(String[] args) throws FindFailed {
        Screen s = new Screen();
        // Click on the settings image
        Pattern setting = new Pattern("image1.png");
        s.wait(setting, 2000);
        s.click();
    }
}
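For context, `ExceptionInInitializerError` is thrown when a class's static initializer fails; the actual problem (often a missing native library or an unsupported Java version) is attached as the cause. A minimal sketch of how to surface it (the failing class here is invented purely for illustration):

```java
public class InitFailureDemo {
    static class Broken {
        // Static initialization runs (and fails) on first use of the class
        static final int VALUE = compute();
        static int compute() { throw new RuntimeException("native libs missing"); }
    }

    public static void main(String[] args) {
        try {
            System.out.println(Broken.VALUE);
        } catch (ExceptionInInitializerError e) {
            // The real diagnosis is in the cause, not in the error itself
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```

Wrapping `new Screen()` the same way and printing `e.getCause()` should reveal what SikuliX's static setup actually failed on.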
Related
I'm getting a NoClassDefFoundError exception when I run this main class through Eclipse.
import java.io.IOException;
import org.json.simple.parser.JSONParser;

public class A {
    public static void main(String[] args) throws IOException {
        JSONParser jsonParser = new JSONParser();
        // code
    }
}
Exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/json/simple/parser/JSONParser
Caused by: java.lang.ClassNotFoundException: org.json.simple.parser.JSONParser
I have declared the following dependency in my Gradle build file:
implementation 'com.googlecode.json-simple:json-simple:1.1.1'
Have you changed/set the classpath for your current project? Maybe that's why your program is giving you this error.
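One quick way to tell whether the jar is actually on the runtime classpath (as opposed to only the compile classpath) is to look the class up reflectively; a small sketch:

```java
public class ClasspathCheck {
    public static void main(String[] args) {
        // NoClassDefFoundError at runtime usually means a class that was
        // present at compile time is missing from the runtime classpath;
        // Class.forName makes that check explicit.
        try {
            Class.forName("org.json.simple.parser.JSONParser");
            System.out.println("json-simple is on the runtime classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("json-simple is missing at runtime");
        }
    }
}
```

In Gradle, `implementation` puts the jar on both the compile and runtime classpaths, so if this prints "missing", the program is probably being launched outside Gradle (e.g. straight from Eclipse) without the dependency on its classpath.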
I'm getting the following exception when using the Cosmos DB SDK for Java:
Exception in thread "main" java.lang.NoSuchFieldError: ALLOW_TRAILING_COMMA
at com.microsoft.azure.cosmosdb.internal.Utils.<clinit>(Utils.java:75)
at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.<clinit>(RxDocumentClientImpl.java:132)
at com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient$Builder.build(AsyncDocumentClient.java:224)
at Program2.<init>(Program2.java:25)
at Program2.main(Program2.java:30)
I'm just trying to connect to Cosmos DB using AsyncDocumentClient; the exception occurs at that point.
executorService = Executors.newFixedThreadPool(100);
scheduler = Schedulers.from(executorService);
client = new AsyncDocumentClient.Builder()
.withServiceEndpoint("[cosmosurl]")
.withMasterKeyOrResourceToken("[mykey]")
.withConnectionPolicy(ConnectionPolicy.GetDefault())
.withConsistencyLevel(ConsistencyLevel.Eventual)
.build();
I have heard about a possible library conflict, but I haven't found the proper fix.
Thanks!
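A `NoSuchFieldError` during class initialization almost always means two versions of the same library ended up on the classpath and the older one won; here the stack trace suggests jackson-core, since `ALLOW_TRAILING_COMMA` looks like a `JsonParser.Feature` constant (the class and field names below are inferred from the error message, not confirmed). A small reflective probe, sketched with a JDK class so it runs standalone, can confirm which version is visible at runtime:

```java
public class FieldProbe {
    // Returns true if the named class is loadable and declares the public field
    static boolean hasField(String className, String fieldName) {
        try {
            Class.forName(className).getField(fieldName);
            return true;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // For the Cosmos case you would probe something like:
        // hasField("com.fasterxml.jackson.core.JsonParser$Feature", "ALLOW_TRAILING_COMMA")
        System.out.println(hasField("java.time.DayOfWeek", "MONDAY")); // prints "true"
    }
}
```

If the probe returns false for a field your dependency expects, run `mvn dependency:tree`, find the artifact pulling in the older version, and exclude it.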
Please refer to my working sample.
Java code:
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;
public class Test {
    public static void main(String[] args) throws Exception {
        AsyncDocumentClient client = new AsyncDocumentClient.Builder()
                .withServiceEndpoint("https://XXX.documents.azure.com:443/")
                .withMasterKeyOrResourceToken("XXXX")
                .withConnectionPolicy(ConnectionPolicy.GetDefault())
                .withConsistencyLevel(ConsistencyLevel.Eventual)
                .build();
        System.out.println(client);
    }
}
pom.xml
<dependencies>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-cosmosdb</artifactId>
<version>2.6.4</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-cosmosdb-commons</artifactId>
<version>2.6.4</version>
</dependency>
</dependencies>
I have a sample Python file which I need to call from a Java program.
For this I am using Jython.
Pom Dependency
<dependency>
<groupId>org.python</groupId>
<artifactId>jython-standalone</artifactId>
<version>2.7.0</version>
</dependency>
Java File
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.StringWriter;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.SimpleScriptContext;

public class JythonIntegrationTest {
    public static void main(String[] args) throws FileNotFoundException, ScriptException {
        StringWriter writer = new StringWriter();
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptContext context = new SimpleScriptContext();
        context.setWriter(writer);
        ScriptEngine engine = manager.getEngineByName("python");
        engine.eval(new FileReader("D:\\python\\sample.py"), context);
        System.out.println(writer.toString());
    }
}
When I run this program, I get the error below. The failing line is manager.getEngineByName("python"):
Exception in thread "main" java.lang.NullPointerException
at maven_test.maven_test.JythonIntegrationTest.main(JythonIntegrationTest.java:38)
Do I need to run some Python exe/service on the system?
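No separate Python process is needed; Jython runs entirely inside the JVM. A `NullPointerException` here usually means `getEngineByName` returned null because no engine registered itself under that name (typically the jython-standalone jar is not on the runtime classpath). A defensive sketch that makes the failure explicit and lists what is actually registered:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class EngineCheck {
    public static void main(String[] args) {
        ScriptEngineManager manager = new ScriptEngineManager();

        // getEngineByName returns null (it does not throw) when no engine matches
        ScriptEngine engine = manager.getEngineByName("python");
        if (engine == null) {
            System.out.println("No 'python' engine registered. Available engines:");
            for (ScriptEngineFactory f : manager.getEngineFactories()) {
                System.out.println("  " + f.getEngineName() + " " + f.getNames());
            }
        } else {
            System.out.println("python engine found: " + engine.getClass().getName());
        }
    }
}
```

If the listing does not show a Python engine, the jython-standalone jar is not on the classpath of the launched process, even if Maven resolves it at build time.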
I'm running a Spark job from an EMR cluster which connects to Cassandra on EC2.
The following are the dependencies I'm using for my project:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.0</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.5.0-M1</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-core</artifactId>
<version>2.1.6</version>
</dependency>
<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.5.0-M3</version>
</dependency>
The issue I'm facing here is that if I use cassandra-driver-core 3.0.0, I get the following error:
java.lang.ExceptionInInitializerError
at mobi.vserv.SparkAutomation.DriverTester.doTest(DriverTester.java:28)
at mobi.vserv.SparkAutomation.DriverTester.main(DriverTester.java:16)
Caused by: java.lang.IllegalStateException: Detected Guava issue #1635 which indicates that a version of Guava less than 16.01 is in use. This introduces codec resolution issues and potentially other incompatibility issues in the driver. Please upgrade to Guava 16.01 or later.
at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
... 2 more
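The sanity check in the stack trace fires because an older Guava (the Hadoop/Spark stack on EMR ships Guava versions well below 16) appears earlier on the runtime classpath than the one the driver needs. To see which jar a class is actually loaded from at runtime, a small sketch (on the cluster you would probe `com.google.common.base.Preconditions.class` for Guava; the JDK class is used here only so the snippet runs standalone):

```java
public class WhichJar {
    // Reports the jar (or directory) a class was loaded from;
    // JDK/bootstrap classes have no code source.
    static String locationOf(Class<?> c) {
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap/JDK" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(String.class)); // prints "bootstrap/JDK"
    }
}
```

Calling `locationOf` from inside the Spark job (driver and executor side) shows which Guava jar actually wins, which is the first step before forcing your own version onto the classpath.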
I have tried including Guava version 19.0 as well, but I'm still unable to run the job,
and when I downgrade to cassandra-driver-core 2.1.6 I get the following error:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /EMR PUBLIC IP:9042 (com.datastax.driver.core.TransportException: [/EMR PUBLIC IP:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
Please note that I have tested my code locally and it runs absolutely fine, and I have tried the different combinations of dependencies mentioned here: https://github.com/datastax/spark-cassandra-connector
Code :
public class App1 {
private static Logger logger = LoggerFactory.getLogger(App1.class);
static SparkConf conf = new SparkConf().setAppName("SparkAutomation").setMaster("yarn-cluster");
static JavaSparkContext sc = null;
static
{
sc = new JavaSparkContext(conf);
}
public static void main(String[] args) throws Exception {
JavaRDD<String> Data = sc.textFile("S3 PATH TO GZ FILE/*.gz");
JavaRDD<UserSetGet> usgRDD1=Data.map(new ConverLineToUSerProfile());
List<UserSetGet> t3 = usgRDD1.collect();
for (int i = 0; i < t3.size(); i++) {
try{
phpcallone php = new phpcallone();
php.sendRequest(t3.get(i));
}
catch(Exception e){
logger.error("This Has reached ====> " + e);
}
}
}
}
public class phpcallone{
private static Logger logger = LoggerFactory.getLogger(phpcallone.class);
static String pid;
public void sendRequest(UserSetGet usg) throws JSONException, IOException, InterruptedException {
UpdateCassandra uc= new UpdateCassandra();
try {
uc.UpdateCsrd();
}
catch (ClassNotFoundException e) {
e.printStackTrace(); }
}
}
public class UpdateCassandra{
public void UpdateCsrd() throws ClassNotFoundException {
Cluster.Builder clusterBuilder = Cluster.builder()
.addContactPoint("PUBLIC IP ").withPort(9042)
.withCredentials("username", "password");
clusterBuilder.getConfiguration().getSocketOptions().setConnectTimeoutMillis(10000);
try {
Session session = clusterBuilder.build().connect("dmp");
session.execute("USE dmp");
System.out.println("Connection established");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Assuming that you are using EMR 4.1+, you can pass the Guava jar to spark-submit via the --jars option, then supply a configuration file to EMR so that user classpaths are used first.
For example, in a file setup.json
[
{
"Classification": "spark-defaults",
"Properties": {
"spark.driver.userClassPathFirst": "true",
"spark.executor.userClassPathFirst": "true"
}
}
]
You would supply the --configurations file://setup.json option to the aws emr create-cluster CLI command.
I developed a Java (JDK 1.6) application to manage PDF files with iText (v5.5.0).
Afterwards I wrote a test for it using Groovy, but when I create a PdfReader object in my test case,
PdfReader pdfReader = new PdfReader("/my/path/project/test.pdf")
I get the following error:
java.lang.NoClassDefFoundError: org/bouncycastle/cms/RecipientId
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2484)
...
Caused by: java.lang.ClassNotFoundException: org.bouncycastle.cms.RecipientId
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
The first asserts in the Groovy test class work fine.
I created the same test class with JUnit 4, and everything works!
How can I fix the error in the Groovy test class?
I don't use any BouncyCastle classes, so why do I get that ClassNotFoundException?
EDIT:
GroovyTestCase
class PdfMergeItextTest extends GroovyTestCase {
PdfMergeItext pdfMerge
void setUp() {
super.setUp();
println "Test class [PdfMergeItext] avviato..."
pdfMerge = new PdfMergeItext()
pdfMerge.openOutputPdf("/my/path/project/output.pdf")
}
void tearDown() {
println "Test class [PdfMergeItext] END."
}
@Test
void testMergeSinglePdfFile() {
println "Test one Pdf.."
File outputPdf = new File("/my/path/project/output.pdf")
assertTrue outputPdf.exists()
PdfReader pdfReader = new PdfReader("/my/path/project/test.pdf")
pdfMerge.addPdf(pdfReader)
pdfMerge.flush()
pdfMerge.close()
assert outputPdf.size() > 0
println "File size: ${outputPdf.size()}"
println "End test one Pdf"
}
}
JUnit4 TestCase
public class PdfMergeItextUnitTest {
PdfMergeItext pdfMergeItext = null;
@Before
public void setUp() throws Exception {
System.out.println("Start..");
this.pdfMergeItext = new PdfMergeItext();
this.pdfMergeItext.openOutputPdf("/my/path/project/output.pdf");
}
@After
public void tearDown() throws Exception {
System.out.println("END!");
}
@Test
public void testMergePdfFile() throws IOException, BadPdfFormatException {
File outputPdf = new File("/my/path/project/output.pdf");
Assert.assertTrue(outputPdf.exists());
PdfReader pdfReader = new PdfReader("/my/path/project/test.pdf");
this.pdfMergeItext.addPdf(pdfReader);
this.pdfMergeItext.flush();
this.pdfMergeItext.close();
Assert.assertTrue(outputPdf.size() > 0);
}
}
Thanks
The BouncyCastle JARs are optional for iText; see its pom.xml (search for the dependency). So if you didn't include them in your own pom.xml or build.gradle, you won't get them (by the way, which build tool are you using?).
So if you are using dependency management, I understand why the class is not there. The real question is: why is it needed in the first test and not in the second?
Please make the tests fully equivalent (you are calling addPdf once vs. twice, and you are calling length vs. size).
Since BouncyCastle is used for decryption, are you testing with the same PDF file?
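If the BouncyCastle classes do turn out to be needed, they can be declared explicitly. A sketch of the Maven coordinates (the artifact names match what iText 5.x lists as optional dependencies, but the version shown is an assumption; align it with the version referenced in your iText release's pom.xml):

```xml
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.49</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.49</version>
</dependency>
```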