Jython - Integrating Java and Python - java

I have a sample Python file which I need to call from a Java program.
For this I am using Jython.
POM dependency:
<dependency>
    <groupId>org.python</groupId>
    <artifactId>jython-standalone</artifactId>
    <version>2.7.0</version>
</dependency>
Java file:
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.StringWriter;
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.SimpleScriptContext;

public class JythonIntegrationTest {
    public static void main(String[] args) throws FileNotFoundException, ScriptException {
        StringWriter writer = new StringWriter();
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptContext context = new SimpleScriptContext();
        context.setWriter(writer);
        ScriptEngine engine = manager.getEngineByName("python");
        engine.eval(new FileReader("D:\\python\\sample.py"), context);
        System.out.println(writer.toString());
    }
}
When I run this program, I get the error below. The failing line is manager.getEngineByName("python");
Exception in thread "main" java.lang.NullPointerException
at maven_test.maven_test.JythonIntegrationTest.main(JythonIntegrationTest.java:38)
Do I need to run some Python exe/service on the system?
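A NullPointerException on that line means getEngineByName("python") returned null, i.e. no engine named "python" was registered on the classpath; no separate Python executable or service is needed, since Jython runs entirely inside the JVM. As a minimal diagnostic sketch (standard javax.script API only, no extra assumptions), you can list the engines the JVM actually sees:

import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class ListEngines {
    public static void main(String[] args) {
        ScriptEngineManager manager = new ScriptEngineManager();
        // Print every script engine registered on the current classpath;
        // jython-standalone 2.7.0 should contribute one answering to "python".
        for (ScriptEngineFactory factory : manager.getEngineFactories()) {
            System.out.println(factory.getEngineName() + " -> " + factory.getNames());
        }
    }
}

If nothing Python-related is listed, the jython-standalone jar is not on the runtime classpath of the program you are executing.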

Related

AuroraRDS Serverless with data-api library in Java does not work

I want to access a database via the Data API, which AWS has been providing since the start of 2020.
This is my Maven code (only the AWS dependencies shown):
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.790</version>
</dependency>
<dependency>
    <groupId>software.amazon.rdsdata</groupId>
    <artifactId>rds-data-api-client-library-java</artifactId>
    <version>1.0.4</version>
</dependency>
This is my Java code
public class Opstarten {
    public static final String RESOURCE_ARN = "arn:aws:rds:eu-central <number - name >";
    public static final String SECRET_ARN = "arn:aws:secretsmanager:eu-central-1:<secret>";
    public static final String DATABASE = "dbmulesoft";

    public static void main(String[] args) {
        new Opstarten().testme();
    }

    public void testme() {
        var account1 = new Account(1, "John"); // plain POJO, conforming to the AWS manual's hello-world example
        var account2 = new Account(2, "Mary");
        RdsDataClient client = RdsDataClient.builder().database(DATABASE)
                .resourceArn(RESOURCE_ARN)
                .secretArn(SECRET_ARN).build();
        client.forSql("INSERT INTO accounts(accountId, name) VALUES(:accountId, :name)")
                .withParameter(account1).withParameter(account2).execute();
    }
}
Error I am having:
Exception in thread "main" java.lang.NullPointerException
at com.amazon.rdsdata.client.RdsDataClient.executeStatement(RdsDataClient.java:134)
at com.amazon.rdsdata.client.Executor.executeAsSingle(Executor.java:92)
at com.amazon.rdsdata.client.Executor.execute(Executor.java:77)
at nl.bpittens.aws.rds.worker.Opstarten.testme(Opstarten.java:47)
at nl.bpittens.aws.rds.worker.Opstarten.main(Opstarten.java:29)
When I debug it I see that the client object is not null, but the rdsDataService field of the client object is null.
I have checked the AWS documentation for the Java RDS Data API but nothing is mentioned there.
Any idea what's wrong?
Looks like you aren't passing the RDS data service; you need to do it as follows:
AWSRDSData awsrdsData = AWSRDSDataClient.builder().build();
RdsDataClient client = RdsDataClient.builder()
        .rdsDataService(awsrdsData)
        .database(DATABASE)
        .resourceArn(RESOURCE_ARN)
        .secretArn(SECRET_ARN)
        .build();
You can also configure mapping options as follows:
MappingOptions mappingOptions = MappingOptions.builder()
        .ignoreMissingSetters(true)
        .useLabelForMapping(true)
        .build();
AWSRDSData awsrdsData = AWSRDSDataClient.builder().build();
RdsDataClient client = RdsDataClient.builder()
        .rdsDataService(awsrdsData)
        .database(DATABASE)
        .resourceArn(RESOURCE_ARN)
        .secretArn(SECRET_ARN)
        .mappingOptions(mappingOptions)
        .build();
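Putting the fix into the question's testme() gives the following minimal sketch (the Account POJO and the ARN/database constants are the ones from the question; error handling omitted):

public void testme() {
    // Build the low-level RDS Data service client and hand it to the mapper;
    // this is the missing piece that caused the NullPointerException.
    AWSRDSData awsRdsData = AWSRDSDataClient.builder().build();
    RdsDataClient client = RdsDataClient.builder()
            .rdsDataService(awsRdsData)
            .database(DATABASE)
            .resourceArn(RESOURCE_ARN)
            .secretArn(SECRET_ARN)
            .build();

    var account1 = new Account(1, "John");
    var account2 = new Account(2, "Mary");
    client.forSql("INSERT INTO accounts(accountId, name) VALUES(:accountId, :name)")
            .withParameter(account1)
            .withParameter(account2)
            .execute();
}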

Why does a Java Azure Function App freeze when trying to access Azure Datalake?

I am developing a Java Azure function that needs to download a file from Azure Datalake Gen2.
When the function tries to read the file, it freezes: no exception is thrown and nothing is written to the console.
I am using the azure-storage-file-datalake SDK for Java and this is my code:
import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

public DataLakeServiceClient GetDataLakeServiceClient(String accountName, String accountKey) {
    StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
    DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
    builder.endpoint("https://" + accountName + ".dfs.core.windows.net");
    builder.credential(sharedKeyCredential);
    return builder.buildClient();
}
public void DownloadFile(DataLakeFileSystemClient fileSystemClient, String fileName) throws Exception {
    DataLakeDirectoryClient directoryClient = fileSystemClient.getDirectoryClient("DIR");
    DataLakeDirectoryClient subdirClient = directoryClient.getSubdirectoryClient("SUBDIR");
    DataLakeFileClient fileClient = subdirClient.getFileClient(fileName);
    File file = new File("downloadedFile.txt");
    OutputStream targetStream = new FileOutputStream(file);
    fileClient.read(targetStream);
    targetStream.close();
}
@FunctionName("func")
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        final ExecutionContext context
) throws Exception {
    String fileName = request.getQueryParameters().get("file");
    DataLakeServiceClient datalakeClient = GetDataLakeServiceClient("datalake", "<the shared key>");
    DataLakeFileSystemClient datalakeFsClient = datalakeClient.getFileSystemClient("fs");
    DownloadFile(datalakeFsClient, fileName);
    // Return added so the method compiles; HttpStatus is com.microsoft.azure.functions.HttpStatus.
    return request.createResponseBuilder(HttpStatus.OK).body("downloaded").build();
}
The app freezes when it hits fileClient.read(targetStream);
I've tried with really small files, I've checked the credentials and the file paths, the access rights to datalake, I've switched to SAS token - the result is the same: no error at all, but the app freezes.
I am using these Maven dependencies:
<dependency>
    <groupId>com.microsoft.azure.functions</groupId>
    <artifactId>azure-functions-java-library</artifactId>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-file-datalake</artifactId>
    <version>12.2.0</version>
</dependency>
I was facing the same problem, then I came across this:
https://github.com/Azure/azure-functions-java-library/issues/113
This worked for me on Java 8 with Azure Functions v3: set FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS to True
in the function app's Application settings, then save and restart the function app. It will work.
Please check and do update if it worked for you as well.
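If you prefer the command line over the portal, a minimal sketch with the Azure CLI (the app and resource-group names are placeholders, not taken from the question):

# set the app setting, then restart so it takes effect
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS=true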

Sikuli seems to be throwing java.lang.ExceptionInInitializerError

I have created a simple Sikuli script within IntelliJ; however, when attempting to execute the script it throws the following exception:
Exception in thread "main" java.lang.ExceptionInInitializerError
Currently I'm using Java 11 and the following Maven dependency:
<dependency>
    <groupId>com.sikulix</groupId>
    <artifactId>sikulixapi</artifactId>
    <version>1.1.0</version>
</dependency>
My Script:
public class Test {
    public static void main(String[] args) throws FindFailed {
        Screen s = new Screen();
        // Click on the settings image
        Pattern setting = new Pattern("image1.png");
        s.wait(setting, 2000);
        s.click();
    }
}

Unable to run spark job from EMR which connects to Cassandra on EC2

I'm running a Spark job from an EMR cluster which connects to Cassandra on EC2.
The following are the dependencies which I'm using for my project.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.5.0-M1</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.6</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.5.0-M3</version>
</dependency>
The issue I'm facing here is that if I use cassandra-driver-core 3.0.0, I get the following error:
java.lang.ExceptionInInitializerError
at mobi.vserv.SparkAutomation.DriverTester.doTest(DriverTester.java:28)
at mobi.vserv.SparkAutomation.DriverTester.main(DriverTester.java:16)
Caused by: java.lang.IllegalStateException: Detected Guava issue #1635 which indicates that a version of Guava less than 16.01 is in use. This introduces codec resolution issues and potentially other incompatibility issues in the driver. Please upgrade to Guava 16.01 or later.
at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
... 2 more
I have tried including Guava 19.0 as well, but I'm still unable to run the job,
and when I downgrade to cassandra-driver-core 2.1.6 I get the following error:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /EMR PUBLIC IP:9042 (com.datastax.driver.core.TransportException: [/EMR PUBLIC IP:9042] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
at com.datastax.driver.core.Cluster.init(Cluster.java:158)
at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
Please note that I have tested my code locally and it runs absolutely fine. I have followed the different combinations of dependencies mentioned here: https://github.com/datastax/spark-cassandra-connector
Code :
public class App1 {
    private static Logger logger = LoggerFactory.getLogger(App1.class);
    static SparkConf conf = new SparkConf().setAppName("SparkAutomation").setMaster("yarn-cluster");
    static JavaSparkContext sc = null;
    static {
        sc = new JavaSparkContext(conf);
    }

    public static void main(String[] args) throws Exception {
        JavaRDD<String> Data = sc.textFile("S3 PATH TO GZ FILE/*.gz");
        JavaRDD<UserSetGet> usgRDD1 = Data.map(new ConverLineToUSerProfile());
        List<UserSetGet> t3 = usgRDD1.collect();
        for (int i = 0; i < t3.size(); i++) { // was i <= t3.size(), which runs one past the end of the list
            try {
                phpcallone php = new phpcallone();
                php.sendRequest(t3.get(i));
            } catch (Exception e) {
                logger.error("This has reached ====> " + e);
            }
        }
    }
}
public class phpcallone {
    private static Logger logger = LoggerFactory.getLogger(phpcallone.class);
    static String pid;

    public void sendRequest(UserSetGet usg) throws JSONException, IOException, InterruptedException {
        UpdateCassandra uc = new UpdateCassandra();
        try {
            uc.UpdateCsrd();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
public class UpdateCassandra {
    public void UpdateCsrd() throws ClassNotFoundException {
        Cluster.Builder clusterBuilder = Cluster.builder()
                .addContactPoint("PUBLIC IP").withPort(9042)
                .withCredentials("username", "password");
        clusterBuilder.getConfiguration().getSocketOptions().setConnectTimeoutMillis(10000);
        try {
            Session session = clusterBuilder.build().connect("dmp");
            session.execute("USE dmp");
            System.out.println("Connection established");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Assuming that you are using EMR 4.1+, you can pass the Guava jar in the --jars option of spark-submit, then supply a configuration file to EMR so that user class paths are used first.
For example, in a file setup.json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.userClassPathFirst": "true",
      "spark.executor.userClassPathFirst": "true"
    }
  }
]
You would supply the --configurations file://setup.json option to the aws emr create-cluster CLI command.
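For example (a sketch; the release label and jar path are placeholders rather than values from the question):

# create the cluster with the classpath settings applied
aws emr create-cluster --release-label emr-4.1.0 --applications Name=Spark \
    --configurations file://setup.json ...

# then ship the newer Guava alongside the job
spark-submit --jars /path/to/guava-19.0.jar --class mobi.vserv.SparkAutomation.DriverTester your-job.jar

With userClassPathFirst set, the Guava you pass in wins over the older Guava that Hadoop/Spark put on the classpath, which is what the driver's sanity check is complaining about.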

How do I register a Quercus custom function when using Quercus in Java ScriptEngine?

I am using Quercus in Apache JMeter for simple scripting of tests. I have a requirement to log from PHP using log4j, and on the whole this works well. So I wrote a Quercus module like this:
public class LogFunction extends AbstractQuercusModule {
    private static Logger log = Logger.getLogger(LogFunction.class);

    public void log_str(Env env, String str) {
        log.info(str);
    }
}
Now, I am testing this with the following code:
public class QuercusTest {
    private static ScriptEngine engine;
    static {
        // set up Quercus
        ScriptEngineManager manager = new ScriptEngineManager();
        engine = manager.getEngineByName("php");
    }

    public static void main(String[] args) throws ScriptException {
        engine.eval("<?php log_str('Hello');");
    }
}
This throws an exception (as I would expect) because this custom function isn't registered.
Exception in thread "main" com.caucho.quercus.QuercusErrorException: eval::1: Fatal Error: 'log_str' is an unknown function.
at com.caucho.quercus.env.Env.error(Env.java:6420)
at com.caucho.quercus.env.Env.error(Env.java:6306)
at com.caucho.quercus.env.Env.error(Env.java:5990)
at com.caucho.quercus.expr.CallExpr.evalImpl(CallExpr.java:198)
at com.caucho.quercus.expr.CallExpr.eval(CallExpr.java:151)
at com.caucho.quercus.expr.Expr.evalTop(Expr.java:523)
at com.caucho.quercus.statement.ExprStatement.execute(ExprStatement.java:67)
at com.caucho.quercus.program.QuercusProgram.execute(QuercusProgram.java:413)
at com.caucho.quercus.script.QuercusScriptEngine.eval(QuercusScriptEngine.java:134)
at com.caucho.quercus.script.QuercusScriptEngine.eval(QuercusScriptEngine.java:179)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:247)
at com.succeed.QuercusTest.main(QuercusTest.java:18)
However, I can't see how to register this Quercus module with the Java scripting engine. Docs are a bit sparse... Any help would be appreciated.
1. Get the PHP script engine:
ScriptEngineManager manager = new ScriptEngineManager();
engine = manager.getEngineByName("php");
2. Register your module with the underlying Quercus instance:
if (engine instanceof QuercusScriptEngine) {
    ((QuercusScriptEngine) engine).getQuercus().addModule(new LogFunction());
}
This works.
(quercus-4.0.18-src + resin 4.0)
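Putting the two steps together, a minimal sketch (LogFunction is the module from the question; the instanceof guard is the registration trick from step 2):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import com.caucho.quercus.script.QuercusScriptEngine;

public class QuercusModuleTest {
    public static void main(String[] args) throws ScriptException {
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("php");
        // Register the custom module before evaluating any PHP,
        // otherwise log_str is reported as an unknown function.
        if (engine instanceof QuercusScriptEngine) {
            ((QuercusScriptEngine) engine).getQuercus().addModule(new LogFunction());
        }
        engine.eval("<?php log_str('Hello');");
    }
}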
I ended up ditching the scripting engine code and going native-Quercus:
QuercusEngine engine = new QuercusEngine();
engine.getQuercus().getModuleContext().addModule("LogFunction", new LogFunction());
engine.setOutputStream(os);
engine.getQuercus().init();
engine.execute(phpCode);
This works OK. It at least has fairly predictable behaviour.
