IllegalArgumentException: Jetty ALPN/NPN has not been properly configured - java

I'm trying to use Google Vision from the Google Cloud API, but I run into the following problem:
jun 07, 2017 8:50:00 AM io.grpc.internal.ChannelExecutor drain
WARNING: Runnable threw exception in ChannelExecutor
java.lang.IllegalArgumentException: Jetty ALPN/NPN has not been properly configured.
    at io.grpc.netty.GrpcSslContexts.selectApplicationProtocolConfig(GrpcSslContexts.java:174)
    at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:151)
    at io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:139)
    at io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:109)
    at io.grpc.netty.NettyChannelBuilder.createProtocolNegotiatorByType(NettyChannelBuilder.java:335)
    at io.grpc.netty.NettyChannelBuilder.createProtocolNegotiator(NettyChannelBuilder.java:308)
    at io.grpc.netty.NettyChannelBuilder$NettyTransportFactory$DynamicNettyTransportParams.getProtocolNegotiator(NettyChannelBuilder.java:499)
    at io.grpc.netty.NettyChannelBuilder$NettyTransportFactory.newClientTransport(NettyChannelBuilder.java:448)
    at io.grpc.internal.CallCredentialsApplyingTransportFactory.newClientTransport(CallCredentialsApplyingTransportFactory.java:61)
    at io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:209)
    at io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:186)
    at io.grpc.internal.ManagedChannelImpl$SubchannelImplImpl.obtainActiveTransport(ManagedChannelImpl.java:806)
    at io.grpc.internal.GrpcUtil.getTransportFromPickResult(GrpcUtil.java:568)
    at io.grpc.internal.DelayedClientTransport.reprocess(DelayedClientTransport.java:296)
    at io.grpc.internal.ManagedChannelImpl$LbHelperImpl$5.run(ManagedChannelImpl.java:724)
    at io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:87)
    at io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:715)
    at io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onUpdate(ManagedChannelImpl.java:752)
    at io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:174)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
I'm trying to run the API's own sample code, shown below:
public class QuickstartSample {
    public static void main(String... args) throws Exception {
        // Instantiates a client
        ImageAnnotatorClient vision = ImageAnnotatorClient.create();

        // The path to the image file to annotate
        String fileName = "./resources/wakeupcat.jpg";

        // Reads the image file into memory
        Path path = Paths.get(fileName);
        byte[] data = Files.readAllBytes(path);
        ByteString imgBytes = ByteString.copyFrom(data);

        // Builds the image annotation request
        List<AnnotateImageRequest> requests = new ArrayList<>();
        Image img = Image.newBuilder().setContent(imgBytes).build();
        Feature feat = Feature.newBuilder().setType(Type.LABEL_DETECTION).build();
        AnnotateImageRequest request = AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
        requests.add(request);

        // Performs label detection on the image file
        BatchAnnotateImagesResponse response = vision.batchAnnotateImages(requests);
        List<AnnotateImageResponse> responses = response.getResponsesList();

        for (AnnotateImageResponse res : responses) {
            if (res.hasError()) {
                System.out.printf("Error: %s\n", res.getError().getMessage());
                return;
            }
            for (EntityAnnotation annotation : res.getLabelAnnotationsList()) {
                annotation.getAllFields().forEach((k, v) -> System.out.printf("%s : %s\n", k, v.toString()));
            }
        }
    }
}
I'm using the following dependencies:
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-vision</artifactId>
    <version>v1-rev357-1.22.0</version>
</dependency>
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-vision</artifactId>
    <version>0.17.2-beta</version>
</dependency>
Has anyone ever had this problem?

I eventually discovered that, to get the API running in my setup, it had to be in a web context, with the application deployed on a Jetty server.
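A fix commonly reported for this error (and described in grpc-java's security documentation) is to provide ALPN on the classpath instead of relying on a Jetty container, for example via the statically linked BoringSSL build of netty-tcnative. A minimal sketch, assuming Maven; the artifact version must match the grpc-netty version that google-cloud-vision pulls in, so the one below is an unverified placeholder for the grpc releases of that era:

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-tcnative-boringssl-static</artifactId>
    <!-- assumption: check grpc-java's SECURITY.md table for the version matching your grpc-netty -->
    <version>1.1.33.Fork26</version>
</dependency>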

Related

JVM crashes after parsing images via Tesseract

I have a backend app that uses the Maven dependency tess4j 5.4.0.
The trained data was installed and hooked up through an environment variable.
I have an endpoint that creates an entity from an uploaded image. It works correctly when I use Postman (image added -> text recognized and added to the entity -> entity created -> I can create the next one), but if I invoke this endpoint via the frontend, the JVM crashes after 4-5 minutes. If I turn this function off, I can create entities and the backend works correctly.
It also works when I run it locally, but when I deploy it on the server everything breaks (when called via the frontend).
Example of the entity service:
public String storeFileAndGetContentText(MultipartFile file) throws IOException {
    // assumption: derive the stored name from the upload's original filename
    String fileName = file.getOriginalFilename();
    Path targetLocation = this.fileStorageLocation.resolve(fileName);
    Files.copy(file.getInputStream(), targetLocation, StandardCopyOption.REPLACE_EXISTING);
    File targetFile = targetLocation.toFile();
    String contentText = new RecognitionService().parseImage(targetFile);
    return contentText;
}
Example of the recognition service:
public class RecognitionService {
    private static ITesseract tesseract;

    static {
        try {
            tesseract = new Tesseract();
        } catch (Exception e) {
            log.warn("Failure during Tesseract initialization", e);
        }
    }

    public String parseImage(File file) {
        try {
            return tesseract.doOCR(file).replaceAll("\n", " ").trim();
        } catch (TesseractException e) {
            log.warn("Tesseract can't read file {}", file.getName(), e);
            return "";
        }
    }
}
Logs:
java.lang.IllegalStateException: failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path?
    at org.elasticsearch.server@8.4.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:285)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.node.Node.<init>(Node.java:456)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.node.Node.<init>(Node.java:311)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data
    at org.elasticsearch.server@8.4.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:230)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:198)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:277)
    ... 5 more
Caused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/node.lock
    at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
    at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
    at java.base/sun.nio.fs.UnixPath.toRealPath(UnixPath.java:825)
    at org.apache.lucene.core@9.3.0/org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:94)
    at org.apache.lucene.core@9.3.0/org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:43)
    at org.apache.lucene.core@9.3.0/org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:44)
    at org.elasticsearch.server@8.4.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:223)
    ... 7 more
    Suppressed: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/node.lock
        at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
        at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
        at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
        at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218)
        at java.base/java.nio.file.Files.newByteChannel(Files.java:380)
        at java.base/java.nio.file.Files.createFile(Files.java:658)
        at org.apache.lucene.core@9.3.0/org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:84)
Actually, I can see that something is wrong with Elasticsearch, but I don't see how that is connected to the OCR endpoint.
I installed tesseract-ocr on the target machine to make text recognition work.
I also tried installing additional libraries (imagemagick, ghostscript), because I found suggestions on the Internet that tesseract-ocr might expect something from them.
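One detail worth checking, because it is a known cause of native JVM crashes with tess4j: ITesseract instances are not thread-safe, and the RecognitionService above shares a single static Tesseract across all requests. Concurrent calls (which a frontend can easily produce, while one-at-a-time Postman testing would not) can then crash inside the native library. A minimal sketch of a per-call engine, assuming the traineddata path comes from the environment variable mentioned above (TESSDATA_PREFIX is an assumed name):

public class RecognitionService {
    // Create a fresh engine per call: tess4j's ITesseract is not thread-safe,
    // and sharing one instance across concurrent requests can crash the JVM
    // inside the native Tesseract code.
    public String parseImage(File file) {
        ITesseract tesseract = new Tesseract();
        tesseract.setDatapath(System.getenv("TESSDATA_PREFIX")); // assumption: env variable pointing at the traineddata
        try {
            return tesseract.doOCR(file).replaceAll("\n", " ").trim();
        } catch (TesseractException e) {
            return "";
        }
    }
}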

AuroraRDS Serverless with data-api library in Java does not work

I want to access a database via the Data API, which AWS has been providing since the start of 2020.
This is my Maven configuration (only the AWS dependencies shown):
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.790</version>
</dependency>
<dependency>
    <groupId>software.amazon.rdsdata</groupId>
    <artifactId>rds-data-api-client-library-java</artifactId>
    <version>1.0.4</version>
</dependency>
This is my Java code:
public class Opstarten {
    public static final String RESOURCE_ARN = "arn:aws:rds:eu-central <number - name >";
    public static final String SECRET_ARN = "arn:aws:secretsmanager:eu-central-1:<secret>";
    public static final String DATABASE = "dbmulesoft";

    public static void main(String[] args) {
        new Opstarten().testme();
    }

    public void testme() {
        var account1 = new Account(1, "John"); // plain POJO, per the AWS manual's hello-world example
        var account2 = new Account(2, "Mary");
        RdsDataClient client = RdsDataClient.builder().database(DATABASE)
                .resourceArn(RESOURCE_ARN)
                .secretArn(SECRET_ARN).build();
        client.forSql("INSERT INTO accounts(accountId, name) VALUES(:accountId, :name)")
                .withParameter(account1).withParameter(account2).execute();
    }
}
This is the error I am getting:
Exception in thread "main" java.lang.NullPointerException
    at com.amazon.rdsdata.client.RdsDataClient.executeStatement(RdsDataClient.java:134)
    at com.amazon.rdsdata.client.Executor.executeAsSingle(Executor.java:92)
    at com.amazon.rdsdata.client.Executor.execute(Executor.java:77)
    at nl.bpittens.aws.rds.worker.Opstarten.testme(Opstarten.java:47)
    at nl.bpittens.aws.rds.worker.Opstarten.main(Opstarten.java:29)
When I debug it, I see that the client object itself is not null, but its rdsDataService member is null.
I have checked the AWS documentation for the Java RDS Data API, but nothing is mentioned there.
Any idea what's wrong?
It looks like you aren't passing in the RDS data service; you need to do the following:
AWSRDSData awsrdsData = AWSRDSDataClient.builder().build();
RdsDataClient client = RdsDataClient.builder()
        .rdsDataService(awsrdsData)
        .database(DATABASE)
        .resourceArn(RESOURCE_ARN)
        .secretArn(SECRET_ARN)
        .build();
You can also configure mapping options as follows:
MappingOptions mappingOptions = MappingOptions.builder()
        .ignoreMissingSetters(true)
        .useLabelForMapping(true)
        .build();
AWSRDSData awsrdsData = AWSRDSDataClient.builder().build();
RdsDataClient client = RdsDataClient.builder()
        .rdsDataService(awsrdsData)
        .database(DATABASE)
        .resourceArn(RESOURCE_ARN)
        .secretArn(SECRET_ARN)
        .mappingOptions(mappingOptions)
        .build();
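With the data service wired in, the insert from the question should then execute; a short sketch reusing the question's own SQL and Account POJO (the builder call picks up the default AWS credentials and region chain):

AWSRDSData awsrdsData = AWSRDSDataClient.builder().build();
RdsDataClient client = RdsDataClient.builder()
        .rdsDataService(awsrdsData)
        .database(DATABASE)
        .resourceArn(RESOURCE_ARN)
        .secretArn(SECRET_ARN)
        .build();

// :accountId and :name are mapped from the Account POJO's fields.
client.forSql("INSERT INTO accounts(accountId, name) VALUES(:accountId, :name)")
        .withParameter(new Account(1, "John"))
        .execute();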

Why does a Java Azure Function App freeze when trying to access Azure Data Lake?

I am developing a Java Azure Function that needs to download a file from Azure Data Lake Storage Gen2.
When the function tries to read the file, it freezes: no exception is thrown and nothing is written to the console.
I am using the azure-storage-file-datalake Java SDK, and this is my code:
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.Optional;

import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.DataLakeDirectoryClient;
import com.azure.storage.file.datalake.DataLakeFileClient;
import com.azure.storage.file.datalake.DataLakeFileSystemClient;
import com.azure.storage.file.datalake.DataLakeServiceClient;
import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;

public DataLakeServiceClient GetDataLakeServiceClient(String accountName, String accountKey) {
    StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
    DataLakeServiceClientBuilder builder = new DataLakeServiceClientBuilder();
    builder.endpoint("https://" + accountName + ".dfs.core.windows.net");
    builder.credential(sharedKeyCredential);
    return builder.buildClient();
}

public void DownloadFile(DataLakeFileSystemClient fileSystemClient, String fileName) throws Exception {
    DataLakeDirectoryClient directoryClient = fileSystemClient.getDirectoryClient("DIR");
    DataLakeDirectoryClient subdirClient = directoryClient.getSubdirectoryClient("SUBDIR");
    DataLakeFileClient fileClient = subdirClient.getFileClient(fileName);
    File file = new File("downloadedFile.txt");
    OutputStream targetStream = new FileOutputStream(file);
    fileClient.read(targetStream);
    targetStream.close();
}

@FunctionName("func")
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        final ExecutionContext context) throws Exception {
    String fileName = request.getQueryParameters().get("file");
    DataLakeServiceClient datalakeClient = GetDataLakeServiceClient("datalake", "<the shared key>");
    DataLakeFileSystemClient datalakeFsClient = datalakeClient.getFileSystemClient("fs");
    DownloadFile(datalakeFsClient, fileName);
    return request.createResponseBuilder(HttpStatus.OK).build(); // respond so the function completes
}
The app freezes when it hits fileClient.read(targetStream);.
I've tried with really small files, I've checked the credentials, the file paths, and the access rights to the Data Lake, and I've switched to a SAS token. The result is the same: no error at all, but the app freezes.
I am using these Maven dependencies:
<dependency>
    <groupId>com.microsoft.azure.functions</groupId>
    <artifactId>azure-functions-java-library</artifactId>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-storage-file-datalake</artifactId>
    <version>12.2.0</version>
</dependency>
So, I was facing the same problem. Then I came across this:
https://github.com/Azure/azure-functions-java-library/issues/113
This worked for me on Java 8 with Azure Functions v3:
Set FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS to True in the function app's Application settings, then save and restart the function app. It will work.
Please check, and do update if it worked for you as well.
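For local runs, the same flag can be mirrored in the project's local.settings.json; a sketch (the FUNCTIONS_WORKER_RUNTIME entry is the usual Java worker setting, shown for context):

{
    "IsEncrypted": false,
    "Values": {
        "FUNCTIONS_WORKER_RUNTIME": "java",
        "FUNCTIONS_WORKER_JAVA_LOAD_APP_LIBS": "True"
    }
}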

Google App Engine ApiProxy delegate NullPointerException on saving file

I'm trying to save a file to Google App Engine following their documentation, but I constantly get a NullPointerException on the createOrReplace method line. I've already verified that the gcsService is created. Any ideas?
public String getFileUrl(MultipartFile file) throws CustomException {
    String unique = UUID.randomUUID().toString();
    String fileName = unique + ".jpeg";
    GcsFilename gcsFilename = new GcsFilename("MY_BUCKET", fileName);
    try {
        GcsOutputChannel outputChannel = GcsServiceFactory.createGcsService().createOrReplace(gcsFilename, GcsFileOptions.getDefaultInstance());
        copy(file.getInputStream(), Channels.newOutputStream(outputChannel));
    } catch (IOException e) {
        e.printStackTrace();
    }
    ImagesService imagesService = ImagesServiceFactory.getImagesService();
    ServingUrlOptions options = ServingUrlOptions.Builder
            .withGoogleStorageFileName("/gs/MY_BUCKET/" + fileName)
            .secureUrl(true);
    return imagesService.getServingUrl(options);
}
Included dependency:
<dependency>
    <groupId>com.google.appengine.tools</groupId>
    <artifactId>appengine-gcs-client</artifactId>
    <version>0.7</version>
</dependency>
And I'm getting:
RetryHelper(32.34 s, 6 attempts, com.google.appengine.tools.cloudstorage.GcsServiceImpl$1@74c3f0b0): Too many failures, giving up
With this exception in the log:
c.g.a.tools.cloudstorage.RetryHelper : RetryHelper(1.386 s, x attempts, com.google.appengine.tools.cloudstorage.GcsServiceImpl$1@6bd1dbe9): Attempt #x failed [java.io.IOException: java.lang.NullPointerException], sleeping for x ms
Thanks in advance!
Edit:
I found that the ApiProxy class in com.google.apphosting.api returns NULL from:

public static ApiProxy.Delegate getDelegate() {
    return delegate;
}

Any ideas?

Unable to run spark job from EMR which connects to Cassandra on EC2

I'm running a Spark job from an EMR cluster that connects to Cassandra on EC2.
These are the dependencies I'm using in my project:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.5.0-M1</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.6</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.5.0-M3</version>
</dependency>
The issue I'm facing is that if I use cassandra-driver-core 3.0.0, I get the following error:
java.lang.ExceptionInInitializerError
    at mobi.vserv.SparkAutomation.DriverTester.doTest(DriverTester.java:28)
    at mobi.vserv.SparkAutomation.DriverTester.main(DriverTester.java:16)
Caused by: java.lang.IllegalStateException: Detected Guava issue #1635 which indicates that a version of Guava less than 16.01 is in use. This introduces codec resolution issues and potentially other incompatibility issues in the driver. Please upgrade to Guava 16.01 or later.
    at com.datastax.driver.core.SanityChecks.checkGuava(SanityChecks.java:62)
    at com.datastax.driver.core.SanityChecks.check(SanityChecks.java:36)
    at com.datastax.driver.core.Cluster.<clinit>(Cluster.java:67)
    ... 2 more
I have also tried including Guava 19.0, but I'm still unable to run the job, and when I downgrade to cassandra-driver-core 2.1.6 I get the following error:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /EMR PUBLIC IP:9042 (com.datastax.driver.core.TransportException: [/EMR PUBLIC IP:9042] Cannot connect))
    at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
    at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
    at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
    at com.datastax.driver.core.Cluster.init(Cluster.java:158)
    at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
Please note that I have tested my code locally, where it runs absolutely fine, and I have followed the dependency combinations mentioned at https://github.com/datastax/spark-cassandra-connector.
Code:
public class App1 {
    private static Logger logger = LoggerFactory.getLogger(App1.class);
    static SparkConf conf = new SparkConf().setAppName("SparkAutomation").setMaster("yarn-cluster");
    static JavaSparkContext sc = null;

    static {
        sc = new JavaSparkContext(conf);
    }

    public static void main(String[] args) throws Exception {
        JavaRDD<String> Data = sc.textFile("S3 PATH TO GZ FILE/*.gz");
        JavaRDD<UserSetGet> usgRDD1 = Data.map(new ConverLineToUSerProfile());
        List<UserSetGet> t3 = usgRDD1.collect();
        for (int i = 0; i < t3.size(); i++) { // was i <= t3.size(), which runs past the end of the list
            try {
                phpcallone php = new phpcallone();
                php.sendRequest(t3.get(i));
            } catch (Exception e) {
                logger.error("This Has reached ====> " + e);
            }
        }
    }
}

public class phpcallone {
    private static Logger logger = LoggerFactory.getLogger(phpcallone.class);
    static String pid;

    public void sendRequest(UserSetGet usg) throws JSONException, IOException, InterruptedException {
        UpdateCassandra uc = new UpdateCassandra();
        try {
            uc.UpdateCsrd();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}
public class UpdateCassandra {
    public void UpdateCsrd() throws ClassNotFoundException {
        Cluster.Builder clusterBuilder = Cluster.builder()
                .addContactPoint("PUBLIC IP ").withPort(9042)
                .withCredentials("username", "password");
        clusterBuilder.getConfiguration().getSocketOptions().setConnectTimeoutMillis(10000);
        try {
            Session session = clusterBuilder.build().connect("dmp");
            session.execute("USE dmp");
            System.out.println("Connection established");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Assuming you are using EMR 4.1+, you can pass the Guava jar via the --jars option of spark-submit, then supply a configuration file to EMR that tells Spark to put user classpaths first.
For example, in a file setup.json:
[
    {
        "Classification": "spark-defaults",
        "Properties": {
            "spark.driver.userClassPathFirst": "true",
            "spark.executor.userClassPathFirst": "true"
        }
    }
]
You would then supply the --configurations file://setup.json option to the aws emr create-cluster CLI command.
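If the classpath-first settings are not enough, another approach commonly used for this exact Guava conflict (the driver's "Detected Guava issue #1635" check) is to shade and relocate Guava inside the application jar with the maven-shade-plugin. A sketch; the relocation prefix shaded. is an arbitrary choice:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <relocations>
                    <!-- Relocate Guava so the driver sees its own copy, not the cluster's older one -->
                    <relocation>
                        <pattern>com.google.common</pattern>
                        <shadedPattern>shaded.com.google.common</shadedPattern>
                    </relocation>
                </relocations>
            </configuration>
        </execution>
    </executions>
</plugin>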
