Unable to create grpc connection in 32bit windows machine - java

I am trying to access the list of subscribers in my GCP project using my Java code and a few libraries provided by GCP. This code works fine in my 64-bit Windows environment but does not work in a 32-bit Windows environment.
I have seen a few documents saying that netty-tcnative is not supported on 32-bit machines and that we can build our own binaries if required:
https://netty.io/wiki/forked-tomcat-native.html#how-to-build
CredentialsProvider credentialsProvider =
    FixedCredentialsProvider.create(
        ServiceAccountCredentials.fromStream(new FileInputStream(JSONPath)));
try (SubscriptionAdminClient subscriptionAdminClient =
        SubscriptionAdminClient.create(
            SubscriptionAdminSettings.newBuilder()
                .setCredentialsProvider(credentialsProvider)
                .build())) {
    ListSubscriptionsRequest listSubscriptionsRequest =
        ListSubscriptionsRequest.newBuilder()
            .setProject(ProjectName.of(ProjectId).toString())
            .build();
    SubscriptionAdminClient.ListSubscriptionsPagedResponse response =
        subscriptionAdminClient.listSubscriptions(listSubscriptionsRequest);
    logger.log(Level.SEVERE, "response List: " + response.toString());
    Iterable<Subscription> subscriptions = response.iterateAll();
    for (Subscription subscription : subscriptions) {
        if (subscription.getName().equals(SubscriptionId)) {
            return true;
        }
    }
[20:02:30:384]|[06-17-2019]|[io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts]|[INFO]|[36]: netty-tcnative unavailable (this may be normal)|
java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_windows_x86_32, netty_tcnative_x86_32, netty_tcnative]
at io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:104)
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:526)
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.(OpenSsl.java:93)
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:244)
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171)
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120)
at io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:385)
at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:435)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:254)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:165)
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:157)
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157)
at com.google.cloud.pubsub.v1.stub.GrpcSubscriberStub.create(GrpcSubscriberStub.java:260)
at com.google.cloud.pubsub.v1.stub.SubscriberStubSettings.createStub(SubscriberStubSettings.java:241)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.(SubscriptionAdminClient.java:177)
at com.google.cloud.pubsub.v1.SubscriptionAdminClient.create(SubscriptionAdminClient.java:158)

The grpc-java SECURITY.md describes your options:
Use Java 9+, which supports ALPN without the need for tcnative.
For 32-bit Windows specifically, you can use Conscrypt.
The documentation also describes how to use Conscrypt: add a dependency on conscrypt-openjdk-uber and register it as the default security provider:
import org.conscrypt.Conscrypt;
import java.security.Security;
...
// Somewhere in main()
Security.insertProviderAt(Conscrypt.newProvider(), 1);
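For completeness, the dependency can be declared like this. This is a sketch in Gradle notation; the version number is illustrative, so check Maven Central for the current release:

```groovy
dependencies {
    // conscrypt-openjdk-uber bundles native libraries for all supported
    // platforms, including 32-bit Windows (version shown is illustrative)
    implementation group: 'org.conscrypt', name: 'conscrypt-openjdk-uber', version: '2.5.2'
}
```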

You answered your own question... netty-tcnative-* does not support 32-bit platforms, so you will need to compile it yourself and include it in your classpath.

Listing blobs in Azure Blobstorage using Azure Java SDK V12 and ListBlobs() is extremely slow

I need to list all of the blobs in an Azure Blob Storage container. The container has roughly 200,000 blobs in it, and I'm looking to obtain the blob name, the last modified date, and the blob size.
Following the documentation for the Azure Java SDK V12, the following code should work:
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
    .connectionString(AzureBlobConnectionString)
    .buildClient();
String containerName = "container1";
BlobContainerClient containerClient = blobServiceClient.getBlobContainerClient(containerName);
System.out.println("\nListing blobs...");
// List the blob(s) in the container.
for (BlobItem blobItem : containerClient.listBlobs()) {
    System.out.println("\t" + blobItem.getName());
}
However, when executed, this application just seems to hang indefinitely. If I open PowerShell and run the following command:
Get-AzStorageBlob -Container container1 -Context $ctx
I get the expected result within about 3 minutes.
I've given the code example upwards of an hour to execute, yet nothing comes of it. I attempted to restrict the data being requested as per the documentation, along with setting a 5-minute timeout:
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
    .connectionString(AzureBlobConnectionString)
    .buildClient();
String containerName = "container1";
BlobContainerClient containerClient = blobServiceClient.getBlobContainerClient(containerName);
System.out.println("\nListing blobs...");
ListBlobsOptions options = new ListBlobsOptions()
    .setMaxResultsPerPage(10)
    .setDetails(new BlobListDetails()
        .setRetrieveDeletedBlobs(false)
        .setRetrieveSnapshots(true));
Duration duration = Duration.ofMinutes(5);
containerClient.listBlobs(options, duration).forEach(blob ->
    System.out.printf("Name: %s, Directory? %b, Deleted? %b, Snapshot ID: %s%n",
        blob.getName(),
        blob.isPrefix(),
        blob.isDeleted(),
        blob.getSnapshot()));
However this resulted in it timing out with the exception:
Exception in thread "main" reactor.core.Exceptions$ReactiveException: java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 300000ms in 'flatMap' (and no fallback has been configured)
at reactor.core.Exceptions.propagate(Exceptions.java:366)
at reactor.core.publisher.BlockingIterable$SubscriberIterator.hasNext(BlockingIterable.java:168)
at java.lang.Iterable.forEach(Iterable.java:74)
at AzureManagement.AzureControl.listAllBlobs(AzureControl.java:42)
at Main.main(Main.java:8)
I understand there used to be a method called "listBlobsSegmented", however this does not appear to be in V12 of the Azure SDK for Java.
If anybody has any ideas as to how to get a list of the blobs in the container in an effective and efficient manner I would very much appreciate it!
Thanks.
I faced exactly the same problem, with every operation hanging forever. There is actually nothing wrong with the way you list blobs.
It turned out to be a dependency conflict: make sure none of your dependencies conflict with the Azure SDK. It seems weird, but we discovered this when we downgraded the Azure SDK from version 12 to an older version; instead of hanging, it threw an exception along the lines of "method not found in class ...".
In my case, the conflict came from hadoop-hdfs, which forces an old version of netty, while the Azure SDK wants a newer version of netty.
When I removed the HDFS dependency:
group: 'org.apache.hadoop', name: 'hadoop-hdfs', version: '3.2.0'
I was able to list blobs without the hanging problem.
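If you need to keep hadoop-hdfs, another option is to exclude its transitive netty artifacts so that the newer netty required by the Azure SDK wins. A hedged Gradle sketch; the exact transitive artifact group and the Azure SDK version shown are assumptions, so verify them against your own `gradle dependencies` output:

```groovy
dependencies {
    implementation(group: 'org.apache.hadoop', name: 'hadoop-hdfs', version: '3.2.0') {
        // Keep hadoop-hdfs but drop the netty version it pins, so the newer
        // netty pulled in by the Azure SDK ends up on the classpath.
        exclude group: 'io.netty'
    }
    // Version is illustrative; use the azure-storage-blob release you depend on.
    implementation group: 'com.azure', name: 'azure-storage-blob', version: '12.0.0'
}
```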

Which region endpoint should AWS Java SDK v2 be using for Route 53?

On Windows 10 I'm using the AWS Java SDK v2 (software.amazon.awssdk:route53:2.8.3) and I'm trying to merely connect and list all my Route 53 hosted zones. I have us-west-1 specified in my user configuration (in my .aws/config file) as the default region.
I create a Route53Client using the following:
Route53Client route53Client = Route53Client.builder().build();
Note that I don't indicate a region, because in the online documentation it says:
When you submit requests using the AWS CLI or SDKs, either leave the Region and endpoint unspecified, or specify us-east-1 as the Region.
I then try to list hosted zones using something like this:
Set<HostedZone> hostedZones = route53Client.listHostedZonesPaginator().stream()
        .flatMap(response -> response.hostedZones().stream())
        .collect(Collectors.toSet());
In the logs I see a debug message like this:
[DEBUG] Unable to load region from software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider#...:Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or system property (aws.region).
Then it throws a java.net.UnknownHostException for route53.us-west-1.amazonaws.com.
Granted, I am on a spotty Internet connection right now. Is that the correct endpoint? If it is, then why isn't that endpoint listed at https://docs.aws.amazon.com/general/latest/gr/rande.html ? If it isn't, why is it trying to connect to a us-west-1 endpoint when I'm following the online documentation (as quoted above), which indicates that a region need not be specified? Or is the problem simply my Internet connection and spotty DNS lookup at the moment?
The AWS SDK development team decided to require that Route 53 requests explicitly indicate Region.AWS_GLOBAL, or the requests will not work, as someone noted in Issue #456 for the SDK:
To access Route53 you would currently need to specify the AWS_GLOBAL region. This was done to prevent customers from using global services and not realizing that for this service your calls are likely not staying in region and could potentially be spanning the globe.
Unfortunately Amazon didn't bother documenting this in the SDK (that I could find), and didn't provide a helpful error message, instead assuming developers would somehow guess the problem when the SDK tried to access an endpoint that did not exist even though the SDK was being used according to the API and according to the online documentation.
In short the Route53 client must be created like this:
route53Client = Route53Client.builder().region(Region.AWS_GLOBAL).build();
Here is an AWS Route 53 V2 code example that lists hosted zones:
package com.example.route;

//snippet-start:[route.java2.list_zones.import]
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.route53.Route53Client;
import software.amazon.awssdk.services.route53.model.HostedZone;
import software.amazon.awssdk.services.route53.model.Route53Exception;
import software.amazon.awssdk.services.route53.model.ListHostedZonesResponse;
import java.util.List;
//snippet-end:[route.java2.list_zones.import]

public class ListHostedZones {

    public static void main(String[] args) {
        Region region = Region.AWS_GLOBAL;
        Route53Client route53Client = Route53Client.builder()
                .region(region)
                .build();
        listZones(route53Client);
    }

    //snippet-start:[route.java2.list_zones.main]
    public static void listZones(Route53Client route53Client) {
        try {
            ListHostedZonesResponse zonesResponse = route53Client.listHostedZones();
            List<HostedZone> checklist = zonesResponse.hostedZones();
            for (HostedZone check : checklist) {
                System.out.println("The name is : " + check.name());
            }
        } catch (Route53Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
    //snippet-end:[route.java2.list_zones.main]
}

java.security.InvalidAlgorithmParameterException Coming in SFTP

Hi, I am trying a simple SFTP transfer, but I am getting an error while establishing the connection. I am using the maverick-legacy-client-all jar available at
https://www.sshtools.com/en/products/java-ssh-client . This code was working fine with release 1.6.9, but it failed when I updated to 1.6.17.
I also tried going through their jar changes doc; there were a few notes relevant to my exception (DiffieHellmanGroupExchange algorithm-related changes), but I failed to understand them clearly.
public void connect() throws SshException, IOException,
        SftpStatusException, ChannelOpenException {
    SshConnector con = SshConnector.createInstance();
    con.setKnownHosts(new SftpHostKeyVerification());
    // Tries SSH2 first and falls back to SSH1 if it is not available
    con.setSupportedVersions(SshConnector.SSH1 | SshConnector.SSH2);
    /* Error coming here, in con.connect */
    this.ssh = con.connect(new SocketTransport(this.host, DEFAULT_SSH_PORT),
            this.userName);
    PasswordAuthentication pwd = new PasswordAuthentication();
    pwd.setPassword(this.passwod);
    int isLoggedIn = this.ssh.authenticate(pwd);
    if (SshAuthentication.COMPLETE == isLoggedIn) {
        this.client = new SftpClient(this.ssh);
    } else {
        throw new IOException("[Authentication failure] login status: "
                + isLoggedIn);
    }
}
Exception Log:
com.maverick.ssh.SshException: com.maverick.ssh.SshException
at com.maverick.ssh.components.jce.client.DiffieHellmanGroupExchangeSha1.performClientExchange(DiffieHellmanGroupExchangeSha1.java:315)
at com.maverick.ssh2.TransportProtocol.performKeyExchange(TransportProtocol.java:1424)
at com.maverick.ssh2.TransportProtocol.processMessage(TransportProtocol.java:1835)
at com.maverick.ssh2.TransportProtocol.startTransportProtocol(TransportProtocol.java:348)
at com.maverick.ssh2.Ssh2Client.connect(Ssh2Client.java:146)
at com.maverick.ssh.SshConnector.connect(SshConnector.java:649)
at com.maverick.ssh.SshConnector.connect(SshConnector.java:471)
at com.tekelec.ems.util.SftpImpl.connect(SftpImpl.java:73)
at com.tekelec.ems.eagle.measurement.WriterThread.run(WriterThread.java:93)
Caused by: com.maverick.ssh.SshException: Failed to generate DH value: Prime size must be multiple of 64, and can only range from 512 to 1024 (inclusive) [java.security.InvalidAlgorithmParameterException]
at com.maverick.ssh.components.jce.client.DiffieHellmanGroupExchangeSha1.performClientExchange(DiffieHellmanGroupExchangeSha1.java:250)
... 8 more
Caused by: java.security.InvalidAlgorithmParameterException: Prime size must be multiple of 64, and can only range from 512 to 1024 (inclusive)
at com.sun.crypto.provider.DHKeyPairGenerator.initialize(DHKeyPairGenerator.java:120)
at java.security.KeyPairGenerator$Delegate.initialize(KeyPairGenerator.java:658)
at java.security.KeyPairGenerator.initialize(KeyPairGenerator.java:400)
at com.maverick.ssh.components.jce.client.DiffieHellmanGroupExchangeSha1.performClientExchange(DiffieHellmanGroupExchangeSha1.java:240)
... 8 more
This is because the default key exchange algorithm was changed to a more secure algorithm between those versions, and you have not included all of the third-party dependencies that are provided in the lib folder of the Maverick Legacy Client distribution. This folder contains the BouncyCastle JCE provider, which, if added to the classpath, will resolve this issue.
The problem you are facing is that without the BouncyCastle JCE provider, or another JCE provider that supports large Diffie-Hellman primes, you will not be able to generate a large prime for the updated, more secure key exchange method.
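To confirm that this limitation is what you are hitting, you can probe which DH key sizes the JCE providers on your classpath accept. A minimal diagnostic sketch (not Maverick-specific; the class and method names here are my own):

```java
import java.security.InvalidParameterException;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class DhPrimeCheck {

    // Returns true if an installed JCE provider accepts a DH key of the given size.
    static boolean supportsDhKeySize(int bits) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
            // Providers that cap DH at 1024 bits (like the old SunJCE behaviour
            // in the stack trace above) reject larger sizes here.
            kpg.initialize(bits);
            return true;
        } catch (InvalidParameterException | NoSuchAlgorithmException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("1024-bit DH supported: " + supportsDhKeySize(1024));
        System.out.println("2048-bit DH supported: " + supportsDhKeySize(2048));
    }
}
```

If the 2048-bit check fails, adding the BouncyCastle provider (or moving to a newer JRE) is what lifts the limit.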
I believe this is a very common condition affecting many coders. I would also like to thank Lee David here for the advice. I was able to handle this situation by adding the BouncyCastle JCE third-party jar available in the Maverick lib folder.
Before this I was trying to edit my java.security file as suggested in another post, but this was a much easier way. These BouncyCastle jars also come bundled with the official Maverick release, so no worries on that part either.

HP ALM OTAClient.dll is incompatible with 64Bit OS

I added code to connect to HP ALM and create a defect through Eclipse (Java), in which it communicates with OTAClient and com4j.jar. I was able to successfully connect and create a defect on a 32-bit OS, but I could not connect on a 64-bit OS.
I worked through some of the solutions posted here, but even after following them I could not reach a solution: com4j on Windows 64 bit.
Here is my code:
import com.ClassFactory;
import com.IBug;
import com.IBugFactory;
import com.ITDConnection;
import com4j.Variant;

public class AlmQc {

    public static void main(String args[]) {
        login();
    }

    public static void createDefect(ITDConnection connection) {
        IBugFactory bugFactory = (IBugFactory) connection.bugFactory().queryInterface(IBugFactory.class);
        IBug bug = (bugFactory.addItem(new Variant(Variant.Type.VT_NULL))).queryInterface(IBug.class);
        bug.assignedTo("Administrator");
        bug.detectedBy("Administrator");
        bug.status("New");
        bug.project("Banking");
        bug.summary("Created by Esh");
        //bug.priority("Low");
        bug.field("BG_SEVERITY", "2-Medium");
        bug.field("BG_DETECTION_DATE", "2016-01-27 00:00:00");
        bug.post();
    }

    public static void login() {
        String url = "http://almqc:8080/qcbin";
        String username = "Administrator";
        String password = "********";
        String domain = "DEFAULT";
        String project = "Banking";
        ITDConnection itdc = ClassFactory.createTDConnection();
        itdc.initConnectionEx(url);
        itdc.connectProjectEx(domain, project, username, password);
        System.out.println(itdc.projectConnected());
        createDefect(itdc);
    }
}
While running the above code in Eclipse, I encountered the following error:
Exception in thread "main" com4j.ExecutionException: com4j.ComException: 80040154 CoCreateInstance failed : Class not registered : .\com4j.cpp:153
at com4j.ComThread.execute(ComThread.java:203)
at com4j.Task.execute(Task.java:25)
at com4j.COM4J.createInstance(COM4J.java:97)
at com4j.COM4J.createInstance(COM4J.java:72)
at com.mercury.qualitycenter.otaclient.ClassFactory.createTDConnection(Unknown Source)
at Sample.main(Sample.java:18)
Caused by: com4j.ComException: 80040154 CoCreateInstance failed : Class not registered : .\com4j.cpp:153
at com4j.Native.createInstance(Native Method)
at com4j.COM4J$CreateInstanceTask.call(COM4J.java:117)
at com4j.COM4J$CreateInstanceTask.call(COM4J.java:104)
at com4j.Task.invoke(Task.java:51)
at com4j.ComThread.run0(ComThread.java:153)
at com4j.ComThread.run(ComThread.java:134)
Please provide any workaround or solution that has been successfully executed on a 64-bit OS.
You'll have to make a 32-bit version of your program that can use the 32-bit version of OTACLIENT.DLL. I'm not aware of a 64-bit version of OTACLIENT.DLL.
The issue is not with the 64-bit OS but with the 64-bit JRE. If you are using an IDE, point your JRE library (build path) to a 32-bit JRE (bin folder); alternatively, you can install a 32-bit JRE on the 64-bit machine and run in that environment.
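Before repointing the build path, it can help to confirm which bitness the JVM actually running the code has. A small sketch using standard system properties; note that `sun.arch.data.model` is HotSpot-specific and may be absent on other JVMs:

```java
public class JvmBitness {
    public static void main(String[] args) {
        // HotSpot-specific: "32" or "64"; may be null on non-HotSpot JVMs
        String dataModel = System.getProperty("sun.arch.data.model");
        // Portable but coarser: e.g. "x86" for 32-bit, "amd64"/"x86_64" for 64-bit
        String osArch = System.getProperty("os.arch");
        System.out.println("sun.arch.data.model = " + dataModel);
        System.out.println("os.arch = " + osArch);
    }
}
```

A 32-bit COM DLL like OTAClient can only be loaded in-process by a 32-bit JVM, which is why the result matters here.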
OTAClient is a pure Windows DLL; even though you are using Java, you need to register it on the Windows machine. A better approach to get the most out of it is to use it with .NET: create a Windows/web service exposed over HTTP, write C# code in that service to perform operations with OTAClient.dll, and communicate with the service via web/REST/WCF. The great part is that this allows you to run on a 64-bit architecture; IIS also provides the "Enable 32-bit applications" option at the application pool level.

How access native provider API with Jclouds 1.7

Using JClouds, up to version 1.6.x it was possible to access the native EC2 provider API by using the following idiom:
AWSEC2Client ec2Client = AWSEC2Client.class.cast(context.getProviderSpecificContext().getApi());
Actually, I copied from the documentation page: http://jclouds.apache.org/guides/aws/
It turns out that in the latest release this method has been removed. Is there an alternative way to access the provider-specific features (security groups, key pairs, etc.)?
Unwrapping the API from the ComputeServiceContext:
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
        .credentials("accessKey", "secretAccessKey")
        .buildView(ComputeServiceContext.class);
ComputeService computeService = context.getComputeService();
AWSEC2Api ec2Api = context.unwrapApi(AWSEC2Api.class);
Building the API directly:
AWSEC2Api ec2Api = ContextBuilder.newBuilder("aws-ec2")
        .credentials("accessKey", "secretAccessKey")
        .buildApi(AWSEC2Api.class);
