I have been working on a project in which I have to send files from my local system to my FTP server. For this purpose I thought of using Apache MINA.
Can Apache MINA be used in this situation? Any suggestion or help will be appreciated. Thanks.
I know Apache Commons Net is a convenient and efficient library for writing FTP clients.
They also provide an FTP client example: FTPClientExample.java
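For a plain FTP upload, a minimal sketch with Commons Net might look like this; the host, credentials, and paths are placeholders:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("ftp.example.com");
        ftp.login("user", "password");
        ftp.enterLocalPassiveMode();           // avoids most firewall/NAT issues
        ftp.setFileType(FTP.BINARY_FILE_TYPE); // don't mangle binary content
        try (InputStream in = new FileInputStream("local/file.txt")) {
            ftp.storeFile("remote/file.txt", in); // upload
        }
        ftp.logout();
        ftp.disconnect();
    }
}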
Yes, you can use Apache MINA (via the MINA SSHD subproject) for this purpose; note that these libraries speak SFTP (file transfer over SSH), not plain FTP. Look for the following JARs/references:
mina-core-2.0.19.jar - the underlying network transport
slf4j-api-1.7.25.jar - for logging
sshd-common-2.1.0.jar - common functions shared by the SSHD modules
sshd-core-2.1.0.jar - the core SSH client/server implementation
sshd-sftp-2.1.0.jar - for SFTP file transfers and creating clients and connections
Some example code:
SshClient mSshClient = SshClient.setUpDefaultClient();
mSshClient.start();
// connect and authenticate before opening the SFTP channel
ConnectFuture mConnectFuture = mSshClient.connect(mUsername, mServerAddress.getHostAddress(), mServerPort);
ClientSession mClientSession = mConnectFuture.verify().getSession();
mClientSession.addPasswordIdentity(mPassword); // password for mUsername (or add a key identity instead)
mClientSession.auth().verify();
SftpClient mSftpClient = new DefaultSftpClient(mClientSession);
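Once the SFTP channel is open, sending a file is just a matter of writing to a remote path. A minimal sketch, with made-up local and remote paths:

try (OutputStream out = mSftpClient.write("upload/file.txt");
     InputStream in = Files.newInputStream(Paths.get("local/file.txt"))) {
    in.transferTo(out); // Java 9+; copy via a byte[] buffer on older JDKs
}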
I'm using Apache Camel to consume from an IBM MQ via JMS. Everything works fine, but during performance testing the API creates a lot of dynamic queues and uses each one only once. I've tried a lot of properties to solve this but haven't managed it yet. My API uses the InOut pattern, so the responses are queued in dynamic queues; for example, my API creates 50 dynamic queues but only uses 3 of them.
Here are the properties I tried, but they didn't work for me:
-maxConcurrentConsumers
-concurrentConsumers
-threads
I found a solution for this.
This is my route to MQ:
.setHeader("CamelJmsDestinationName",
constant("queue:///"+queue+"?targetClient=1"))
.to("jms://queue:" + queue
+"?exchangePattern=InOut"
+"&replyToType=Temporary"
+"&requestTimeout=10s"
+"&useMessageIDAsCorrelationID=true"
+"&replyToConcurrentConsumers=40"
+"&replyToMaxConcurrentConsumers=90"
+"&cacheLevelName=CACHE_CONSUMER")
.id("idJms")
And these are the properties to connect to MQ:
ibm.mq.queueManager=${MQ_QUEUE_MANAGER}
ibm.mq.channel=${MQ_CHANNEL}
ibm.mq.connName=${MQ_HOST_NAME}
ibm.mq.user=${MQ_USER_NAME}
ibm.mq.additionalProperties.WMQ_SHARE_CONV_ALLOWED_YES=${MQ_SHARECNV}
ibm.mq.defaultReconnect=${MQ_RECONNECT}
# Config SSL
ibm.mq.ssl-f-i-p-s-required=false
ibm.mq.user-authentication-m-q-c-s-p=${MQ_AUTHENTICATION_MQCSP:false}
ibm.mq.tempModel=MQMODEL
The issue was in the MQ model queue: the model queue has to be defined as shared if you are using the InOut pattern, because the concurrent reply consumers create their dynamic queues from that model queue. See the MQSC sketch below.
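A model queue allowing shared input can be defined in MQSC along these lines; the queue name matches the ibm.mq.tempModel property above, but treat the exact attributes as an assumption to verify against your MQ setup:

DEFINE QMODEL('MQMODEL') DEFTYPE(TEMPDYN) DEFSOPT(SHARED) SHARE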
We have a project built on Dropwizard version 2.0.0-RC, where we use REST endpoints. After some issues we decided to use gRPC instead of REST. There are a couple of third-party libraries that connect gRPC to Dropwizard, but we believe they are a bit outdated and not usable. So we are thinking about implementing Armeria and its gRPC solution.
To implement this, I need the Jetty instance to attach the gRPC service to.
This is how I could solve it (a mix between gRPC and Armeria):
Server server = Server.builder()
        .http(8080)
        .service(GrpcService.builder()...build())
        .serviceUnder("/", JettyService.forServer(jettyServer))
        .build();
server.start().join();
So I need jettyServer to be the Jetty instance, of type org.eclipse.jetty.server.Server. The rest of the code is Armeria's way of embedding Jetty. Link to embedding Jetty.
How can I retrieve the instance of Jetty?
I was able to solve this by using Dropwizard lifecycles to get the server.
// the server parameter is of type org.eclipse.jetty.server.Server
environment.lifecycle().addServerLifecycleListener(new ServerLifecycleListener() {
    @Override
    public void serverStarted(Server server) {
        // ....
    }
});
With the instance, you can use Armeria to attach gRPC, as sketched below.
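A minimal sketch of putting the pieces together; the port, MyGrpcServiceImpl, and JettyService.forServer are assumptions carried over from the question, and the exact Armeria builder API may differ between versions:

environment.lifecycle().addServerLifecycleListener((org.eclipse.jetty.server.Server jetty) -> {
    // Armeria serves gRPC itself and delegates everything else to the
    // Dropwizard-managed Jetty instance.
    com.linecorp.armeria.server.Server armeria = com.linecorp.armeria.server.Server.builder()
            .http(8080) // assumed port; must not clash with Dropwizard's own connectors
            .service(GrpcService.builder()
                    .addService(new MyGrpcServiceImpl()) // hypothetical gRPC service implementation
                    .build())
            .serviceUnder("/", JettyService.forServer(jetty))
            .build();
    armeria.start().join();
});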
I was able to use the links provided in the comments of the other answer and put together this PR in the Armeria project, adding a Dropwizard module:
https://github.com/line/armeria/pull/2236
It currently targets Dropwizard 1.3.x rather than 2.0, but once a stable release exists, it'll need to be upgraded.
Edit: the PR was accepted and merged.
I'm wondering if Apache Beam supports IO on Windows Azure Storage Blob (WASB) files. Is there any support yet?
I'm asking because I've deployed an Apache Beam application to run a job on an Azure Spark cluster, and it's basically impossible to do IO on wasb files from the storage container associated with that Spark cluster. Is there any alternative solution?
Context: I'm trying to run the WordCount example on my Azure Spark cluster. I had already set some components as stated here, believing that would help me. Below is the part of my code where I'm setting the Hadoop configuration:
final SparkPipelineOptions options = PipelineOptionsFactory.create().as(SparkPipelineOptions.class);
options.setAppName("WordCountExample");
options.setRunner(SparkRunner.class);
options.setSparkMaster("yarn");
JavaSparkContext context = new JavaSparkContext();
Configuration conf = context.hadoopConfiguration();
conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
conf.set("fs.azure.account.key.<storage-account>.blob.core.windows.net",
"<key>");
options.setProvidedSparkContext(context);
Pipeline pipeline = Pipeline.create(options);
But unfortunately I keep ending up with the following error:
java.lang.IllegalStateException: Failed to validate wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:288)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:195)
at org.apache.beam.sdk.runners.PipelineRunner.apply(PipelineRunner.java:76)
at org.apache.beam.runners.spark.SparkRunner.apply(SparkRunner.java:129)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:400)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:323)
at org.apache.beam.sdk.values.PBegin.apply(PBegin.java:58)
at org.apache.beam.sdk.Pipeline.apply(Pipeline.java:173)
at spark.example.WordCount.main(WordCount.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.io.IOException: Unable to find handler for wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:187)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:283)
... 13 more
I was thinking about implementing a custom IO for Azure Storage Blobs in Apache Beam if it comes to that, so I'd like to check with the community whether that's a viable alternative solution.
Apache Beam doesn't have a built-in connector for Windows Azure Storage Blob (WASB) at this moment.
There's an active effort in the Apache Beam project to add support for HadoopFileSystem. I believe WASB has a connector for HadoopFileSystem in the hadoop-azure module. This would make WASB available with Beam indirectly; it is likely the easiest path forward, and it should be ready very soon. A rough sketch of how that could look is below.
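Once that support lands, the wiring could look roughly like this. This is a sketch based on Beam's HadoopFileSystemOptions and the hadoop-azure connector, not confirmed against a released version; class names and the exact TextIO API may differ by version, and the placeholders mirror the question:

import java.util.Collections;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.hdfs.HadoopFileSystemOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.hadoop.conf.Configuration;

// point the Hadoop filesystem layer at the WASB connector
Configuration conf = new Configuration();
conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
conf.set("fs.azure.account.key.<storage-account>.blob.core.windows.net", "<key>");

HadoopFileSystemOptions options = PipelineOptionsFactory.as(HadoopFileSystemOptions.class);
options.setHdfsConfiguration(Collections.singletonList(conf));

Pipeline p = Pipeline.create(options);
p.apply(TextIO.read().from(
        "wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt"));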
Now, it would be great to have native support for WASB in Beam. It would likely enable another level of performance, and should be relatively straightforward to implement. As far as I'm aware, nobody is actively working on it, but this would be an awesome contribution to the project! (If you are personally interested in contributing, please reach out!)
Can anyone provide an example of an Apache Apollo queue producer and consumer in Java?
Earlier I was using Apache ActiveMQ, but now I want to migrate.
There are several examples in the Apollo distribution. The ones you want to look at are located in the following distribution directories:
examples/openwire/java
examples/stomp/java
examples/mqtt/java
examples/amqp/java
If you are using a protocol supported by Apollo, then I don't see any changes required in the producer and consumer if they are already sending messages to ActiveMQ, except for the broker URL if that has changed. A minimal sketch is below.
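For instance, a plain JMS producer/consumer using the ActiveMQ OpenWire client could look like this; the broker URL, credentials, and queue name are assumptions (Apollo's default connector listens on port 61613 and auto-detects the protocol):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ApolloExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61613");
        Connection connection = factory.createConnection("admin", "password");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("example");

        // produce one message
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello"));

        // consume it back
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println(received.getText());

        connection.close();
    }
}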
You will need to get the following JAR files:
https://people.apache.org/~rgodfrey/qpid-java-amqp-1-0-client-jms.html and the javax.jms one. After that it's pretty simple to use the examples that come with Apollo.
I start a listener from the bin folder using:
java -cp example/geronimo-jms_1.1_spec-1.1.jar:example/javax.jms-3.1.2.2.jar:example/qpid-amqp-1-0-client-0.22.jar:example/qpid-amqp-1-0-client-jms-0.22.jar:example/qpid-amqp-1-0-common-0.22.jar:. example.Listener topic://event
and similar for the Producer.
Is there a simple, programmatic way to quickly "deploy" and run a standard Java WAR file for local testing without having to install and configure external software packages like Tomcat or Jetty? Ideally something like Jetty's embeddable features but specifically for WAR files.
Java 6 provides the convenient Endpoint class, which makes it easy to quickly deploy and test web services; is there something similar for WAR files? For example:
AppServer as = new javax.iwish.AppServer("localhost", 8080);
as.deploy("/", new File("path/to/my.war");
as.start();
I asked too soon; it looks like Jetty does exactly what I need:
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

Server server = new Server(8080);
server.setHandler(new WebAppContext("foo.war", "/"));
server.start();
Remarkably close to my dreamed up API =D