The listFiles() method of org.apache.commons.net.ftp.FTPClient works fine with Filezilla server on 127.0.0.1 but returns null on the root directory of public FTP servers such as belnet.be.
There is an identical question on the link below but enterRemotePassiveMode() doesn't seem to help.
Apache Commons FTPClient.listFiles
Could it be an issue with list parsing? If so, how can I go about solving it?
Edit: Here's a directory cache dump:
FileZilla Directory Cache Dump
Dumping 1 cached directories
Entry 1:
Path: /
Server: anonymous@ftp.belnet.be:21, type: 4096
Directory contains 7 items:
lrw-r--r-- ftp ftp D 28 2009-06-17 debian
lrw-r--r-- ftp ftp D 31 2009-06-17 debian-cd
-rw-r--r-- ftp ftp 0 2010-03-04 13:30 keepalive.txt
drwxr-xr-x ftp ftp D 4096 2010-02-18 14:22 mirror
lrw-r--r-- ftp ftp D 6 2009-06-17 mirrors
drwxr-xr-x ftp ftp D 4096 2009-06-23 packages
lrw-r--r-- ftp ftp D 1 2009-06-17 pub
Here's my code using a wrapper I've made (testing inside the wrapper produces the same results):
public static void main(String[] args) {
    FTPUtils ftpUtils = new FTPUtils();
    String ftpURL = "ftp.belnet.be";
    Connection connection = ftpUtils.getFTPClientManager().getConnection( ftpURL );
    if( connection == null ){
        System.out.println( "Could not connect" );
        return;
    }
    FTPClientManager manager = connection.getFptClientManager();
    FTPClient client = manager.getClient();
    try {
        client.enterRemotePassiveMode();
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println( "Connected to FTP" );
    connection.login("Anonymous", "Anonymous");
    if( connection.isLoggedIn() ){
        System.out.println( "Login successful" );
        LoggedInManager loggedin = connection.getLoggedInManager();
        System.out.println( loggedin );
        String[] fileList = loggedin.getFileList();
        System.out.println( loggedin.getWorkingDirectory() );
        if( fileList == null || fileList.length == 0 ){
            System.out.println( "No files found" );
        } else {
            for( String name : fileList ){
                System.out.println( name );
            }
        }
        connection.disconnect();
        if( connection.isDisconnected() )
            System.out.println( "Disconnection successful" );
        else
            System.out.println( "Error disconnecting" );
    } else {
        System.out.println( "Unable to login" );
    }
}
Produces this output:
Connected to FTP
Login successful
utils.ftp.FTPClientManager$Connection$LoggedInManager@156ee8e
null
No files found
Disconnection successful
Inside the wrapper (I attempted both listNames() and listFiles()):
public String[] getFileList() {
    FTPFile[] ftpFiles = null;
    try {
        ftpFiles = client.listFiles();
        //fileList = client.listNames();
        //System.out.println( client.listNames() );
    } catch (IOException e) {
        return null;
    }
    // Guard against a null listing (the symptom described above).
    if( ftpFiles == null ){
        return null;
    }
    String[] fileList = new String[ ftpFiles.length ];
    for( int i = 0; i < ftpFiles.length; i++ ){
        fileList[ i ] = ftpFiles[ i ].getName();
    }
    return fileList;
}
As for FTPClient, it is handled as follows:
public class FTPUtils {

    private FTPClientManager clientManager;

    public FTPClientManager getFTPClientManager(){
        clientManager = new FTPClientManager();
        clientManager.setClient( new FTPClient() );
        return clientManager;
    }
}
Each FTP server can use a different file list layout (yes, it's not part of the FTP standard, which is dumb), so you have to use the correct FTPFileEntryParser, either by specifying it manually or by letting Commons Net auto-detect it.
Auto-detection usually works fine, but sometimes it doesn't, and you have to specify the parser explicitly, e.g.
FTPClientConfig conf = new FTPClientConfig(FTPClientConfig.SYST_UNIX);
FTPClient client = new FTPClient();
client.configure(conf);
This explicitly sets the expected FTP server type to UNIX. Try the various types and see how it goes. I tried finding out myself, but ftp.belnet.be is refusing my connections :(
Have you tried checking that you can list the files using a normal FTP client? (For some reason, I cannot even connect to the FTP port of "belnet.be".)
EDIT
According to the javadoc for listFiles(), the parsing is done using the FTPFileEntryParser instance provided by the parser factory. You probably need to figure out which of the parsers matches the FTP server's LIST output and configure the factory accordingly.
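For example, here is a minimal, self-contained sketch (assuming anonymous login is allowed on the server) that forces the UNIX parser and requests a passive data connection. Note that enterLocalPassiveMode() is the call that makes the client open the data connection itself, which is usually what you need behind NAT or a firewall; enterRemotePassiveMode() is intended for server-to-server transfers:
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPClientConfig;
import org.apache.commons.net.ftp.FTPFile;

public class FtpListTest {
    public static void main(String[] args) throws Exception {
        FTPClient client = new FTPClient();
        // Force the UNIX-style listing parser instead of relying on auto-detection.
        client.configure(new FTPClientConfig(FTPClientConfig.SYST_UNIX));
        client.connect("ftp.belnet.be");
        client.login("anonymous", "anonymous");
        // The client opens the data connection itself.
        client.enterLocalPassiveMode();
        for (FTPFile file : client.listFiles()) {
            System.out.println(file.getName());
        }
        client.logout();
        client.disconnect();
    }
}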
There was a parsing issue in earlier versions of Apache Commons Net: the case where the SYST command (which returns the server type) abruptly returns null was not handled in the parsing exception. Try using the latest Apache Commons Net JAR; it may solve your problem.
Related
I have this Java web application running on four servers.
The newest server (just set up) is failing with the error
"java.lang.NoSuchMethodError: org.htmlparser.lexer.Lexer.parseCDATA()Lorg/htmlparser/Node"
when running the code below.
One server is running locally on my Mac.
Two servers are running CentOS 6.10 / Java 1.8.0_242 / Tomcat 8.5.54.
The newest server (the one that is failing) is running CentOS 6.10 / Java 1.8.0_242 / Tomcat 8.5.54.
I have copied all the JARs from the working CentOS server to the broken one.
I am at a loss. I would love to hear some ideas on how to debug/resolve this.
The code being run is pretty simple.
Another part that confuses me: if the JAR was not found, wouldn't Parser.createParser blow up? And I have added debug code to make sure parser_c is not null.
import org.htmlparser.Node;
import org.htmlparser.Parser;
import org.htmlparser.tags.LinkTag;
import org.htmlparser.util.ParserException;

public class SignatureTools {

    public static String getURLFromSignature(String signature) throws ParserException {
        System.out.println("[getURLFromSignature]");
        if (signature == null) {
            return null;
        }
        Parser parser_c = Parser.createParser(signature, null);
        Node[] nodes_c = parser_c.extractAllNodesThatAre(LinkTag.class);
        String mkURL = null;
        for (Node node : nodes_c) {
            if (node instanceof LinkTag && ((LinkTag) node).getAttribute("href") != null) {
                String href = ((LinkTag) node).getAttribute("href");
                if (href.contains("https://www.thedomain.com")) {
                    mkURL = href;
                }
            }
        }
        return mkURL;
    }
}
Found the problem.
I used this bit of code and found that Lexer was being loaded from a different JAR instead of htmllexer.jar:
import java.io.File;
import java.net.URISyntaxException;
import org.htmlparser.lexer.Lexer;

try {
    // Print which JAR the Lexer class was actually loaded from.
    System.out.println("Lexer---->" + new File(Lexer.class.getProtectionDomain()
            .getCodeSource().getLocation().toURI()).getPath());
} catch (URISyntaxException e) {
    e.printStackTrace();
}
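More generally, the same check works for any suspect class, and starting the JVM with -verbose:class logs where every class is loaded from. A small reusable version of the diagnostic might look like this (the ClassOrigin name is illustrative), used as e.g. ClassOrigin.print(org.htmlparser.lexer.Lexer.class):
import java.security.CodeSource;

public final class ClassOrigin {
    // Prints the JAR or directory a class was loaded from, or a note
    // when it comes from the bootstrap class loader (null CodeSource).
    public static void print(Class<?> clazz) {
        CodeSource source = clazz.getProtectionDomain().getCodeSource();
        System.out.println(clazz.getName() + " ----> "
                + (source != null ? source.getLocation() : "bootstrap class loader"));
    }
}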
I am using the HDFS file watcher service to load a config file as soon as it changes, in my Flink streaming job.
Source for the watcher service: HDFS file watcher
The issue I am facing is that the watcher service reacts to changes in the entire HDFS namespace rather than just the directory I am passing in.
My code:
public static void main( String[] args ) throws IOException, InterruptedException, MissingEventsException
{
    HdfsAdmin admin = new HdfsAdmin( URI.create("hdfs://stage.my-org.in:8020/tmp/anurag/"), new Configuration() );
    DFSInotifyEventInputStream eventStream = admin.getInotifyEventStream();
    while( true ) {
        EventBatch events = eventStream.take();
        for( Event event : events.getEvents() ) {
            switch( event.getEventType() ) {
                case CREATE:
                    System.out.print( "event type = " + event.getEventType() );
                    CreateEvent createEvent = (CreateEvent) event;
                    System.out.print( " path = " + createEvent.getPath() + "\n" );
                    break;
                default:
                    break;
            }
        }
    }
}
Output from program :
event type = CREATE path = /tmp/anurag/newFile.txt
event type = CREATE path = /tmp/newFile2.txt
Please help me resolve this issue so that I can watch only the files in the particular directory passed as the URI.
Note: If you try to run this program, run it as the hdfs user; otherwise you will get org.apache.hadoop.security.AccessControlException.
For now, I am using the Hadoop API to fetch the file every 30 seconds, reading its modification time, and reloading the file if it is newer.
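That polling fallback is simple to sketch with the standard FileSystem API (the config file name and interval below are illustrative):
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConfigPoller {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://stage.my-org.in:8020"), new Configuration());
        Path config = new Path("/tmp/anurag/config.properties"); // illustrative name
        long lastSeen = 0L;
        while (true) {
            // Reload only when the modification time moves forward.
            long modTime = fs.getFileStatus(config).getModificationTime();
            if (modTime > lastSeen) {
                lastSeen = modTime;
                System.out.println("config changed at " + modTime + ", reloading...");
            }
            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}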
The InotifyEventStream is nothing more than the HDFS event log parsed into objects. It sends you all events in HDFS no matter which directory you set in the constructor; that's one of the reasons why you need to run that code as a supergroup member.
The solution is to filter the events as they arrive, keeping only those from the directory you want. Something like:
EventBatch events = eventStream.take();
ArrayList<CreateEvent> filteredEvents = new ArrayList<>();
for( Event event : events.getEvents() ) {
    switch( event.getEventType() ) {
        case CREATE:
            System.out.print( "event type = " + event.getEventType() );
            CreateEvent createEvent = (CreateEvent) event;
            // Match by prefix so only events under the watched directory are kept.
            if (createEvent.getPath().startsWith("/your/desired/path")) {
                System.out.print( " path = " + createEvent.getPath() + "\n" );
                filteredEvents.add(createEvent);
            }
            break;
        default:
            break;
    }
}
I want to perform a one-direction rsync between an AWS S3 bucket and a remote FTP server (it accepts FTPS) with a Java Lambda function. So if a file in the bucket is deleted, the Lambda cron should remove it from the remote FTP server.
I read that the AWS CLI offers the function s3 sync. Could this be an option?
This would be pretty straightforward. The Lambda would be set up to be triggered on an S3 delete. The basic code (untested) would be something like:
public class Handler implements RequestHandler<S3Event, String> {
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            // Object key may have spaces or unicode non-ASCII characters.
            String srcKey = record.getS3().getObject().getUrlDecodedKey();
            // Now use Apache Commons Net
            // (https://commons.apache.org/proper/commons-net/)
            // to delete the file on the FTP server.
            // server, port, user and pass are placeholders for your FTP settings.
            FTPClient ftpClient = new FTPClient();
            ftpClient.connect(server, port);
            int replyCode = ftpClient.getReplyCode();
            if (!FTPReply.isPositiveCompletion(replyCode)) {
                context.getLogger().log("FTP connect failed");
                return "connect failed";
            }
            boolean success = ftpClient.login(user, pass);
            if (!success) {
                context.getLogger().log("Could not login to the FTP server");
                return "login failed";
            }
            String fileToDelete = "/some/ftp/directory/" + srcKey;
            boolean deleted = ftpClient.deleteFile(fileToDelete);
            if (deleted) {
                context.getLogger().log("The file was deleted successfully.");
            } else {
                context.getLogger().log("Could not delete the file, it may not exist.");
            }
            ftpClient.logout();
            ftpClient.disconnect();
            return deleted ? "deleted" : "not deleted";
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
On the S3 side, you will need to enable your S3 bucket to send delete events to your Lambda. This can be done in the AWS console by selecting the bucket, and in the advanced section adding an event notification: select "Permanently deleted" (or "All object delete events") and add your Lambda.
I have a problem with CRM Dynamics Online 2016 and the Azure SDK for Java.
I can connect to Azure Service Bus, and I can see the queues and the message count in each queue, but I cannot receive messages. The message ID is null and the message body contains a 500 error:
500: The server was unable to process the request; please retry the operation. If the problem persists, please contact your Service Bus administrator and provide the tracking id. TrackingId:acf8a543-33c9-486d-b13b-443823e6c394_G9, TimeStamp:4/13/2016 7:26:22 AM. If the problem persists, please contact your Service Bus administrator and provide the tracking id. TrackingId:acf8a543-33
Is there any working sample on the Internet that solves this problem?
Test code:
@Test
public void readAllExistedMessagesFromAllQueue() {
    try {
        ServiceBusContract serviceBusContract = ServiceBusConfiguration
                .configureWithConnectionString(null, Configuration.load(), ASB_CONNECTION_STRING)
                .create(ServiceBusContract.class);
        ReceiveMessageOptions opts = ReceiveMessageOptions.DEFAULT;
        opts.setReceiveMode(ReceiveMode.PEEK_LOCK);
        ListQueuesResult result = serviceBusContract.listQueues();
        if (result != null && result.getItems().size() > 0) {
            for (QueueInfo queueInfo : result.getItems()) {
                logger.debug("queue: " + queueInfo.getPath() + " MessageCount: " + queueInfo.getMessageCount());
                for (int i = 0; i < result.getItems().size(); i++) {
                    BrokeredMessage message = serviceBusContract
                            .receiveQueueMessage(queueInfo.getPath(), opts).getValue();
                    if (message == null) {
                        continue;
                    }
                    System.out.print("__________________________________________");
                    System.out.println("MessageID: " + message.getMessageId());
                    System.out.print("From queue: ");
                    byte[] b = new byte[200];
                    String s = null;
                    int numRead = message.getBody().read(b);
                    while (-1 != numRead) {
                        s = new String(b);
                        s = s.trim();
                        System.out.print(s);
                        numRead = message.getBody().read(b);
                    }
                    System.out.println();
                }
            }
        }
    } catch (IOException e) {
        logger.error(e);
    } catch (ServiceException e) {
        logger.error(e);
    }
}
In my experience, and according to the source code and Javadocs of Service Bus, ServiceBusContract is a Java interface, so you can't directly create an instance of ServiceBusContract.
So please try the code below, from the section "Create a queue" of the document "How to use Service Bus queues".
Configuration config =
ServiceBusConfiguration.configureWithSASAuthentication(
"<your-servicebus-namespace>",
"RootManageSharedAccessKey",
"<SAS-key-value>",
".servicebus.windows.net"
);
ServiceBusContract serviceBusContract = ServiceBusService.create(config);
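Once the contract is created through ServiceBusService.create, receiving with the same ReceiveMessageOptions as in your test should return real messages. A minimal sketch (the queue name "myqueue" is illustrative; if I recall the old SDK correctly, deleteMessage removes a peek-locked message after processing):
ReceiveMessageOptions opts = ReceiveMessageOptions.DEFAULT;
opts.setReceiveMode(ReceiveMode.PEEK_LOCK);

// "myqueue" is an illustrative queue name.
BrokeredMessage message = serviceBusContract
        .receiveQueueMessage("myqueue", opts).getValue();
if (message != null && message.getMessageId() != null) {
    System.out.println("MessageID: " + message.getMessageId());
    // With PEEK_LOCK, remove the message once it has been processed.
    serviceBusContract.deleteMessage(message);
}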
Update
You can find the SharedAccessKeyName and SharedAccessKey in the connection string, shown in the connection information panel at the bottom of your Service Bus page in the Azure portal; copy the CONNECTION STRING value there.
The connection string looks like this:
Endpoint=sb://<your-servicebus-namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<SAS-key-value>
Please copy the relevant parts of the connection string into the code above.
I am not able to upload files in Firefox and Safari, but I am able to do it successfully in Internet Explorer.
When I tried to debug, I found out that in the case of IE the upload widget gives the entire path, e.g. C:\Documents and Settings\jjayashree\My Documents\price.csv,
whereas in Firefox and Safari the upload widget gives just the file name with no path.
Previously the code was like this:
if (fileName.contains("\\")) {
    index = fileName.lastIndexOf("\\");
}
if (this.fileName != null && this.fileName.trim().length() > 0 && index >= 0) {
    this.fileName = this.fileName.substring(index + 1, this.fileName.length());
    int dotPosition = fileName.lastIndexOf('.');
    String extension = fileName.substring(dotPosition + 1, fileName.length());
    try {
        if (profileType.equalsIgnoreCase("sampleProfile")) {
            if (extension.equalsIgnoreCase("csv")) {
                //fileNameTextBox.setText(this.fileName);
                this.form.submit();
            } else {
                new CustomDialogBox(Nexus.INFO_MESSAGE, MessageConstants.SPECIFY_FILE_NAME_MSG).show();
            }
        }
    } catch (Exception e) {
        Window.alert("SPECIFY_VALID_FILE_NAME_MSG");
    }
} else {
    Window.alert("SPECIFY_A_FILE_MSG");
}
I changed it to:
if (this.fileName != null && this.fileName.trim().length() > 0) {
    this.fileName = this.fileName.substring(this.fileName.lastIndexOf("\\") + 1, this.fileName.length());
}
I found that working, but when the same code is deployed on Linux I am getting an error.
I also have a doubt, because in the doPost of the servlet I am using fileName.replace("\\", "/");
is this the problem? How will Mozilla handle fileName.replace(): will it just find nothing to replace and carry on, or will it throw some kind of exception?
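For reference, String.replace() simply returns the string unchanged when there is nothing to replace; it does not throw. A browser-agnostic way to strip any leading path, whichever separator the browser reports, could look like this (a minimal sketch; the helper name baseName is illustrative):
// Hypothetical helper: keeps only the base file name, handling both
// Windows ("\") and Unix ("/") separators. IE sends a full path,
// Firefox and Safari send just the name; both cases end up the same here.
public static String baseName(String fileName) {
    if (fileName == null) {
        return null;
    }
    int cut = Math.max(fileName.lastIndexOf('\\'), fileName.lastIndexOf('/'));
    return fileName.substring(cut + 1);
}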
Maybe try gwtupload? It simplifies file loading to one function call, and handles all the backend for you. It's a little complicated to get working but there's a tutorial on the site on how to do it.
http://code.google.com/p/gwtupload/