I created three virtual machines: one master, one slave, and one client node. Then I wrote a Java program on the client machine. This program writes a file from the client's local file system to HDFS on the master.
My client host name is hadoop and my namenode host name is server1.
Here is my code:
Configuration configuration = new Configuration();
InputStream inputStream = new BufferedInputStream(new FileInputStream("HadoopFile.txt"));
FileSystem hdfs = FileSystem.get(new URI("hdfs://server1:9000"), configuration);
OutputStream outputStream = hdfs.create(new Path("hdfs://server1:9000/home/hadoop/HadoopF.txt"));
try {
    IOUtils.copyBytes(inputStream, outputStream, 4096, false);
} finally {
    IOUtils.closeStream(inputStream);
    IOUtils.closeStream(outputStream);
}
After I run this program, I encounter an error like:
Call From hadoop/127.0.0.1 to server1:9000 failed on connection exception
Server A: 192.168.96.130, OS: CentOS 7.x
Server B: localhost (my computer), OS: Windows 10
I installed Hadoop 3.1.2 on server A and wrote a Java application to write data into HDFS on server A.
When the Java application is deployed on server A, it writes files with content onto HDFS successfully.
When the Java application is deployed on server B, it can create files on HDFS but cannot write the content into them. I always get this error:
2020-03-18 20:56:43,460 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call Call#4 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.96.1:53463
java.io.IOException: File /canal/canal_1/canal_1-2020-3-19-4.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2121)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:295)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:561)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
And below is my Java application code:
Configuration conf = new Configuration();
FileSystem fs= FileSystem.get(new URI("hdfs://192.168.96.1:9000/"),conf,"root");
FSDataOutputStream out = fs.create(new Path("/canal/canal_1/canal_1-2020-03-10.txt"));
out.writeBytes("15, kevin15, 2020.3.15");
out.flush();
out.close();
fs.close();
How can I solve this problem?
I think you should check your cluster health first at http://namenode1:50070.
Then maybe you have not disabled iptables, so you cannot telnet to the port when you are on server2. You can try executing the command telnet SERVER1_IP 50020 on both server1 and server2 to check it.
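If telnet is not available on the client machine, the same reachability check can be done from Java. This is a minimal sketch, assuming the SERVER1_IP placeholder from the answer above and the two ports mentioned in this thread (9000 from the question's URI, 50020 from the answer); adjust the ports to your Hadoop version:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Ports taken from this thread; they differ between Hadoop versions, so adjust as needed.
        int[] ports = {9000, 50020};
        for (int port : ports) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("SERVER1_IP", port), 3000); // 3 s timeout
                System.out.println("Port " + port + " is reachable");
            } catch (IOException e) {
                System.out.println("Port " + port + " is NOT reachable: " + e.getMessage());
            }
        }
    }
}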
I'm trying to access files directly from an SFTP server, using Docker.
The following works:
import static java.nio.file.Paths.get;
public File[] copyExtractFiles() throws IOException, InterruptedException {
    // Run the sftp command-line client to pull *.xml from the container into a local directory.
    String command = "sftp -i case-loader/./docker/config -o StrictHostKeyChecking=no -P 2222 sftp#localhost:incoming/*.xml src/test/resources/extract";
    Process p = new ProcessBuilder("bash", "-c", command).start();
    p.waitFor();
    // Capture the command's output and error streams.
    BufferedReader stdOutput = new BufferedReader(new InputStreamReader(p.getInputStream(), Charset.forName(CHARSET_NAME)));
    BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream(), Charset.forName(CHARSET_NAME)));
    // List whatever landed in the local extract directory.
    return get("src/test/resources/extract").toFile().listFiles();
}
This transfers an XML file from the incoming directory on the Docker image to the src/test/resources/extract directory, and then lists the files.
However, I do not have access to the local file system and so want to access the files directly on the SFTP server. Is this possible? What do I need to change?
Use an SFTP library, like JSch, instead of driving an external console application. Then you will be able to download the file list into memory.
See: Java - download from SFTP directly to memory without ever writing to file.
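For example, here is a minimal sketch of pulling one remote file straight into memory with JSch. The user name, host, port, and key path mirror the sftp command in the question (assuming the # in sftp#localhost stands for @), and the remote file name is a hypothetical placeholder:
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.ByteArrayOutputStream;

public class SftpToMemory {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("case-loader/./docker/config");   // the key file from the sftp command above
        Session session = jsch.getSession("sftp", "localhost", 2222);
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
        channel.connect();

        // Read the remote file into a byte array instead of the local file system.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        channel.get("incoming/example.xml", buffer);        // hypothetical remote path
        byte[] content = buffer.toByteArray();
        System.out.println("Downloaded " + content.length + " bytes");

        channel.disconnect();
        session.disconnect();
    }
}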
I have searched a lot but couldn't find a solution for this. I need to copy a file from a local Windows machine to a remote Windows machine using a Java program. I have tried JSch:
JSch jsch = new JSch();
Session session = jsch.getSession("username", "hostname", 22);
session.setPassword("password");
session.setConfig("StrictHostKeyChecking", "no");
session.connect();

ChannelSftp channel = (ChannelSftp) session.openChannel("sftp");
channel.connect();

File localFile = new File("filePath");
// If you want, you can change the remote directory using the following line.
channel.cd("E:/xxx");
channel.put(new FileInputStream(localFile), localFile.getName());

channel.disconnect();
session.disconnect();
While executing the above code, I am facing the below error:
Exception in thread "main" 2: No such file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846)
at com.jcraft.jsch.ChannelSftp._realpath(ChannelSftp.java:2340)
at com.jcraft.jsch.ChannelSftp.cd(ChannelSftp.java:342)
I have Cygwin installed on the remote Windows machine. It seems JSch is not able to find the Windows path. The same code works properly when copying files from a Windows machine to a Linux machine.
Please tell me a solution for the above problem, or are there any other options to achieve this in Java? Thanks.
In order to resolve a Windows path with a drive letter, you may need to use the /cygdrive prefix. In this case, your cd call should be made with the parameter /cygdrive/e/xxx.
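A minimal sketch of the changed call, reusing the channel and localFile variables from the question's code above:
// Cygwin's sshd exposes the E: drive as /cygdrive/e, so change to that path before uploading.
channel.cd("/cygdrive/e/xxx");
channel.put(new FileInputStream(localFile), localFile.getName());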
I'm trying to retrieve a file (.dat) with a size of 1 GB, but after retrieving some bytes from the server it shows no connection to the FTP server. The program throws no error and just hangs. My code runs perfectly fine for small files, but for a big file it fails.
The code is as follows:
// FTP connection
ftpClient.connect(server, port);
ftpClient.login(username, password);
ftpClient.enterLocalPassiveMode();
ftpClient.setFileType(FTP.BINARY_FILE_TYPE);

// File download
File downloadFile = new File(savePath);
OutputStream outputStream = new BufferedOutputStream(new FileOutputStream(downloadFile));
isFileDownloaded = ftpClient.retrieveFile(remoteFilePath, outputStream);
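One possible cause of a hang like this is the idle FTP control connection being dropped by a firewall or the server while the long data transfer is still running. As a hedged suggestion (not confirmed from the question), Apache Commons Net can send keep-alive NOOPs on the control connection during the transfer; the 300-second interval below is an assumed value, added to the setup above before calling retrieveFile:
// Send a NOOP on the otherwise idle control connection every 300 s during long transfers
// (the interval is an assumption; tune it for your network).
ftpClient.setControlKeepAliveTimeout(300);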
I want to copy log files from a Windows/Unix environment to HDFS in a specific directory structure. I know that I can do copyFromLocal in the Hadoop shell, but is it possible to do this through Java code, for example using a Mapper?
If you mean copying a local file (or a directory) from the local machine to HDFS, here is the code:
Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml from the classpath
FileSystem fs = FileSystem.get(conf);
Path localPath = new Path("your_local_path");
Path remotePath = new Path("your_hdfs_path");
fs.copyFromLocalFile(localPath, remotePath);   // programmatic equivalent of "hdfs dfs -copyFromLocal"
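If the program runs on a machine that does not have the cluster's core-site.xml/hdfs-site.xml on its classpath, the NameNode URI can be passed explicitly. A minimal sketch, where hdfs://namenode:9000 and the two paths are placeholders for your own cluster:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "hdfs://namenode:9000" is a placeholder; use your NameNode's host and RPC port.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
        fs.copyFromLocalFile(new Path("your_local_path"), new Path("your_hdfs_path"));
        fs.close();
    }
}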