I wrote a small Java command line program to test sending emails from a remote server. I'm getting the dreaded "NoClassDefFoundError" and I can't figure out why.
The server is running:
SunOS 5.10 Generic January 2005
Java 1.5.0_30-b03 ( Sun, standard )
My java program is called
SendEmailACME
The error message is
Exception in thread "main" java.lang.NoClassDefFoundError: javax/activation/DataSource
The complete output from the run of the program is:
bash-3.00$ javac SendEmailACME.java
bash-3.00$ java SendEmailACME
SendEmailACME: Classpath: .:/users/steve/TestProgramsLib/mail.jar:users/steve/TestProgramsLib/activation.jar
DEBUG: setDebug: JavaMail version 1.4.4
Exception in thread "main" java.lang.NoClassDefFoundError: javax/activation/DataSource
at SendEmailACME.main(SendEmailACME.java:47)
bash-3.00$
I ran
java -verbose SendEmailACME
The output was too long for Stack Overflow. It included the regular output plus a bunch of messages about Java loading its standard libraries and the classes from mail.jar, but I didn't see any from javax.activation.*
Output from "$ echo $CLASSPATH" is:
bash-3.00$ echo $CLASSPATH
.:/users/steve/TestProgramsLib/mail.jar:users/steve/TestProgramsLib/activation.jar
bash-3.00$
My home directory is
/users/steve
It contains these two directories
TestPrograms
TestProgramsLib
The first has my program: SendEmailACME.java and SendEmailACME.class.
The second has the following jars in it:
bash-3.00$ ls -l
total 1102
-rw-r--r-- 1 steve acme 55932 Apr 19 2006 activation.jar
-rw-r--r-- 1 steve acme 494975 Jan 14 2011 mail.jar
bash-3.00$
This is the source code of my command line program SendEmailACME:
import javax.mail.*;
import javax.mail.internet.*;
import javax.mail.Authenticator;
import javax.mail.PasswordAuthentication;
import java.util.Properties;
public class SendEmailACME {
public static void main(String[] args) throws Exception{
String smtpServer = "msg.abc.acme.com";
int port = 25;
String userid = "acme.staffdirectory";
String password = "password";
String contentType = "text/html";
String subject = "test: Send An Email, From A Java Client Using msg.abc.acme.com";
String from = "ACME.Staff.Directory@acme.com";
String to = "steve@acme.com,joerre123@gmail.com,fake.mail@acme.com,bogus@fauxmail.com";
String body = "<h1>Test. An Email, From A Java Client Using msg.abc.acme.com.</h1>";
System.out.println("SendEmailACME: Classpath: " + System.getProperty("java.class.path"));
Properties props = new Properties();
props.put("mail.transport.protocol", "smtp");
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable","true");
props.put("mail.smtp.host", smtpServer);
Session mailSession = Session.getInstance(props);
// Get more runtime output when attempting to send an email
mailSession.setDebug(true);
MimeMessage message = new MimeMessage(mailSession);
message.setFrom(new InternetAddress(from));
message.setRecipients(Message.RecipientType.TO, to);
message.setSubject(subject);
message.setContent(body,contentType);
Transport transport = mailSession.getTransport();
transport.connect(smtpServer, port, userid, password);
transport.sendMessage(message,message.getRecipients(Message.RecipientType.TO));
transport.close();
}// end function main()
}// end class SendEmailACME
Here is the output from running a command to see what is inside activation.jar:
bash-3.00$ jar -tf activation.jar
META-INF/MANIFEST.MF
META-INF/SUN_MICR.SF
META-INF/SUN_MICR.RSA
META-INF/
META-INF/mailcap.default
META-INF/mimetypes.default
javax/
javax/activation/
javax/activation/ActivationDataFlavor.class
javax/activation/MimeType.class
javax/activation/MimeTypeParameterList.class
javax/activation/MimeTypeParseException.class
javax/activation/CommandInfo.class
javax/activation/DataHandler$1.class
javax/activation/DataHandler.class
javax/activation/DataSource.class
javax/activation/CommandMap.class
javax/activation/DataContentHandler.class
javax/activation/DataContentHandlerFactory.class
javax/activation/CommandObject.class
javax/activation/DataHandlerDataSource.class
javax/activation/DataSourceDataContentHandler.class
javax/activation/ObjectDataContentHandler.class
javax/activation/FileDataSource.class
javax/activation/FileTypeMap.class
javax/activation/MailcapCommandMap.class
javax/activation/MimetypesFileTypeMap.class
javax/activation/SecuritySupport$1.class
javax/activation/SecuritySupport$2.class
javax/activation/SecuritySupport$3.class
javax/activation/SecuritySupport$4.class
javax/activation/SecuritySupport$5.class
javax/activation/SecuritySupport.class
javax/activation/URLDataSource.class
javax/activation/UnsupportedDataTypeException.class
com/
com/sun/
com/sun/activation/
com/sun/activation/registries/
com/sun/activation/registries/MailcapFile.class
com/sun/activation/registries/MailcapParseException.class
com/sun/activation/registries/MimeTypeFile.class
com/sun/activation/registries/MimeTypeEntry.class
com/sun/activation/registries/LineTokenizer.class
com/sun/activation/registries/LogSupport.class
com/sun/activation/registries/MailcapTokenizer.class
com/sun/activation/viewers/
com/sun/activation/viewers/ImageViewer.class
com/sun/activation/viewers/ImageViewerCanvas.class
com/sun/activation/viewers/TextEditor.class
com/sun/activation/viewers/TextViewer.class
bash-3.00$
Everything compiles fine, but it can't seem to find javax.activation.DataSource despite activation.jar being in the classpath
I do not have access to the jdk_home/jre/lib/ext directory.
I have been attempting to execute SendEmailACME from my directory
/users/steve/TestPrograms
Thanks in advance for any help
Steve
bash-3.00$ echo $CLASSPATH
.:/users/steve/TestProgramsLib/mail.jar:users/steve/TestProgramsLib/activation.jar
You appear to be missing a / between mail.jar: and users/steve. This means java is looking in the wrong place for activation.jar (in ./users rather than /users).
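For example, after fixing the path, running from /users/steve/TestPrograms with an explicit -cp (a sketch using the paths from the question) would look like:
java -cp .:/users/steve/TestProgramsLib/mail.jar:/users/steve/TestProgramsLib/activation.jar SendEmailACME
Passing the classpath on the command line also sidesteps any stale value in the CLASSPATH environment variable.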
Your CLASSPATH doesn't contain JDK libraries where javax.* libraries are placed.
Related
Good evening,
I have been facing this error for a couple of days now, and despite looking for a solution all over the web, I couldn't fix it.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
public class LDAPtest {
public static void main(String[] args) {
try {
String keystorePath = "C:/Program Files/Java/jdk-13.0.2/lib/security/cacerts";
System.setProperty("javax.net.ssl.keyStore", keystorePath);
System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
Hashtable<String, String> ldapEnv = new Hashtable<>();
ldapEnv.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
ldapEnv.put(Context.PROVIDER_URL, "ldaps://localhost:10636");
ldapEnv.put(Context.SECURITY_AUTHENTICATION,"simple");
ldapEnv.put(Context.SECURITY_PRINCIPAL,"uid=admin,ou=system");
ldapEnv.put(Context.SECURITY_CREDENTIALS,"secret");
DirContext connection = new InitialDirContext(ldapEnv);
System.out.println("Benvenuto " + connection);
NamingEnumeration enm = connection.list("");
while (enm.hasMore()) {
System.out.println(enm.next());
}
enm.close();
connection.close();
}catch(Exception e) {
e.printStackTrace();
}
}
}
This code actually works when SSL is not used, i.e. replacing
ldapEnv.put(Context.PROVIDER_URL, "ldaps://localhost:10636");
with
ldapEnv.put(Context.PROVIDER_URL, "ldap://localhost:10389");
I set up the LDAP server with Apache Directory Studio and followed this tutorial to get LDAPS working:
http://directory.apache.org/apacheds/basic-ug/3.3-enabling-ssl.html
So I created the certificate, installed it, and imported it into cacerts with keytool.
I enabled port forwarding for the chosen port (10636), but I'm still getting this exception:
javax.naming.CommunicationException: simple bind failed: localhost:10636 [Root exception is
java.net.SocketException: Connection or outbound has closed]
at java.naming/com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:219)
at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2795)
at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:320)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxFromUrl(LdapCtxFactory.java:225)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:189)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:243)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:154)
at java.naming/com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:84)
at java.naming/javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:730)
at java.naming/javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:305)
at java.naming/javax.naming.InitialContext.init(InitialContext.java:236)
at java.naming/javax.naming.InitialContext.<init>(InitialContext.java:208)
at java.naming/javax.naming.directory.InitialDirContext.<init>(InitialDirContext.java:130)
at Prova3.main(Prova3.java:31)
Caused by: java.net.SocketException: Connection or outbound has closed
at java.base/sun.security.ssl.SSLSocketImpl$AppOutputStream.write(SSLSocketImpl.java:1246)
at java.base/java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:81)
at java.base/java.io.BufferedOutputStream.flush(BufferedOutputStream.java:142)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:398)
at java.naming/com.sun.jndi.ldap.Connection.writeRequest(Connection.java:371)
at java.naming/com.sun.jndi.ldap.LdapClient.ldapBind(LdapClient.java:359)
at java.naming/com.sun.jndi.ldap.LdapClient.authenticate(LdapClient.java:214)
... 13 more
Thank you in advance
For Googlers:
simple bind failed errors are almost always related to the SSL connection.
With nc or telnet, check whether a connection can be established between the client and the remote host and port.
With SSLPoke.java (a simple Java class to check SSL connection), check whether certificates are correctly imported and used, also check correct TLS version. Use something like java -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -Djavax.net.debug=all SSLPoke google.com 443 > log.txt 2>&1.
Look for:
Warning: no suitable certificate found - continuing without client authentication = check whether you have set javax.net.ssl.trustStore
Fatal (HANDSHAKE_FAILURE): Couldn't kickstart handshaking = could be mismatched TLS versions
Also check whether your intermediate CA has expired.
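Regarding the "no suitable certificate found" warning: in the question's code the certificates are loaded via javax.net.ssl.keyStore. A minimal sketch of pointing the JNDI client at a trust store instead (the path and password are the ones from the question and may differ on your system):
// Trust store containing the LDAP server's certificate (imported with keytool)
System.setProperty("javax.net.ssl.trustStore",
        "C:/Program Files/Java/jdk-13.0.2/lib/security/cacerts");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");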
I am developing a network monitoring solution for my Java application so I can sniff packets on my machine interfaces and dump the result in rolling PCAP files. When launching the tcpdump command (using sudo) from the Java code, I get tcpdump: /path/to/app/log/GTP00: Permission denied
DETAILS
The command is executed using Runtime.getRuntime().exec(command), where command is a String with the value sudo tcpdump -i eth0 -w /path/to/app/log/GTP -W 50 -C 20 -n net 10.246.212.0/24 and ip
The user launching the Java app is "testUser" which belongs to group "testGroup". This user is allowed to sudo tcpdump.
The destination dir has the following attributes:
[testUser#node ~]$ ls -ld /path/to/app/log
drwxrwxr-x. 2 testUser testGroup 4096 Feb 4 15:40 /path/to/app/log
MORE DETAILS
Launching the command from the command line SUCCESSFULLY creates the pcap file in the specified folder.
[testUser#node ~]$ ls -l /path/to/app/log/GTP00
-rw-r--r--. 1 tcpdump tcpdump 1276 Feb 4 16:12 /path/to/app/log/GTP00
I have developed a simplified Java app for testing purposes
package execcommand;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ExecCommand {
public static void main(String[] args) {
try {
String command;
String line;
String iface = "eth0";
String capturePointName = "GTP";
String pcapFilterExpression = "net 10.246.212.0/24 and ip";
int capturePointMaxNumberOfFilesKept = 50;
int capturePointMaxSizeOfFilesInMBytes = 20;
command = "sudo tcpdump -i " + iface + " -w /path/to/app/log/"
+ capturePointName + " -W " + capturePointMaxNumberOfFilesKept + " -C "
+ capturePointMaxSizeOfFilesInMBytes + " -n " + pcapFilterExpression;
Process process = Runtime.getRuntime().exec(command);
BufferedReader br = new BufferedReader(new InputStreamReader(process.getErrorStream()));
while ((line = br.readLine()) != null) {
System.err.println(line);
}
} catch (IOException ex) {
Logger.getLogger(ExecCommand.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
This test program, launched by the same user, SUCCESSFULLY creates the pcap file in the specified folder.
[testUser#node ~]$ ls -l /path/to/app/log/GTP00
-rw-r--r--. 1 tcpdump tcpdump 1448 Feb 4 16:21 /path/to/app/log/GTP00
Then, I can infer that the problem is somehow restricted to my Java app. This is how my Java app is launched:
exec java -Dknae_1 -Djavax.net.ssl.trustStorePassword=<trust_pass> -Djavax.net.ssl.trustStore=/path/to/app/etc/certificates/truststore -Djavax.net.ssl.keyStorePassword=<key_store_pass> -Djavax.net.ssl.keyStore=/path/to/app/etc/certificates/keystore -d64 -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8887,suspend=y -XX:-UseLargePages -Xss7m -Xmx64m -cp /path/to/app/lib/knae.jar:/path/to/app/lib/xphere_baseentity.jar:/path/to/app/lib/mysql.jar:/path/to/app/lib/log4j-1.2.17.jar:/path/to/app/lib/tools.jar:/path/to/app/conf:/path/to/app/lib/pcap4j-core-1.7.5.jar:/path/to/app/lib/pcap4j-packetfactory-static-1.7.5.jar:/path/to/app/lib/jna-5.1.0.jar:/path/to/app/lib/slf4j-api-1.7.25.jar:/path/to/app/lib/slf4j-simple-1.7.25.jar com.app.package.knae.Knae knae_1
UPDATE
I am able to write the pcap file within /tmp.
I have also tried giving 777 permissions to /path/to/app/log to no avail.
These are the attributes of both dirs:
[testUser#node ~]$ ls -ld /tmp
drwxrwxrwt. 10 root root 4096 Feb 6 10:13 /tmp
[testUser#node ~]$ ls -ld /path/to/app/log
drwxrwxrwx. 2 testUser testGroup 4096 Feb 6 09:25 /path/to/app/log
I will provide any additional information as needed.
Why is tcpdump complaining about not being able to write this file?
Use absolute paths in the command line instead of plain "sudo" and "tcpdump".
Use ProcessBuilder instead of Runtime.exec(), because you can specify the working directory, use spaces in options, and more (see the sketch below).
In the tcpdump command you have to use the -Z flag to specify the user, because tcpdump drops privileges to a user different from the caller. Check this link on ServerFault: tcpdump permission denied
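A minimal sketch of the ProcessBuilder approach (the binary paths, interface name, and -Z user are assumptions based on the question; adjust them to your system):
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
public class TcpdumpLauncher {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/bin/sudo", "/usr/sbin/tcpdump",
                "-i", "eth0",
                "-w", "/path/to/app/log/GTP",
                "-W", "50",
                "-C", "20",
                "-Z", "testUser",            // drop privileges to this user for writing capture files
                "-n", "net", "10.246.212.0/24", "and", "ip");
        pb.directory(new File("/path/to/app/log")); // explicit working directory
        pb.redirectErrorStream(true);               // merge stderr into stdout
        Process process = pb.start();
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}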
I am trying to install myApp in WebSphere 8.5 running on z/OS. I can't imagine a simpler Jython script than the one below, yet it returns "java.util.zip.ZipException: error in opening zip file". I am sure that the EAR file is correct. Any idea about the possible reason for the ZipException will be appreciated. Naturally, the server is up and running.
The Jython script:
import sys
EARFILE = "/usr/MyCompanyApps/MyArea/originEAR/MyAppEAR.ear"
APPOPTS = "-appname "
APPOPTS = APPOPTS + "dMYAPP "
APPOPTS = APPOPTS + "-installed.ear.destination "
APPOPTS = APPOPTS + "/WebSphereDevelopment/MYAPP/dtl/currr/deployment/ "
APPOPTS = APPOPTS + "-MapModulesToServers [ "
APPOPTS = APPOPTS + "MyApp MyApp.war,WEB-INF/web.xml WebSphere:"
APPOPTS = APPOPTS + "cell=dtl85cel,node=wlemyAppa,server=WLEMYAPP] "
AdminApp.install(EARFILE, APPOPTS)
The detailed trace log:
java.lang.RuntimeException: java.lang.RuntimeException: Deploying /WebSp
" follows:

 com.ibm.websphere.management.application.client.AppDeploymentException:
    at com.ibm.websphere.management.application.AppManagementFactory.handle
    at com.ibm.websphere.management.application.AppManagementFactory.readAr
    at com.ibm.websphere.management.application.AppManagementFactory.readAr
    at com.ibm.ws.scripting.AdminAppClient.getController(AdminAppClient.jav
    at com.ibm.ws.scripting.AdminAppClient.commonPrepare(AdminAppClient.jav
    at com.ibm.ws.scripting.AdminAppClient.doInstall(AdminAppClient.java:22
    at com.ibm.ws.scripting.AdminAppClient.doInstall(AdminAppClient.java:20
    at com.ibm.ws.scripting.AdminAppClient.install(AdminAppClient.java:1414
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcce
    at java.lang.reflect.Method.invoke(Method.java:620)
…
Caused by: java.lang.RuntimeException: Deploying /WebSphere/was85/dtl85c
    at com.ibm.ws.sip.application.frameworkext.SarToEarConverterTask.isConv
    at com.ibm.ws.sip.application.frameworkext.SarToEarConverterTask.execut
    at com.ibm.ws.management.application.client.AppInstallHelper.processEar
    at com.ibm.ws.management.application.client.AppInstallHelper.processEar
    at com.ibm.ws.management.application.client.AppInstallHelper.getAppDepl
    at com.ibm.websphere.management.application.AppManagementFactory.readAr
    ... 55 more
Caused by: java.util.zip.ZipException: error in opening zip file
    at java.util.zip.ZipFile.open(Native Method)
    at java.util.zip.ZipFile.<init>(ZipFile.java:231)
    at java.util.zip.ZipFile.<init>(ZipFile.java:161)
    at java.util.zip.ZipFile.<init>(ZipFile.java:132)
    at com.ibm.ws.sip.application.frameworkext.SarToEarConverterTask.isCon
    ... 60 more

[11/9/15 11:14:24:931 CST] 00000001 AbstractShell E WASX7120E: Diagno
java.lang.RuntimeException: java.lang.RuntimeException: Deploying /WebS
" follows:

 com.ibm.websphere.management.application.client.AppDeploymentException
I tried via the Admin Console wizard and I got this message:
The following exception occurred. Check log for details.
com.ibm.websphere.management.application.client.AppDeploymentException: [Root exception is java.lang.RuntimeException: Deploying /WebSphere/was85/dtl85cel/ledm85nd/DeploymentManager/profiles/default/wstemp/867530631/upload/MyAppEAR.ear failed.]
Firstly, thank you to all who tried to help me here. I want to record what fixed my issue, for future searchers: I was transferring the EAR file from my Windows machine to the mainframe via the Open Text FTP feature, and in my configuration it was set to Auto-Select. There are two transfer modes allowed: Binary and ASCII. Auto-Select was probably choosing ASCII; the correct mode is Binary.
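A quick way to confirm on the target system whether the transferred EAR is still a valid archive (a small sketch; the path is the one from the question):
import java.util.zip.ZipFile;
public class CheckEar {
    public static void main(String[] args) throws Exception {
        // An ASCII-mode transfer corrupts the archive, and this constructor then throws
        // java.util.zip.ZipException: error in opening zip file
        ZipFile ear = new ZipFile("/usr/MyCompanyApps/MyArea/originEAR/MyAppEAR.ear");
        System.out.println("EAR opened fine, entries: " + ear.size());
        ear.close();
    }
}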
A few years back I remember we had this issue.
At that time our /tmp filesystem was at 98%; we cleared it, ran the job again, and it was successful.
Another point is permissions: you can clear wstemp and re-run.
Oh ok.
Can you delete the contents of wstemp and try to redeploy?
The wstemp folder contains WebSphere temporary workspace files.
Also, can you try to deploy the EAR file manually via the Admin console?
I'm having a problem establishing a connection to SAP in my Java program.
I'm following the examples that come with the JCo download, but I always get this error:
com.sap.conn.jco.JCoException: (102) RFC_ERROR_COMMUNICATION: Connect to SAP gateway failed
Connection parameters: TYPE=A DEST=ABAP_AS_WITHOUT_POOL ASHOST=xx.xx.x.xx SYSNR=00 PCS=1
LOCATION CPIC (TCP/IP) on local host with Unicode
ERROR partner 'xx.xx.x.xx:3300' not reached
TIME Wed Jul 08 08:18:28 2015
RELEASE 711
COMPONENT NI (network interface)
VERSION 39
RC -10
MODULE nixxi.cpp
LINE 3147
DETAIL NiPConnect2: xx.xx.x.xx:3300
SYSTEM CALL connect
ERRNO 10060
ERRNO TEXT WSAETIMEDOUT: Connection timed out
COUNTER 2
I don't know what it can be; I'm entering the correct connection properties (ashost, user, passwd, sysnr, etc.).
Has anybody else had a problem like this?
This is my connection code:
Properties connectProperties = new Properties();
connectProperties.setProperty(DestinationDataProvider.JCO_ASHOST, "xx.xx.x.xx");
connectProperties.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
connectProperties.setProperty(DestinationDataProvider.JCO_CLIENT, "020");
connectProperties.setProperty(DestinationDataProvider.JCO_USER, "xxxxxx");
connectProperties.setProperty(DestinationDataProvider.JCO_PASSWD, "xxxxxxx");
connectProperties.setProperty(DestinationDataProvider.JCO_LANG, "en");
createDataFile(ABAP_AS, "jcoDestination", connectProperties);
After that I just instantiate the object with those properties and call the connect method, which is written like this:
JCoDestination destination = JCoDestinationManager.getDestination(ABAP_AS);
System.out.println("Attributes:");
System.out.println(destination.getAttributes());
System.out.println();
I'm working in Java, using NetBeans, and sapjco3.jar is added to my libraries.
Do I have to do anything with the DLL file that comes with it?
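For reference, the createDataFile helper called earlier usually looks something like the following in the JCo examples (a sketch; the exact file handling and comment text are assumptions):
import java.io.File;
import java.io.FileOutputStream;
import java.util.Properties;
static void createDataFile(String name, String suffix, Properties properties) throws Exception {
    // JCoDestinationManager later reads <name>.jcoDestination from the working directory
    File destCfg = new File(name + "." + suffix);
    FileOutputStream fos = new FileOutputStream(destCfg, false);
    properties.store(fos, "SAP JCo destination - for tests only");
    fos.close();
}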
I've downloaded and started up Cloudera's Hadoop Demo VM for CDH4 (running Hadoop 2.0.0). I'm trying to write a Java program that will run from my Windows 7 machine (the same machine/OS that the VM is running on). I have a sample program like:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public static void main(String[] args) {
try{
Configuration conf = new Configuration();
conf.addResource("config.xml");
FileSystem fs = FileSystem.get(conf);
FSDataOutputStream fdos=fs.create(new Path("/testing/file01.txt"), true);
fdos.writeBytes("Test text for the txt file");
fdos.flush();
fdos.close();
fs.close();
}catch(Exception e){
e.printStackTrace();
}
}
My config.xml file only has one property defined: fs.default.name=hdfs://CDH4_IP:8020.
When I run it I’m getting the following exception:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /testing/file01.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2170)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:471)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
at org.apache.hadoop.ipc.Client.call(Client.java:1160)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy9.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy9.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:290)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
I've looked around the internet, and it seems this happens when disk space is low, but that's not the case for me. When I run "hdfs dfsadmin -report" I get the following:
Configured Capacity: 25197727744 (23.47 GB)
Present Capacity: 21771988992 (20.28 GB)
DFS Remaining: 21770715136 (20.28 GB)
DFS Used: 1273856 (1.21 MB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Decommission Status : Normal
Configured Capacity: 25197727744 (23.47 GB)
DFS Used: 1273856 (1.21 MB)
Non DFS Used: 3425738752 (3.19 GB)
DFS Remaining: 21770715136 (20.28 GB)
DFS Used%: 0.01%
DFS Remaining%: 86.4%
Last contact: Fri Jan 11 17:30:56 EST 2013
I can also run this code just fine from within the VM. I'm not sure what the problem is or how to fix it. This is my first time using Hadoop, so I'm probably missing something basic. Any ideas?
Update
The only thing I see in the logs is an exception similar to the one I get on the client:
java.io.IOException: File /testing/file01.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2170)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:471)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
I tried changing the permissions on the data directory (/var/lib/hadoop-hdfs/cache/hdfs/dfs/data) and that didn't fix it (I went so far as giving full access to everyone).
I noticed that when I browse HDFS via the Hue web app, the folder structure was created and the file does exist, but it is empty. I tried putting the file under the default user directory by using
FSDataOutputStream fdos=fs.create(new Path("testing/file04.txt"), true);
instead of
FSDataOutputStream fdos=fs.create(new Path("/testing/file04.txt"), true);
which makes the file path "/user/dharris/testing/file04.txt" ('dharris' is my Windows user), but that gave me the same kind of error.
I had the same problem.
In my case, the key to the problem was the following error message:
There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
It means that your HDFS client couldn't connect to your datanode on port 50010.
Because you connected to the HDFS namenode, you could get the datanode's status, but your HDFS client then failed to connect to the datanode itself.
(In HDFS, a namenode manages the file directories and the datanodes. When the HDFS client connects to a namenode, it finds the target file path and the address of the datanode that holds the data; it then communicates with that datanode directly. You can check those datanode URIs with netstat, because the HDFS client will try to communicate with the datanodes using the addresses returned by the namenode.)
I solved that problem by:
opening port 50010 (dfs.datanode.address) in the firewall,
adding the property "dfs.client.use.datanode.hostname" = "true" to the client configuration, and
adding the datanode's hostname to the hosts file on my client PC.
I'm sorry for my poor English skill.
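In client code, the second item above is a one-line configuration setting; a minimal sketch (using org.apache.hadoop.conf.Configuration as in the question's code; the same idea appears in the working example further down):
Configuration conf = new Configuration();
// Connect to datanodes by hostname instead of the VM-internal IP
// address that the namenode reports back to the client
conf.set("dfs.client.use.datanode.hostname", "true");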
Go to the Linux VM and check the hostname and IP address (use the ifconfig command).
Then, in the Linux VM, edit the /etc/hosts file with
IPADDRESS (SPACE) HOSTNAME
example :
192.168.110.27 clouderavm
and change all your Hadoop configuration files, such as
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
Change localhost or localhost.localdomain or 0.0.0.0 to your hostname,
then restart Cloudera Manager.
On the Windows machine, edit C:\Windows\System32\Drivers\etc\hosts
and add one line at the end with
your VM machine's IP and hostname (the same as you did in the /etc/hosts file on the VM):
VMIPADDRESS VMHOSTNAME
example :
192.168.110.27 clouderavm
Then check again; it should work. For detailed configuration, check the following video on YouTube:
https://www.youtube.com/watch?v=fSGpYHjGIRY
Add the given property in hdfs-site.xml:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
and also add this file in your program:
conf.addResource("hdfs-site.xml");
stop hadoop
stop-all.sh
then start
start-all.sh
I ran into a similar issue and have two pieces of information that may help you.
The first thing I realized is that I was using an SSH tunnel to access the namenode, and when the client code tried to access the datanode it could not find it, because the tunnel somehow messed up the communication. I then ran the client on the same box as the Hadoop namenode, and that solved the problem. In short, a non-standard network configuration confused Hadoop when locating the datanode.
The reason I used the SSH tunnel is that I couldn't access the namenode remotely; I thought it was due to a port restriction by the admin, so I used the tunnel to bypass it. But it turned out to be a misconfiguration of Hadoop.
In core-site.xml after I changed
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
to
<value>hdfs://host_name:9000</value>
I no longer needed the SSH tunnel and I can access HDFS remotely.
Since I found many questions like this one while searching for the exact same issue, I thought I would share what finally worked for me. I found this forum post on Hortonworks: https://community.hortonworks.com/questions/16837/cannot-copy-from-local-machine-to-vm-datanode-via.html
The answer was truly understanding what calling new Configuration() means and setting the correct parameters as I needed them. In my case it was exactly the one mentioned in that post. So my working code looks like this:
try {
Configuration config = new Configuration();
config.set("dfs.client.use.datanode.hostname", "true");
Path pdFile = new Path("stgicp-" + pd);
FileSystem dFS = FileSystem.get(new URI("hdfs://" + HadoopProperties.HIVE_HOST + ":" + HadoopProperties.HDFS_DEFAULT_PORT), config,
HadoopProperties.HIVE_DEFAULT_USER);
if (dFS.exists(pdFile)) {
dFS.delete(pdFile, false);
}
FSDataOutputStream outStream = dFS.create(pdFile);
for (String sjWLR : processWLR.get(pd)) {
outStream.writeBytes(sjWLR);
}
outStream.flush();
outStream.close();
dFS.delete(pdFile, false);
dFS.close();
} catch (IOException | URISyntaxException | InterruptedException e) {
log.error("WLR file processing error: " + e.getMessage());
}
In the Hadoop configuration, the default replication is set to 3. Check it once and change it according to your requirements.
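If you would rather set it from client code than in hdfs-site.xml, a minimal sketch (using the same Configuration object that is passed to FileSystem.get):
Configuration conf = new Configuration();
// Write new files with a single replica, matching the single-datanode demo VM
conf.set("dfs.replication", "1");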
You can try deleting the data (dfs/data) folder manually and formatting the namenode. You can then start Hadoop.
From the error message, the replication factor seems to be fine, i.e. 1.
It seems the datanode is not functioning properly or has permission issues.
Check the permissions, and check the status of the datanode from the user you are trying to run Hadoop as.
I had a similar problem; in my case I just emptied the following folder: ${hadoop.tmp.dir}/nm-local-dir/usercache/{{hdfs_user}}/appcache/
It appears to be some issue with the FS.
Either the parameters in core-site.xml are not matching the file it is trying to read,
OR
there is some mismatch in the paths (I see a WINDOWS reference in there).
You can use the Cygwin tool to set up the path and place it where the datanode and temp file locations are, and that should do the trick.
Location : $/bin/cygpath.exe
P.S. Replication does NOT seem to be the primary issue here, in my opinion.
Here is how I create files in the HDFS:
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
FileSystem hdfs = FileSystem.get(context.getConfiguration()); // context: e.g. the MapReduce task context providing the job Configuration
Path outFile = new Path("/path to store the output file");
String line1 = ""; // start empty so concat() below doesn't throw a NullPointerException
if (!hdfs.exists(outFile)) {
    // File doesn't exist yet: create it and write the data
    OutputStream out = hdfs.create(outFile);
    BufferedWriter br = new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
    br.write("whatever data" + "\n");
    br.close();
    hdfs.close();
}
else {
    // File exists: read the old contents, then rewrite them plus the new data
    String line2 = null;
    BufferedReader br1 = new BufferedReader(new InputStreamReader(hdfs.open(outFile)));
    while ((line2 = br1.readLine()) != null) {
        line1 = line1.concat(line2) + "\n";
    }
    br1.close();
    hdfs.delete(outFile, true);
    OutputStream out = hdfs.create(outFile);
    BufferedWriter br2 = new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
    br2.write(line1 + "new data" + "\n");
    br2.close();
    hdfs.close();
}
}