I am trying to run a Hadoop 2.2.0 MapReduce job on my local single-node cluster, installed by following this tutorial:
http://codesfusion.blogspot.co.at/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
However, on the server side the following exception is thrown:
org.apache.hadoop.ipc.RpcNoSuchProtocolException: Unknown protocol: org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:527)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:566)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
Is there a way for me to configure Protobuf RPC to be available on the server side? Do I need the Hadoop native libraries for this? Or can I somehow switch the client side to Writables/Avro RPC?
OK, I found the reason: I was connecting to the wrong port for the YARN ResourceManager. The correct configuration is:
yarn.resourcemanager.address=localhost:8032
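In case it helps, a minimal client-side sketch of applying that setting through the usual Configuration/Job API (as an alternative to editing yarn-site.xml; the job name is a placeholder):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Point the client at the ResourceManager RPC port (8032), not the web UI port.
Configuration conf = new Configuration();
conf.set("yarn.resourcemanager.address", "localhost:8032");
conf.set("mapreduce.framework.name", "yarn");
Job job = Job.getInstance(conf, "example-job"); // "example-job" is a placeholder name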
In my case, I was getting the same error in the logs when there was not enough memory for both the ApplicationMaster and the YARN containers. I reduced the yarn.app.mapreduce.am.resource.mb property and it worked on my single-node installation.
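As a sketch (the values are illustrative, not tuned recommendations), the same knob can be turned down programmatically on the job configuration:
import org.apache.hadoop.conf.Configuration;

// Illustrative values only: shrink the MapReduce ApplicationMaster so that it and
// a task container both fit into the node's YARN memory on a single-node setup.
Configuration conf = new Configuration();
conf.setInt("yarn.app.mapreduce.am.resource.mb", 512);
conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx384m"); // heap must stay below the AM container size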
Related
I am attempting to connect to a Cloudera environment using Kafka and stream data from a topic. I have been able to do this successfully in Java but not in Python. Python appears to connect but is unable to receive the logs. I don't believe my paths or servers are incorrect, because I have connected via Java with the same information.
I have done this successfully before with another Cloudera environment, in Python, and I'm basically copying and pasting from that code. With that being said, is it possible that there are some settings in Cloudera for this environment that are preventing me from receiving the logs via Python?
With Python:
from kafka import KafkaConsumer
from java.lang import System

System.setProperty('java.security.auth.login.config', '<path to jaas.conf>')
System.setProperty('java.security.krb5.conf', '<path to krb5.conf>')

broker = ['<broker1>:9092', '<broker2>:9092', '<broker3>:9092']
try:
    consumer = KafkaConsumer(bootstrap_servers=broker,
                             sasl_kerberos_service_name='kafka',
                             auto_offset_reset='earliest', api_version=(1, 0, 1),
                             session_timeout_ms=30000, enable_auto_commit=True,
                             sasl_mechanism='GSSAPI',
                             security_protocol='SASL_PLAINTEXT')
except Exception as e:
    message_consumer = "Error connecting to kafka" + e.message
    sendAlertEmail(message_consumer)
    logger1.error("Failed to connect to brokers" + e.message)
To test the program I do:
for message in consumer:
    print(message)
When I attempt to access the environment, it never makes it into the loop. However, I know there are logs for the topic.
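For comparison, the working Java setup the question refers to usually looks roughly like this (a sketch only; the class name, group id, topic, and file paths are placeholders, and it assumes a recent kafka-clients dependency):
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KerberosConsumerSketch {
    public static void main(String[] args) {
        // Kerberos configuration is picked up from JVM system properties.
        System.setProperty("java.security.auth.login.config", "<path to jaas.conf>");
        System.setProperty("java.security.krb5.conf", "<path to krb5.conf>");

        Properties props = new Properties();
        props.put("bootstrap.servers", "<broker1>:9092,<broker2>:9092,<broker3>:9092");
        props.put("group.id", "example-group"); // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("<topic>")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}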
I have a Flink project that connects to NiFi to pull data. The setup to get the datastream works just fine when running locally.
import org.apache.nifi.remote.client.SiteToSiteClient;
import org.apache.nifi.remote.client.SiteToSiteClientConfig;

SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
        .url("http://1.2.3.4:8080/nifi")
        .portName("MyPortName")
        .requestBatchCount(5)
        .buildConfig();
But when I add the .jar to the remote cluster and run the job it throws this:
java.net.UnknownHostException
at sun.nio.ch.Net.translateException(Net.java:177)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:127)
at org.apache.nifi.remote.client.socket.EndpointConnectionPool.establishSiteToSiteConnection(EndpointConnectionPool.java:712)
at org.apache.nifi.remote.client.socket.EndpointConnectionPool.establishSiteToSiteConnection(EndpointConnectionPool.java:685)
at org.apache.nifi.remote.client.socket.EndpointConnectionPool.getEndpointConnection(EndpointConnectionPool.java:301)
at org.apache.nifi.remote.client.socket.SocketClient.createTransaction(SocketClient.java:129)
at org.apache.flink.streaming.connectors.nifi.NiFiSource.run(NiFiSource.java:90)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
at org.apache.flink.streaming.runtime.tasks.StoppableSourceStreamTask.run(StoppableSourceStreamTask.java:39)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655)
at java.lang.Thread.run(Thread.java:745)
The only cause I can find for an UnknownHostException is that the hostname's IP can't be resolved, but I am already giving the IP directly. There was an earlier issue where it was unable to connect to NiFi because I had to set which IPs are allowed to access the NiFi instance. I added the AWS server as allowed and that fixed it, but obviously now I have this.
Any help is greatly appreciated!
I figured out the problem: I had my NiFi cluster and my Flink cluster in different regions. I moved the Flink cluster to the same region, and using either the public or the private URL for the cluster, it works fine.
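For completeness, a minimal sketch of wiring that config into the Flink job (the class names are from the flink-connector-nifi module; clientConfig refers to the SiteToSiteClientConfig built in the question):
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.nifi.NiFiDataPacket;
import org.apache.flink.streaming.connectors.nifi.NiFiSource;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// With both clusters in the same region, either the public or the private NiFi URL resolves.
DataStream<NiFiDataPacket> stream = env.addSource(new NiFiSource(clientConfig));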
I am really stuck on an AIX Java issue that doesn't match other issues on SO or on the web.
My application code runs fine on another AIX server with the exact same JRE (IBM AIX Java 1.8), but does not run on the server I need it to.
Both servers are AIX 7.1, running the same JAR and same JRE from the same tarball.
I'm getting the following error when using a Spring RestTemplate.exchange() to retrieve and unmarshal some JSON.
It must be a server configuration issue, but I'm very stuck and would appreciate any help!
Caused by: java.net.SocketException: A system call received a parameter that is not valid.
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:127)
at java.net.SocketInputStream.read(SocketInputStream.java:181)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at com.ibm.jsse2.a.a(a.java:209)
at com.ibm.jsse2.a.b(a.java:41)
at com.ibm.jsse2.a.a(a.java:193)
at com.ibm.jsse2.as.a(as.java:268)
at com.ibm.jsse2.as.a(as.java:745)
at com.ibm.jsse2.e.read(e.java:56)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:257)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:297)
at java.io.BufferedInputStream.read(BufferedInputStream.java:356)
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:564)
at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:621)
at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:708)
at java.io.FilterInputStream.read(FilterInputStream.java:144)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3352)
at java.io.FilterInputStream.read(FilterInputStream.java:144)
at java.io.PushbackInputStream.read(PushbackInputStream.java:197)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.loadMore(UTF8StreamJsonParser.java:178)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.parseEscapedName(UTF8StreamJsonParser.java:1749)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.slowParseName(UTF8StreamJsonParser.java:1654)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._parseName(UTF8StreamJsonParser.java:1484)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:700)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:120)
at com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:149)
at com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:18)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2993)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2158)
at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.readJavaType(AbstractJackson2HttpMessageConverter.java:222)
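For context, the call in question is the standard RestTemplate pattern; a minimal sketch (the URL is a placeholder and MyDto is a stand-in for the actual response bean) looks like:
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

// MyDto is a hypothetical bean; Jackson unmarshals the JSON array body into MyDto[].
RestTemplate restTemplate = new RestTemplate();
ResponseEntity<MyDto[]> response = restTemplate.exchange(
        "https://example.com/api/items", HttpMethod.GET, null, MyDto[].class);
MyDto[] items = response.getBody();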
It's probably a firewall issue.
Communication is cut abruptly by some sort of firewall, so the socket is closed by the OS, which gives an error when you try to read from it.
I am getting "Permission denied (publickey)" error when starting hadoop multi node cluster in AWS. But when i do ssh to each individual slave node without starting the cluster then i am able to access them. I did all the settings correct and checked twice.Any help on what may be wrong?
The problem was that I had created a new user, hduser, and then configured Hadoop under it.
I redid all the setup (the Hadoop configuration) under the ubuntu user (the default for EC2 Ubuntu instances) and it worked. I think it's better to use the default users on AWS instances than to create a new one and then struggle with permissions and other errors.
Here is what I have done in a nutshell:
STEP 1: I have successfully configured Hadoop 2.6 on my laptop (single node) and run a sample MapReduce job.
STEP 2: I cloned the Tez repository, successfully built version 0.8.0, copied the JAR files into HDFS, and exported the required variables. I also changed the value of mapreduce.framework.name to yarn-tez in mapred-site.xml.
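A sketch of what step 2 amounts to in configuration terms (the HDFS path is a placeholder for wherever the Tez jars were uploaded):
import org.apache.hadoop.conf.Configuration;

// Route MapReduce jobs through Tez and tell Tez where its jars live in HDFS.
Configuration conf = new Configuration();
conf.set("mapreduce.framework.name", "yarn-tez");              // from mapred-site.xml
conf.set("tez.lib.uris", "hdfs:///apps/tez-0.8.0/tez.tar.gz"); // placeholder path, normally set in tez-site.xml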
But when I try to run a Tez orderedwordcount job, I get this error:
15/07/04 18:45:03 INFO ipc.Client: Retrying connect to server: hostname/hostIP:57339. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/07/04 18:45:12 INFO client.DAGClientImpl: DAG completed. FinalState=FAILED
I have checked the ResourceManager and it is listening on port 8030, but it seems the client tries to connect to a random port. Is that correct?
What can I do to get it working correctly?
It seems that it was a problem with this version (0.8.0) connecting to the ResourceManager. I compiled and integrated the previous stable release (0.7.0) and everything is good to go now. I hope they figure the problem out.
From your logs it seems like a firewall issue rather than an issue with the Tez version. And it is independent of Tez; you can face this even if you run only Hadoop.
Hadoop uses multiple ports for communication with clients and between service components. To enable Hadoop communication, open the specific ports that Hadoop uses.
To open specific ports, you can set the access rules in Windows. For example, the following command will open up port 80 in the active Windows Firewall:
netsh advfirewall firewall add rule name=AllowRPCCommunication dir=in action=allow protocol=TCP localport=80
For more see here http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0-Win/bk_HDP_Install_Win/content/ref-79239257-778e-42a9-9059-d982d0c08885.1.html