DFS and MapReduce in Hadoop 2.4.1 - Java

I am using Hadoop 2.4.1. When I use DFS in Hadoop 2.4.1, everything works fine. I always use the start-dfs.sh script to start it, so the following services are up and running on the system:
datanode, namenode and secondary namenode - which is exactly what I expect.
Yesterday I tried to configure mapred-site.xml in etc/hadoop/mapred-site.xml as follows:
**conf/mapred-site.xml:**
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
and then I did the following:
1. Formatted the namenode
2. Ran start-all.sh
When I look into the logs, only the following logs are available:
1. hadoop-datanode.log + out
2. hadoop-namenode.log + out
3. hadoop-secondarynamenode.log + out
4. yarn-nodemanager.log + out
5. yarn-resourcemanager.log + out
When I ran jps, only the following services were running:
1. secondarynamenode
2. namenode
3. datanode
4. nodemanager
5. resourcemanager
I don't find the JobTracker there, and the MapReduce logs are not available either. Do we need to specify something extra for MapReduce in Hadoop 2.4.1?
Additional info: I checked the web console on port 50030 (JobTracker), and it is not available.
I also grepped for anything listening on port 9001, and nothing is running there.
Any help is appreciated.

From Hadoop 2.0 onwards, the default MapReduce processing framework has changed from classic MapReduce to YARN. When you use start-all.sh to start Hadoop, it internally invokes start-yarn.sh and start-dfs.sh, so there is no JobTracker anymore; the ResourceManager and NodeManagers you see in jps take its place.
If you want to use classic MapReduce (MRv1) instead of YARN, you should start the DFS and MapReduce services separately using start-dfs.sh and start-mapred.sh (the MapReduce 1 binaries are located in the directory ${HADOOP_HOME}/bin-mapreduce1 and all configuration files are under ${HADOOP_HOME}/etc/hadoop-mapreduce1).
All YARN and HDFS startup scripts are located in the sbin directory of your Hadoop home, where you will not find a start-mapred.sh script; start-mapred.sh lives in the bin-mapreduce1 directory.
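If instead you stay on YARN (the default in 2.4.1), the mapred.job.tracker property from your mapred-site.xml is not used; the usual Hadoop 2.x setting is mapreduce.framework.name. A minimal sketch of etc/hadoop/mapred-site.xml for that case:

<configuration>
  <property>
    <!-- run MapReduce jobs on YARN instead of a JobTracker -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Jobs submitted with hadoop jar then appear in the ResourceManager web UI (default port 8088) rather than the old JobTracker UI on port 50030.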

Related

What is a simple, effective way to debug custom Kafka connectors?

I'm working on a couple of Kafka connectors and I don't see any errors in their creation/deployment in the console output; however, I am not getting the results I'm looking for (no results whatsoever for that matter, desired or otherwise). I based these connectors on Kafka's example FileStream connectors, so my debugging technique was based on the SLF4J Logger used in the example. I've searched the console output for the log messages I expected to be produced, but to no avail. Am I looking in the wrong place for these messages? Or is there perhaps a better way to debug these connectors?
Example uses of the SLF4J Logger that I referenced for my implementation:
Kafka FileStreamSinkTask
Kafka FileStreamSourceTask
I will try to reply to your question in a broad way. A simple way to do Connector development could be as follows:
Structure and build your connector source code by looking at one of the many Kafka Connectors available publicly (you'll find an extensive list available here: https://www.confluent.io/product/connectors/ )
Download the latest Confluent Open Source edition (>= 3.3.0) from https://www.confluent.io/download/
Make your connector package available to Kafka Connect in one of the following ways:
Store all your connector jar files (the connector jar plus dependency jars, excluding the Connect API jars) in a location on your filesystem and enable plugin isolation by adding this location to the plugin.path property in the Connect worker properties. For instance, if your connector jars are stored in /opt/connectors/my-first-connector, you will set plugin.path=/opt/connectors in your worker's properties (see below).
Store all your connector jar files in a folder under ${CONFLUENT_HOME}/share/java. For example: ${CONFLUENT_HOME}/share/java/kafka-connect-my-first-connector. (Needs to start with kafka-connect- prefix to be picked up by the startup scripts). $CONFLUENT_HOME is where you've installed Confluent Platform.
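For illustration, assuming the /opt/connectors example above, the relevant line in the worker properties file would look something like this (a sketch, not the full file):

# enable classloading isolation for connector plugins (path from the example above)
plugin.path=/opt/connectors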
Optionally, increase your logging by changing the log level for Connect in ${CONFLUENT_HOME}/etc/kafka/connect-log4j.properties to DEBUG or even TRACE.
Use Confluent CLI to start all the services, including Kafka Connect. Details here: http://docs.confluent.io/current/connect/quickstart.html
Briefly: confluent start
Note: The Connect worker's properties file currently loaded by the CLI is ${CONFLUENT_HOME}/etc/schema-registry/connect-avro-distributed.properties. That's the file you should edit if you choose to enable classloading isolation but also if you need to change your Connect worker's properties.
Once you have a Connect worker running, start your connector by running:
confluent load <connector_name> -d <connector_config.properties>
or
confluent load <connector_name> -d <connector_config.json>
The connector configuration can be in either Java properties or JSON format.
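For example, a minimal properties file for a connector modeled on Kafka's FileStream source example (the name, file path, and topic are placeholders; for your own connector, connector.class would be your connector's class):

name=my-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/source.txt
topic=file-source-topic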
Run
confluent log connect to open the Connect worker's log file, or navigate directly to where your logs and data are stored by running
cd "$( confluent current )"
Note: you can change where your logs and data are stored during a session of the Confluent CLI by setting the environment variable CONFLUENT_CURRENT appropriately. E.g., given that /opt/confluent exists and is where you want to store your data, run:
export CONFLUENT_CURRENT=/opt/confluent
confluent current
Finally, a possible way to interactively debug your connector is to apply the following before starting Connect with the Confluent CLI:
confluent stop connect
export CONNECT_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
confluent start connect
and then attach your debugger (for instance, remotely to the Connect worker on the default port 5005). To stop running Connect in debug mode, just run unset CONNECT_DEBUG; unset DEBUG_SUSPEND_FLAG; when you are done.
I hope the above will make your connector development easier and ... more fun!
I love the accepted answer. One thing - the environment variables didn't work for me... I'm using Confluent Community Edition 5.3.1...
Here's what I did that worked...
I installed the Confluent CLI from here:
https://docs.confluent.io/current/cli/installing.html#tarball-installation
I ran Confluent using the command confluent local start
I got the Connect app details using the command ps -ef | grep connect
I copied the resulting command into an editor and added this arg (right after java):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I stopped Connect using the command confluent local stop connect
Then I ran the Connect command with the added arg.
Brief intermission ---
VS Code development is led by Erich Gamma - of Gang of Four fame, who also worked on Eclipse. VS Code is becoming a first-class Java IDE; see https://en.wikipedia.org/wiki/Erich_Gamma
Intermission over ---
Next I launched VS Code and opened the Debezium Oracle connector folder (cloned from here: https://github.com/debezium/debezium-incubator)
Then I chose Debug - Open Configurations
and entered a debugging configuration that attaches to port 5005 (see the sketch after the Connect command below)
and then ran the debugger - it hit my breakpoints!
The Connect command should look something like this:
/Library/Java/JavaVirtualMachines/jdk1.8.0_221.jdk/Contents/Home/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/logs -Dlog4j.configuration=file:/Users/myuserid/confluent-5.3.1/bin/../etc/kafka/connect-log4j.properties -cp /Users/myuserid/confluent-5.3.1/share/java/kafka/*:/Users/myuserid/confluent-5.3.1/share/java/confluent-common/*:/Users/myuserid/confluent-5.3.1/share/java/kafka-serde-tools/*:/Users/myuserid/confluent-5.3.1/bin/../share/java/kafka/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/dependant-libs-2.12.8/*:/Users/myuserid/confluent-5.3.1/bin/../support-metrics-client/build/libs/*:/usr/share/java/support-metrics-client/* org.apache.kafka.connect.cli.ConnectDistributed /var/folders/yn/4k6t1qzn5kg3zwgbnf9qq_v40000gn/T/confluent.CYZjfRLm/connect/connect.properties
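For reference, a VS Code attach configuration for this setup might look like the following entry under "configurations" in launch.json (a sketch; the name is arbitrary, and port 5005 matches the -agentlib arg added above):

{
    "type": "java",
    "name": "Attach to Kafka Connect",
    "request": "attach",
    "hostName": "localhost",
    "port": 5005
}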
The connector module is executed by the Kafka Connect framework. For debugging, we can use standalone mode and configure the IDE to use the ConnectStandalone main function as the entry point.
Create a debug configuration as follows; remember to tick "Include dependencies with 'Provided' scope" if it is a Maven project.
The connector properties file needs to specify the connector class name via "connector.class" for debugging.
The worker properties file can be copied from the Kafka folder, e.g. /usr/local/etc/kafka/connect-standalone.properties.
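As a sketch (file names and paths are examples), the same entry point can be exercised either from an IDE run configuration or from the command line:

# IDE run/debug configuration (conceptually):
#   Main class:        org.apache.kafka.connect.cli.ConnectStandalone
#   Program arguments: connect-standalone.properties my-connector.properties
# Equivalent command line from an Apache Kafka installation:
bin/connect-standalone.sh config/connect-standalone.properties my-connector.properties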

Oozie: Launch Map-Reduce from Oozie <java> action?

I am trying to execute a Map-Reduce task in an Oozie workflow using a <java> action.
O'Reilly's Apache Oozie (Islam and Srinivasan, 2015) notes that:
While it’s not recommended, Java action can be used to run Hadoop MapReduce jobs because MapReduce jobs are nothing but Java programs after all. The main class invoked can be a Hadoop MapReduce driver and can call Hadoop APIs to run a MapReduce job. In that mode, Hadoop spawns more mappers and reducers as required and runs them on the cluster.
However, I'm not having success using this approach.
The action definition in the workflow looks like this:
<java>
  <!-- Namenode etc. in global configuration -->
  <prepare>
    <delete path="${transformOut}" />
  </prepare>
  <configuration>
    <property>
      <name>mapreduce.job.queuename</name>
      <value>default</value>
    </property>
  </configuration>
  <main-class>package.containing.TransformTool</main-class>
  <arg>${transformIn}</arg>
  <arg>${transformOut}</arg>
  <file>${avroJar}</file>
  <file>${avroMapReduceJar}</file>
</java>
The Tool implementation's main() method looks like this:
public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new TransformTool(), args);
    if (res != 0) {
        throw new Exception("Error running MapReduce.");
    }
}
The workflow will crash with the "Error running MapReduce" exception above every time; how do I get the output of the MapReduce to diagnose the problem? Is there a problem with using this Tool to run a MapReduce application? Am I using the wrong API calls?
I am extremely disinclined to use the Oozie <map-reduce> action, as each action in the workflow relies on several separately versioned AVRO schemas.
What's the issue here? I am using the 'new' mapreduce API for the task.
Thanks for any help.
> how do I get the output of the MapReduce...
Back to the basics.
Since you don't care to mention which version of Hadoop and which version of Oozie you are using, I will assume a "recent" setup (e.g. Hadoop 2.7 w/ TimelineServer and Oozie 4.2). And since you don't mention which kind of interface you use (command-line? native Oozie/YARN UI? Hue?), I will give a few examples using the good old CLI.
> oozie jobs -localtime -len 10 -filter name=CrazyExperiment
Shows the last 10 executions of the "CrazyExperiment" workflow, so that you can inject the appropriate "Job ID" into the next commands.
> oozie job -info 0000005-151217173344062-oozie-oozi-W
Shows the status of that execution, from Oozie's point of view. If your Java action is stuck in PREP mode, then Oozie failed to submit it to YARN; otherwise you will find something like job_1449681681381_5858 under "External ID". But beware! The job_ prefix is a legacy thing; the actual YARN ID is application_1449681681381_5858.
> oozie job -log 0000005-151217173344062-oozie-oozi-W
Shows the Oozie log, as could be expected.
> yarn logs -applicationId application_1449681681381_5858
Shows the consolidated logs for the AppMaster (container #1) and the Java action Launcher (container #2) -- after execution is over. The stdout log for the Launcher contains a whole lot of Oozie debug stuff; the real stdout is at the very bottom.
In case your Java action successfully spawned another YARN job, and you were careful to display the child "application ID", you should be able to retrieve it there and run another yarn logs command against it.
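For that last point, here is a minimal sketch of what the Tool's run() method could look like so that the child job ID ends up in the Launcher stdout; TransformTool and the job wiring are illustrative, only the ID logging and the verbose waitForCompletion matter here:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// inside TransformTool (extends Configured implements Tool)
@Override
public int run(String[] args) throws Exception {
    Configuration conf = getConf();
    Job job = Job.getInstance(conf, "transform");
    job.setJarByClass(TransformTool.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // ... mapper/reducer/AVRO setup elided ...
    boolean ok = job.waitForCompletion(true); // verbose=true streams progress to stdout
    // job_... maps to application_... as described above; print it so it shows up
    // in the Launcher stdout and can be fed to `yarn logs -applicationId ...`
    System.out.println("Child job ID: " + job.getJobID());
    return ok ? 0 : 1;
}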
Enjoy your next 5 days of debugging ;-)

Apache Twill HelloWorld application fails - jar not found

Has anyone run into problems running the HelloWorld Twill example? My Application gets accepted but then transitions to the "FAILED" state.
Yarn application HelloWorldRunnable application_1406337868863_0013 completed with status FAILED
The YARN Web UI shows this as the error:
Application application_1406337868863_0013 failed 2 times due to AM Container for appattempt_1406337868863_0013_000002 exited with exitCode: -1000 due to: File file:/twill/HelloWorldRunnable/2ba08d9f-ca23-4363-a7be-426b93c88de2/appMaster.775a1137-6134-46e2-b270-fc466ce7fe91.jar does not exist
.Failing this attempt.. Failing the application.
Does YARN expect to find this jar on HDFS at the location above? It seems like the jar gets copied to my local FS at the location specified above but not to HDFS.
Looks like you don't have the Hadoop conf directory (e.g. /etc/hadoop/conf) on the classpath, so the local file system (file:/twill/...) is used instead of HDFS.
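As a sketch of the kind of fix this implies (paths, jar name, and main class are examples), make sure the directory containing core-site.xml is on the classpath of the JVM that launches the Twill application, and that fs.defaultFS there points at your namenode rather than the local file system:

# point the launcher at the Hadoop client configuration (example path)
export HADOOP_CONF_DIR=/etc/hadoop/conf
# include it on the classpath when launching the Twill application, e.g.
java -cp "$HADOOP_CONF_DIR:my-twill-app.jar:..." com.example.HelloWorld
# sanity check: this should print an hdfs:// URI, not file:///
hdfs getconf -confKey fs.defaultFS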

How to configure hadoop with eclipse

I am new to Hadoop. I have downloaded the Hortonworks Sandbox image and mounted it in VirtualBox. The Sandbox UI comes up when I type 192.168.56.101/ in Chrome, and I am able to log in to the Hadoop shell with the hue/hadoop username and password. Now I want to run a simple program in Eclipse. I have added the hadoop-0.18.3-eclipse-plugin to Eclipse and then tried the following steps:
1. Chose the Map/Reduce perspective in Eclipse.
2. Went to the Hadoop location editor and set:
host name: localhost
under Map/Reduce Master - port: 9000
under DFS Master - port: 9001
But I am getting this error
Cannot connect to the Map/Reduce location: localhost Call to
localhost/127.0.0.1:9001 failed on connection exception:
java.net.ConnectException: Connection refused: no further information
VirtualBox is running.
Add the required Hadoop dependency jar files to your Eclipse classpath.
In the main method of your MapReduce program, add these lines:
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://localhost:50000");
conf.set("mapreduce.job.tracker", "localhost:50001");
If you are running in a virtual machine, change localhost to the required IP address (where the Hadoop daemons run); you can get the IP address by typing ifconfig. Run the MapReduce program as a simple Java program and you will get the output in the Eclipse console. A minimal driver along these lines is sketched below.
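For illustration only, a minimal sketch of such a driver run directly from Eclipse; the class name, HDFS paths, and ports are placeholders (the conf.set values mirror the lines above and should point at wherever your Hadoop daemons actually listen):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SimpleDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the cluster; replace localhost with the VM's IP as noted above
        conf.set("fs.default.name", "hdfs://localhost:50000");
        conf.set("mapreduce.job.tracker", "localhost:50001");

        Job job = Job.getInstance(conf, "simple job");
        job.setJarByClass(SimpleDriver.class);
        job.setMapperClass(Mapper.class);   // identity mapper; replace with your own
        job.setReducerClass(Reducer.class); // identity reducer; replace with your own
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/user/hue/input"));    // example HDFS paths
        FileOutputFormat.setOutputPath(job, new Path("/user/hue/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}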

Hadoop with eclipse is not connecting

I am using Ubuntu 12.04 and I am trying to connect Hadoop in Eclipse. I successfully installed the plugin for 1.04 and am using Java 1.7 for this.
My configuration data is:
username: hduser, location name: test, Map/Reduce host/port: localhost:9101, M/R master host: localhost:9100.
My temp directory is /app/hduser/temp.
As per this location I set the advanced parameters, but I was not able to set fs.s3.buffer.dir as there was no such directory created like /app/hadoop/tmp//s3. I was also unable to set the map reduce master directory; I only found the local directory. I did not find mapred.jobtracker.persist.job.dir, nor the map red temp dir.
When I ran Hadoop in pseudo-distributed mode, I did not find any datanode running with jps either.
I am not sure what the problem is here. In Eclipse I got an error while setting the DFS server. I got a message like:
An internal error occurred during: "Connecting to DFS test".
org/apache/commons/configuration/Configuration
Thanks all
I was facing the same issue. Later found this:
Hadoop eclipse mapreduce is not working?
The main blog post is this. HTH someone who is looking for a solution.
