Java program terminating after ObjectMapper.writeValue(System.out, responseData) - Jackson Library

I'm using the Jackson library to create JSON objects, but when I call mapper.writeValue(System.out, responseData), the program terminates. Here is my code:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.codehaus.jackson.JsonGenerationException;
import org.codehaus.jackson.map.JsonMappingException;
import org.codehaus.jackson.map.ObjectMapper;

public class Test {
    public static void main(String[] args) {
        new Test().test();
    }

    public void test() {
        ObjectMapper mapper = new ObjectMapper();
        Map<String, Object> responseData = new HashMap<String, Object>();
        responseData.put("id", 1);
        try {
            mapper.writeValue(System.out, responseData);
            System.out.println("done");
        } catch (JsonGenerationException e) {
            e.printStackTrace();
        } catch (JsonMappingException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
After this executes, the console shows {"id":1}, but does not show "done".

The problem is in the Jackson implementation: ObjectMapper._configAndWriteValue calls Utf8Generator.close(), which in turn calls PrintStream.close() on System.out. The program doesn't actually terminate early; System.out is simply closed, so the later println("done") produces no output (PrintStream swallows the write error silently).
I'd log an issue at https://jira.codehaus.org/browse/JACKSON
To change the default behavior of the target being closed, you can do the following:
mapper.configure(JsonGenerator.Feature.AUTO_CLOSE_TARGET, false);
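Putting it together, a minimal sketch of the fixed program (the configure call is from the answer above; the rest mirrors the question's code):

import java.util.HashMap;
import java.util.Map;
import org.codehaus.jackson.JsonGenerator;
import org.codehaus.jackson.map.ObjectMapper;

public class Test {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Tell Jackson not to close the target stream (System.out) after writing.
        mapper.configure(JsonGenerator.Feature.AUTO_CLOSE_TARGET, false);
        Map<String, Object> responseData = new HashMap<String, Object>();
        responseData.put("id", 1);
        mapper.writeValue(System.out, responseData);
        System.out.println("done"); // now printed, since System.out stays open
    }
}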

Separately: when declaring field names in your data/getter classes, the first letter should be lowercase, per the JavaBean conventions Jackson uses to map properties.
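For example (a hypothetical POJO; Jackson derives the JSON property name from the getter):

public class Customer {
    private String firstName; // lowercase first letter, serialized as "firstName"

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
}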

Java status 400 for URL

Hello - I'm trying to download a file using Apache Commons FileUtils, but it always fails with a 400 error. The file's URL is valid, because I have successfully downloaded it many times in the browser. Any ideas?
java.io.IOException: Server returned HTTP response code: 400 for URL: http://www.nikaia-hosp.gr/img/ΤΕΛΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ ΓΙΑ ΥΠΕΡΗΧΟ ΓΥΝΑΙΚΟΛΟΓΙΚΟ ΜΑΙΕΥΤΙΚΟ ΠΡΟΓΕΝΝΗΤΙΚΟΥ ΕΛΕΓΧΟΥ.pdf
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at java.net.URL.openStream(URL.java:1045)
    at org.apache.commons.io.FileUtils.copyURLToFile(FileUtils.java:1478)
    at com.nikaia.main.NikaiaReader.Downloader.download(Downloader.java:17)
    at com.nikaia.main.NikaiaReader.Downloader.main(Downloader.java:32)
import java.io.File;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public class Downloader {
    public static void download(String url, String filename) {
        try {
            FileUtils.copyURLToFile(new URL(url),
                    new File(PropertyReader.readProperty("ExtractedFilesPath") + "/" + filename));
            try {
                Thread.sleep(Integer.parseInt(PropertyReader.readProperty("downloadTimeout")) * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String ar[]) {
        download("http://www.nikaia-hosp.gr/img/ΤΕΛΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ ΓΙΑ ΥΠΕΡΗΧΟ ΓΥΝΑΙΚΟΛΟΓΙΚΟ ΜΑΙΕΥΤΙΚΟ ΠΡΟΓΕΝΝΗΤΙΚΟΥ ΕΛΕΓΧΟΥ.pdf", "stupid.pdf");
    }
}
OK, answer found. I compared the browser's encoded URL with the URL that Java's UTF-8 encoding returns: the browser had %20 in the URL, but Java had +.
I replaced every + with %20 in Java and it is working.
import java.io.File;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLEncoder;
import org.apache.commons.io.FileUtils;

public class Downloader {
    public static void download(String url, String filename) {
        try {
            String base = "http://www.nikaia-hosp.gr/img/";
            if (url.contains(base)) {
                FileUtils.copyURLToFile(
                        new URL(base + URLEncoder.encode(url.replace(base, ""), "UTF-8").replaceAll("\\+", "%20")),
                        new File(PropertyReader.readProperty("ExtractedFilesPath") + "/" + filename));
            } else {
                FileUtils.copyURLToFile(new URL(url),
                        new File(PropertyReader.readProperty("ExtractedFilesPath") + "/" + filename));
            }
            try {
                Thread.sleep(Integer.parseInt(PropertyReader.readProperty("downloadTimeout")) * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String ar[]) {
        download(
                "http://www.nikaia-hosp.gr/img/ΤΕΛΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ ΓΙΑ ΥΠΕΡΗΧΟ ΓΥΝΑΙΚΟΛΟΓΙΚΟ ΜΑΙΕΥΤΙΚΟ ΠΡΟΓΕΝΝΗΤΙΚΟΥ ΕΛΕΓΧΟΥ.pdf",
                "stupid.pdf");
    }
}
This worked for me. The problem was the encoding: you need to encode only the path of the URL.
InputStream in = new URL(url).openStream();
FileUtils.copyToFile(in,new File(filename));
First open a stream from the URL, then copy the stream's data into a file using the copyToFile method.
Your code will be:
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import org.apache.commons.io.FileUtils;
import org.springframework.web.util.UriUtils;

public static void download(String url, String filename) {
    try {
        // changed these two lines: encode the path, then open a stream
        URL encodedUrl = new URL(UriUtils.encodePath(url, "UTF-8"));
        InputStream in = encodedUrl.openStream();
        FileUtils.copyToFile(in, new File(PropertyReader.readProperty("ExtractedFilesPath") + "/" + filename));
        try {
            Thread.sleep(Integer.parseInt(PropertyReader.readProperty("downloadTimeout")) * 1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

public static void main(String ar[]) {
    download("http://www.nikaia-hosp.gr/img/ΤΕΛΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ ΓΙΑ ΥΠΕΡΗΧΟ ΓΥΝΑΙΚΟΛΟΓΙΚΟ ΜΑΙΕΥΤΙΚΟ ΠΡΟΓΕΝΝΗΤΙΚΟΥ ΕΛΕΓΧΟΥ.pdf", "stupid.pdf");
}
Then add this dependency to your pom.xml:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
This line does the magic:
UriUtils.encodePath(host + path, "UTF-8");
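If you'd rather avoid the Spring dependency, java.net.URI can do the same path encoding (a sketch, not from the original answers; URI's multi-argument constructor percent-encodes illegal characters such as spaces, and toASCIIString() also encodes the Greek letters):

import java.net.URI;
import java.net.URL;

URI uri = new URI("http", "www.nikaia-hosp.gr",
        "/img/ΤΕΛΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ ΓΙΑ ΥΠΕΡΗΧΟ ΓΥΝΑΙΚΟΛΟΓΙΚΟ ΜΑΙΕΥΤΙΚΟ ΠΡΟΓΕΝΝΗΤΙΚΟΥ ΕΛΕΓΧΟΥ.pdf", null);
URL encoded = new URL(uri.toASCIIString()); // spaces become %20, Greek letters become %CE... escapes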

Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

I am new to Hadoop and trying to run a sample program from a book. I am facing this error:
Error: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
Below is my code:
package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AvgMapper extends Mapper<LongWritable, Text, Text, FloatWritable> {
    public void Map(LongWritable key, Text empRec, Context con) throws IOException, InterruptedException {
        String[] word = empRec.toString().split("\\t");
        String sex = word[3];
        Float salary = Float.parseFloat(word[8]);
        try {
            con.write(new Text(sex), new FloatWritable(salary));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AvgSalReducer extends Reducer<Text, FloatWritable, Text, Text> {
    public void reduce(Text key, Iterable<FloatWritable> valuelist, Context con)
            throws IOException, InterruptedException {
        float total = (float) 0;
        int count = 0;
        for (FloatWritable var : valuelist) {
            total += var.get();
            System.out.println("reducer" + var.get());
            count++;
        }
        float avg = (float) total / count;
        String out = "Total: " + total + " :: " + "Average: " + avg;
        try {
            con.write(key, new Text(out));
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
package com.hadoop.employee.salary;

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AvgSalary {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.out.println("Please provide two parameters");
        }
        Job job = new Job();
        job.setJarByClass(AvgSalary.class); // helps Hadoop find the relevant jar if there are multiple jars
        job.setJobName("Avg Salary");
        job.setMapperClass(AvgMapper.class);
        job.setReducerClass(AvgSalReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        //job.setMapOutputKeyClass(Text.class);
        //job.setMapOutputValueClass(FloatWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        try {
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
In your mapper you've called the map method Map; it should be map. Because of this, your method never overrides Mapper's map method, so the default implementation runs instead. The default implementation is an identity function that emits the same input key/value types it receives, which is why the key arriving at the reducer is a LongWritable.
Renaming the method to map should fix this error.
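For reference, a corrected mapper along the lines of that answer (adding @Override so the compiler flags a wrong name or signature):

import java.io.IOException;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AvgMapper extends Mapper<LongWritable, Text, Text, FloatWritable> {
    @Override
    public void map(LongWritable key, Text empRec, Context con)
            throws IOException, InterruptedException {
        String[] word = empRec.toString().split("\\t");
        // Emit (sex, salary) pairs, matching the declared output types.
        con.write(new Text(word[3]), new FloatWritable(Float.parseFloat(word[8])));
    }
}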

How to save received messages in separate files with a MessageListener

Disk class implementation:
package server;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;
import services.CustomerData;

public class Disk implements MessageListener {
    private int index;
    private FileWriter f;
    private BufferedWriter b;

    public Disk(int i) {
        this.index = i;
        try {
            f = new FileWriter("disk" + i + ".txt", true);
        } catch (IOException e) {
            e.printStackTrace();
        }
        b = new BufferedWriter(f);
    }

    @Override
    public void onMessage(Message m) {
        try {
            if (m instanceof ObjectMessage) {
                CustomerData c = (CustomerData) ((ObjectMessage) m).getObject();
                b.write(c.getSurname() + " " + c.getName() + " " + c.getAge());
                b.newLine();
                b.flush();
                System.out.println("disk" + index + ".txt saved");
            }
        } catch (IOException e) {
            e.printStackTrace();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
What happens is that every message received by every message listener is saved in the same file (the last-indexed diskN.txt), but I want each listener to write to its own file, from 0 to N. The N txt files are created, but only the last one is ever modified.
EDIT: I moved the FileWriter and BufferedWriter into the Disk constructor, but it still creates N files and modifies only the last one.
Main class where Disk is created:
package server;

import java.rmi.RemoteException;
import java.util.Hashtable;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class StorageServer {
    public static final int N = 10;

    public static void main(String[] args) throws RemoteException {
        Hashtable<String, String> prop = new Hashtable<String, String>();
        prop.put("java.naming.factory.initial", "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        prop.put("java.naming.provider.url", "tcp://127.0.0.1:61616");
        prop.put("topic.req", "requests");
        System.setProperty("org.apache.activemq.SERIALIZABLE_PACKAGES", "*");
        try {
            Context jndiCon = new InitialContext(prop);
            TopicConnectionFactory tConnFact = (TopicConnectionFactory) jndiCon.lookup("TopicConnectionFactory");
            TopicConnection tConn = tConnFact.createTopicConnection();
            TopicSession tSess = tConn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = (Topic) jndiCon.lookup("req");
            TopicSubscriber subscriber = tSess.createSubscriber(topic);
            tConn.start();
            for (int i = 0; i < N; i++) {
                subscriber.setMessageListener(new Disk(i));
                System.out.println("New disk" + i + " started");
            }
        } catch (NamingException e) {
            e.printStackTrace();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
You have a single TopicSubscriber, which has a single MessageListener (hence setMessageListener and not addMessageListener). Each loop iteration replaces the previous listener, so only the last Disk ever receives messages. You need to create a separate TopicSubscriber for each listener:
for (int i = 0; i < N; i++) {
    TopicSubscriber subscriber = tSess.createSubscriber(topic);
    subscriber.setMessageListener(new Disk(i));
    System.out.println("New disk" + i + " started");
}
I'd also recommend avoiding the FileWriter (and FileReader) classes, because they use the platform default encoding. This can cause surprises when the platform (or its encoding) changes. The equivalent, but longer and safer, way is:
BufferedWriter out = new BufferedWriter(new OutputStreamWriter(new FileOutputStream("whatever.txt"), "UTF-8"));
with UTF-8 being a safe encoding to use.
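Applied to the Disk constructor above, that might look like this (a sketch; using UTF-8 for the output files is my assumption):

import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;

public Disk(int i) {
    this.index = i;
    try {
        // Append to disk<i>.txt with an explicit encoding instead of the
        // platform default that FileWriter would use.
        b = new BufferedWriter(new OutputStreamWriter(
                new FileOutputStream("disk" + i + ".txt", true), "UTF-8"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}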

Mapreduce Combiner

I have a simple MapReduce job with a mapper, reducer and combiner. The output from the mapper is passed to the combiner, but the reducer receives the mapper's output instead of the combiner's output. Kindly help.
Code:
package Combiner;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class AverageSalary {
    public static class Map extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String[] empDetails = value.toString().split(",");
            Text unit_key = new Text(empDetails[1]);
            DoubleWritable salary_value = new DoubleWritable(Double.parseDouble(empDetails[2]));
            context.write(unit_key, salary_value);
        }
    }

    public static class Combiner extends Reducer<Text, DoubleWritable, Text, Text> {
        public void reduce(final Text key, final Iterable<DoubleWritable> values, final Context context) {
            String val;
            double sum = 0;
            int len = 0;
            while (values.iterator().hasNext()) {
                sum += values.iterator().next().get();
                len++;
            }
            val = String.valueOf(sum) + ":" + String.valueOf(len);
            try {
                context.write(key, new Text(val));
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        public void reduce(final Text key, final Text values, final Context context) {
            //String[] sumDetails = values.toString().split(":");
            //double average;
            //average = Double.parseDouble(sumDetails[0]);
            try {
                context.write(key, values);
            } catch (IOException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[]) {
        Configuration conf = new Configuration();
        try {
            String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: Main <in> <out>");
                System.exit(-1);
            }
            Job job = new Job(conf, "Average salary");
            //job.setInputFormatClass(KeyValueTextInputFormat.class);
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            job.setJarByClass(AverageSalary.class);
            job.setMapperClass(Map.class);
            job.setCombinerClass(Combiner.class);
            job.setReducerClass(Reduce.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            System.exit(job.waitForCompletion(true) ? 0 : -1);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The #1 rule of combiners is: do not assume that the combiner will run. Treat the combiner only as an optimization.
The combiner is not guaranteed to run over all of your data. In some cases, when the data doesn't need to be spilled to disk, MapReduce will skip using the combiner entirely. Note also that the combiner may be run multiple times over subsets of the data! It'll run once per spill.
In your case, you are making this bad assumption. You should be doing the sum in the combiner AND the reducer.
Also, you should follow @user987339's answer as well. The input and output of the combiner need to be identical (Text, DoubleWritable -> Text, DoubleWritable), and they need to match up with the output of the mapper and the input of the reducer.
It seems that you forgot about an important property of a combiner:
the input types for the key/value and the output types of the key/value need to be the same.
You can't take in a Text/DoubleWritable and return a Text/Text. I suggest you use Text instead of DoubleWritable, and do the proper parsing inside the combiner.
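For illustration, a type-consistent combiner sketch along those lines (my assumption: the mapper is changed to emit Text values of the form "salary:1", so the combiner's input and output types are both Text/Text and partial aggregates can be folded safely):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Input and output are both (Text, Text), so this is safe to run
// zero, one, or many times. Values are partial aggregates "sum:count".
public static class Combine extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        double sum = 0;
        long count = 0;
        for (Text v : values) {
            String[] parts = v.toString().split(":");
            sum += Double.parseDouble(parts[0]);
            count += Long.parseLong(parts[1]);
        }
        // Re-emit in the same "sum:count" form; the reducer does the same
        // fold and finally divides sum by count to get the average.
        context.write(key, new Text(sum + ":" + count));
    }
}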
If a combine function is used, then it is the same form as the reduce function (and is
an implementation of Reducer), except its output types are the intermediate key and
value types (K2 and V2), so they can feed the reduce function:
map: (K1, V1) → list(K2, V2)
combine: (K2, list(V2)) → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
Often the combine and reduce functions are the same, in which case, K3 is the same as
K2, and V3 is the same as V2.
The combiner will not always run when you execute a MapReduce job.
If there are at least three spill files (mapper output written to local disk), the combiner will execute, so that the size of the files can be reduced before they are transferred to the reduce node.
The number of spills above which a combiner runs can be set through the min.num.spills.for.combine property.
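If you want to tune that threshold, you can set the property on the job configuration, e.g. (a sketch; 3 being the default is my understanding):

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Run the combiner during the merge only if at least this many spill
// files were produced (assumption: the default value is 3).
conf.setInt("min.num.spills.for.combine", 3);
Job job = new Job(conf, "Average salary");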

Cannot Start a Method In a Different Class Using a Library

I've just created my first library for an Android app I've built (I have code I need to reuse across different apps in the future), and I need to call a method in the main project from the library. However, when I attempt to do so using the line:
com.project.sample.datasettings.UpdateActivity.success();
I get a compiler error stating:
com.project.sample.UpdateActivity Cannot Be Resolved To A Type
SOURCE:
package com.project.sample.networktasklibrary;

import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.util.zip.GZIPInputStream;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLException;
import com.project.sample.networktasklibrary.XmlParserHandlerFinal;
import com.project.sample.*;
import org.xml.sax.SAXException;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;

// this class performs the call to the webservice in the background
public class NetworkTask extends AsyncTask<String, String, InputStream> {
    private static final String LOG_TAG = "STDataSettings";
    private static final String TAG_RESULT = "success";
    private static InputStream stream;

    @Override
    protected InputStream doInBackground(String... params) {
        try {
            stream = getQueryResults("https://dl.dropboxusercontent.com/u/31771876/GetPhoneSettings-ST-rsp-eng.xml");
        } catch (IOException e) {
            e.printStackTrace();
        } catch (SAXException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return stream;
    }

    /*
     * Sends a query to the server and gets back the parsed results in a bundle.
     * urlQueryString - URL for calling the webservice
     */
    protected static synchronized InputStream getQueryResults(String urlQueryString)
            throws IOException, SAXException, SSLException, SocketTimeoutException, Exception {
        Bundle queryResults = new Bundle();
        HttpsURLConnection https = null;
        String uri = urlQueryString;
        URL urlo = new URL(uri);
        https = (HttpsURLConnection) urlo.openConnection();
        https.setConnectTimeout(50000); // 50 second timeout
        https.setRequestProperty("Connection", "Keep-Alive");
        try {
            if ("gzip".equals(https.getContentEncoding())) {
                stream = new GZIPInputStream(https.getInputStream());
            } else {
                stream = https.getInputStream();
            }
        } catch (SSLException e) {
            Log.e(LOG_TAG, e.toString());
            e.printStackTrace();
        } catch (SocketTimeoutException e) {
            Log.e(LOG_TAG, e.toString());
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(LOG_TAG, e.toString());
            e.printStackTrace();
        } catch (Exception e) {
            Log.e(LOG_TAG, e.toString());
            e.printStackTrace();
        }
        String queryResult = null;
        queryResults.putString(TAG_RESULT, queryResult);
        return stream;
    }

    public InputStream getInputStream() {
        return stream;
    }

    @Override
    protected void onPostExecute(InputStream queryResults) {
        super.onPostExecute(queryResults);
        com.project.sample.datasettings.UpdateActivity.success();
    }
}
UPDATE ACTIVITY CODE SAMPLE:
public void success() {
    // parse the response
    try {
        handler.getQueryResponse(stream);
    } catch (SAXException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // save the ArrayLists from the parser
    setArrayList();
    Intent i = new Intent(this, ConfigFinalActivity.class);
    startActivity(i);
}
Have you tried adding the library to the main project?
Right-click the project name -> Properties -> Android -> Add, then choose the library project.
