I am trying to print multiple copies of a PDF document. After googling around a bit I found that I have to put a Copies attribute in a PrintRequestAttributeSet. But after doing this, only 1 copy is printed instead of the number I requested.
During debugging I can see that the print object changes its copies variable from 0 to 2, so I would assume I am doing everything correctly. I've also been playing around a bit with the collation and multiple-document-handling attributes, but the end result stays the same.
Does anyone know how I can get it to print the correct number of copies?
My code:
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import javax.print.Doc;
import javax.print.DocFlavor;
import javax.print.DocPrintJob;
import javax.print.PrintService;
import javax.print.PrintServiceLookup;
import javax.print.SimpleDoc;
import javax.print.attribute.HashPrintRequestAttributeSet;
import javax.print.attribute.PrintRequestAttributeSet;
import javax.print.attribute.standard.Copies;
import javax.print.attribute.standard.MultipleDocumentHandling;
import javax.print.attribute.standard.SheetCollate;
public class PrintTest {

    public static void main(String[] args) throws Exception {
        InputStream is = new BufferedInputStream(
                new FileInputStream("<Insert pdf file here>"));

        DocFlavor flavor = DocFlavor.INPUT_STREAM.AUTOSENSE;

        Copies copies = new Copies(2);
        SheetCollate collate = SheetCollate.COLLATED;
        MultipleDocumentHandling handling = MultipleDocumentHandling.SEPARATE_DOCUMENTS_COLLATED_COPIES;

        PrintRequestAttributeSet pras = new HashPrintRequestAttributeSet();
        pras.add(copies);
        pras.add(collate);
        pras.add(handling);

        PrintService service = PrintServiceLookup.lookupDefaultPrintService();
        DocPrintJob printJob = service.createPrintJob();
        Doc doc = new SimpleDoc(is, flavor, null);
        printJob.print(doc, pras);
    }
}
So I've been playing around a bit more. I added a few System.out statements and found there is an attribute called Fidelity, which can be used to force the job to be rejected if it cannot be printed exactly as specified. But there are some issues with this. After adding the fidelity setting I end up with the following output:
[class javax.print.attribute.standard.JobName, class javax.print.attribute.standard.RequestingUserName, class javax.print.attribute.standard.Copies, class javax.print.attribute.standard.Destination, class javax.print.attribute.standard.OrientationRequested, class javax.print.attribute.standard.PageRanges, class javax.print.attribute.standard.Media, class javax.print.attribute.standard.MediaPrintableArea, class javax.print.attribute.standard.Fidelity, class javax.print.attribute.standard.SheetCollate, class sun.print.SunAlternateMedia, class javax.print.attribute.standard.Chromaticity, class javax.print.attribute.standard.Sides, class javax.print.attribute.standard.PrinterResolution]
[]
Exception in thread "main" sun.print.PrintJobAttributeException: unsupported attribute: collated
at sun.print.Win32PrintJob.getAttributeValues(Win32PrintJob.java:667)
at sun.print.Win32PrintJob.print(Win32PrintJob.java:332)
at net.pearlchain.print.distribute.jasper.PrintTest.main(PrintTest.java:52)
The unsupported attribute is different on each execution, but it is always one of the attributes I set. I have tried running it on Java 6 and Java 7, and the only difference is the line on which the exception is thrown: on Java 6 it is line 667, on Java 7 it is line 685. Looking at the code on grepcode I can see where the exception is thrown, but the actual reason is unclear.
OK, I've found out why this happens: the flavor I selected does not support multiple copies. Setting it to PDF instead leads to a flavor-not-supported exception, because I have no printers installed that support printing from a PDF source.
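For future readers: before submitting the job you can ask the service whether it supports Copies for the chosen flavor at all. A minimal sketch using only the standard javax.print API (the class name is mine):

```java
import javax.print.DocFlavor;
import javax.print.PrintService;
import javax.print.PrintServiceLookup;
import javax.print.attribute.standard.Copies;

public class CopiesSupportCheck {
    public static void main(String[] args) {
        PrintService service = PrintServiceLookup.lookupDefaultPrintService();
        if (service == null) {
            System.out.println("no default print service");
            return;
        }
        DocFlavor flavor = DocFlavor.INPUT_STREAM.AUTOSENSE;
        // Asks the service up front whether it honours Copies(2) for this
        // flavor, instead of silently dropping the attribute at print time.
        boolean ok = service.isAttributeValueSupported(new Copies(2), flavor, null);
        System.out.println("Copies supported for AUTOSENSE: " + ok);
    }
}
```

If this prints false, the silent single-copy behaviour above is expected rather than a bug in your attribute set.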
It's been a long time and I forgot to post my solution here for future visitors.
I solved this by adding a third-party PDF library (Apache PDFBox), which provided me with an InputStream that I could send to the printer with all the settings I required.
http://pdfbox.apache.org/
I no longer have access to the code but this could be useful for future visitors. :)
I'm trying to use iText (5.5.13) on IBM i (a.k.a. iSeries, Power, long ago AS/400). It can be done by embedding Java code in RPG ILE procedures, or by executing plain Java. We have used Apache POI for Excel for a while, and it works well. We are testing iText now, but some issues persist.
Given that, I'm trying to test iText in plain Java on IBM i. I prepared a very simple example, taken from listing 1.1 of "iText in Action", and ran it. It seems to work, but nothing is generated: no PDF file results, and no error appears while running.
Am I forgetting something? Are there other aspects to take into account?
here is the code:
package QOpenSys.CONSUM.Testjeu;
import com.itextpdf.text.Document;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.PdfWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
public class test1 {

    public static final String filePdf = "/QOpenSys/MyFolder/Testjeu/PdfRead1.pdf";

    public static void main(String[] args)
            throws DocumentException, IOException {
        ///QOpenSys/MyFolder/Test/WrkBookRead1.pdf
        //pdfDocument = new_DocumentVoid()
        Document pdfDocument = new Document();
        //pdfWriter = get_PdfWriter( pdfDocument: pdfFilePath);
        PdfWriter.getInstance(pdfDocument, new FileOutputStream(filePdf));
        // jItxDocumentOpen( pdfDocument );
        pdfDocument.open();
        //pdfParagraph = new_PdfParagraphStr( PhraseString );
        Paragraph jItxParagraph = new Paragraph("Hola, pdf");
        //addToDocPg = jItxDocumentAddParagraph( pdfDocument: pdfParagraph );
        pdfDocument.add(jItxParagraph);
        //jItxDocumentClose( pdfDocument );
        pdfDocument.close();
    }
}
Solved. As said before, there was a first issue: it seemed the Java function ran well because no errors or warnings were visible in QShell. That was misleading: the errors were sent to an OUTQ and were available in a spool file. Once reviewed, it turned out to be a simple classpath issue. It took a full day to figure out what was failing in the classpath.
Now it works, and the PDF is created. I ran it in QShell, declaring environment variables for JAVA_HOME (three JVMs are executed concurrently by several applications), for CLASSPATH, and a couple required for tracing. The classpath declares my classes first and the iText classes second; the remaining classes come from the JRE. I have a full list of the classes loaded by the class loader, which I hope will help us find what fails in our embedded RPG ILE call to iText.
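A rough sketch of that QShell setup; the JAVA_HOME value and jar locations below are hypothetical placeholders, only the class name comes from the code above, so adjust everything to your system:

```shell
# Pick the JVM explicitly, since several run concurrently on this box
export JAVA_HOME=/QOpenSys/QIBM/ProdData/JavaVM/jdk80/64bit
# Own classes first, then the iText jar; the JRE supplies the rest
export CLASSPATH=/QOpenSys/MyFolder/Testjeu:/QOpenSys/MyFolder/lib/itextpdf-5.5.13.jar
# -verbose:class logs every class the loader resolves, and redirecting
# stderr captures the errors that otherwise end up only in the spool file
java -verbose:class QOpenSys.CONSUM.Testjeu.test1 > /QOpenSys/MyFolder/load.log 2>&1
```

The class-loading log is what lets you confirm which jar each iText class actually came from.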
(I am new to Java; I don't know what 'classes' or 'APIs' are.)
I was trying to compile (javac -g Sphinx.java) this code:
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.PrintWriter;
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.LiveSpeechRecognizer;
public class Sphinx {

    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        configuration.setAcousticModelPath("models/en-us/en-us");
        configuration.setDictionaryPath("models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("models/en-us/en-us.lm.bin");

        PrintWriter pw = new PrintWriter(new PrintWriter("status.txt"));
        LiveSpeechRecognizer recognizer = new LiveSpeechRecognizer(configuration);
        recognizer.startRecognition(true);
        pw.print("running");

        SpeechResult result = recognizer.getResult();
        recognizer.stopRecognition();
        pw.print("stopped");
        pw.close();

        PrintWriter pw2 = new PrintWriter(new PrintWriter("result.txt"));
        pw2.println(result);
        pw2.close();
    }
}
And I got this message:
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
So, I re-compiled with -Xlint:deprecation, like it told me to, and this time it gave no errors, so I'm assuming the compiler finished and it compiled successfully.
And then I looked, and there's no .jar file, just a new .class file.
Now, I don't really know much about the Java compiler; I was just told online that it would give me an executable for the code I had written, which in this case I expected to be a .jar file.
I don't know if the compiler sends newly created executables to a special system directory, but it's not here, and I don't know why.
Would someone more knowledgeable with Java please give me some context here?
You just need to add one more step; see: docs.oracle.com/javase/tutorial/deployment/jar/build.html
jar cf jar-file input-file(s)
I am learning to program in Java. I have programmed in other languages. I have tried import java.net.*; at the start of my code file, but it is grayed out. I have looked in my External Libraries directory and it is not there. I found that java.net was deprecated in 201x <-- some recent year. I am using JDK 10 and the IntelliJ IDE. I have gotten some import statements to work.
I saw on GitHub that someone took over hosting Oracle classes that were deprecated at Oracle.
I know I have to use the classpath command if I put the .jar or .zip file containing the class in another directory. I have searched my laptop and I don't have any other .jar or .zip files, other than those from specific other programs I have installed (e.g. Aptana Studio 3), which also don't contain the java.net classes.
I am using Mac with OS Sierra.
package com.robinhoodcomputer.myfirstprojects;

import java.io.FileReader;   // <<-- these import ok
import java.io.*;
import java.net.*;           // <<--- these don't import
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class Myreader {

    public static void main(String[] args) throws Exception {
        int intVal;
        char myChar;
        String st;

        File file = new File("/java/file");
        BufferedReader br = new BufferedReader(new FileReader(file));
        // FileReader fr = new FileReader("/java/HelloWorld/resources/file.txt");

        while ((intVal = br.read()) != -1) {
            myChar = (char) intVal;
            System.out.print(myChar);
        }

        myHostName = getLocalHost();   // <<-- this doesn't show being available
    }
}
I have searched and cannot find any articles that do anything but explain how to connect to the class file and tell you that you have to import it in your code. No one talks about getting the java.net classes themselves.
I found one reference to jdk.net in an Oracle JDK 10 API specification page.
What am I supposed to use to get the IP address for a hostname in Java these days?
Thanks.
P.S. I know this code really doesn't have anything to do with networking; most of it just reads a file and displays its contents. I wanted to use this file to also try getting an IP address. My question is mainly about making the import statement work. Thanks.
Your imports are grayed out because you do not call any method of the imported libraries. As soon as you start using the getLocalHost() method properly, the import will no longer be grayed out. This is a convenience feature of your IDE; java.net is not deprecated or missing.
getLocalHost() is a static method of InetAddress, so it must be qualified with the class name, InetAddress.getLocalHost(), or brought in with a static import; it cannot be called as a bare getLocalHost().
Look at this question for how to use this:
java InetAddress.getLocalHost(); returns 127.0.0.1 ... how to get REAL IP?
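A minimal sketch of the lookup, using only java.net from the standard library (the class name is mine):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostLookup {
    public static void main(String[] args) {
        try {
            // getLocalHost() is static: call it via the class, no instance needed
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("local: " + local.getHostAddress());
            // Resolving any hostname to an address works the same way
            InetAddress byName = InetAddress.getByName("localhost");
            System.out.println("localhost: " + byName.getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println("lookup failed: " + e.getMessage());
        }
        System.out.println("done");
    }
}
```

Both calls come straight from the JDK; no extra jar or classpath entry is needed for java.net.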
I am testing out the Atomikos transaction and database connectivity stuff, and as part of it I tried to execute the code below:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;
import javax.sql.XAConnection;
import javax.transaction.Transaction;
import javax.transaction.xa.XAResource;
import com.atomikos.datasource.xa.jdbc.JdbcTransactionalResource;
import com.atomikos.icatch.config.UserTransactionServiceImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.persistence.imp.StateRecoveryManagerImp;
import com.mysql.jdbc.jdbc2.optional.MysqlXADataSource;
/**
 * Working out how to use Atomikos, before building {@link Main}. It is not
 * intended that you write your applications like this - use JCA+EJB or Spring
 * instead! There is way too much boilerplate code here. Based on examples found
 * at the Atomikos website.
 */
public class TestAtomikos {

    public static void main(String[] args) throws Exception {
        MysqlXADataSource mysql = new MysqlXADataSource();
        mysql.setUser("root");
        mysql.setPassword("root");
        mysql.setUrl("jdbc:mysql://localhost:3306/world?useSSL=false");

        JdbcTransactionalResource mysqlResource = new JdbcTransactionalResource(
                "jdbc/mysql", mysql);
        UserTransactionServiceImp utsi = new UserTransactionServiceImp();
        utsi.registerResource(mysqlResource);

        Properties prop = new Properties();
        InputStream input = null;
        //StateRecoveryManagerImp srmi = new StateRecoveryManagerImp(null);
        try {
            input = new FileInputStream("C:\\Users\\abcd\\eclipse\\workspace_Tomcat\\JNDI\\src\\main\\resources\\jta.properties");
            // load a properties file
            prop.load(input);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        utsi.init(prop);
        //utsi.init();

        UserTransactionManager utm = new UserTransactionManager();
        //utm.init();
        utm.begin();
        Transaction tx = utm.getTransaction();

        XAConnection xamysql = mysql.getXAConnection();
        XAResource db = xamysql.getXAResource();
        tx.enlistResource(db);

        Connection connection = xamysql.getConnection();
        PreparedStatement stmt = connection.prepareStatement("SELECT ID, Name FROM city");
        ResultSet rs = stmt.executeQuery("SELECT ID, Name FROM city");
        while (rs.next()) {
            System.out.println(rs.getInt("ID") + " " + rs.getString("Name"));
        }
    }
}
But I am getting an error:
log4j:WARN No appenders could be found for logger (com.atomikos.logging.LoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoSuchMethodError: com.atomikos.persistence.imp.StateRecoveryManagerImp: method <init>()V not found
at com.atomikos.icatch.standalone.UserTransactionServiceImp.createDefault(UserTransactionServiceImp.java:205)
at com.atomikos.icatch.standalone.UserTransactionServiceImp.init(UserTransactionServiceImp.java:258)
at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:405)
at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:577)
at com.test.abcd.TestAtomikos.main(TestAtomikos.java:57)
I tried executing the same thing by integrating Atomikos with Tomcat and invoking a servlet which makes a call to the DB, and I got the same error. I tried checking the code of StateRecoveryManagerImp online, and I can see the init method defined, so I am not sure what is causing this issue. When I tried just the database part it worked fine: I was able to execute the query and get the results.
I tried various versions of the Atomikos jars, but no luck. Any suggestions on how to get this fixed?
The exception that you get, which is:
NoSuchMethodError:
com.atomikos.persistence.imp.StateRecoveryManagerImp: method <init>()V
not found
means that at runtime it tries to access the default constructor (the constructor with no arguments) of the class StateRecoveryManagerImp and cannot find it.
This is typically the exception you face when you build your application against one version of a library and run it against a different, incompatible version.
As far as I can see from the source code, I believe you compiled your code against transactions 3.9.0 or higher, which declares no constructor at all and therefore has the implicit default constructor (as you can see here), while your application uses at runtime an older version whose only constructor is public StateRecoveryManagerImp(ObjectLog objectlog), which means there is no default constructor (as you can see here); so when the runtime tries to call the default constructor, it fails.
To fix your issue, simply check the classpath used at runtime and make sure it contains only the version of transactions corresponding to the one used at compile time.
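One quick way to verify which jar a class is actually loaded from at runtime is to ask its CodeSource. The sketch below (class name is mine) inspects itself; in your application you would substitute StateRecoveryManagerImp.class to see which Atomikos jar wins on the classpath:

```java
import java.security.CodeSource;

public class WhereLoaded {
    public static void main(String[] args) {
        // For any class on the application classpath, the CodeSource reveals
        // the jar (or directory) the JVM actually loaded it from; it is null
        // for classes supplied by the bootstrap loader.
        CodeSource src = WhereLoaded.class.getProtectionDomain().getCodeSource();
        System.out.println("loaded from: "
                + (src == null ? "bootstrap/unknown" : src.getLocation()));
    }
}
```

If the printed location is not the transactions jar you compiled against, that mismatch is the source of the NoSuchMethodError.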
I am trying to use Hadoop MapReduce, but instead of mapping one line at a time in my Mapper, I would like to map a whole file at once.
So I have found these two classes
(https://code.google.com/p/hadoop-course/source/browse/HadoopSamples/src/main/java/mr/wholeFile/?r=3)
that are supposed to help me do this.
But I got a compilation error that says:
The method setInputFormat(Class<? extends InputFormat>) in the type
JobConf is not applicable for the arguments
(Class<WholeFileInputFormat>)    Driver.java    /ex2/src    line 33    Java
Problem
I changed my Driver class to this:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import forma.WholeFileInputFormat;
/*
 * Driver
 * The Driver class is responsible for creating the job and committing it.
 */
public class Driver {

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Driver.class);
        conf.setJobName("Get minimum for each month");

        conf.setOutputKeyClass(IntWritable.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        // previously it was
        // conf.setInputFormat(TextInputFormat.class);
        // and it was changed to:
        conf.setInputFormat(WholeFileInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));

        System.out.println("Starting Job...");
        JobClient.runJob(conf);
        System.out.println("Job Done!");
    }
}
What am I doing wrong?
Make sure your WholeFileInputFormat class has the correct imports. You are using the old MapReduce API in your job driver, but I think you imported the new-API FileInputFormat in your WholeFileInputFormat class. If I'm right, you should import org.apache.hadoop.mapred.FileInputFormat in your WholeFileInputFormat class instead of org.apache.hadoop.mapreduce.lib.input.FileInputFormat.
Hope this helps.
The easiest way to do this is to gzip your input file. This makes FileInputFormat.isSplitable() return false, so each file goes to a single mapper.
We too ran into something similar and used an alternative, out-of-the-box approach.
Let's say you need to process 100 large files (f1, f2, ..., f100) such that each must be read wholly in the map function. Instead of using the "WholeInputFileFormat" reader approach, we created 10 equivalent text files (p1, p2, ..., p10), each containing the HDFS URLs or web URLs of the f1-f100 files.
Thus p1 contains the URLs for f1-f10, p2 the URLs for f11-f20, and so on.
These new files p1 through p10 are then used as input to the mappers. So the mapper m1 processing file p1 will open files f1 through f10 one at a time and process each wholly.
This approach allowed us to control the number of mappers and to write more exhaustive and complex application logic in the MapReduce application; e.g., we could run NLP on PDF files using this approach.
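Outside Hadoop, the pointer-file idea can be sketched in plain Java, with temporary files standing in for the large files and the pointer file (all names here are illustrative; in the real job each pointer line would be an HDFS or web URL):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PointerFileDemo {
    public static void main(String[] args) throws IOException {
        // Stand-ins for the large files f1..fN
        Path f1 = Files.createTempFile("f1", ".txt");
        Path f2 = Files.createTempFile("f2", ".txt");
        Files.writeString(f1, "contents of f1");
        Files.writeString(f2, "contents of f2");

        // The pointer file p1: one file location per line
        Path p1 = Files.createTempFile("p1", ".txt");
        Files.writeString(p1, f1 + "\n" + f2 + "\n");

        // The "mapper" receives p1 line by line and reads each file wholly,
        // mirroring how m1 opens f1..f10 one at a time
        List<String> pointers = Files.readAllLines(p1);
        for (String pointer : pointers) {
            String whole = Files.readString(Path.of(pointer));
            System.out.println(pointer + " -> " + whole.length() + " bytes");
        }
        System.out.println("processed " + pointers.size() + " files");
    }
}
```

The same shape carries over to the real mapper: the input split is tiny (the pointer lines), while the heavy whole-file reads happen inside the map function under your control.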