I am using Debian 8.0 - jessie (64-bit).
I am getting IO exceptions when I try to run a jar file, from outside its directory, through a shell script.
Directory: "/home/rscedit"
Files: "data (directory)" "run.sh (file)" "Webserver.jar (file)"
When I run "run.sh" from anywhere besides "/home/rscedit", I get IO exceptions from my jar file.
But if I run "run.sh" from "/home/rscedit", it runs perfectly fine.
I want to run my shell script at startup, so I should be able to run it from outside "/home/rscedit", right?
Shell script
#!/bin/sh
java -jar -Xmx20480m /home/rscedit/Webserver.jar
read -n1
The error I get when executing my shell script
java.nio.file.NoSuchFileException: ./data/log/ipn.log.lck
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at java.util.logging.FileHandler.openFiles(FileHandler.java:459)
at java.util.logging.FileHandler.<init>(FileHandler.java:326)
at org.displee.utilities.logging.LogFactory.loadFileLogger(LogFactory.java:44)
at org.displee.utilities.logging.LogFactory.<clinit>(LogFactory.java:19)
LogFactory.java
package org.displee.utilities.logging;
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.*;
public class LogFactory {
private static final String FORMAT = "%1$td-%1$tm-%1$tY %1$tH:%1$tM:%1$tS %4$s %2$s - %5$s%6$s%n";
private static final Map<String, Logger> MAP = new HashMap<>();
static {
try {
loadFileLogger("ipn", "./data/log/ipn.log");
loadFileLogger("error", "./data/log/error.log");
Logger logger = Logger.getLogger("console");
logger.setUseParentHandlers(false);
ConsoleHandler ch = new ConsoleHandler();
ch.setFormatter(new ConsoleFormatter());
ch.setLevel(Level.ALL);
logger.addHandler(ch);
register(logger.getName(), logger);
} catch (IOException e) {
e.printStackTrace();
}
}
public static void register(String name, Logger logger) {
MAP.putIfAbsent(name, logger);
}
public static Logger get(String name) {
return MAP.get(name);
}
private static Logger loadFileLogger(String name, String path) throws IOException {
Logger logger = Logger.getLogger(name);
FileHandler fh = new FileHandler(path, true);
fh.setFormatter(new ConsoleFormatter());
logger.addHandler(fh);
logger.setUseParentHandlers(false);
register(name, logger);
return logger;
}
private static class ConsoleFormatter extends Formatter {
@Override
public synchronized String format(LogRecord record) {
Date date = new Date();
date.setTime(record.getMillis());
String source;
if (record.getSourceClassName() != null) {
source = record.getSourceClassName();
if (record.getSourceMethodName() != null) {
source += " " + record.getSourceMethodName();
}
} else {
source = record.getLoggerName();
}
String message = formatMessage(record);
String throwable = "";
if (record.getThrown() != null) {
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw);
pw.println();
record.getThrown().printStackTrace(pw);
pw.close();
throwable = sw.toString();
}
return String.format(FORMAT,
date,
source,
record.getLoggerName(),
"LOG",
message,
throwable);
}
}
}
The command I use to execute my shell script
exec /home/rscedit/run.sh
Edit: my data directory does not only contain log files, but also other things like website files.
Your Java program seems to be sensitive to the current working directory. The easiest solution I know of: change this
java -jar -Xmx20480m /home/rscedit/Webserver.jar
to
(cd /home/rscedit ; java -jar -Xmx20480m Webserver.jar)
(The parentheses run the cd in a subshell, so the caller's working directory is left unchanged.) Or, change the Java code from this (and everywhere you have this pattern)
loadFileLogger("ipn", "./data/log/ipn.log");
loadFileLogger("error", "./data/log/error.log");
to something like (to make it relative to $HOME),
loadFileLogger("ipn", new File(System.getProperty("user.home"),
"data/log/ipn.log").getPath());
loadFileLogger("error", new File(System.getProperty("user.home"),
"data/log/error.log").getPath());
Related
My goal is to check if a Hadoop path exists and, if it does, download a file. I want to check every five minutes whether the file exists at the Hadoop path.
Currently, my code checks whether the file exists, but not on an interval.
public boolean checkPathExistance(final org.apache.hadoop.fs.Path hdfsAbsolutePath)
{
final Configuration configuration = getHdfsConfiguration();
try
{
final FileSystem fileSystems = hdfsAbsolutePath.getFileSystem(configuration);
if (fileSystems.exists(hdfsAbsolutePath))
{
return true;
}
}
catch (final IOException e)
{
e.printStackTrace();
}
return false;
}
The requirement is that this checkPathExistance method should be called every five minutes to check whether the file exists, and when it returns true, the file should be downloaded.
public void download(final String hdfs, final Path outputPath)
{
final org.apache.hadoop.fs.Path hdfsAbsolutePath = getHdfsFile(hdfs).getPath();
logger.info("path check {}", hdfsAbsolutePath.getName());
final boolean isPathExist = checkPathExistance(hdfsAbsolutePath);
if (isPathExist)
{
downloadFromHDFS(hdfsAbsolutePath, outputPath);
}
}
Can I please get some help here?
For the file copying (and not folder copying, if I understood your question correctly), you can just use the copyToLocalFile method from FileSystem, specifying a boolean that says whether you want to delete the source file, plus the input (HDFS) and output (local) paths.
As for the periodic check of the file's existence in HDFS, you can use a ScheduledExecutorService object and specify that you want the task to run every 5 minutes.
The following program takes two arguments, the path of the input file in the HDFS and the path of the output file locally.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class RegularFileCheck
{
public static boolean checkPathExistence(Path inputPath, Configuration conf) throws IOException
{
// true if the path exists in HDFS
FileSystem fs = FileSystem.get(conf);
return fs.exists(inputPath);
}
public static void download(Path inputPath, Path outputPath, Configuration conf) throws IOException
{
FileSystem fs = FileSystem.get(conf);
fs.copyToLocalFile(false, inputPath, outputPath); // don't delete the source input file
System.out.println("File copied!");
}
public static void main(String[] args)
{
Path inputPath = new Path(args[0]);
Path outputPath = new Path(args[1]);
Configuration conf = new Configuration();
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
Runnable task = () ->
{
System.out.println("New Timer!");
try
{
if(checkPathExistence(inputPath, conf))
download(inputPath, outputPath, conf);
}
catch (IOException e)
{
e.printStackTrace();
}
};
executor.scheduleWithFixedDelay(task, 0, 5, TimeUnit.MINUTES);
}
}
The console output, of course, is continuous (test.txt is the file stored in HDFS, test1.txt is the file to be copied locally). You can additionally modify the code above if you want to stop the re-executions after the file has been found and copied, or if you want to stop checking for the file after a while.
To stop the search and copying, simply replace the task in the code above with the following snippet (note that it also needs a java.io.File import):
Runnable task = () ->
{
System.out.println("New Timer!");
try
{
if(new File(String.valueOf(outputPath)).exists())
System.exit(0);
else if(checkPathExistence(inputPath, conf))
download(inputPath, outputPath, conf);
}
catch (IOException e)
{
e.printStackTrace();
}
};
And the program will stop after the file has been copied.
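A lighter variant (my sketch, not from the original answer): instead of calling System.exit(0), shut the executor down from inside the task once the copy succeeds. This assumes it replaces the task inside main above, where the executor local variable is in scope; the scheduled thread is released and the JVM can exit normally.
Runnable task = () ->
{
try
{
if (checkPathExistence(inputPath, conf))
{
download(inputPath, outputPath, conf);
executor.shutdown(); // stops further scheduled runs
}
}
catch (IOException e)
{
e.printStackTrace();
}
};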
On the first run, I want to copy the given File to a new location with a new file name.
Every subsequent run should overwrite the same destination file created during the first run.
During the first run, the destination file does not exist; only the directory exists.
I wrote the following program:
package myTest;
import java.io.File;
import java.io.IOException;
import java.net.URISyntaxException;
import java.net.URL;
import java.nio.file.Paths;
import org.apache.commons.io.FileUtils;
public class FileCopy {
public static void main(String[] args) {
TestFileCopy fileCopy = new TestFileCopy();
File sourceFile = new File("myFile.txt");
fileCopy.saveFile(sourceFile);
File newSourceFile = new File("myFile_Another.txt");
fileCopy.saveFile(newSourceFile);
}
}
class TestFileCopy {
private static final String DEST_FILE_PATH = "someDir/";
private static final String DEST_FILE_NAME = "myFileCopied.txt";
public void saveFile(File sourceFile) {
URL destFileUrl = getClass().getClassLoader().getResource(DEST_FILE_PATH
+ DEST_FILE_NAME);
try {
File destFile = Paths.get(destFileUrl.toURI()).toFile();
FileUtils.copyFile(sourceFile, destFile);
} catch (IOException | URISyntaxException e) {
e.printStackTrace();
}
}
}
However, this throws a NullPointerException on the following line:
File destFile = Paths.get(destFileUrl.toURI()).toFile();
What am I missing?
Directory someDir is directly under my project's root directory in Eclipse.
Both source files myFile.txt and myFile_Another.txt exist directly under my project's root directory in Eclipse.
I used this instead, and it works as I expect:
public void saveFile1(File sourceFile) throws IOException {
// requires java.nio.file.Files, Path, Paths, and StandardCopyOption
Path from = sourceFile.toPath();
Path to = Paths.get(DEST_FILE_PATH + DEST_FILE_NAME);
Files.copy(from, to, StandardCopyOption.REPLACE_EXISTING);
}
Using Java nio.
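One caveat worth adding (my note, reusing the DEST_FILE_PATH and DEST_FILE_NAME constants from the example above): Files.copy throws a NoSuchFileException when someDir/ itself is missing, so create the destination directory first. Files.createDirectories is a no-op when the directory already exists.
// Sketch: make sure the destination directory exists before copying
Path to = Paths.get(DEST_FILE_PATH, DEST_FILE_NAME);
Files.createDirectories(to.getParent()); // safe even if someDir/ already exists
Files.copy(sourceFile.toPath(), to, StandardCopyOption.REPLACE_EXISTING);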
I'm trying to load a file from resources/ path using
getClassLoader().getResourceAsStream("file.LIB")
but the method always returns null, unless I rename the file to another extension, say ".dll".
I've looked into the official Java documentation, but to no avail.
Why does the method act strangely with that file type?
Note: I'm using JDK 1.8.0_111 x86 (due to constraints on that lib file, which only works well with a 32-bit JVM)
It does work for me; you need to be sure what exactly you are doing with the lib file.
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
public class FileHelper {
public String getFilePathToSave() {
Properties prop = new Properties();
String filePath = "";
try {
InputStream inputStream =
getClass().getClassLoader().getResourceAsStream("abc.lib");
prop.load(inputStream);
filePath = prop.getProperty("json.filepath");
} catch (IOException e) {
e.printStackTrace();
}
return filePath;
}
public static void main(String args[]) {
FileHelper fh = new FileHelper();
System.out.println(fh.getFilePathToSave());
}
}
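If it still returns null for you, a quick diagnostic (a sketch, using the FileHelper class from the example above) is to ask the classloader for the resource URL directly. A null result means the file is not on the runtime classpath under that exact name, so check the spelling and case ("file.LIB" vs "file.lib") and make sure your build actually copies *.LIB files into the output directory.
// Sketch: print where (or whether) the classloader sees the file
java.net.URL url = FileHelper.class.getClassLoader().getResource("file.LIB");
System.out.println(url); // null => not on the classpath under that exact name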
I am trying to run Pig scripts remotely from my Java machine; for that I have written the code below.
Code:
import java.io.IOException;
import java.util.Properties;
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.backend.executionengine.ExecException;
public class Javapig{
public static void main(String[] args) {
try {
Properties props = new Properties();
props.setProperty("fs.default.name", "hdfs://hdfs://192.168.x.xxx:8022");
props.setProperty("mapred.job.tracker", "192.168.x.xxx:8021");
PigServer pigServer = new PigServer(ExecType.MAPREDUCE, props);
runIdQuery(pigServer, "fact");
}
catch(Exception e) {
System.out.println(e);
}
}
public static void runIdQuery(PigServer pigServer, String inputFile) throws IOException {
pigServer.registerQuery("A = load '" + inputFile + "' using org.apache.hive.hcatalog.pig.HCatLoader();");
pigServer.registerQuery("B = FILTER A by category == 'Aller';");
pigServer.registerQuery("DUMP B;");
System.out.println("Done");
}
}
but while executing I am getting the error below.
Error
ERROR 4010: Cannot find hadoop configurations in classpath (neither hadoop-site.xml nor core-site.xml was found in the classpath).
I don't know what I am doing wrong.
Well, it's a self-describing error...
neither hadoop-site.xml nor core-site.xml was found in the classpath
You need both of those files in the classpath of your application.
You would ideally get those from your $HADOOP_CONF_DIR folder and copy them into your project's src/main/resources, assuming you have a Maven structure.
Also, with those files in place, you should rather use a Configuration object for Hadoop:
PigServer(ExecType execType, org.apache.hadoop.conf.Configuration conf)
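For example (a sketch; the /etc/hadoop/conf paths are an assumption, substitute your own $HADOOP_CONF_DIR):
// Sketch: load the cluster configuration explicitly and hand it to PigServer
Configuration conf = new Configuration();
conf.addResource(new org.apache.hadoop.fs.Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new org.apache.hadoop.fs.Path("/etc/hadoop/conf/hdfs-site.xml"));
PigServer pigServer = new PigServer(ExecType.MAPREDUCE, conf);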
I was able to print some output to the log file by studying and modifying some sample code, but while running the package nothing is printed to the log file.
Main Class (Client.java)
import java.util.logging.Level;
import java.util.logging.Logger;
public class Client {
static Logger LOGGER = Logger.getLogger(Client.class.getName());
public static void main(String[] args) {
Client logger = new Client();
try {
LogSetup.setup();
emsSession = logger.Initialise();
logger.getAllMEInfo();
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException("Problems with creating the log files");
}
}
private void getAllMEInfo() {
LOGGER.setLevel(Level.INFO);
LOGGER.severe("Info Log");
LOGGER.warning("Info Log");
LOGGER.info("Info Log");
LOGGER.finest("Really not important");
// Some codes for the method
}
}
LogSetup.java
import java.io.IOException;
import java.text.ParseException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;
public class LogSetup {
static private FileHandler fileTxt;
static private LogWriter formatterTxt;
static public void setup() throws IOException, ParseException {
Logger logger = Logger.getLogger(Logger.GLOBAL_LOGGER_NAME);
Logger rootLogger = Logger.getLogger("");
Handler[] handlers = rootLogger.getHandlers();
if (handlers[0] instanceof ConsoleHandler) {
logger.removeHandler(handlers[0]);
}
logger.setLevel(Level.SEVERE);
fileTxt = new FileHandler(LogFile.txt");
// create a TXT formatter
formatterTxt = new LogWriter();
fileTxt.setFormatter(formatterTxt);
logger.addHandler(fileTxt);
}
}
LogWriter.java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
class LogWriter extends Formatter {
public String format(LogRecord rec) {
System.out.println("RECORDING..............");
StringBuffer buf = new StringBuffer(1000);
buf.append(rec.getLevel());
buf.append(calcDate(rec.getMillis()));
buf.append(formatMessage(rec));
return buf.toString();
}
private String calcDate(long millisecs) {
SimpleDateFormat date_format = new SimpleDateFormat("MMM dd,yyyy HH:mm\n");
Date resultdate = new Date(millisecs);
return date_format.format(resultdate);
}
public String getHead(Handler h) {
return ("START " + new Date()) + "\n";
}
public String getTail(Handler h) {
return "END " + new Date() + "\n";
}
}
The log prints the START and END but never even reaches the "RECORDING.............." line, so basically nothing is being logged. Any ideas?
Please put import statements in your examples so others can try your code.
If you are using java.util.logging, consider moving to Logback, which logs properly with no configuration. If you stay with java.util.logging, you'll need to find a tutorial on how to configure it; unconfigured, it doesn't log the way you would expect.
The logging framework uses a configuration file where you can set where and what to output. For java.util.logging, the configuration file is under the lib folder of your current JVM: /jdk1.x.x/jre/lib/logging.properties.
In short, search for the following line and change INFO to ALL:
java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.level=ALL
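Alternatively, rather than editing the JRE's global copy, you can point the JVM at your own properties file with the standard java.util.logging.config.file system property (the path here is just an example):
java -Djava.util.logging.config.file=/path/to/logging.properties -jar yourapp.jar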
In your code you only need to log the messages you want, e.g.:
import java.util.logging.Logger;
public class TestLog {
private static final Logger log = Logger.getLogger(TestLog.class.getName());
public static void doLog() {
log.info("info log");
log.fine("fine log");
log.finer("finer log");
// ...
log.severe("severe log");
}
}
Always try to use fine, finer, or finest for your debug messages; don't use INFO, because it always prints under the default configuration and can slow down your application.