Android file copying using Runtime.exec() not working - java

I am trying to copy a file 'project.jpg' from my /sdcard to my /sdcard/temp/ folder, but for some reason the file isn't getting copied. I am testing on a virtual device and have transferred the file 'project.jpg' via the adb shell. The function used to copy the file is:
public void $copyFile()
{
    try
    {
        cpSrc = escapePath(this.cpSrc);
        cpDest = escapePath(this.cpDest);
        Log.d("$copyFile()", "cpSrc = " + cpSrc);
        Log.d("$copyFile()", "cpDest = " + cpDest);
        String destination = getFilename(cpDest, extractFilename(cpSrc));
        Runtime.getRuntime().exec("dd in=" + cpSrc + " of=" + destination);
        Log.d("$copyFile()", "executed command : 'dd in=" + cpSrc + " of=" + destination + "'");
        displayToast("File Copied Successfully.");
        clearAllModes();
        return;
    }
    catch (Exception e)
    {
        displayToast("$copyFile Error : " + e);
        this.clearCopyBuffer();
        clearAllModes();
    }
}
where escapePath() is used to escape space characters (if any) in the given paths. I got the following debug logs:
cpSrc = /sdcard/project.jpg
cpDest = /sdcard/temp
executed command : 'dd in=/sdcard/project.jpg of=/sdcard/temp/project.jpg'
Could anyone point out the error in the code?
BTW, suggestions for other ways of copying files/folders would be helpful, as I am trying my hand at a file manager.
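For reference, copying with plain Java I/O streams sidesteps the shell entirely; below is a minimal sketch (the copyFile helper and its parameters are illustrative, not part of the class above). Note also that Runtime.exec() returns immediately and does not report the command's exit status unless you call waitFor() on the returned Process, so a toast shown right after exec() does not prove the copy succeeded.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal stream-based copy using try-with-resources.
public static void copyFile(String srcPath, String destPath) throws IOException {
    try (InputStream in = new FileInputStream(srcPath);
         OutputStream out = new FileOutputStream(destPath)) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }
}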

Related

does java Files.delete remove files without write permission?

I am trying to write a unit test that exercises IOException handling in some code. I thought I would be able to create an IOException by removing permissions from a file and trying to delete it, but it looks like the file gets deleted anyway. First question: is that the expected behavior? If so, it seems like a big security hole to me. Second question: does anyone have a suggestion on how to create an IOException from either of the two methods, Files.delete() or commons-io FileUtils.deleteDirectory()? The unit test code follows; I have tried both Files.delete on a file and FileUtils.deleteDirectory on a directory (in the latter case I get a FileNotFoundException). The second assertion always fails. Using a debugger I stopped the code and made sure the permissions on "unwriteable" were 000. I am running Java 11 on Red Hat 7.
public void testIOException() throws IOException {
    binPath.toFile().mkdirs();
    Path unwriteablePath = Paths.get(binPath.toString(), "unwriteable");
    Path writeablePath = Paths.get(binPath.toString(), "writeable");
    File unwriteable = unwriteablePath.toFile();
    unwriteable.createNewFile();
    File writeable = writeablePath.toFile();
    writeable.createNewFile();
    Assertions.assertTrue(unwriteable.exists());
    Assertions.assertTrue(writeable.exists());
    Set<PosixFilePermission> perms =
            Files.readAttributes(unwriteablePath, PosixFileAttributes.class).permissions();
    // make file unwriteable
    perms.remove(PosixFilePermission.OWNER_WRITE);
    perms.remove(PosixFilePermission.GROUP_WRITE);
    perms.remove(PosixFilePermission.OTHERS_WRITE);
    perms.remove(PosixFilePermission.OWNER_READ);
    perms.remove(PosixFilePermission.GROUP_READ);
    perms.remove(PosixFilePermission.OTHERS_READ);
    Files.setPosixFilePermissions(unwriteablePath, perms);
    Assertions.assertFalse(unwriteable.canWrite());
    // Deleter deleter = new Deleter(mockConfig);
    // deleter.run();
    try {
        Files.delete(Paths.get(unwriteable.getAbsolutePath()));
    } catch (IOException e) {
        System.out.println("Got expected exception");
    }
    Assertions.assertFalse(writeable.exists());
    Assertions.assertTrue(unwriteable.exists());
}
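For the second question, one way to provoke an IOException from Files.delete() on POSIX systems is to remove write permission from the containing directory rather than from the file itself, because deleting a file modifies the directory entry, not the file. A minimal sketch (JUnit 5; the class and file names here are hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

class DeleteInReadOnlyDirTest {

    @Test
    void deleteFromReadOnlyDirectoryThrows() throws IOException {
        // Hypothetical scratch layout: a directory we control, with one file inside it.
        Path dir = Files.createTempDirectory("readonly-dir");
        Path victim = Files.createFile(dir.resolve("victim.txt"));

        // Deleting a file requires write (and execute) permission on its parent
        // directory, so make the directory itself read-only.
        Files.setPosixFilePermissions(dir, PosixFilePermissions.fromString("r-xr-xr-x"));
        try {
            // Files.delete should now fail with AccessDeniedException (an IOException subclass).
            Assertions.assertThrows(IOException.class, () -> Files.delete(victim));
        } finally {
            // Restore permissions so the temp directory can be cleaned up.
            Files.setPosixFilePermissions(dir, PosixFilePermissions.fromString("rwxr-xr-x"));
            Files.deleteIfExists(victim);
            Files.deleteIfExists(dir);
        }
    }
}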

MS Graph API: Getting 404 when saving to _layouts folder

I'm using the MS Graph Java SDK to save a file to a user's OneDrive under a given path:
@Test
public void createDriveItem() throws IOException {
    String fileName = "moon.pdf";
    String fullPath = "a/_layouts/b" + fileName;
    byte[] content = Files.readAllBytes(Paths.get(fileName));
    graph.users(userId)
            .drive()
            .root()
            .itemWithPath(encodePath(fullPath))
            .content()
            .buildRequest()
            .put(content);
}
private String encodePath(String path) {
    String encoding = StandardCharsets.UTF_8.name();
    try {
        return URLEncoder.encode(path, encoding);
    } catch (UnsupportedEncodingException e) {
        return path;
    }
}
I'm using MS Graph Java SDK v2.5.0, Java 11.
However, this request fails with 404 Not Found. It also fails if I don't encode the path. It looks like the /_layouts/ segment is what causes the trouble, because once I append something to it, the request works.
Also, I reproduced this error with a number of accounts.
My question is: Is this actually expected? If yes, why does creating the same folder structure work when done through the web UI?
I believe you should not be able to add items into /_layouts/ on SharePoint Online.
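For instance, reusing the same SDK calls from the question but targeting a path without the reserved /_layouts/ segment (the folder name below is an arbitrary placeholder) is the kind of request that does go through:

// Same upload as above, but the path contains no reserved segment.
String safePath = "a/reports/" + fileName;
graph.users(userId)
        .drive()
        .root()
        .itemWithPath(encodePath(safePath))
        .content()
        .buildRequest()
        .put(content);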

Files named screenshot[SomeRandomNumber].png are created inside /var/tmp while using Selenium - java

I am taking screenshots in my Selenium project using the method below:
public String getScreenShotUrl(String fileName, Information information) {
    final String screenShotS3FolderName = "/test/s3";
    final String apiHistoryScreenshotUrl = "/test/history";
    new Thread(() -> {
        File screenshot = ((TakesScreenshot) information.getDriver()).getScreenshotAs(OutputType.FILE);
        contentHandler.moveScreenshotImageFileToS3(fileName, screenShotS3FolderName, screenshot);
    }).start();
    String crawlScreenshotUrl = apiHistoryScreenshotUrl + fileName + DOT_JPEG_FILE_EXTENSION;
    information.getAllScreenShotUrls().add(crawlScreenshotUrl);
    return crawlScreenshotUrl;
}
I am not storing the file anywhere inside /var/tmp.
I am using Firefox 83
geckodriver: 28
OS: Linux
Problem:
I am seeing a lot of .png files inside
/var/tmp
screenshot18218906458183251330.png
Not sure what is triggering these screenshots in my Firefox/Selenium setup.
This is filling up my hard disk. How do I stop Firefox from taking these screenshots?
Not sure which command is triggering this. I recently upgraded to Firefox 83, but I am not sure 83 is the reason for this issue.
As per Rahul's comment, I added the delete call explicitly; this solved the problem.
Adding screenshot.delete() deletes the file explicitly, without waiting for the JVM to exit:
public String getScreenShotUrl(String fileName, Information information) {
    final String screenShotS3FolderName = "/test/s3";
    final String apiHistoryScreenshotUrl = "/test/history";
    new Thread(() -> {
        File screenshot = ((TakesScreenshot) information.getDriver()).getScreenshotAs(OutputType.FILE);
        contentHandler.moveScreenshotImageFileToS3(fileName, screenShotS3FolderName, screenshot);
        screenshot.delete(); // delete the temp file once it has been moved to S3
    }).start();
    String crawlScreenshotUrl = apiHistoryScreenshotUrl + fileName + DOT_JPEG_FILE_EXTENSION;
    information.getAllScreenShotUrls().add(crawlScreenshotUrl);
    return crawlScreenshotUrl;
}
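If the S3 upload can fail, a small variation of the fix above deletes the temp file in a finally block, so it is removed even when moveScreenshotImageFileToS3 throws (a sketch reusing the asker's names):

new Thread(() -> {
    File screenshot = ((TakesScreenshot) information.getDriver()).getScreenshotAs(OutputType.FILE);
    try {
        contentHandler.moveScreenshotImageFileToS3(fileName, screenShotS3FolderName, screenshot);
    } finally {
        // Delete the temp file even if the upload fails, so /var/tmp does not fill up.
        screenshot.delete();
    }
}).start();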

Hive UDF in Java fails when creating a table

What is the difference between those two queries:
SELECT my_fun(col_name) FROM my_table;
and
CREATE TABLE new_table AS SELECT my_fun(col_name) FROM my_table;
where my_fun is a Java UDF.
I'm asking because, when I create the new table (second query), I receive a Java error:
Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: Map operator initialization failed
...
Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: Unable to instantiate UDF implementation class com.company_name.examples.ExampleUDF: java.lang.NullPointerException
I found that the source of the error is this line in my Java file:
encoded = Files.readAllBytes(Paths.get(configPath));
But the question is: why does it work when no table is created, yet fail when the table is created?
The problem might be with the way you read the file. Try passing the file path as the second argument to the UDF, then read it as follows:
private BufferedReader getReaderFor(String filePath) throws HiveException {
    try {
        Path fullFilePath = FileSystems.getDefault().getPath(filePath);
        Path fileName = fullFilePath.getFileName();
        if (Files.exists(fileName)) {
            return Files.newBufferedReader(fileName, Charset.defaultCharset());
        } else if (Files.exists(fullFilePath)) {
            return Files.newBufferedReader(fullFilePath, Charset.defaultCharset());
        } else {
            throw new HiveException("Could not find \"" + fileName + "\" or \"" + fullFilePath + "\" in inersect_file() UDF.");
        }
    } catch (IOException exception) {
        throw new HiveException(exception);
    }
}

private void loadFromFile(String filePath) throws HiveException {
    set = new HashSet<String>();
    try (BufferedReader reader = getReaderFor(filePath)) {
        String line;
        while ((line = reader.readLine()) != null) {
            set.add(line);
        }
    } catch (IOException e) {
        throw new HiveException(e);
    }
}
The full code for a different generic UDF that uses a file reader can be found here
I think several points are unclear, so this answer is based on assumptions.
First of all, it is important to understand that Hive currently optimizes several simple queries. Depending on the size of your data, the query that works for you, SELECT my_fun(col_name) FROM my_table;, is most likely running locally on the client where you execute the job, which is why your UDF can access the config file available locally; this "execution mode" is chosen because of the size of your data. CTAS triggers a job regardless of the input data, and that job runs distributed across the cluster, where each worker fails to access your config file.
It looks like you are trying to read your configuration file from the local file system, not from HDFS (Files.readAllBytes(Paths.get(configPath))). This means your configuration either has to be replicated on all the worker nodes or has to be added to the distributed cache beforehand (you can use ADD FILE for this; doc here). You can find other questions here about accessing files from the distributed cache in UDFs.
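For instance, with the distributed-cache route the file shipped via ADD FILE is placed in each task's working directory, so the UDF can read it with a relative path instead of an absolute client-side path. A sketch (config.txt and the helper name are hypothetical, not the asker's code):

// In the Hive session, before running the query:
//   ADD FILE /local/path/config.txt;
// Inside the UDF, the distributed cache makes config.txt available in the
// task's working directory on every worker, so a relative path works.
private byte[] readDistributedConfig() throws IOException {
    return Files.readAllBytes(Paths.get("./config.txt"));
}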
One additional problem is that you are passing the location of your config file through an environment variable, which is not propagated to the worker nodes as part of your Hive job. You should pass this setting as a Hive config instead; there is an answer here about accessing the Hive config from a UDF, assuming that you are extending GenericUDF.

part file empty while running pig in Eclipse using libraries

I ran a sample Pig script in MapReduce mode and it ran successfully.
My Pig script:
allsales = load 'sales' as (name,price,country);
bigsales = filter allsales by price >999;
sortedbigsales = order bigsales by price desc;
store sortedbigsales into 'topsales';
Now, I am trying to implement that in Eclipse (currently I am running it using the libraries).
One doubt: does Pig local mode mean that we need a Hadoop installation by default?
IdLocal.java:
public class IdLocal {
    public static void main(String[] args) {
        try {
            PigServer pigServer = new PigServer("local");
            runIdQuery(pigServer, "/home/sreeveni/myfiles/pig/data/sales");
        } catch (Exception e) {
        }
    }

    public static void runIdQuery(PigServer pigServer, String inputFile)
            throws IOException {
        pigServer.registerQuery("allsales = load '" + inputFile + "' as (name,price,country);");
        pigServer.registerQuery("bigsales = filter allsales by price >999;");
        pigServer.registerQuery("sortedbigsales = order bigsales by price desc;");
        pigServer.store("sortedbigsales", "/home/sreeveni/myfiles/OUT/topsalesjava");
    }
}
The console is showing success for me, but my part file is empty.
Why is it so?
1) Local-mode Pig does not mean that you have to have Hadoop installed. You can run it without Hadoop and HDFS. Everything is performed single-threaded on your machine, and it reads/writes from your local filesystem by default.
2) Regarding your empty output, ensure that your input file exists on your local filesystem and that it has records with a 'price' field greater than 999; you could be filtering them all out otherwise. Also, Pig defaults to tab-separated files. Is your inputFile tab-separated? If not, your schema definition will make the 'name' field hold the entire row, and 'price' and 'country' will always be null.
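For example, a variant of runIdQuery() that declares the delimiter and field types explicitly (a sketch assuming the input is comma-separated; use whatever delimiter your file actually has) makes sure 'price' is parsed as an int instead of defaulting to bytearray:

// Hypothetical variant of the load statement with an explicit delimiter and schema types.
pigServer.registerQuery("allsales = load '" + inputFile
        + "' using PigStorage(',') as (name:chararray, price:int, country:chararray);");
pigServer.registerQuery("bigsales = filter allsales by price > 999;");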
hope that helps
