How to find file size in scala? - java

I'm writing a Scala script that needs to know the size of a file. How do I do this correctly?
In Python I would do
os.stat('somefile.txt').st_size
and in Scala?

There is no way to do this using the Scala standard libraries. Without resorting to external libraries, you can use the Java File.length() method to do this. In Scala, this would look like:
import java.io.File
val someFile = new File("somefile.txt")
val fileSize = someFile.length
If you want something Scala-specific, you can use an external framework like scalax.io or rapture.io

java.nio.file.Files.size
from api:
public static long size(Path path) throws IOException
Returns the size of a file (in bytes)
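A minimal, self-contained sketch of calling it (the temporary file is created here only so the example runs anywhere):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileSizeExample {
    public static void main(String[] args) throws IOException {
        // Write a small temporary file so the example is self-contained
        Path path = Files.createTempFile("example", ".txt");
        Files.write(path, "hello".getBytes("UTF-8"));

        long size = Files.size(path); // size in bytes
        System.out.println(size);     // prints 5

        Files.delete(path);
    }
}
```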

import java.io.File
val file: File = new File("your file path")
file.length()

Related

Copy All Type of Files in Java

I'm trying to make a simple program to copy a file of any type. I wrote the code below.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.File;

public class CopyExample {
    public static void main(String[] args) throws Exception {
        File f = new File("image.jpg");
        FileInputStream is = new FileInputStream(f);
        FileOutputStream os = new FileOutputStream("copy-image.png");
        byte[] ar = new byte[(int) f.length()];
        is.read(ar);
        os.write(ar);
        is.close();
        os.close();
    }
}
I already tested this code with .txt, .jpg, .png, and .pdf files, and it works fine.
But I want to ask: is it fine, or is there a better way to do this?
Copying a file is not about its file extension or type; it is about its content. If the file is very large, the computer's memory may not be enough to hold it all at once.
Apache's FileUtils may be useful for your question.
This Q&A may help you.
And this article is about your question.
Java 7 provides the Files class that you can use to copy a file:
Files.copy(src, dest);
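A self-contained sketch of that approach (the temp-file setup is just so the example runs anywhere):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FilesCopyExample {
    public static void main(String[] args) throws IOException {
        // Create a small source file so the example is self-contained
        Path src = Files.createTempFile("source", ".txt");
        Files.write(src, "some content".getBytes("UTF-8"));

        Path dest = src.resolveSibling(src.getFileName() + ".copy");

        // Files.copy streams the content for you -- no manual byte arrays,
        // so it also works for files too large to fit in memory.
        Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(new String(Files.readAllBytes(dest), "UTF-8"));

        Files.delete(src);
        Files.delete(dest);
    }
}
```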

Read file from classpath with Java 7 NIO

I've googled around for quite a while for this, but all the results point to pre-Java 7 NIO solutions. I've used the NIO stuff to read in files from a specific place on the file system, and it was so much easier than before (Files.readAllBytes(path)). Now, I want to read in a file that is packaged in my WAR and on the classpath. We currently do that with code similar to the following:
InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(fileName);
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
/* Iterate through the input stream to get all the bytes (there is no way to reliably
 * find the size of the file behind the inputStream; see
 * http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html#available())
 */
int byteInt = -1;
try
{
    byteInt = inputStream.read();
    while (byteInt != -1)
    {
        byteStream.write(byteInt);
        byteInt = inputStream.read();
    }
    byteArray = byteStream.toByteArray();
    inputStream.close();
    return byteArray;
}
catch (IOException e)
{
    //...
}
While this works, I was hoping there was an easier/better way to do this with the NIO stuff in Java 7. I'm guessing I'll need to get a Path object that represents this path on the classpath, but I'm not sure how to do that.
I apologize if this is some super easy thing to do. I just cannot figure it out. Thanks for the help.
This works for me.
import java.io.IOException;
import java.net.URISyntaxException;
import java.nio.file.Files;
import java.nio.file.Paths;

// fileName: foo.txt which lives under src/main/resources
public String readFileFromClasspath(final String fileName) throws IOException, URISyntaxException {
    return new String(Files.readAllBytes(
            Paths.get(getClass().getClassLoader()
                    .getResource(fileName)
                    .toURI())));
}
A Path represents a file on the file system. It doesn't help to read a resource from the classpath. What you're looking for is a helper method that reads everything from a stream (more efficiently than how you're doing it) and writes it to a byte array. Apache commons-io or Guava can help you with that. For example, with Guava:
byte[] array =
ByteStreams.toByteArray(this.getClass().getClassLoader().getResourceAsStream(resourceName));
If you don't want to add Guava or commons-io to your dependencies just for that, you can always read their source code and duplicate it to your own helper method.
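A hand-rolled version of that helper might look like the sketch below; the buffered read loop is the standard pattern those libraries use internally (the in-memory stream in main is just a stand-in for a real resource stream):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Reads a stream to the end using a buffer, which is far more efficient
    // than reading one byte at a time.
    public static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] result = toByteArray(new ByteArrayInputStream("hello".getBytes("UTF-8")));
        System.out.println(new String(result, "UTF-8")); // prints hello
    }
}
```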
As far as I understand, what you want is to open a ReadableByteChannel to your resource, so you can use NIO for reading it.
This should be a good start:
// Opens a resource from the current class' defining class loader
InputStream istream = getClass().getResourceAsStream("/filename.txt");
// Create a NIO ReadableByteChannel from the stream
ReadableByteChannel channel = java.nio.channels.Channels.newChannel(istream);
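From there, a read loop over the channel might look like this sketch (the buffer size is arbitrary, and the in-memory stream stands in for getResourceAsStream so the example runs anywhere):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class ChannelReadExample {
    public static byte[] readAll(ReadableByteChannel channel) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (channel.read(buffer) != -1) {
            buffer.flip();                        // switch the buffer to read mode
            out.write(buffer.array(), 0, buffer.limit());
            buffer.clear();                       // ready for the next read
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Stubbed with an in-memory stream here; in real code use
        // getClass().getResourceAsStream("/filename.txt") as above.
        InputStream istream = new ByteArrayInputStream("resource data".getBytes("UTF-8"));
        ReadableByteChannel channel = Channels.newChannel(istream);
        System.out.println(new String(readAll(channel), "UTF-8"));
    }
}
```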
You should look at ClassLoader.getResource(). This returns a URL which represents the resource. If it's local to the file system, it will be a file:// URL. At that point you can strip off the scheme etc., and then you have the file name with which you can do whatever you want.
However, if it's not a file:// path, then you can fall back to the normal InputStream.

How to compress video file to 3gp or mp4 in java?

I have a video file which I need to convert to 3GP or MP4 using Java.
Is there any library to do this, or any sample?
If any of you have done this or know of a sample, please guide me or provide an example.
Thanks
If you MUST use Java, check Java Media Framework.
http://www.oracle.com/technetwork/java/javase/tech/index-jsp-140239.html
Use a tool like ffmpeg (ffmpeg.org), mencoder (from MPlayer), or VLC. You can call them from Java using ProcessBuilder or Commons Exec.
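A sketch of that approach: build the ffmpeg command as a list and hand it to ProcessBuilder. The file names and bitrate here are placeholders, and the actual launch is commented out since it requires ffmpeg on the PATH:

```java
import java.util.Arrays;
import java.util.List;

public class FfmpegCommand {
    // Builds an ffmpeg invocation that transcodes the input to the output
    // container (e.g. .mp4 or .3gp) at the given video bitrate.
    public static List<String> buildCommand(String input, String output, String bitrate) {
        return Arrays.asList("ffmpeg", "-y", "-i", input, "-b:v", bitrate, output);
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildCommand("input.avi", "output.mp4", "500k");
        System.out.println(String.join(" ", cmd));
        // To actually run it (requires ffmpeg on the PATH):
        // Process p = new ProcessBuilder(cmd).inheritIO().start();
        // int exitCode = p.waitFor();
    }
}
```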
You may have a look at pandastream, but it is a web service.
To compress video in Java you can use IVCompressor. It is very easy to use; for more details go to https://techgnious.github.io/IVCompressor/
Simple code:
<dependency>
    <groupId>io.github.techgnious</groupId>
    <artifactId>IVCompressor</artifactId>
    <version>1.0.1</version>
</dependency>

public static void main(String[] args) throws VideoException, IOException {
    IVCompressor compressor = new IVCompressor();
    IVSize customRes = new IVSize();
    customRes.setWidth(400);
    customRes.setHeight(300);
    File file = new File("D:/Testing/20.mp4");
    compressor.reduceVideoSizeAndSaveToAPath(file, VideoFormats.MP4, ResizeResolution.R480P, "D:/Testing/Custome");
}

Change encoding of existing file with Java?

I need to programmatically change the encoding of a set of *nix scripts to UTF-8 from Java. I won't write anything to them, so I'm trying to find the easiest|fastest way to do this. The files are not too many and are not that big. I could:
"Write" an empty string using an OutputStream with UTF-8 set as encoding
Since I'm already using FileUtils (from Apache Commons), I could read|write the contents of these files, passing UTF-8 as encoding
Not a big deal, but has anyone run into this case before? Are there any cons on either approach?
As requested, and since you're using commons io, here is example code (error checking to the wind):
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;
public class Main {
    public static void main(String[] args) throws IOException {
        String filename = args[0];
        File file = new File(filename);
        String content = FileUtils.readFileToString(file, "ISO8859_1");
        FileUtils.write(file, content, "UTF-8");
    }
}
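If you'd rather not depend on commons-io for this, the same round-trip can be sketched with java.nio.file alone (the temp file below exists only to make the example self-contained):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecodeExample {
    // Reads the file's bytes as ISO-8859-1 and rewrites them as UTF-8.
    public static void recode(Path file) throws IOException {
        byte[] raw = Files.readAllBytes(file);
        String content = new String(raw, StandardCharsets.ISO_8859_1);
        Files.write(file, content.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("script", ".sh");
        // "café" encoded as ISO-8859-1 (é is the single byte 0xE9)
        Files.write(file, "café".getBytes(StandardCharsets.ISO_8859_1));

        recode(file);

        System.out.println(new String(Files.readAllBytes(file), StandardCharsets.UTF_8));
        Files.delete(file);
    }
}
```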

How to convert a Hadoop Path object into a Java File object

Is there a way to change a valid and existing Hadoop Path object into a useful Java File object? Is there a nice way of doing this, or do I need to bludgeon the code into submission? The more obvious approaches don't work, and it seems like it would be a common bit of code.
void func(Path p) {
    if (p.isAbsolute()) {
        File f = new File(p.toURI());
    }
}
This doesn't work because Path::toURI() returns a URI with the "hdfs" scheme, and Java's File(URI uri) constructor only recognizes the "file" scheme.
Is there a way to get Path and File to work together?
EDIT: Ok, how about a specific limited example.
Path[] paths = DistributedCache.getLocalCacheFiles(job);
DistributedCache is supposed to provide a localized copy of a file, but it returns a Path. I assume that DistributedCache makes a local copy of the file, so the Path and the File are on the same disk. Given this limited example, where hdfs is hopefully not in the equation, is there a way for me to reliably convert a Path into a File?
I recently had this same question, and there really is a way to get a file from a path, but it requires downloading the file temporarily. Obviously, this won't be suitable for many tasks, but if time and space aren't essential for you, and you just need something to work using files from Hadoop, do something like the following:
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class PathToFileConverter {
    public static File makeFileFromPath(Path some_path, Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(some_path.toUri(), conf);
        File temp_data_file = File.createTempFile(some_path.getName(), "");
        temp_data_file.deleteOnExit();
        fs.copyToLocalFile(some_path, new Path(temp_data_file.getAbsolutePath()));
        return temp_data_file;
    }
}
If you get a LocalFileSystem
final LocalFileSystem localFileSystem = FileSystem.getLocal(configuration);
You can pass your hadoop Path object to localFileSystem.pathToFile
final File localFile = localFileSystem.pathToFile(<your hadoop Path>);
Not that I'm aware of.
To my understanding, a Path in Hadoop represents an identifier for a node in their distributed filesystem. This is a different abstraction from a java.io.File, which represents a node on the local filesystem. It's unlikely that a Path could even have a File representation that would behave equivalently, because the underlying models are fundamentally different.
Hence the lack of translation. I presume by your assertion that File objects are "[more] useful", you want an object of this class in order to use existing library methods? For the reasons above, this isn't going to work very well. If it's your own library, you could rewrite it to work cleanly with Hadoop Paths and then convert any Files into Path objects (this direction works as Paths are a strict superset of Files). If it's a third party library then you're out of luck; the authors of that method didn't take into account the effects of a distributed filesystem and only wrote that method to work on plain old local files.
