I'm writing a Play 2.0 Java application that allows users to upload files. Those files are stored on a third-party service that I access using a Java library; the method I use in that API has the following signature:
void store(InputStream stream, String path, String contentType)
I've managed to make uploads work using the following simple controller:
public static Result uploadFile(String path) throws IOException {
    MultipartFormData body = request().body().asMultipartFormData();
    FilePart filePart = body.getFile("files[]");
    InputStream is = new FileInputStream(filePart.getFile());
    myApi.store(is, path, filePart.getContentType());
    return ok();
}
My concern is that this solution is not efficient: by default, the Play framework stores all the data uploaded by the client in a temporary file on the server, and only then calls my uploadFile() method in the controller.
In a traditional servlet application I would have written a servlet behaving this way:
myApi.store(request.getInputStream(), ...)
I have been searching everywhere and haven't found a solution. The closest example I found is "Why does calling error or done in a BodyParser's Iteratee make the request hang in Play Framework 2.0?", but I couldn't figure out how to adapt it to my needs.
Is there a way in Play 2 to achieve this behaviour, i.e. to have the data uploaded by the client go "through" the web application directly to another system?
Thanks.
I've been able to stream data to my third-party API using the following Scala controller code:
def uploadFile() =
  Action(parse.multipartFormData(myPartHandler)) { request =>
    Ok("Done")
  }

def myPartHandler: BodyParsers.parse.Multipart.PartHandler[MultipartFormData.FilePart[Result]] = {
  parse.Multipart.handleFilePart {
    case parse.Multipart.FileInfo(partName, filename, contentType) =>
      // Still dirty: the path of the file is in the partName...
      val path = partName

      // Set up the piped streams here and hand the input stream to a worker thread
      val pos = new PipedOutputStream()
      val pis = new PipedInputStream(pos)
      val worker = new UploadFileWorker(path, pis)
      worker.contentType = contentType.get
      worker.start()

      // Write the uploaded chunks to the PipedOutputStream
      Iteratee.fold[Array[Byte], PipedOutputStream](pos) { (os, data) =>
        os.write(data)
        os
      }.mapDone { os =>
        os.close()
        Ok("upload done")
      }
  }
}
The UploadFileWorker is a really simple Java class that contains the call to the third-party API.
public class UploadFileWorker extends Thread {
    String path;
    PipedInputStream pis;
    public String contentType = "";

    public UploadFileWorker(String path, PipedInputStream pis) {
        super();
        this.path = path;
        this.pis = pis;
    }

    public void run() {
        try {
            myApi.store(pis, path, contentType);
            pis.close();
        } catch (Exception ex) {
            ex.printStackTrace();
            try { pis.close(); } catch (Exception ex2) {}
        }
    }
}
It's not completely perfect, because I would have preferred to recover the path as a parameter to the Action, but I haven't been able to do so. I have therefore added a piece of JavaScript that updates the name of the input field (and thus the partName), and it does the trick.
Related
I am writing an API service that fetches data from a stream and outputs it as a file. I can't output it as a stream because I use Swagger (now OpenAPI) 2.0, which doesn't support output streams (Swagger 3.0 does, but I can't use it).
What would be the cleanest way to make a file, output it via the service, and then make sure it gets deleted?
I initially thought I might use a temp file and delete it in a finally clause. However, there is no guarantee that the file has finished downloading on the client side before that clause is reached and the file is deleted.
Am I right? Wrong? Is there a better way to do this?
I was talking about using a closeable in the comments. This is it.
Usage:
try (TempFile file = new TempFile("tempfile", ".txt")) {
    // do stuff with file
} catch (IOException e) {
    // error handling.
    // file should be automatically deleted.
}
TempFile:
public class TempFile implements AutoCloseable {

    private final File file;

    public TempFile(String prefix, String suffix) throws IOException {
        this.file = File.createTempFile(prefix, suffix);
    }

    public File getFile() {
        return this.file;
    }

    @Override
    public void close() throws IOException {
        this.file.delete();
    }
}
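Regarding the worry that the file might be deleted before the client has finished downloading: as long as the response is written inside the try block, close() (and therefore the delete) only runs after all bytes have been handed to the response stream, at which point the temp file is no longer needed. A minimal sketch, assuming a hypothetical writeReport(...) that fills the file and responseOut as whatever OutputStream your framework provides for the response:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;

void sendReport(OutputStream responseOut) throws IOException {
    try (TempFile tmp = new TempFile("export", ".csv")) {
        writeReport(tmp.getFile());                      // hypothetical content producer
        Files.copy(tmp.getFile().toPath(), responseOut); // copy happens inside the try
    }                                                    // close() deletes the file here
}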
I have an OutputStream that can be initialized as a chain of OutputStreams. There could be any level of chaining. The only thing guaranteed is that at the end of the chain is a FileOutputStream.
I need to recreate this chained OutputStream with a modified file name in the FileOutputStream. This would have been possible if the out variable (which stores the wrapped OutputStream) were accessible, as shown below.
public OutputStream recreateChainedOutputStream(OutputStream os) throws IOException {
    if (os instanceof FileOutputStream) {
        return new FileOutputStream("somemodified.filename");
    } else if (os instanceof FilterOutputStream) {
        // does not compile: 'out' is protected in FilterOutputStream
        return recreateChainedOutputStream(((FilterOutputStream) os).out);
    }
    return os;
}
Is there any other way of achieving the same?
You can use reflection to access the out field of a FilterOutputStream, but this has some drawbacks:
If the other OutputStream is also a kind of RolloverOutputStream, you can have a hard time reconstructing it,
If the other OutputStream has custom settings, like a GZip compression parameter, you cannot reliably read them,
If there is a
A quick and dirty implementation of recreateChainedOutputStream might be:
private static final Field out;
static {
    try {
        out = FilterOutputStream.class.getDeclaredField("out");
        out.setAccessible(true);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

public OutputStream recreateChainedOutputStream(OutputStream os) throws IOException {
    try {
        if (os instanceof FilterOutputStream) {
            // Rebuild this wrapper (via its OutputStream constructor) around the
            // recreated inner stream.
            Class<?> c = os.getClass();
            Constructor<?> con = c.getConstructor(OutputStream.class);
            return (OutputStream) con.newInstance(
                    recreateChainedOutputStream((OutputStream) out.get(os)));
        } else if (os instanceof FileOutputStream) {
            return new FileOutputStream("somemodified.filename");
        } else {
            // Other output streams...
            return os;
        }
    } catch (ReflectiveOperationException e) {
        throw new IOException(e);
    }
}
While this may be OK in your current application, it is a big no-no in the production world because of the large number of different kinds of OutputStream your application may receive.
A better way to solve this would be a kind of Function<String, OutputStream> that works as a factory to create the OutputStream for a named file. This way the external API keeps control over the OutputStreams, while your API can address multiple file names. An example of this (using a small factory interface so that the lambda is allowed to throw IOException) would be:
public class MyApi {

    // Function<String, OutputStream>-style factory, declared as its own
    // interface so that implementations may throw IOException.
    @FunctionalInterface
    public interface OutputStreamFactory {
        OutputStream open(String name) throws IOException;
    }

    private final OutputStreamFactory fileProvider;
    private OutputStream current;

    public MyApi(OutputStreamFactory fileProvider, String defaultFile) throws IOException {
        this.fileProvider = fileProvider;
        selectNewOutputFile(defaultFile);
    }

    public void selectNewOutputFile(String name) throws IOException {
        OutputStream previous = this.current;
        this.current = fileProvider.open(name);
        if (previous != null) previous.close();
    }
}
This can then be used in other applications as:
MyApi api = new MyApi(name -> new FileOutputStream(name), "default.out");
for simple FileOutputStreams, or as:
MyApi api = new MyApi(name ->
        new GZIPOutputStream(
                new CipherOutputStream(
                        new CheckedOutputStream(
                                new FileOutputStream(name),
                                new CRC32()),
                        cipher),
                1024,
                true),
        "default.out");
for a file stream whose contents are checksummed with a CRC32, encrypted with the given cipher, and gzip-compressed with a 1024-byte buffer in sync-flush mode.
How do I send an email using the Java Mail API that contains large attachments (~25 MB)? The attachments are files stored online with our cloud storage provider (Google Cloud Storage). The API for the service returns an InputStream object or a ReadableByteChannel object for each file.
I can't use a ByteArrayDataSource to create a MimeBodyPart because it creates a copy of the entire file in memory, and we get an OutOfMemoryError.
If it were a physical file, we could create a FileDataSource object and attach it to the email. But can we do that with an InputStream object?
I can't increase the heap size, because growing it just to buffer 25 MB attachments seems like a very bad idea. If you have any other ideas, please let me know. We're working on the Google App Engine platform.
Try javax.activation.URLDataSource or javax.activation.FileDataSource instead. Otherwise, you can create your own DataSource adapter class that returns the given InputStream directly:
public class InputStreamDataSource implements javax.activation.DataSource {

    private final InputStream in;

    public InputStreamDataSource(final InputStream in) {
        if (in == null) {
            throw new NullPointerException();
        }
        this.in = in;
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return in;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        throw new IOException();
    }

    @Override
    public String getContentType() {
        return "application/octet-stream";
    }

    @Override
    public String getName() {
        return "some name";
    }
}
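Here is a minimal usage sketch (not part of the original answer) showing how such a DataSource could be attached to a message part; blobStream stands for the InputStream returned by the storage client, and the file name is illustrative:

import javax.activation.DataHandler;
import javax.mail.MessagingException;
import javax.mail.internet.MimeBodyPart;
import java.io.InputStream;

MimeBodyPart buildAttachment(InputStream blobStream) throws MessagingException {
    MimeBodyPart attachment = new MimeBodyPart();
    // Wrap the one-shot stream in the adapter above and hand it to JavaMail.
    attachment.setDataHandler(new DataHandler(new InputStreamDataSource(blobStream)));
    attachment.setFileName("report.pdf");
    return attachment;
}

Be aware that some parts of JavaMail may call getInputStream() more than once, which a one-shot stream cannot satisfy, so test this against your transport before relying on it.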
This might be a time to copy the InputStream to a physical file and use a FileDataSource to attach it. Better still, don't send the file as an attachment at all: just send a link to the object in cloud storage, and if you need to control access to it, send a signed URL.
I'm trying to extend my library for integrating Swing and JPA by making the JPA config as automatic (and portable) as can be done, and that means programmatically adding <class> elements. (I know it can be done via Hibernate's AnnotationConfiguration or EclipseLink's ServerSession, but - portability.) I'd also like to avoid using Spring just for this single purpose.
I can create a persistence.xml on the fly and fill it with <class> elements from specified packages (via the Reflections library). The problem starts when I try to feed this persistence.xml to a JPA provider. The only way I can think of is setting up a URLClassLoader, but I can't think of a way that wouldn't make me write the file to disk somewhere first, for the sole purpose of obtaining a valid URL. Setting up a socket to serve the file via a URL (localhost:xxxx) seems... I don't know, evil?
Does anyone have an idea how I could solve this problem? I know it sounds like a lot of work to avoid using one library, but I'd just like to know if it can be done.
EDIT (an attempt at being clearer):
Dynamically generated XML is kept in a String object. I don't know how to make it available to a persistence provider. Also, I want to avoid writing the file to disk.
For the purpose of my problem, a persistence provider is just a class that scans the classpath for META-INF/persistence.xml. Some implementations can be made to accept dynamically created XML, but there is no common interface (especially for the crucial part of the file, the <class> tags).
My idea is to set up a custom ClassLoader - if you have any other, I'd be grateful; I'm not set on this one.
The only easily extendable/configurable one I could find was URLClassLoader. It works on URL objects, and I don't know if I can create one without actually writing the XML to disk first.
That's how I'm setting things up, but it only works by writing persistenceXmlFile = new File("META-INF/persistence.xml") to disk:
Thread.currentThread().setContextClassLoader(
new URLResourceClassLoader(
new URL[] { persistenceXmlFile.toURI().toURL() },
Thread.currentThread().getContextClassLoader()
)
);
URLResourceClassLoader is a URLClassLoader subclass that allows looking up resources as well as classes, by overriding public Enumeration<URL> findResources(String name).
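For reference, a minimal sketch of what such a subclass could look like (this is an assumption about the asker's class, not its actual source): it answers requests for META-INF/persistence.xml with the URLs handed to it and delegates everything else.

import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;
import java.util.Collections;
import java.util.Enumeration;

public class URLResourceClassLoader extends URLClassLoader {

    public URLResourceClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    public Enumeration<URL> findResources(String name) throws IOException {
        // Serve persistence.xml from the URLs given at construction time;
        // fall back to the normal URLClassLoader behaviour for everything else.
        if ("META-INF/persistence.xml".equals(name)) {
            return Collections.enumeration(Arrays.asList(getURLs()));
        }
        return super.findResources(name);
    }
}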
Maybe a bit late (after 4 years), but for others who are looking for a similar solution, you may be able to use the URL factory I created:
public class InMemoryURLFactory {
public static void main(String... args) throws Exception {
URL url = InMemoryURLFactory.getInstance().build("/this/is/a/test.txt", "This is a test!");
byte[] data = IOUtils.toByteArray(url.openConnection().getInputStream());
// Prints out: This is a test!
System.out.println(new String(data));
}
private final Map<URL, byte[]> contents = new WeakHashMap<>();
private final URLStreamHandler handler = new InMemoryStreamHandler();
private static InMemoryURLFactory instance = null;
public static synchronized InMemoryURLFactory getInstance() {
if(instance == null)
instance = new InMemoryURLFactory();
return instance;
}
private InMemoryURLFactory() {
}
public URL build(String path, String data) {
try {
return build(path, data.getBytes("UTF-8"));
} catch (UnsupportedEncodingException ex) {
throw new RuntimeException(ex);
}
}
public URL build(String path, byte[] data) {
try {
URL url = new URL("memory", "", -1, path, handler);
contents.put(url, data);
return url;
} catch (MalformedURLException ex) {
throw new RuntimeException(ex);
}
}
private class InMemoryStreamHandler extends URLStreamHandler {
@Override
protected URLConnection openConnection(URL u) throws IOException {
if(!u.getProtocol().equals("memory")) {
throw new IOException("Cannot handle protocol: " + u.getProtocol());
}
return new URLConnection(u) {
private byte[] data = null;
@Override
public void connect() throws IOException {
initDataIfNeeded();
checkDataAvailability();
// Protected field from superclass
connected = true;
}
@Override
public long getContentLengthLong() {
initDataIfNeeded();
if(data == null)
return 0;
return data.length;
}
@Override
public InputStream getInputStream() throws IOException {
initDataIfNeeded();
checkDataAvailability();
return new ByteArrayInputStream(data);
}
private void initDataIfNeeded() {
if(data == null)
data = contents.get(u);
}
private void checkDataAvailability() throws IOException {
if(data == null)
throw new IOException("In-memory data cannot be found for: " + u.getPath());
}
};
}
}
}
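Applied to the question above, the generated persistence.xml (kept in a String) could be exposed through such an in-memory URL and handed to the URLResourceClassLoader from the question, so nothing is ever written to disk. A sketch, assuming persistenceXml holds the generated document:

URL url = InMemoryURLFactory.getInstance().build("/META-INF/persistence.xml", persistenceXml);
Thread.currentThread().setContextClassLoader(
        new URLResourceClassLoader(
                new URL[] { url },
                Thread.currentThread().getContextClassLoader()));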
We can use Google's Jimfs library for that.
First, we need to add the Maven dependency to our project:
<dependency>
<groupId>com.google.jimfs</groupId>
<artifactId>jimfs</artifactId>
<version>1.2</version>
</dependency>
After that, we need to configure our filesystem behavior, and write our String content to the in-memory file, like this:
public static final String INPUT =
"\n"
+ "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
+ "<note>\n"
+ " <to>Tove</to>\n"
+ " <from>Jani</from>\n"
+ " <heading>Reminder</heading>\n"
+ " <body>Don't forget me this weekend!</body>\n"
+ "</note>";
#Test
void usingJIMFS() throws IOException {
try (var fs = Jimfs.newFileSystem(Configuration.unix())) {
var path = fs.getPath(UUID.randomUUID().toString());
Files.writeString(path, INPUT);
var url = path.toUri().toURL();
assertThat(url.getProtocol()).isEqualTo("jimfs");
assertThat(Resources.asCharSource(url, UTF_8).read()).isEqualTo(INPUT);
}
}
We can find more examples in the official repository.
If we look inside the Jimfs source code, we will find that the implementation is similar to @NSV's answer.
I have a small self-written web server that is capable of processing POST/GET queries. Also, I have a handler that receives audio files and puts them in the response stream, like this:
package com.skynetwork.player.server;
import ...
public class Server {
private static Logger log = Logger.getLogger(Server.class);
//Here goes the handler.
static class MyHandler implements HttpHandler {
private String testUrl = "D:\\test";
private ArrayList<File> urls = new ArrayList<File>();
private long calculateBytes(ArrayList<File> urls) throws IOException {
long bytes = 0;
for (File url : urls) {
bytes += FileUtils.readFileToByteArray(url).length;
}
return bytes;
}
public void handle(HttpExchange t) throws IOException {
File dir = new File (testUrl);
System.out.println(dir.getAbsolutePath());
if (dir.isDirectory()) {
log.info("Chosen directory:" + dir);
Iterator<File> allFiles = (FileUtils.iterateFiles(dir, new String[] {"mp3"}, true));
while (allFiles.hasNext()) {
File mp3 = (File)allFiles.next();
if (mp3.exists()) {
urls.add(mp3);
log.info("File " + mp3.getName() + " was added to playlist.");
}
}
} else {
log.info("This is not a directory, but a file you chose.");
System.exit(0);
}
t.sendResponseHeaders(200, calculateBytes(urls));
OutputStream os = t.getResponseBody();
for (File url : urls) {
os.write(FileUtils.readFileToByteArray(url));
}
os.close();
}
}
public static void main(String[] args) throws Exception {
HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
server.createContext("/test", new MyHandler());
server.setExecutor(null);
server.start();
}
}
Right now it takes all of the audio files and creates one solid stream. I would like it to play in a loop infinitely, like a small radio station on the web. So when my server is running, I enter a URL in the browser and it plays the audio files from the directory in a loop.
EDIT:
If my server has the needed bytes, how could I play these bytes in a loop, for example in VLC player?
I mean, it will play the stream just once, but how could I loop it?
Hello Constantine, I think it's important to understand the difference between progressive download and streaming here.
What you are doing is not streaming at all but a progressive download; that is to say, the client has to download the data first if it wants to jump to that part of the file (e.g. YouTube), while in streaming that's not necessary and you can listen endlessly (e.g. BBC Radio).
I would recommend you check out the red5 server project if you are interested in streaming.
If you want to go on with your present code (progressive download), perhaps you should just create a never-ending output stream and pause every now and then to limit the download speed.
I hope this helps!
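To make that concrete, here is a minimal sketch of such a never-ending handler, adapted from the handle() method in the question (assumptions: chunked output via sendResponseHeaders(200, 0), a crude fixed sleep as the throttle, and the same urls playlist; the numbers are only illustrative):

public void handle(HttpExchange t) throws IOException {
    // A content length of 0 makes HttpServer use chunked encoding, so the stream never ends.
    t.sendResponseHeaders(200, 0);
    byte[] buffer = new byte[16 * 1024];
    try (OutputStream os = t.getResponseBody()) {
        while (true) {                              // loop the playlist forever
            for (File mp3 : urls) {
                try (InputStream in = new FileInputStream(mp3)) {
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        os.write(buffer, 0, read);
                        os.flush();
                        Thread.sleep(1000);         // crude throttle: roughly 16 KB per second
                    }
                }
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();         // stop streaming if the thread is interrupted
    }
}

The loop typically ends when the client disconnects and a write throws an IOException; a real station would also share one playback position between listeners instead of starting each client at the beginning of the playlist.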