This may be a dumb question, but I would like to be sure, as my app is almost finished and I don't want to run into virus issues in the future.
I have an app written in Angular 2 and a backend in Java.
People can change their profile picture.
From my frontend I encode the picture in base64 and send it with a POST to my REST API.
The server checks the size of the base64 string and rejects it if it exceeds a certain size (I also have Tomcat's default maxPostSize of 2MB).
The base64 is then decoded with the net.iharder library, which transforms it into a byte array:
http://iharder.sourceforge.net/current/java/base64/
Once that is done I check whether the file is a picture (and resize it as well) by creating a BufferedImage with
ImageIO.read(ByteArrayInputStream)
If the data does not correspond to an image, it returns null, so I don't see a risk here either.
Once that is done I store the picture on my server.
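To illustrate, the server-side part roughly looks like this (a simplified sketch, not my exact code; the class and constant names are just for the example, and I use the JDK Base64 decoder here in place of the iharder call):

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Base64;
import javax.imageio.ImageIO;

public class ProfilePictureValidator {

    private static final int MAX_BASE64_LENGTH = 2 * 1024 * 1024; // mirrors the 2MB post limit

    public BufferedImage decodeAndValidate(String base64) throws IOException {
        if (base64 == null || base64.length() > MAX_BASE64_LENGTH) {
            throw new IllegalArgumentException("payload too large");
        }
        // the iharder Base64.decode(...) call used in the app does the same thing
        byte[] bytes = Base64.getDecoder().decode(base64);
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(bytes));
        if (image == null) {
            throw new IllegalArgumentException("not readable image data"); // not JPG/PNG
        }
        return image;
    }
}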
Anyone who consults a profile with a picture will receive a base64-encoded image (corresponding to the uploaded one), and it will be displayed in a basic
<img src="myBase64"/>
Only JPG and PNG are allowed
My question is this: is there any risk for my server or for the end users if someone sends a file containing a virus? Or am I safe with the ImageIO reader?
Thanks in advance
If you store anything sent to you and send it back unchanged, then anything can happen. Just because ImageIO can read the image doesn't mean that there's not something compromising in there.
However, if you resize the image and use that, then there's pretty much no chance of anything surviving, as you're creating a brand new image from raw image bytes. JPG and (I guess) PNG files can contain metadata that's not part of the image, and those can potentially be vectors for exploits. But by creating a new image from the raw image data, you implicitly strip all of that.
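A minimal sketch of what that looks like in code (assuming a plain BufferedImage pipeline; the method and class names are just for the example):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageSanitizer {

    // Draw the decoded image onto a fresh BufferedImage and re-encode it,
    // which drops any metadata that came with the upload.
    public byte[] resizeAndReencode(BufferedImage source, int width, int height) throws IOException {
        BufferedImage clean = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = clean.createGraphics();
        try {
            g.drawImage(source, 0, 0, width, height, null); // only raw pixels are copied
        } finally {
            g.dispose();
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(clean, "jpg", out); // or "png"; metadata from the upload is gone
        return out.toByteArray();
    }
}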
I have a stream which contains audio and video. The video encoding is H.264 AVC and the audio encoding is G.711 µ-law (PCMU). (I have no control over the output format.)
When I try to display the video into a VideoView frame, the device says that it cannot play the video. I guess that's because Android doesn't support the aforementioned audio encoding.
Is there a way to somehow display the selected stream in the VideoView?
Can I just use a Java code snippet to 'compute' the codec? I've seen the class android.net.rtp.AudioCodec, which contains a field PCMU, and I can imagine that class would be useful.
Or do I have to insert a library or some native code (FFmpeg) into my application and use that? If yes, how?
How to fix it?
If you want to manipulate a stream before you display it, you are unfortunately getting into tricky territory on an Android device - if you do have any possibility of doing the conversion on the server side it will likely be much easier (and you will also usually have more 'horsepower' on the server side).
Assuming that you do need to convert on the device itself, a technique you can use is to stream the video from the server to your app, convert it as needed and then 'stream' it from a localhost server in your app to a VideoView in your app. Roughly the steps are (a sketch of the localhost-server part follows after these steps):
Stream the encoded file from the server as usual, 'chunk by chunk'
On your Android device, read from the stream and convert each chunk as it is received
Using a localhost http server on your Android device, now 'serve' the converted chunks to the MediaPlayer (the media player should be set up to use a URL pointing at your localhost http server)
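A very rough sketch of the localhost-server part, just to show the shape of the idea (openUpstreamStream() and convertChunk() are placeholders for your own fetching and conversion logic, not real APIs):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class LocalProxyServer implements Runnable {

    private final int port;

    public LocalProxyServer(int port) { this.port = port; }

    @Override
    public void run() {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept();
                     InputStream upstream = openUpstreamStream(); // fetch from your real server
                     OutputStream out = client.getOutputStream()) {
                    out.write("HTTP/1.1 200 OK\r\nContent-Type: video/mp4\r\n\r\n".getBytes());
                    byte[] chunk = new byte[64 * 1024];
                    int read;
                    while ((read = upstream.read(chunk)) != -1) {
                        out.write(convertChunk(chunk, read)); // placeholder for the conversion step
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private InputStream openUpstreamStream() { throw new UnsupportedOperationException("stub"); }

    private byte[] convertChunk(byte[] data, int length) { throw new UnsupportedOperationException("stub"); }
}

The VideoView / MediaPlayer is then pointed at a URL like http://127.0.0.1:<port>/ instead of the remote stream URL.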
An example of this approach, I believe, is LibMedia: http://libeasy.alwaysdata.net (note, this is not a free library, AFAIK).
For the actual conversion you can use ffmpeg as you suggest - there are a number of ffmpeg wrapper projects for Android that you can either use or look at to design your own. Some examples:
http://hiteshsondhi88.github.io/ffmpeg-android-java/
https://github.com/jhotovy/android-ffmpeg
Take a look at the note about libffmpeginvoke in the second link in particular if you are planning to write your own wrapper.
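For reference, the kind of ffmpeg invocation you would be aiming for with this stream is roughly the following: copy the H.264 video untouched and transcode only the audio. The file names are placeholders, and how you hand the arguments to the wrapper depends on the library you choose:

public class FfmpegCommandExample {

    // Keep the H.264 video as-is and re-encode only the G.711 µ-law audio to AAC.
    public static String[] transcodeAudioOnlyCommand() {
        return new String[] {
                "-i", "input_stream.ts",  // placeholder input (your source or a dumped chunk)
                "-c:v", "copy",           // do not touch the video track
                "-c:a", "aac",            // convert PCMU audio to AAC, which Android plays natively
                "-b:a", "128k",
                "output.mp4"              // placeholder output
        };
    }
}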
One thing to be aware of is that compression techniques, in particular video compression, often use an approach where one packet or frame is compressed relative to the frames before it (and sometimes even the frames after it). For example the first frame might be a 'key frame', the next five frames might just contain data to explain how they differ from the key frame, and the seventh frame might be another key frame, and so on. This means you generally want a 'chunk' which has all the required key frames if you are converting the chunk from one format to another.
I don't think you will have this problem converting from PCM to AAC audio, but it is useful to be aware of the concept anyway.
Here I'm scaling an image and saving it in minImage (BufferedImage format). Now how can I print that image as
BufferedImage minImage = ImageSale(buffered, minImageWidth, minImageHeight, TYPE_INT_RGB);
out.println("<img src=\""+minImage+"\">");
How can I print the image as a thumbnail? Please help me to resolve this issue.
You seem to have an issue with understanding the difference between a client and a server and what information they have available to each other, as well as the information that is maintained by HTML.
HTML is a plain-text document; technically it can't contain binary information (such as image data), and you really don't want to try to do this anyway, as the HTML page itself should download relatively quickly.
The client HTML will need a reference to the image on the file server (or within the web server's context). This is typically done by saving the file to the server in a location which is accessible to the browser.
If you don't want to save the images to disk, then you will need to create some kind of "memory cache" which contains the key to the image, so that when the browser requests the image from the server you can look it up from the cache and return a stream of the image to the client browser.
This would require you to seed the URL with some kind of identifier that could be mapped to the cache.
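As a rough sketch of that idea (the servlet, cache and parameter names are made up for the example): the HTML references something like <img src="thumbnail?id=abc123"> and a servlet streams the cached bytes back:

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ThumbnailServlet extends HttpServlet {

    // key -> raw image bytes; a real app would add eviction/expiry
    private static final Map<String, byte[]> CACHE = new ConcurrentHashMap<>();

    public static void put(String id, byte[] imageBytes) {
        CACHE.put(id, imageBytes);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] image = CACHE.get(req.getParameter("id"));
        if (image == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("image/jpeg");
        resp.setContentLength(image.length);
        resp.getOutputStream().write(image);
    }
}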
I am learning Inubit. I want to know how I can store images in a database using the Inubit tool set.
The question is more than a year old. I guess you solved it by now.
For all others coming here, let me sketch out the typical way you'd do that.
0. (optional) Compress data.
Depending on the compression of the image (e.g. it's a GIF, PDF, or uncompressed TIFF rather than a JPEG), you might want to compress it via a Compressor module first to reduce the needed database space and increase overall performance in the next steps. Be sure to compress the binary data and not the base64-encoded string (see next step)!
1. Encode binary stream to base64.
Depending on where you get the image data from, chances are that it is already base64-encoded, e.g. you used a file connector to retrieve it from disk with the appropriate option checked, or you used a web service connector. If you really have a binary data stream, convert it to base64 using an encoder module (better self-documenting) or using a variable assignment with the XPath function isxp:encode (more concise).
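Outside of Inubit, the encoding step boils down to something like this in plain Java (just to show what the encoder module / isxp:encode produces; the file name is a placeholder):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodeImageExample {
    public static void main(String[] args) throws Exception {
        byte[] raw = Files.readAllBytes(Paths.get("picture.jpg")); // hypothetical input file
        String base64 = Base64.getEncoder().encodeToString(raw);   // what the encoder step yields
        System.out.println("Encoded length: " + base64.length());
    }
}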
2. Save the encoded data via a database connector.
Well, the details of doing this right are pretty much database-specific. The cheap trick that should work on any database is storing the base64 string simply as a string in a TEXT / CLOB column. This wastes space in the database, since base64 is poorly packed: the encoded string is roughly a third larger than the original binary data. Doing it right would mean constructing a forced SQL query in an XSLT that decodes the base64 string to binary and stores it. Here is some reference to how it can be done in Oracle.
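Outside of Inubit, the two storage variants look roughly like this with plain JDBC (table and column names are made up for the example; the "doing it right" variant corresponds to decoding before the insert):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Base64;

public class StoreImageExample {

    // Cheap trick: keep the base64 string in a TEXT/CLOB column (roughly a third more space).
    public void storeAsText(Connection con, String base64) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO images (data_text) VALUES (?)")) {
            ps.setString(1, base64);
            ps.executeUpdate();
        }
    }

    // Better: decode back to binary and store it in a BLOB column.
    public void storeAsBlob(Connection con, String base64) throws Exception {
        byte[] raw = Base64.getDecoder().decode(base64);
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO images (data_blob) VALUES (?)")) {
            ps.setBytes(1, raw);
            ps.executeUpdate();
        }
    }
}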
Hope this might be of some help.
Cheers,
Jörn
Jörn Willhöft
Willhöft IT-Beratung GmbH, Berlin, Germany
You do not store the image in the database; you only record the path to the image. The image will be stored on the server.
Here is an example of how to store the path to the image: How to insert multiple images path to database
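A minimal sketch of that approach (the upload folder, table and column names are only examples): save the file on the server and insert its path:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class StoreImagePathExample {

    public void saveImage(Connection con, byte[] imageBytes, String fileName) throws Exception {
        Path target = Paths.get("/var/www/uploads", fileName); // hypothetical server-side upload folder
        Files.write(target, imageBytes);                       // the image lives on disk, not in the DB

        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO images (file_path) VALUES (?)")) {
            ps.setString(1, target.toString());                // only the path is recorded
            ps.executeUpdate();
        }
    }
}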
I am using Android PDF Writer
(APW) in my app successfully for the most part. However, when I try to include a high resolution image in a PDF document, I get an out of memory exception.
Immediately before creating the PDF file, the library converts the content itself into a string (representing the raw PDF content), which is then converted to a byte array. The byte array is written to the file via a file output stream (see the example on the website).
The out of memory exception occurs when the string is generated, because representing all the pixels of a bitmap image in string format is very memory intensive. I could downsample the image using the Android API; however, it is essential that the images are put into the PDF at high resolution (~2000 x 1000).
There are many scanner-type apps which seem to be able to generate PDFs with high-res images, so there must be a way around it. Granted, they may be using other libraries, but surely someone has figured out a way around it with this library, given that it is free and therefore popular(?)
I emailed the developer, but there was no response.
Potential solutions (I can think of) include:
Modifying the library to load a string representing e.g. the first 10% of the PDF, and writing to file chunk by chunk.
Modifying the library to output to a string output stream, or some other output stream writing to a temp file (or the final file), as the actual PDF content is being written in the PdfWriter object.
However, as a relative Java noob (and even more of a PDF specification noob), I am unable to understand the library well enough to do this myself.
Has anyone come across this problem and found a way around it? Is anyone willing to hazard a suggestion, or even take a look at the library itself to see if there is a fix of some sort?
Thanks for your help.
nme32
Edit:
Logcat says the heap size is in the range of 40 to 60 MB before the crash. I understand (do correct me if not) that Android limits the memory available to apps depending on what else is running, though it is in the 50 MB ballpark, depending on the device.
When loading the image, I think APW essentially converts it to a bitmap, that is, it represents the image pixel by pixel and then puts it into string format, meaning it doesn't matter which image format you use; it may as well be a bitmap.
First of all, the resolution you are mentioning is very high, and I have already mentioned the issues related to images in Android in this answer.
Secondly, in case the first solution doesn't work for you, I would suggest a disk-based LruCache: store the chunks in that disk-based cache, then retrieve and use them. Here is an example of that.
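To illustrate the chunk-by-chunk idea in general terms (this is plain file I/O, not the APW API or the LruCache API; with a disk cache you would hand each chunk to the cache instead of the output stream):

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ChunkedPdfWriter {

    private static final int CHUNK_SIZE = 256 * 1024; // 256 KB at a time

    // The content would be produced piece by piece instead of as one giant String,
    // so nothing larger than a single chunk is ever held in memory here.
    public void writeInChunks(File target, Iterable<byte[]> pdfChunks) throws IOException {
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(target), CHUNK_SIZE)) {
            for (byte[] chunk : pdfChunks) {
                out.write(chunk);
            }
        }
    }
}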
Hope this helps. If it doesn't, comment on this answer and I will add more solutions.
I am trying to get the metadata of an APNG image at the moment. I have been able to get the different frames from one APNG file flawlessly, and I am using PNGJ (a really great standalone Java library for reading and writing PNG images), but I am not able to get the info that is stored against every APNG frame, like the delay of each frame.
At the moment I am just able to get the basic PNG image info that is stored in the header part by using
PngReader pngr = FileHelper.createPngReader(new File("image.png"));
ImageInfo imgInfo = pngr.imgInfo;
But I don't know how to get the information that is stored against the fcTL chunk. How can I do that?
You originally omitted the information that you are using the PNGJ library. As I mentioned in the other answer, this library does not parse APNG chunks (fcTL, fdAT). It loads them (you can inspect them in the ChunksList property), but they will be instantiated as "UNKNOWN" chunks, hence the binary data will be left in raw form. If you want to look inside the content of the fcTL chunks, you'd either have to parse the binary yourself, or implement the logic for that chunk type yourself and register it in the reader (here's an example for a custom chunk).
Look at how you're currently reading the 4-byte integer 'seq' from fdAT.
You can read information from fcTL the same way.
Just keep in mind that some info is stored in fcTL as 4 bytes, some as 2 bytes, and some as 1 byte.
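For example, once you have the 26 raw data bytes of an fcTL chunk (however you extract them from the chunk list), the fields can be read by hand following the APNG spec layout; this is only a sketch of that:

public class FctlParser {

    // Big-endian 4-byte integer, as used throughout PNG/APNG chunks.
    private static int readInt4(byte[] b, int off) {
        return ((b[off] & 0xFF) << 24) | ((b[off + 1] & 0xFF) << 16)
                | ((b[off + 2] & 0xFF) << 8) | (b[off + 3] & 0xFF);
    }

    // Big-endian 2-byte integer.
    private static int readInt2(byte[] b, int off) {
        return ((b[off] & 0xFF) << 8) | (b[off + 1] & 0xFF);
    }

    // data = the raw fcTL chunk payload (without length/type/CRC).
    public static void printFrameControl(byte[] data) {
        int sequenceNumber = readInt4(data, 0);
        int width = readInt4(data, 4);
        int height = readInt4(data, 8);
        int xOffset = readInt4(data, 12);
        int yOffset = readInt4(data, 16);
        int delayNum = readInt2(data, 20);   // frame delay numerator
        int delayDen = readInt2(data, 22);   // frame delay denominator (0 means 100 per the spec)
        int disposeOp = data[24] & 0xFF;
        int blendOp = data[25] & 0xFF;
        System.out.printf("frame %d: %dx%d at (%d,%d), delay %d/%d, dispose=%d, blend=%d%n",
                sequenceNumber, width, height, xOffset, yOffset, delayNum, delayDen, disposeOp, blendOp);
    }
}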