I am working on a photobooth-style app for iPhone and Android. On iPhone, I know the exact resolution of the front camera, and am able to always output 4 mini pics predictably and make a photostrip from them. But for Android, I need a way to resize the 4 images I have taken to a width of 48px and a height of 320px per image. This way, I can build the same size photostrip I built for the iPhone version, and easily display the photostrips in a consistent manner on a website (I don't want their size to vary depending on platform). On Android, how can I resize to that resolution (48x320), even if the Android camera doesn't output that aspect ratio? Basically, I'd like to resize on Android, and have it automatically zoom as necessary until 48x320 is reached so it doesn't look stretched/distorted. I'm OK with part of the image (like the outside border) being lost in favor of getting a 48x320 image. Maybe this is just a straight Java question...
Thanks so much!
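What you're describing is usually called a center-crop: scale the image uniformly until it covers the 48x320 target, then trim whatever overflows. A minimal sketch in plain Android Java, assuming the photo is already decoded into a Bitmap (the helper name is just illustrative):

import android.graphics.Bitmap;

// Scale src uniformly so it fully covers targetW x targetH, then crop the
// overflow equally from both sides (center-crop). Part of the image is lost,
// but nothing is stretched or distorted.
static Bitmap centerCropResize(Bitmap src, int targetW, int targetH) {
    float scale = Math.max(
            (float) targetW / src.getWidth(),
            (float) targetH / src.getHeight());
    int scaledW = Math.round(src.getWidth() * scale);
    int scaledH = Math.round(src.getHeight() * scale);
    Bitmap scaled = Bitmap.createScaledBitmap(src, scaledW, scaledH, true);
    int x = (scaledW - targetW) / 2;
    int y = (scaledH - targetH) / 2;
    return Bitmap.createBitmap(scaled, x, y, targetW, targetH);
}

Calling centerCropResize(photo, 48, 320) on each of the four shots would then give four uniform strips regardless of the camera's native aspect ratio.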
I made a camera app with the Camera2 API on Android: a fullscreen camera preview without taking pictures, and I apply a negative effect to the whole preview. What I want is for the negative effect to be applied to only half of the preview, for example:
Image Example
Here is my code:
Link to my code on github
I will appreciate the help because I am so lost I don't know what to do :(
Unfortunately, you'll have to do custom rendering here, since nothing in the camera2 or cameraX APIs will do this for you.
Basically, you'll need to send the camera output to the GPU, and use GL shader code to write your own custom negative effect.
That's a lot of boilerplate to get to where you want, but it's unlikely you'll have any other realistic option. While ImageView and some other Android UI APIs allow applying some effects or color transforms to their output, I don't think you could get them to give you half the view as negative, without significant performance problems.
To send camera image data to the GPU, use a SurfaceTexture as the output target, and then use the SurfaceTexture's texture ID in your EGL code as the source texture.
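For instance, the fragment-shader half of that might look like the following (illustrative only; it assumes a standard GLES 2.0 preview pipeline where the SurfaceTexture is bound as a samplerExternalOES and the vertex shader passes through a vTexCoord varying):

// Hypothetical fragment shader, stored as a Java string constant.
static final String HALF_NEGATIVE_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "varying vec2 vTexCoord;\n"
        + "uniform samplerExternalOES sTexture;\n"
        + "void main() {\n"
        + "    vec4 color = texture2D(sTexture, vTexCoord);\n"
        + "    if (vTexCoord.x < 0.5) {\n"          // left half only
        + "        color.rgb = 1.0 - color.rgb;\n"  // negative effect
        + "    }\n"
        + "    gl_FragColor = color;\n"
        + "}\n";

Everything else (EGL context setup, compiling and linking the program, drawing a fullscreen quad) is the boilerplate mentioned above.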
So basically, I'm new to Java and Android Studio. I know the basics, but I'm not that good yet.
I get this error when I try to run the app on my phone. Going through other threads didn't help me either, as I basically have just one background image in the MainActivity. I have to add one more, but when I do and try to run the app, it crashes.
size of background image: 115 KB
size of the image I still have to add: 164 KB (tried to compress it to 74 KB; didn't work)
java.lang.RuntimeException: Canvas: trying to draw too large(430377192bytes) bitmap.
I saw this in another thread, which said it was supposed to be put in the manifest; it hasn't helped either:
android:largeHeap="true"
I hope I have provided enough information to answer the question; if you need more, please tell me.
Again: I am new to this.
430377192 bytes, at 4 bytes per pixel (ARGB_8888), is roughly the equivalent of a 10372 x 10372 pixel image. This is much too large. Moreover, it is far larger than any Android device screen that you are ever likely to encounter.
So, find this drawable resource, and reduce its resolution to something more reasonable.
If you placed this drawable resource in res/drawable/, please understand that res/drawable/ is a synonym for res/drawable-mdpi/, representing images designed for -mdpi screens (~160 dpi). Those images will be upsampled to higher resolutions on higher-density screens (e.g., double along each axis for -xhdpi screens). Either prepare dedicated drawables for appropriate densities, or move this image into res/drawable-nodpi/.
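If you genuinely need a bitmap that large at runtime (you almost never do for a background), the usual pattern is to downsample while decoding via BitmapFactory.Options.inSampleSize rather than loading the full image; a minimal sketch, assuming the image lives in your resources:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Decode a drawable at roughly reqW x reqH instead of its full resolution.
static Bitmap decodeSampled(Resources res, int resId, int reqW, int reqH) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;                    // read dimensions only
    BitmapFactory.decodeResource(res, resId, opts);
    int sample = 1;
    while (opts.outWidth / (sample * 2) >= reqW
            && opts.outHeight / (sample * 2) >= reqH) {
        sample *= 2;                                   // halve until near target
    }
    opts.inSampleSize = sample;
    opts.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(res, resId, opts);
}

But the first fix is still the one above: shrink the asset itself.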
I am trying to design a program in Java to periodically (every 100 milliseconds or so) take screenshots of my display and compute the average pixel RGB values of the entire screen. I need this to work with video games and iTunes/QuickTime videos. However, I have tried using JNA and Robot to capture the screen, and it only works when I am not capturing a video game in full screen or an iTunes video. For instance, I tested my code by saving an image to examine what is happening: when I am in a video game, I only see a screenshot of a blank window. I think this is because games use DirectX or OpenGL and communicate with the hardware differently than a typical app.
If I use this method for capturing a screenshot instead of Robot or JNA, will this solve my problem? It looks like it is copying over data from the OpenGL screen buffer. What about DirectX applications?
I basically just want to be able to get the perceived pixel data on the screen at all times, regardless of whether or not it's a fullscreen DirectX or OpenGL application. Thanks in advance for your help.
I'm going to guess this is for a homebrew version of the amBX lighting system. Even if it's not, the following page may help you; it contains both the Java code and Arduino code for a DIY ambient lighting setup, which needs to accomplish the same thing:
http://siliconrepublic.blogspot.com/2011/02/arduino-based-pc-ambient-lighting.html
Things to consider:
1. For processing speed reasons, that sample code purposely ignores some of the pixels on the screen (see the sketch after this list).
2. Depending on what you're displaying (racing games vs. first-person shooters vs. top-down strategy or MOBA games vs. movies), you may want to purposely segment the display into separate sectors. For example, for a racing game you may want the left and right sides to be more independent and very responsive to rapid changes, whereas for general film viewing you may want a more general output, because you're dealing with a wider variety of ways the camera can move.
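To illustrate point 1, here is a minimal sketch of the Robot-based capture with pixel skipping (note the caveat from the question: Robot sees the desktop compositor, so exclusive-fullscreen DirectX/OpenGL apps may still come back blank):

import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

// Average the screen's RGB while sampling only every stride-th pixel for speed.
static int[] averageScreenColor(Robot robot, int stride) {
    Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
    BufferedImage shot = robot.createScreenCapture(new Rectangle(screen));
    long r = 0, g = 0, b = 0, n = 0;
    for (int y = 0; y < shot.getHeight(); y += stride) {
        for (int x = 0; x < shot.getWidth(); x += stride) {
            int rgb = shot.getRGB(x, y);
            r += (rgb >> 16) & 0xFF;
            g += (rgb >> 8) & 0xFF;
            b += rgb & 0xFF;
            n++;
        }
    }
    return new int[] { (int) (r / n), (int) (g / n), (int) (b / n) };
}

Calling this every 100 ms with a stride of 8 or 16 keeps the per-frame cost low.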
I need to know how my game can handle all these screen sizes. I have seen a few options, including:
Resizing the elements to fit the screen
Making assets in lots of different sizes
I'd like to know which is more efficient.
What are my other options?
How would I go about making it work?
So far I am just making my screen fit the Android device I'm testing on, and this could lead to failure in the future if I do not set this handling up.
Thanks
Well, if you are using libGDX then you don't have to worry about screen sizes. Just use its camera class and set its viewport accordingly; refer to this link for the camera.
Also, you don't need to make Android handlers for it.
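A minimal libGDX sketch of that idea (the 800x480 virtual resolution is just an example; FitViewport letterboxes to preserve aspect ratio, while other Viewport subclasses stretch or crop instead):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.utils.viewport.FitViewport;
import com.badlogic.gdx.utils.viewport.Viewport;

public class MyGame extends ApplicationAdapter {
    // Design the game against one virtual resolution; the viewport maps it
    // onto whatever physical screen the device actually has.
    private OrthographicCamera camera;
    private Viewport viewport;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(800, 480, camera);
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height); // recompute scaling on rotation/resize
    }
}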
This page talks about how to handle this problem: http://developer.android.com/guide/practices/screens_support.html
It claims the best practices are:
Use wrap_content, fill_parent, or dp units when specifying dimensions in an XML layout file
Do not use hard coded pixel values in your application code
Do not use AbsoluteLayout (it's deprecated)
Supply alternative bitmap drawables for different screen densities
The problem is not with the dimensions of the screen, but rather with its density. Using dp to set the size of elements is the most common way.
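When you do need a pixel value in Java code, convert from dp at runtime instead of hard-coding it; a small sketch:

import android.content.res.Resources;
import android.util.TypedValue;

// Convert a density-independent value (dp) to physical pixels for this screen.
static int dpToPx(Resources res, float dp) {
    return Math.round(TypedValue.applyDimension(
            TypedValue.COMPLEX_UNIT_DIP, dp, res.getDisplayMetrics()));
}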
My web application requires users to upload a passport-style photo of themselves. This photo will be used to generate several images:
web avatars of multiple sizes to display within the application (72 dpi)
printable image to print a 1"x1" face shot using an ID card printer (300 dpi)
printable image to print in reports on a standard printer (300+ dpi)
I'm going to use a jquery-based image cropping tool (perhaps JCrop) so that the user can select an area just around their face and disregard the rest of the image.
Are there any techniques to make sure that the image that is uploaded is of high enough resolution that it can be printed to the card printer and regular printers with a dimension of at least 1" x 1"?
My understanding is that EXIF dpi information is not reliable. Should I simply verify that the area they select in the crop equates to at least 300x300 pixels in the raw image?
Would it be best to handle this on the client in javascript or on the server (which is using Java)?
Well, if you want to make sure the image resolution is big enough that it can be printed at 300 dpi with good quality, you just need to check the pixel size of the part being selected by the user.
After having a quick look at JCrop, it seems like you can easily access the coordinates of the selected image part (using showCoords()).
With that you know the size of the selected image part in pixels. It now depends on how big you want to print your image at 300 dpi.
For example, a US Letter page at 300 dpi needs 2550x3300 px; a DIN A4 page would be 2480x3508 px.
So from the coordinates you get from JCrop, simply calculate the rectangle's dimensions in pixels and check whether it's big enough to be printed at the size you desire at 300 dpi...
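On the server side (Java, per the question) that check is just arithmetic; a minimal sketch, with all names illustrative:

// The selected crop must contain at least printInches * dpi pixels per axis,
// e.g. cropIsPrintable(selW, selH, 1.0, 1.0, 300) for the 1"x1" card photo.
static boolean cropIsPrintable(int cropWidthPx, int cropHeightPx,
                               double printInchesW, double printInchesH, int dpi) {
    return cropWidthPx >= printInchesW * dpi && cropHeightPx >= printInchesH * dpi;
}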
Hope that helps...
Edit:
If you want to make sure the image is correct, by which I mean it has a face that fills about 80% of the image, you could try using a Python script that uses OpenCV... OpenCV already provides basic face detection algorithms. So maybe you can run the uploaded image through the face detection algorithm, which then says whether it contains a face or not...
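Since the server is Java, you would not even need a separate Python script: OpenCV also ships Java bindings with the same cascade-based detector. A rough sketch, assuming the OpenCV native library and a Haar cascade file (e.g. haarcascade_frontalface_default.xml) are available on the server:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class FaceCheck {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // native OpenCV

    // Returns true if the cascade finds at least one face in the uploaded image.
    public static boolean containsFace(String imagePath, String cascadePath) {
        Mat image = Imgcodecs.imread(imagePath);
        CascadeClassifier detector = new CascadeClassifier(cascadePath);
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces);
        return !faces.empty();
    }
}

Checking that the detected face fills roughly 80% of the crop would then be a size comparison on the first rectangle in faces.toArray().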