Trying to set a texture from a Java plugin in Unity 4.6

What I'm trying to do:
Pull an image from the SD card on the phone using a Java plugin.
Unity passes a texture ID to the plugin.
The plugin uses OpenGL to assign the image to the Unity texture through that ID.
This will (eventually) be used to play a video clip from the phone in Unity; for now, it's just an attempt to change a texture from outside Unity.
My issue:
When I call the method in the plugin, passing texture.GetNativeTextureID() into it, the texture does not change. I'm currently only using a simple black 50x50 texture for testing, and the original texture is flat white.
I'm worried that I've missed something significant, as this is my first time working with GL calls in Java. Many of the answers to similar problems involve using native C++ instead of Java, but I can't find a concrete answer saying that C++ must be used. I'd like to avoid writing another set of plugins and plugin handlers for C++, but if it's the most efficient (or only) way to get this working, I'll do it, as I'm not unfamiliar with OpenGL and C++.
Code:
The plugin method is called from OnPreRender() in a script attached to the main camera:
if (grabTex) {
    int texPtr = m_VideoTex.GetNativeTextureID();
    Debug.Log( "texPtr = " + texPtr );
    m_JVInterface.SetTex( texPtr );
}
m_VideoTex is a basic Texture2D( 50, 50 ) with all pixels set to white, attached to the diffuse shader on the quad in the scene.
The Java plugin code is as follows:
public void SetTexture(Context cont, int _texPointer) {
    if (_texPointer != 0) {
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inScaled = false;
        options.inJustDecodeBounds = false;
        final Bitmap bitmap = BitmapFactory.decodeFile("/storage/emulated/0/Pictures/black.jpg", options);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, _texPointer);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        Log.i("VideoHandler", "Received ID: " + _texPointer);
        bitmap.recycle();
    }
}

This is most likely a problem with the OpenGL context: the Java method is called from Unity's scripting thread, which does not own the rendering context, so the glBindTexture/texImage2D calls have no visible effect. The easiest way would be to send the image to Unity as raw bytes and then upload it into the texture inside Unity.
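A minimal sketch of that approach on the Java side, assuming a hypothetical GetImageBytes() method that Unity calls through AndroidJavaObject; the C# side can then pass the returned bytes to Texture2D.LoadImage, which performs the upload on Unity's own rendering context:
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.ByteArrayOutputStream;

// Hypothetical plugin method: decode the file and hand compressed bytes to Unity.
public byte[] GetImageBytes() {
    Bitmap bitmap = BitmapFactory.decodeFile("/storage/emulated/0/Pictures/black.jpg");
    if (bitmap == null) return new byte[0]; // file missing or undecodable
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream); // re-encode as PNG
    bitmap.recycle();
    return stream.toByteArray();
}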

Related

Android Java - programmatically capture background before app window opens [duplicate]

Possible Duplicate:
How to programmatically take a screenshot on Android?
How can I capture the Android device screen content and make an image file from the snapshot data? Which API should I use, or where can I find related resources?
BTW:
not a camera snapshot, but the device screen
Use the following code:
Bitmap bitmap;
View v1 = MyView.getRootView();
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
Here MyView is the View we want to capture. You can also get the drawing cache from any View this way (without getRootView()).
There is also another way: if we have a ScrollView as the root view, then it's better to use the following code,
LayoutInflater inflater = (LayoutInflater) this.getSystemService(LAYOUT_INFLATER_SERVICE);
FrameLayout root = (FrameLayout) inflater.inflate(R.layout.activity_main, null); // activity_main is the UI (xml) file used in our Activity class; FrameLayout is the root view of that xml file
root.setDrawingCacheEnabled(true);
Bitmap bitmap = getBitmapFromView(this.getWindow().findViewById(R.id.frameLayout)); // pass the id of your root layout (here it's my FrameLayout's id)
root.setDrawingCacheEnabled(false);
Here is the getBitmapFromView() method
public static Bitmap getBitmapFromView(View view) {
    // Define a bitmap with the same size as the view
    Bitmap returnedBitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
    // Bind a canvas to it
    Canvas canvas = new Canvas(returnedBitmap);
    // Get the view's background
    Drawable bgDrawable = view.getBackground();
    if (bgDrawable != null)
        // has a background drawable, so draw it on the canvas
        bgDrawable.draw(canvas);
    else
        // no background drawable, so draw a white background on the canvas
        canvas.drawColor(Color.WHITE);
    // draw the view on the canvas
    view.draw(canvas);
    // return the bitmap
    return returnedBitmap;
}
It will capture the entire screen, including content hidden inside your ScrollView.
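To persist the captured bitmap, a minimal sketch (the filename is just an example; try-with-resources assumes API 19+, and the code runs inside a Context such as an Activity):
// Save the captured bitmap as a PNG in the app's external files directory.
File file = new File(getExternalFilesDir(null), "screenshot.png");
try (FileOutputStream out = new FileOutputStream(file)) {
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    e.printStackTrace();
}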
UPDATED AS ON 20-04-2016
There is another, better way to take a screenshot. Here I take a screenshot of a WebView.
WebView w = new WebView(this);
w.setWebViewClient(new WebViewClient() {
    public void onPageFinished(final WebView webView, String url) {
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                webView.measure(View.MeasureSpec.makeMeasureSpec(
                        View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED),
                        View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
                webView.layout(0, 0, webView.getMeasuredWidth(),
                        webView.getMeasuredHeight());
                webView.setDrawingCacheEnabled(true);
                webView.buildDrawingCache();
                Bitmap bitmap = Bitmap.createBitmap(webView.getMeasuredWidth(),
                        webView.getMeasuredHeight(), Bitmap.Config.ARGB_8888);
                Canvas canvas = new Canvas(bitmap);
                webView.draw(canvas); // render the WebView into the bitmap
                try {
                    String filePath = Environment.getExternalStorageDirectory().toString();
                    File file = new File(filePath, "/webviewScreenShot.png");
                    OutputStream out = new FileOutputStream(file);
                    bitmap.compress(Bitmap.CompressFormat.PNG, 50, out);
                    out.flush();
                    out.close();
                    bitmap.recycle();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 1000);
    }
});
Hope this helps..!
AFAIK, all current methods of capturing a screenshot on Android use the /dev/graphics/fb0 framebuffer. This includes ddms. Reading from this stream does require root. ddms uses adbd to request the information, so root is not required, as adb has the permissions needed to request the data from /dev/graphics/fb0.
The framebuffer contains 2+ "frames" of RGB565 images. If you are able to read the data, you have to know the screen resolution to know how many bytes are needed for one image. Each pixel is 2 bytes, so if the screen resolution is 480x800, you have to read 768,000 bytes for the image, since a 480x800 RGB565 image has 384,000 pixels.
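For illustration, a small sketch (assuming the usual little-endian byte order) of decoding one RGB565 pixel into a packed ARGB_8888 int:
// Decode a 2-byte RGB565 pixel (low byte first) into a packed ARGB_8888 int.
static int rgb565ToArgb(byte lo, byte hi) {
    int p = ((hi & 0xFF) << 8) | (lo & 0xFF);
    int r = (p >> 11) & 0x1F; // 5 bits of red
    int g = (p >> 5) & 0x3F;  // 6 bits of green
    int b = p & 0x1F;         // 5 bits of blue
    // Expand 5/6-bit channels to 8 bits by replicating the high bits.
    return 0xFF000000
            | (((r << 3) | (r >> 2)) << 16)
            | (((g << 2) | (g >> 4)) << 8)
            | ((b << 3) | (b >> 2));
}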
For newer Android platforms, one can execute the system utility screencap in /system/bin to get a screenshot without root permission.
You can try /system/bin/screencap -h to see how to use it under adb or any shell.
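For example, a typical invocation from a desktop shell (the output path is just an example) is:
adb shell screencap -p /sdcard/screenshot.png
adb pull /sdcard/screenshot.png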
By the way, I think this method is only good for a single snapshot.
If we want to capture multiple frames for screen recording, it will be too slow.
I don't know if there is any other approach for faster screen capture.
[Based on Android source code:]
At the C++ side, the SurfaceFlinger implements the captureScreen API. This is exposed over the binder IPC interface, returning each time a new ashmem area that contains the raw pixels from the screen. The actual screenshot is taken through OpenGL.
For the system C++ clients, the interface is exposed through the ScreenshotClient class, defined in <surfaceflinger_client/SurfaceComposerClient.h> for Android < 4.1; for Android > 4.1 use <gui/SurfaceComposerClient.h>
Before JB, to take a screenshot in a C++ program, this was enough:
ScreenshotClient ssc;
ssc.update();
With JB and multiple displays, it becomes slightly more complicated:
ssc.update(
android::SurfaceComposerClient::getBuiltInDisplay(
android::ISurfaceComposer::eDisplayIdMain));
Then you can access it:
do_something_with_raw_bits(ssc.getPixels(), ssc.getSize(), ...);
Using the Android source code, you can compile your own shared library to access that API, and then expose it through JNI to Java. To create a screenshot from your app, the app has to have the READ_FRAME_BUFFER permission.
But even then, apparently you can create screenshots only from system applications, i.e. ones that are signed with the same key as the system. (This part I still don't quite understand, since I'm not familiar enough with the Android permissions system.)
Here is a piece of code, for JB 4.1 / 4.2:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <utils/RefBase.h>
#include <binder/IBinder.h>
#include <binder/MemoryHeapBase.h>
#include <gui/ISurfaceComposer.h>
#include <gui/SurfaceComposerClient.h>

static void do_save(const char *filename, const void *buf, size_t size) {
    int out = open(filename, O_RDWR|O_CREAT, 0666);
    int len = write(out, buf, size);
    printf("Wrote %d bytes to out.\n", len);
    close(out);
}

int main(int ac, char **av) {
    android::ScreenshotClient ssc;
    const void *pixels;
    size_t size;
    if (ssc.update(
            android::SurfaceComposerClient::getBuiltInDisplay(
                android::ISurfaceComposer::eDisplayIdMain)) == NO_ERROR) {
        printf("Captured: w=%d, h=%d, format=%d\n",
               ssc.getWidth(), ssc.getHeight(), ssc.getFormat());
        pixels = ssc.getPixels();
        size = ssc.getSize();
        do_save(av[1], pixels, size);
    } else {
        printf("Screenshot capture failed\n");
    }
    return 0;
}
You can try the following library: the Android Screenshot Library (ASL) enables you to programmatically capture screenshots from Android devices without requiring root access privileges. Instead, ASL utilizes a native service running in the background, started via the Android Debug Bridge (ADB) once per device boot.
According to this link, it is possible to use ddms in the tools directory of the Android SDK to take screen captures.
To do this within an application (and not during development), there are also applications to do so. But as @zed_0xff points out, it certainly requires root.
The framebuffer seems the way to go, but it will not always contain 2+ frames, as mentioned by Ryan Conrad. In my case it contained only one. I guess it depends on the frame/display size.
I tried to read the framebuffer continuously, but it seems to return a fixed number of bytes per read. In my case that is 3,410,432 bytes, which is enough to store a display frame of 854*480 RGBA (3,279,360 bytes). Yes, the binary frame output from fb0 is RGBA on my device. This will most likely vary from device to device, and it matters when you decode it =)
On my device, the permissions on /dev/graphics/fb0 are such that only root and users in the group graphics can read fb0. graphics is a restricted group, so you will probably only be able to access fb0 on a rooted phone using the su command.
Android apps have the user id (uid) app_## and group id (gid) app_##.
The adb shell has uid shell and gid shell, which has many more permissions than an app.
You can check those permissions at /system/permissions/platform.xml.
This means you will be able to read fb0 in the adb shell without root, but you will not be able to read it from an app without root.
Also, adding the READ_FRAME_BUFFER and/or ACCESS_SURFACE_FLINGER permissions to AndroidManifest.xml will do nothing for a regular app, because these only work for 'signature' apps.
If you want to do screen capture from Java code in an Android app, AFAIK you must have root privileges.

Android App crashes when Imgproc.Canny function is called

I've been working on an app that detects changes in pupil size. However, at the moment I'm stuck on a section of code using the Canny function in the OpenCV library.
private void runOpenCVCode() {
    try {
        File imageFile = new File(locations.get(0));
        Uri uri = Uri.fromFile(imageFile);
        Bitmap bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), uri);
        Mat mat = Mat.zeros(100, 400, CvType.CV_8UC3);
        Utils.bitmapToMat(bitmap, mat);
        //check mat
        Bitmap bm = Bitmap.createBitmap(mat.cols(), mat.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(mat, bm);
        pic.setImageBitmap(bm);
        Mat gray = new Mat(mat.size(), CvType.CV_8UC1);
        Imgproc.cvtColor(mat, gray, Imgproc.COLOR_BGR2GRAY, 4);
        Bitmap bmGray = Bitmap.createBitmap(gray.cols(), gray.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(gray, bmGray);
        pic.setImageBitmap(bmGray);
        Mat edges = gray;
        double thresh = Imgproc.threshold(gray, edges, 0, 255, Imgproc.THRESH_OTSU);
        Imgproc.Canny(gray, edges, 80, 100);
        Bitmap bmEdges = Bitmap.createBitmap(edges.cols(), edges.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(edges, bmEdges);
        pic.setImageBitmap(bmEdges);
    } catch (IOException e) {
        Log.e(TAG, e.toString());
        e.printStackTrace();
    } catch (Exception e) {
        Log.e(TAG, e.toString());
        e.printStackTrace();
    }
}
The code runs fine until it hits the Canny function call. If it is commented out, the rest runs without a problem. I'm assuming it is a native crash, since the app crashes without any errors in the log and goes straight back to the initial activity. I've tried instantiating the matrices gray and edges in various ways, and using different thresholds, but nothing seems to work.
Any help would be appreciated.
I have answered a similar question at https://stackoverflow.com/a/50637228/1693327, but I am unable to comment on the post due to lack of reputation points, so I will copy the answer below.
I believe the app is crashing when it hits the Canny detector because the wrong type of OpenCV Manager is installed on your device, be it version number or CPU instruction set. Checking the correct version should be straightforward: just go to the OpenCV-android-sdk\apk directory and check the three (x.y.z) numbers after OpenCV_.
Checking the instruction set of an Android device (on Windows)
To check the instruction set of your device, navigate to the adb (android debug bridge) directory commonly located at:
C:\Users\<'your username'>\AppData\Local\Android\Sdk\platform-tools
Run the command:
./adb.exe shell cat /proc/cpuinfo
After determining the correct instruction set, navigate back to OpenCV-android-sdk\apk and locate the apk with the correct version and instruction set to install on your Android device.
You can then transfer the apk to your device and install it. Another way I find useful is to navigate to the adb.exe directory and run the command:
./adb.exe install <path to OpenCV-android-sdk>/apk/OpenCV_x.y.z_Manager_x.yz_<platform instruction set>.apk
Apart from the steps above, make sure that you do not have any other settings that pull in a different OpenCV Manager, such as stating a different one in the Application.mk or build.gradle files.
After the steps above, your Canny detector should be able to run on your device without crashing.
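For reference, a minimal sketch of the usual OpenCV Manager initialization inside an Activity (the org.opencv.android classes are the standard loader API; the exact version constant is an assumption and must match your SDK):
// Connect to the installed OpenCV Manager service before making native calls.
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        if (status == LoaderCallbackInterface.SUCCESS) {
            runOpenCVCode(); // native calls such as Imgproc.Canny are safe from here
        } else {
            super.onManagerConnected(status);
        }
    }
};

@Override
protected void onResume() {
    super.onResume();
    // Version constant is an assumption; use the one matching your OpenCV SDK.
    OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
}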
Happy Developing :).

Store sprites from spritesheet in array Libgdx

I have a 960x960 spritesheet stored as a PNG inside my LibGDX android assets. In a class I use to initialize sprites for my game, I am trying to cut 120x120 sprites from the spritesheet (so there should be 64 items in the array). How am I able to do this? This is what I have tried in a similar situation:
public static Texture spritesheet;
public static Sprite[] textures = new Sprite[64];
...
//inside method that gets called once to initialize variables
spritesheet = new Texture(
        Gdx.app.getType() == Application.ApplicationType.Android ?
                "...Spritesheet.png" :
                "android/assets/...Spritesheet.png"
);
spritesheet.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
for (int x = 0; x < 64; x++) {
    textures[x] = new Sprite(spritesheet, x, (x%8), 64, 64);
    textures[x].flip(false, true);
}
I then render the sprite in other classes with this:
batch.draw(Assets.textures[0 /*this can be any number*/], (float) x, (float) y, 108, 108);
When I do this, it acts really weird. It says there are elements filled in the array, but I still get ArrayIndexOutOfBoundsExceptions, or the sprites render crazily. Overall, it's not working out. What I'm trying to do is avoid initializing 64 different sprites separately, and make it so I can easily change the sprite by changing the index passed in when rendering, so I can do other things later on, like an animation. How can I go about doing this?
You should use a TextureAtlas for this purpose. An atlas is a file generated automatically from separate images by the LibGDX TexturePacker. It stores everything from image bounds within your sheet to NinePatch information. All you need to do is put your separate images in a folder and run the TexturePacker on that folder. This will create a sheet and a .atlas/.pack file for you that can be easily loaded.
There is a TexturePacker GUI if you have difficulty with the command line, but I do recommend the command line, or even running the packer from within your app.
What I usually do is create these sheets on the fly while developing. I can overwrite separate images and they take immediate effect the next time I run my app. I start by creating a new folder images at the root of my project. Then for each pack I need, I create another folder. For this example I create the folder tileset inside images. In the DesktopLauncher I set up this folder to produce an atlas from the images.
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.maxWidth = 1024;
settings.maxHeight = 1024;
The settings object specifies everything about your atlas: the maximum size of a single sheet, whether it should strip transparency from images, padding, rotation, etc. They are all very straightforward, and you can look them up in the documentation. Using these settings you can create your atlas.
TexturePacker.process(settings,
"../../images/tileset", //where to find the images to be packed.
"../../android/assets/tileset/", //where to store the atlas and sheet
"tileset"); //what filename to use
Now if you open your .atlas file, you will see it uses the filenames as aliases. An alias is used to look a region up, but let's load the atlas first.
TextureAtlas atlas = new TextureAtlas("tileset/tileset.atlas");
By passing just a string to the constructor, it looks in the internal path by default, which in turn is android/assets/ by default. Now we can ask the atlas to hand over our assets from the sheet.
atlas.findRegion("alias"); //hands over the image on the sheet named "alias"
Looking up textures like this is somewhat expensive. You don't want to look up many textures this way on each update, so you still need to store them somewhere.
If you name your image sequence like image_01.png, image_02.png, image_03.png, the packer stores them all under the same name, and within the atlas it sorts them by index. So if you want an array of certain regions, you can name them with _xx and get them all in one go:
atlas.findRegions("alias");
This is especially handy for Animation. Just copy your image sequence to a folder and specify it to be packed. Name your sequence correctly and give the regions to the animation, and everything will work right off the bat; see the sketch below.
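A minimal sketch, assuming LibGDX 1.9.6+ for the generic Animation class and a hypothetical region alias "walk":
// Build a looping animation from all regions packed under the alias "walk".
Array<TextureAtlas.AtlasRegion> frames = atlas.findRegions("walk");
Animation<TextureRegion> walk =
        new Animation<TextureRegion>(0.1f, frames, Animation.PlayMode.LOOP);
// In render(): stateTime += Gdx.graphics.getDeltaTime();
// batch.draw(walk.getKeyFrame(stateTime), x, y);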
Loading a TextureAtlas with the AssetManager is pretty much the same as for a normal Texture, except you specify that it is a TextureAtlas.class. You always load the .atlas file, which in turn handles your image.
I always use an AssetDescriptor to load my assets. If I were you, I would get rid of the static Assets.textures[] array, since that will get you into trouble sooner or later.
//Non-static AssetManager with getter
private AssetManager manager = new AssetManager();
public AssetManager getManager() { return manager; }
//Specify a descriptor for the atlas; this is static so I can access it anywhere.
//It's just a descriptor of the asset, so this is safe.
public static final AssetDescriptor<TextureAtlas> TileSet = new AssetDescriptor<TextureAtlas>("tileset/tileset.atlas", TextureAtlas.class);
//then just load everything
public void load() {
    manager.load(TileSet);
    //... load other stuff
}
Now just pass the AssetManager object anywhere you need access to your assets, and you can get any loaded asset like so:
TextureAtlas tileset = assetManager.get(Assets.TileSet);
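Note that the AssetManager loads asynchronously; a minimal pattern (a sketch, skipping a proper loading screen) is to block until loading finishes:
manager.load(Assets.TileSet);
manager.finishLoading(); // blocks until everything queued has been loaded
TextureAtlas tileset = manager.get(Assets.TileSet);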
I think your for loop should look like this:
for (int x = 0; x < 64; x++) {
    textures[x] = new Sprite(
            spritesheet,
            (x % 8) * 64, // where x=3, (3%8)*64 = 3*64 = 192px sourceX
            (x / 8) * 64, // where x=3, (int)(3/8)*64 = 0*64 = 0px sourceY
            64,           // source width
            64            // source height
    );
}
Another test case, where x=20:
(20%8)*64 = 4*64 = 256px sourceX
(20/8)*64 = 2*64 = 128px sourceY
(For the 960x960 sheet with 120x120 cells described in the question, replace 64 with 120; the same indexing applies.)
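Alternatively, LibGDX can do the grid cut for you with TextureRegion.split; a short sketch (using the 120x120 cell size from the question's 960x960 sheet):
// Cut the sheet into an 8x8 grid of 120x120 regions, then flatten row by row.
TextureRegion[][] grid = TextureRegion.split(spritesheet, 120, 120);
Sprite[] textures = new Sprite[64];
for (int i = 0; i < 64; i++) {
    textures[i] = new Sprite(grid[i / 8][i % 8]);
    textures[i].flip(false, true); // matches the flip in the question's code
}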

Unimplemented WebView method run called from: android.os.Handler.handleCallback(Handler.java:733)

I am new to Android app development. I am trying to connect to Facebook using SocialAuth. I implemented everything as given. When I execute my app, in the background it is granted access to Facebook and returns to the app, but it then opens a blue WebView screen that starts loading and remains stuck there.
I am getting errors in Logcat such as:
W/OpenGLRenderer(1361): Bitmap too large to be uploaded into a texture (2560x1600, max=2048x2048)
W/UnimplementedWebViewApi(1361): Unimplemented WebView method run called from: android.os.Handler.handleCallback(Handler.java:733)
Here is my logcat.
Can anyone help me resolve this?
W/OpenGLRenderer(1361): Bitmap too large to be uploaded into a texture
(2560x1600, max=2048x2048)
^^^^^^^^^ Out of range
You can't go beyond the bitmap limitations, because rendering is done by OpenGL. You need to scale your image down so it fits within OpenGL's hardware texture limit (2048x2048), as suggested by the error itself.
So it's better to pin down the scale of your bitmap with some size calculations. You can use Bitmap createScaledBitmap(Bitmap src, int dstWidth, int dstHeight, boolean filter) to create a scaled bitmap from your available resource.
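A minimal sketch of such a calculation (assuming the 2048-pixel limit from the error message; real devices report theirs via GL_MAX_TEXTURE_SIZE):
// Scale a bitmap down, preserving aspect ratio, so neither side exceeds maxSize.
public static Bitmap scaleToFit(Bitmap src, int maxSize) {
    int w = src.getWidth(), h = src.getHeight();
    if (w <= maxSize && h <= maxSize) return src; // already fits
    float scale = Math.min((float) maxSize / w, (float) maxSize / h);
    return Bitmap.createScaledBitmap(src,
            Math.round(w * scale), Math.round(h * scale), true);
}
// Usage: Bitmap safe = scaleToFit(largeBitmap, 2048);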
Sorry, but the Unimplemented WebView problem is unresolved for me as well. However, I have found a reported issue about it on Google Code which might help you.

Android Image Masking not rendering as intended

I am using this blog post to generate the masking effect. I might be missing something; I am not able to achieve the intended behavior. I am new to image processing. Based on my internet research, I am assuming I need a JPEG for the original image and PNG format for the mask image, with the same dimensions. I even tried to create images as below.
My images:
(The source and mask images I created for the masking effect, and the incorrectly rendered result, were attached as screenshots.)
Why is that? Am I missing something here? The only images working so far are the ones in the example at that link.
My masking code:
public static Bitmap getMaskedBitmap(Resources res, int sourceResId, int maskResId) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
        options.inMutable = true;
    }
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    Bitmap source = BitmapFactory.decodeResource(res, sourceResId, options);
    Bitmap bitmap;
    if (source.isMutable()) {
        bitmap = source;
    } else {
        bitmap = source.copy(Bitmap.Config.ARGB_8888, true);
        source.recycle();
    }
    bitmap.setHasAlpha(true);
    Canvas canvas = new Canvas(bitmap);
    Bitmap mask = BitmapFactory.decodeResource(res, maskResId);
    Paint paint = new Paint();
    paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_IN));
    canvas.drawBitmap(mask, 0, 0, paint);
    mask.recycle();
    return bitmap;
}
Please let me know where I am going wrong.
Is there any requirement for the images, especially the mask image?
I have tested on a Samsung S3 running OS 4.2.
Update:
I tried the mask image from the blog post with my own JPEG. It works perfectly, so the issue is narrowed down to the mask image. It seems to expect some kind of configuration; please let me know about it if anyone has faced this before, or tell me what I am missing.
Latest update:
I finally figured it out. It is the alpha channel inside the mask image that makes the difference. If you are a programmer without a UI designer, then you have to take care of this alpha channel yourself.
Referring to this SO answer: Android: Bitmap recycle() how does it work?
This particular line of code is likely the problem; I can say you probably should not call
mask.recycle();
I haven't tested it, but I think it's worth a shot. When you recycle mask, the masked image itself can be garbage collected, so if you don't call it, it should work.
I finally figured it out: the mask image has to have an alpha channel. In Photoshop you have to add a transparency layer to get the alpha channel enabled.
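As a quick programmatic check (a sketch; note a JPEG mask can never carry real transparency, so the mask must be a PNG authored with an alpha channel):
// Decode the mask with an alpha-capable config; hasAlpha() then reports
// whether the source file actually contained transparency.
BitmapFactory.Options maskOptions = new BitmapFactory.Options();
maskOptions.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap mask = BitmapFactory.decodeResource(res, maskResId, maskOptions);
Log.d("Masking", "mask has alpha: " + mask.hasAlpha());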
