I'm trying to capture all the pen data: touch, pressure at one point, coordinates of the touch ...
Any suggestions?
SigCtl sigCtl = new SigCtl();
DynamicCapture dc = new DynamicCapture();
int rc = dc.capture(sigCtl, "who", "why", null, null);
if (rc == 0) {
    System.out.println("signature captured successfully");
    String fileName = "sig1.png";
    SigObj sig = sigCtl.signature();
    sig.extraData("AdditionalData", "CaptureImage.java Additional Data");
    int flags = SigObj.outputFilename | SigObj.color32BPP | SigObj.encodeData;
    sig.renderBitmap(fileName, 200, 150, "image/png", 0.5f, 0xff0000, 0xffffff, 0.0f, 0.0f, flags);
}
I managed to solve the problem using the wgssStu library; that jar contains a PenData class with methods such as getPressure, getX, getY, getRdy, getSw...
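For reference, iterating over the captured pen samples might look roughly like this. This is a minimal sketch: the Point record below is a hypothetical stand-in for wgssStu's PenData class, using only the getter names mentioned above, and is not the real SDK type.

```java
import java.util.List;

public class PenDataSketch {
    // Stand-in for wgssStu's PenData: x/y in tablet units, a pressure level,
    // sw = pen-down switch, rdy = pen-in-proximity flag. Hypothetical type.
    record Point(int x, int y, int pressure, boolean sw, boolean rdy) {}

    // Keep only the samples where the pen actually touched the pad (sw == true).
    static long countContactPoints(List<Point> points) {
        return points.stream().filter(Point::sw).count();
    }

    public static void main(String[] args) {
        List<Point> points = List.of(
                new Point(100, 200, 0, false, true),   // hovering, no contact
                new Point(105, 205, 512, true, true),  // touching
                new Point(110, 210, 600, true, true)); // touching
        System.out.println(countContactPoints(points)); // prints 2
    }
}
```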
I work with libGDX.
When I try to create a fixture with a polygon shape in Box2D, I get the following error:
java: ./Box2D/Collision/b2Distance.h:103: const b2Vec2& b2DistanceProxy::GetVertex(int32) const: Assertion `0 <= index && index < m_count' failed.
When I don't call Box2D's world.step(), I no longer get this error.
So I commented out everything in my WorldContactListener and added world.step() again.
I am still getting the same error.
When I replace the polygon shape with a circle shape, everything works fine.
So here is how I am creating my polygon shape:
PolygonShape shape = new PolygonShape();
float ppm = Game.PixelsPerMeter;
Vector2[] vertices = new Vector2[3];
vertices[0] = new Vector2(0f/ppm , 0f );
vertices[1] = new Vector2(1/ppm , 1f/ppm );
vertices[2] = new Vector2(0f/ppm ,1f/ppm);
shape.set(vertices);
And here is how I am adding everything to the Box2D world:
float ppm = Game.PixelsPerMeter;
BodyDef bdef = new BodyDef();
bdef.position.set(100/ ppm, 200/ ppm);
bdef.type = BodyDef.BodyType.DynamicBody;
b2dbody = world.createBody(bdef);
FixtureDef mainFdef = new FixtureDef();
mainFdef.shape = shape; //this is the shape from above of course
b2dbody.createFixture(mainFdef).setUserData(this);
I would be really happy if you could tell me what's wrong!
Thank you!
More of a wild guess, but is your ppm conversion going the right way? 1/ppm (with ppm = 75, as you indicate) gives quite a small value. I didn't dig into the bowels of the Box2D code, but since it works best when objects are defined in meters, creating a polygon with vertices at 0,0 and 0,0.0133 (about 1 cm apart) might "confuse" it: some rounding or vertex-welding step may fail to distinguish the vertices and conclude there are not at least 3 of them.
For example, a bare bones app with 3 versions of your vertices code yields runtime exceptions on the first 2 versions (small values) but no runtime exception with larger values:
/* Version 1 (your code) - Runtime error
vertices[0] = new Vector2(0f/ppm , 0f );
vertices[1] = new Vector2(1f/ppm , 1f/ppm );
vertices[2] = new Vector2(0f/ppm ,1f/ppm);
*/
/* Version 2 (your actual values) - Runtime error
vertices[0] = new Vector2(0f , 0f );
vertices[1] = new Vector2(0.0133f , .0133f );
vertices[2] = new Vector2(0f , 0.0133f);
*/
/* Version 3 (larger values) - No error
vertices[0] = new Vector2(0f , 0f );
vertices[1] = new Vector2(1f , 1f );
vertices[2] = new Vector2(0f ,1f);
*/
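The scale arithmetic can be checked in isolation. In the sketch below, WELD_TOLERANCE approximates the vertex-welding distance Box2D derives from b2_linearSlop (0.005 m); the exact constant and the "10x margin" heuristic are my assumptions, not values from the libGDX source.

```java
// Quick check of how small the question's triangle edges get at ppm = 75.
public class PpmCheck {
    // Assumed approximation of Box2D's vertex-welding distance, in meters.
    static final float WELD_TOLERANCE = 0.5f * 0.005f;

    static float distance(float x1, float y1, float x2, float y2) {
        float dx = x2 - x1, dy = y2 - y1;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    public static void main(String[] args) {
        float ppm = 75f;
        // Edge from (0, 0) to (0, 1/ppm), as in the question's vertices.
        float edge = distance(0f, 0f, 0f, 1f / ppm); // about 0.0133 m
        System.out.println("edge length in meters: " + edge);
        // Heuristic: edges within ~10x of the weld tolerance are suspect.
        System.out.println("suspiciously small: " + (edge < 10 * WELD_TOLERANCE));
    }
}
```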
I have two images that I want to merge into one and show side by side (left and right). I've searched websites and found the code below, but the images always come out stacked top and bottom.
I think this happens because of the extra code I added to make both images the same size.
My code:
if (c.getWidth() > s.getWidth()) {
    width = c.getWidth();
    height = c.getHeight() + s.getHeight();
} else {
    width = s.getWidth();
    height = c.getHeight() + s.getHeight();
}
cs = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(cs);
canvas.drawBitmap(c, 0f, 0f, null);
canvas.drawBitmap(s, c.getWidth(), 0f , null);
And I am also using this code to make both images the same size:
Rect dest1 = new Rect(0, 0, width, height / 2); // left,top,right,bottom
canvas.drawBitmap(c, null, dest1, null);
Rect dest2 = new Rect(0, height / 2, width, height); // left,top,right,bottom
canvas.drawBitmap(s, null, dest2, null);
Add customdrawable.xml to the drawable folder:
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item>
<bitmap android:src="@drawable/image1" android:gravity="left"/>
</item>
<item>
<bitmap android:src="@drawable/image2" android:gravity="right"/>
</item>
</layer-list>
Use it as @drawable/customdrawable.
Example link: LayeredDrawable
If it suits your requirements, you can use this.
Well, the first if { is not valid code; I assume you want some condition there to distinguish between side-by-side and above-below?
Both branches of your if compute the same height (the sum of both heights); for side-by-side you probably want to add the widths instead.
Overall, you probably want something like:
if (sideBySide)
    cs = Bitmap.createBitmap(c.getWidth() + s.getWidth(), Math.max(c.getHeight(), s.getHeight()), Bitmap.Config.ARGB_8888);
else
    cs = Bitmap.createBitmap(Math.max(c.getWidth(), s.getWidth()), c.getHeight() + s.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(cs);
canvas.drawBitmap(c, 0f, 0f, null);
if (sideBySide)
    canvas.drawBitmap(s, c.getWidth(), 0f, null);
else
    canvas.drawBitmap(s, 0f, c.getHeight(), null);
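Independent of the Android API, the target bitmap's dimensions can be sanity-checked in isolation: side by side sums the widths and takes the larger height, stacked sums the heights and takes the larger width. A small sketch (the method name is mine):

```java
public class MergedSize {
    // Returns {width, height} of the merged canvas for two images of
    // sizes w1 x h1 and w2 x h2.
    static int[] mergedSize(int w1, int h1, int w2, int h2, boolean sideBySide) {
        return sideBySide
                ? new int[] { w1 + w2, Math.max(h1, h2) }  // left-right
                : new int[] { Math.max(w1, w2), h1 + h2 }; // top-bottom
    }

    public static void main(String[] args) {
        int[] side = mergedSize(200, 100, 300, 150, true);
        System.out.println(side[0] + "x" + side[1]); // prints 500x150
        int[] stacked = mergedSize(200, 100, 300, 150, false);
        System.out.println(stacked[0] + "x" + stacked[1]); // prints 300x250
    }
}
```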
I'm new to LibGDX's 3D facilities and I'm wondering how I can merge two cylinders created using ModelBuilder#createCylinder. I have two ModelInstances:
The first one is a white cylinder,
The second is a red cylinder with the same properties.
How can I get only one renderable (instance / model / object / whatever can be rendered) composed of the red cylinder above the white one (or vice versa)?
Pixmap pixmap1 = new Pixmap(1, 1, Format.RGBA8888);
pixmap1.setColor(Color.WHITE);
pixmap1.fill();
Texture white = new Texture(pixmap1);
//...
Texture red = new Texture(pixmap2);
model1 = modelBuilder.createCylinder(4f, 6f, 4f, 16,
new Material(
TextureAttribute.createDiffuse(white),
ColorAttribute.createSpecular(1,1,1,1),
FloatAttribute.createShininess(8f))
, Usage.Position | Usage.Normal | Usage.TextureCoordinates);
model1I_white = new ModelInstance(model1, 0, 0, 0);
//...
model2I_red = new ModelInstance(model2, 0, 0, -2f);
Then I render ModelInstance with ModelBatch#render.
Instead of using createCylinder(), you can create two cylinders with the MeshBuilder class and compose your final model with part().
meshBuilder.begin();
meshBuilder.cylinder(4f, 6f, 4f, 16);
Mesh cylinder1 = meshBuilder.end();
meshBuilder.begin();
meshBuilder.cylinder(4f, 6f, 4f, 16);
Mesh cylinder2 = meshBuilder.end();
modelBuilder.begin();
modelBuilder.part("cylinder1",
cylinder1,
Usage.Position | Usage.Normal | Usage.TextureCoordinates,
new Material(
TextureAttribute.createDiffuse(white),
ColorAttribute.createSpecular(1,1,1,1),
FloatAttribute.createShininess(8f)));
modelBuilder.part("cylinder2",
cylinder2,
Usage.Position | Usage.Normal | Usage.TextureCoordinates,
new Material(
TextureAttribute.createDiffuse(red),
ColorAttribute.createSpecular(1,1,1,1),
FloatAttribute.createShininess(8f)))
.mesh.transform(new Matrix4().translate(0, 0, -2f));
Model finalCylinder = modelBuilder.end();
Thanks a lot to Aurel for sharing the code.
libGDX's API has probably changed slightly since, so I'll post the complete code which worked for me:
MeshBuilder meshBuilder = new MeshBuilder();
meshBuilder.begin(VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
meshBuilder.part("id1", GL20.GL_TRIANGLES);
meshBuilder.box(1f, 1f, 1f);
Mesh mesh1 = meshBuilder.end();
meshBuilder.begin(VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
meshBuilder.part("id2", GL20.GL_TRIANGLES);
meshBuilder.cylinder(1f, 1f, 1f, 16);
Mesh mesh2 = meshBuilder.end();
ModelBuilder modelBuilder = new ModelBuilder();
modelBuilder.begin();
modelBuilder.part("part1",
mesh1,
GL20.GL_TRIANGLES,
new Material(ColorAttribute.createDiffuse(Color.RED)));
modelBuilder.part("part2",
mesh2,
GL20.GL_TRIANGLES,
new Material(ColorAttribute.createDiffuse(Color.RED)))
.mesh.transform(new Matrix4().translate(1, 0, 0f));
To preserve a color for every mesh, add VertexAttributes.Usage.ColorPacked to meshBuilder.begin():
meshBuilder.begin(VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal | VertexAttributes.Usage.ColorPacked);
I'm new to the Android SDK and to programming with OpenGL ES 2.0. My problem is that most of the programs do not run on my PC.
I'm using an Android Virtual Device (Nexus 4) with 512 MB RAM, a 64 MB VM heap, 512 MB internal storage, and Android 4.3 with API 18 (no SD card).
A sample I'm trying to run is:
package com.example.mynewsample;
//
// Book: OpenGL(R) ES 2.0 Programming Guide
// Authors: Aaftab Munshi, Dan Ginsburg, Dave Shreiner
// ISBN-10: 0321502795
// ISBN-13: 9780321502797
// Publisher: Addison-Wesley Professional
// URLs: http://safari.informit.com/9780321563835
// http://www.opengles-book.com
//
// Hello_Triangle
//
// This is a simple example that draws a single triangle with
// a minimal vertex/fragment shader. The purpose of this
// example is to demonstrate the basic concepts of
// OpenGL ES 2.0 rendering.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.content.Context;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.util.Log;
public class myTriangleRenderer implements GLSurfaceView.Renderer
{
///
// Constructor
//
public myTriangleRenderer(Context context)
{
mVertices = ByteBuffer.allocateDirect(mVerticesData.length * 4)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
mVertices.put(mVerticesData).position(0);
}
///
// Create a shader object, load the shader source, and
// compile the shader.
//
private int LoadShader(int type, String shaderSrc)
{
int shader;
int[] compiled = new int[1];
// Create the shader object
shader = GLES20.glCreateShader(type);
if (shader == 0)
return 0;
// Load the shader source
GLES20.glShaderSource(shader, shaderSrc);
// Compile the shader
GLES20.glCompileShader(shader);
// Check the compile status
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0)
{
Log.e(TAG, GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
return 0;
}
return shader;
}
///
// Initialize the shader and program object
//
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
String vShaderStr =
"attribute vec4 vPosition; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = vPosition; \n"
+ "} \n";
String fShaderStr =
"precision mediump float; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_FragColor = vec4 ( 1.0, 0.0, 0.0, 1.0 );\n"
+ "} \n";
int vertexShader;
int fragmentShader;
int programObject;
int[] linked = new int[1];
// Load the vertex/fragment shaders
vertexShader = LoadShader(GLES20.GL_VERTEX_SHADER, vShaderStr);
fragmentShader = LoadShader(GLES20.GL_FRAGMENT_SHADER, fShaderStr);
// Create the program object
programObject = GLES20.glCreateProgram();
if (programObject == 0)
return;
GLES20.glAttachShader(programObject, vertexShader);
GLES20.glAttachShader(programObject, fragmentShader);
// Bind vPosition to attribute 0
GLES20.glBindAttribLocation(programObject, 0, "vPosition");
// Link the program
GLES20.glLinkProgram(programObject);
// Check the link status
GLES20.glGetProgramiv(programObject, GLES20.GL_LINK_STATUS, linked, 0);
if (linked[0] == 0)
{
Log.e(TAG, "Error linking program:");
Log.e(TAG, GLES20.glGetProgramInfoLog(programObject));
GLES20.glDeleteProgram(programObject);
return;
}
// Store the program object
mProgramObject = programObject;
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}
// /
// Draw a triangle using the shader pair created in onSurfaceCreated()
//
public void onDrawFrame(GL10 glUnused)
{
// Set the viewport
GLES20.glViewport(0, 0, mWidth, mHeight);
// Clear the color buffer
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Use the program object
GLES20.glUseProgram(mProgramObject);
// Load the vertex data
GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, mVertices);
GLES20.glEnableVertexAttribArray(0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
}
// /
// Handle surface changes
//
public void onSurfaceChanged(GL10 glUnused, int width, int height)
{
mWidth = width;
mHeight = height;
}
// Member variables
private int mProgramObject;
private int mWidth;
private int mHeight;
private FloatBuffer mVertices;
private static String TAG = "HelloTriangleRenderer";
private final float[] mVerticesData =
{ 0.0f, 0.5f, 0.0f, -0.5f, -0.5f, 0.0f, 0.5f, -0.5f, 0.0f };
}
I have tried different virtual devices, but each time the app crashes with an "Unfortunately, ... has stopped" message.
I get this with all OpenGL ES 2.0 programs that don't use a Canvas; a Canvas-based program runs fine.
My experience thus far has always been that the Android emulator does not fully support OpenGL ES 2.0, only ES 1.x. By far the easiest approach is to test on a physical device.
However, please check out this question, which suggests it can now be done:
Android OpenGL ES 2.0 emulator
OpenGL ES 2.0 emulation on AVDs actually works pretty well now since the Jelly Bean version. However, the critical factor is the underlying OpenGL driver you have installed on your host development system. It really must be a recent Nvidia or AMD driver. Also, installing Intel's HAXM makes it run much faster. See the third article here:
http://montgomery1.com/opengl/
As described in the post title, I'm looking for a way to detect motion/movement in the input stream from a CCTV camera (IP/WiFi). Does anyone know the best way to connect to an IP video stream and monitor it for motion?
This is the OpenCV code in Python; the Java version is similar. You need OpenCV for the image operations.
import cv2, time, pandas
from datetime import datetime

first_frame = None
status_list = [None, None]
times = []
df = pandas.DataFrame(columns=["Start", "End"])

video = cv2.VideoCapture('rtsp://admin:Paxton10#10.199.27.128:554')

while True:
    check, frame = video.read()
    status = 0
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # First frame becomes the reference background
    if first_frame is None:
        first_frame = gray
        continue

    delta_frame = cv2.absdiff(first_frame, gray)
    thresh_frame = cv2.threshold(delta_frame, 30, 255, cv2.THRESH_BINARY)[1]
    thresh_frame = cv2.dilate(thresh_frame, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh_frame.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in cnts:
        if cv2.contourArea(contour) < 200000:
            continue
        status = 1
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)

    status_list.append(status)
    status_list = status_list[-2:]

    # Record the timestamps when motion starts and stops
    if status_list[-1] == 1 and status_list[-2] == 0:
        times.append(datetime.now())
    if status_list[-1] == 0 and status_list[-2] == 1:
        times.append(datetime.now())

    #cv2.imshow("Gray Frame", gray)
    #cv2.imshow("Delta Frame", delta_frame)
    imS = cv2.resize(thresh_frame, (640, 480))
    cv2.imshow("Threshold Frame", imS)
    imS = cv2.resize(frame, (640, 480))
    cv2.imshow("Color Frame", imS)
    #cv2.imshow("Color Frame", frame)

    key = cv2.waitKey(1)
    if key == ord('q'):
        if status == 1:
            times.append(datetime.now())
        break

print(status_list)
print(times)

for i in range(0, len(times), 2):
    df = df.append({"Start": times[i], "End": times[i + 1]}, ignore_index=True)

df.to_csv("Times.csv")
video.release()
cv2.destroyAllWindows()
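The core of the detector above is language-agnostic: diff the current frame against a reference frame, threshold the difference, and flag motion when enough pixels changed. A minimal Java sketch of that idea, using plain int arrays as grayscale frames (the method name and constants are illustrative, not from OpenCV):

```java
public class MotionCore {
    // Returns true when at least minChangedPixels differ from the reference
    // frame by more than threshold (a crude stand-in for the contour-area test).
    static boolean motionDetected(int[][] prev, int[][] curr, int threshold, int minChangedPixels) {
        int changed = 0;
        for (int y = 0; y < prev.length; y++) {
            for (int x = 0; x < prev[y].length; x++) {
                if (Math.abs(curr[y][x] - prev[y][x]) > threshold) {
                    changed++;
                }
            }
        }
        return changed >= minChangedPixels;
    }

    public static void main(String[] args) {
        int[][] background = new int[4][4]; // all-zero reference frame
        int[][] frame = new int[4][4];
        frame[1][1] = 200; // two bright pixels appear
        frame[1][2] = 180;
        System.out.println(motionDetected(background, frame, 30, 2));      // prints true
        System.out.println(motionDetected(background, background, 30, 2)); // prints false
    }
}
```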