Detect size of HICON with JNA - java

My final goal is to obtain the icon of an HWND in Java with the help of the JNA library. Everything works fine except for one important thing: I need the size of the icon for further processing steps in Java.
It seems that I cannot request the size; I always obtain the size 0x0. What am I doing wrong? The basic code example looks like the following. Most of the API function signatures were not part of JNA, so I had to define them myself.
final long hicon = ExtUser32.INSTANCE.SendMessageA(hwnd, ExtUser32.WM_GETICON, ExtUser32.ICON_BIG, 0);
final Pointer hIcon = new Pointer(hicon);
final ICONINFO info = new ICONINFO();
final BITMAP bmp = new BITMAP();
final SIZE size = new SIZE();
System.out.println(ExtUser32.INSTANCE.GetIconInfo(hIcon, info));
System.out.println(info);
System.out.println(ExtGdi32.INSTANCE.GetBitmapDimensionEx(info.hbmColor, size));
System.out.println(size);
if (info.hbmColor != null)
{
    final int nWrittenBytes = ExtGdi32.INSTANCE.GetObjectA(info.hbmColor, bmp.size(), bmp.getPointer());
    System.out.println(nWrittenBytes);
    System.out.println(bmp);
}
The sysouts print this:
true
ICONINFO(auto-allocated#0x5b72b4f0 (32 bytes)) {
WinDef$BOOL fIcon#0=1
WinDef$DWORD xHotspot#4=16
WinDef$DWORD yHotspot#8=16
WinDef$HBITMAP hbmMask#10=native#0xffffffffb00515e8 (com.sun.jna.platform.win32.WinDef$HBITMAP#b00515e7)
WinDef$HBITMAP hbmColor#18=native#0xffffffffa50515c8 (com.sun.jna.platform.win32.WinDef$HBITMAP#a50515c7)
}
true
WinUser$SIZE(auto-allocated#0x652a3000 (8 bytes)) {
int cx#0=0
int cy#4=0
}
32
BITMAP(auto-allocated#0x5b72b5b0 (32 bytes)) {
WinDef$LONG bmType#0=0
WinDef$LONG bmWidth#4=0
WinDef$LONG bmHeight#8=0
WinDef$LONG bmWidthBytes#c=0
WinDef$WORD bmPlanes#10=0
WinDef$WORD bmBitsPixel#12=0
WinDef$LPVOID bmBits#18=0
}
Requesting the ICONINFO structure seems to work. But if I query the dimensions of the returned hbmColor handle via Gdi32.GetBitmapDimensionEx(), the SIZE structure stays initialized with zeros. This approach via hbmColor or hbmMask was suggested by:
How to determine the size of an icon from a HICON?
UPDATE 1
Error tracing added!
As the sysouts indicate (true), the function invocations in question didn't fail.
UPDATE 2
Further observation: in Java, these recreated structure types are initialized with zeros after instantiation. I set the initial values of the structure components in SIZE and BITMAP to values that deviate from zero. GetBitmapDimensionEx sets them back to zero, but GetObjectA doesn't modify the structure at all! The function's return value indicates that bytes were written, but that's not true!
...
size.cx = 1;
size.cy = 2;
bmp.bmType.setValue(1);
bmp.bmWidth.setValue(2);
bmp.bmHeight.setValue(3);
bmp.bmWidthBytes.setValue(4);
bmp.bmPlanes.setValue(5);
bmp.bmBitsPixel.setValue(6);
bmp.bmBits.setValue(7);
System.out.println(ExtGdi32.INSTANCE.GetBitmapDimensionEx(info.hbmColor, size));
System.out.println(size);
if (info.hbmColor != null)
{
    final int nWrittenBytes = ExtGdi32.INSTANCE.GetObjectA(info.hbmColor, bmp.size(), bmp.getPointer());
    System.out.println(nWrittenBytes);
    System.out.println(bmp);
}
Results:
true
WinUser$SIZE(auto-allocated#0x64fbcb20 (8 bytes)) {
int cx#0=0
int cy#4=0
}
32
BITMAP(auto-allocated#0x64fb91f0 (32 bytes)) {
WinDef$LONG bmType#0=1
WinDef$LONG bmWidth#4=2
WinDef$LONG bmHeight#8=3
WinDef$LONG bmWidthBytes#c=4
WinDef$WORD bmPlanes#10=5
WinDef$WORD bmBitsPixel#12=6
WinDef$LPVOID bmBits#18=7
}

I would have added this as a comment but my reputation is too low:
You are not showing your BITMAP or GetObjectA definitions, so I'm guessing, but in your line:
final int nWrittenBytes = ExtGdi32.INSTANCE.GetObjectA(info.hbmColor, bmp.size(), bmp.getPointer());
you fail to call bmp.read() afterwards.
If you look at the javadoc for Structure.getPointer()
https://jna.java.net/javadoc/com/sun/jna/Structure.html
you see that you are responsible for calling Structure.write() and Structure.read() before and after making the call to a native method that uses a pointer obtained with getPointer(). In your case the write is superfluous, but it is good practice.
To understand why this is necessary, consider that your BITMAP/bmp object is a Java object living in the Java heap, where it can get moved around during garbage collection. Hence getPointer() cannot return the real address of the 'real' object. Instead it returns a pointer to a separate, fixed (non-movable) chunk of memory in the native heap, which JNA allocates and associates with your Java object. Now your GetObjectA() routine will write its output to that memory, but neither JNA nor anyone on the Java side can have a clue that this happened. So you need to call read() to tell JNA to copy the native-side data back into the Java object.
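A minimal sketch of the suggested fix, assuming the asker's own ExtGdi32 and BITMAP mappings (they are not part of stock JNA):
final BITMAP bmp = new BITMAP();
bmp.write(); // flush the Java-side fields to the native memory block (superfluous here, but good practice)
final int nWrittenBytes = ExtGdi32.INSTANCE.GetObjectA(info.hbmColor, bmp.size(), bmp.getPointer());
bmp.read();  // copy what GetObjectA wrote into native memory back into the Java object
System.out.println(bmp); // bmWidth/bmHeight should now reflect the icon bitmap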

If this is a 32-bit application only:
Your BITMAP structure is incorrect. Here's a simple C program that prints the expected offsets of all the fields of BITMAP, and its total size:
// 24 january 2015
#include <windows.h>
#include <stdio.h>
#include <stddef.h>
int main(void)
{
    printf("%u\n", (unsigned) sizeof (BITMAP));
#define O(f) printf("%s %x\n", #f, (unsigned) offsetof(BITMAP, f))
    O(bmType);
    O(bmWidth);
    O(bmHeight);
    O(bmWidthBytes);
    O(bmPlanes);
    O(bmBitsPixel);
    O(bmBits);
    return 0;
}
And here's what I get (in wine, compiled as a 32-bit program with MinGW-w64):
24
bmType 0
bmWidth 4
bmHeight 8
bmWidthBytes c
bmPlanes 10
bmBitsPixel 12
bmBits 14
Notice that your Java output above has a different size for BITMAP and a different offset for bmBits.
(The BITMAP structure is correct for 64-bit applications.)
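For comparison, on 64-bit Windows bmBits is an 8-byte pointer aligned to 8 bytes (the LONG and WORD fields keep their 32-bit sizes), so the expected layout would be the following, which matches the Java output above:
32
bmType 0
bmWidth 4
bmHeight 8
bmWidthBytes c
bmPlanes 10
bmBitsPixel 12
bmBits 18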
General:
Why GetObject() is indicating success is beyond me.
I don't know JNA at all so I don't know what to do about this. Does JNA not provide a WinGDI$BITMAP?
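For what it's worth, current jna-platform does ship a BITMAP mapping as com.sun.jna.platform.win32.WinGDI.BITMAP with the per-platform layout handled for you, so one option is to use that instead of a hand-rolled structure. A hedged sketch (GetObject is assumed not to be mapped in the stock GDI32 interface, so it is declared in an extension; Native.load is Native.loadLibrary in older JNA versions):
import com.sun.jna.Native;
import com.sun.jna.platform.win32.GDI32;
import com.sun.jna.platform.win32.WinDef.HBITMAP;
import com.sun.jna.platform.win32.WinGDI;
import com.sun.jna.win32.W32APIOptions;

interface Gdi32Ext extends GDI32 {
    Gdi32Ext INSTANCE = Native.load("gdi32", Gdi32Ext.class, W32APIOptions.DEFAULT_OPTIONS);

    // GetObject(HGDIOBJ, int, LPVOID) from wingdi.h; passing the Structure directly
    // lets JNA do the write()/read() synchronization automatically.
    int GetObject(HBITMAP hBitmap, int cbBuffer, WinGDI.BITMAP lpvObject);
}

// usage sketch:
// WinGDI.BITMAP bmp = new WinGDI.BITMAP();
// int n = Gdi32Ext.INSTANCE.GetObject(info.hbmColor, bmp.size(), bmp);
// int width = bmp.bmWidth.intValue(), height = bmp.bmHeight.intValue();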

Related

What JVM runtime data area are included in ThreadMXBean.getThreadAllocatedBytes(long id)

I was trying to get the memory consumption of some code snippets. After some searching, I realized that ThreadMXBean.getThreadAllocatedBytes(long id) can be used to achieve this, so I tested the method with the following code:
ThreadMXBean threadMXBean = (ThreadMXBean) ManagementFactory.getThreadMXBean(); // cast to com.sun.management.ThreadMXBean
long id = Thread.currentThread().getId();
// new Long(0);
long beforeMemUsage = threadMXBean.getThreadAllocatedBytes(id);
long afterMemUsage = 0;
{
    // put the code you want to measure here
    for (int i = 0; i < 10; i++) {
        new Long(i);
    }
}
afterMemUsage = threadMXBean.getThreadAllocatedBytes(id);
System.out.println(afterMemUsage - beforeMemUsage);
I ran this code with different iteration counts in the for loop (0, 1, 10, 20, and 30). The results are as follows:
0 Long: 48 bytes
1 Long: 456 bytes
10 Long: 672 bytes
20 Long: 912 bytes
30 Long: 1152 bytes
The differences between 1 and 10, 10 and 20, and 20 and 30 are easy to explain, because a Long object is 24 bytes in size. But I was confused by the huge difference between 0 and 1.
Actually, I guessed this was caused by class loading. So I uncommented the third line (new Long(0);) and the results were as follows:
0 Long: 48 bytes
1 Long: 72 bytes
10 Long: 288 bytes
20 Long: 528 bytes
30 Long: 768 bytes
It seems that my guess is confirmed by the result. However, in my opinion the information about the class structure is stored in the Method Area, which is not part of the heap. As the Javadoc of ThreadMXBean.getThreadAllocatedBytes(long id) indicates, it returns the total amount of memory allocated in the heap. Have I missed something?
The tested JVM version is:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Thanks!
The first invocation of new Long(0) causes the resolution of the constant pool entry referenced by the new bytecode. While resolving CONSTANT_Class_info for the first time, the JVM loads the referenced class, java.lang.Long.
ClassLoader.loadClass is implemented in Java, and it can certainly allocate Java objects. For instance, getClassLoadingLock method creates a new lock object and a new entry in parallelLockMap:
protected Object getClassLoadingLock(String className) {
    Object lock = this;
    if (parallelLockMap != null) {
        Object newLock = new Object();
        lock = parallelLockMap.putIfAbsent(className, newLock);
        if (lock == null) {
            lock = newLock;
        }
    }
    return lock;
}
Also, when doing a class name lookup in the system dictionary, JVM creates a new String object.
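As a quick check, the class-loading cost can be excluded by forcing java.lang.Long to be loaded before the first measurement; a minimal sketch along the lines of the commented-out line in the question, reusing its threadMXBean and id variables:
// warm-up: resolves the constant pool entry and loads java.lang.Long
new Long(0);

long before = threadMXBean.getThreadAllocatedBytes(id);
for (int i = 0; i < 10; i++) {
    new Long(i);
}
long after = threadMXBean.getThreadAllocatedBytes(id);
// now roughly 10 * 24 bytes plus measurement overhead, as in the second set of results
System.out.println(after - before);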
I used async-profiler to record all heap allocations JVM does when loading java.lang.Long class. Here is the clickable interactive Flame Graph:
The graph includes 13 samples, one for each allocated object. The type of an allocated object is not shown, but it can easily be guessed from the context (stack trace):
Green denotes a Java stack trace;
Yellow means a VM stack trace.
Note that each java_lang_String::basic_create() (and similar) allocates two objects: an instance of java.lang.String and its backing char[] array.
The graph is produced by the following test program:
import one.profiler.AsyncProfiler;
public class ProfileHeapAlloc {
    public static void main(String[] args) throws Exception {
        AsyncProfiler profiler = AsyncProfiler.getInstance();
        // Dry run to skip allocations caused by AsyncProfiler initialization
        profiler.start("_ZN13SharedRuntime19dtrace_object_allocEP7oopDesci", 0);
        profiler.stop();
        // Real profiling session
        profiler.start("_ZN13SharedRuntime19dtrace_object_allocEP7oopDesci", 0);
        new Long(0);
        profiler.stop();
        profiler.execute("file=alloc.svg");
    }
}
How to run:
java -Djava.library.path=/path/to/async-profiler -XX:+DTraceAllocProbes ProfileHeapAlloc
Here _ZN13SharedRuntime19dtrace_object_allocEP7oopDesci is the mangled name of the SharedRuntime::dtrace_object_alloc() function, which is called by the JVM for every heap allocation whenever the DTraceAllocProbes flag is on.

Java and JNA passing params for C function

I am using JNA to call a function from a C DLL:
extern __declspec( dllexport )
int ReadCP(IN OUT unsigned char* Id, IN OUT unsigned int* Size);
In Java I am using an interface for JNA with this method:
int ReadCP(byte[] id, IntByReference size);
I load the DLL successfully and call the method this way:
byte[] id= new byte[10];
IntByReference size = new IntByReference();
IMimicDLL demo = (IMimicDLL) Native.loadLibrary("MyLib", IMimicDLL.class);
size.setValue(10);
//....
while (true) {
    demo.ReadCP(id, size);
    //...
}
The first time through the loop, id has the correct value, but it then keeps that same value even when the logic should change it. What could the problem be? Does it have something to do with pointers?
Your mapping of id is wrong: you cannot pass a primitive array as an argument via JNA.
You should change your interface to use a Pointer:
int ReadCP(Pointer id, IntByReference size);
Then you would allocate native-side memory for id:
Pointer id = new Memory(10);
After passing and retrieving id from the function you would then fetch the byte array from the native memory:
byte[] idByteArray = id.getByteArray(0, 10);
There are other get*() methods for Pointer, such as getString(), that may or may not be more applicable to the ultimate type of the Id field that you're trying to fetch.
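Putting the pieces together, a minimal sketch (IMimicDLL and the 10-byte buffer size are taken from the question; error handling omitted):
Pointer id = new Memory(10);                  // native-side buffer the DLL can write into
IntByReference size = new IntByReference(10); // tell the DLL how big the buffer is
IMimicDLL demo = (IMimicDLL) Native.loadLibrary("MyLib", IMimicDLL.class);

demo.ReadCP(id, size);
byte[] idBytes = id.getByteArray(0, 10);      // copy the result back to the Java side
System.out.println(new String(idBytes, java.nio.charset.StandardCharsets.UTF_8));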
As for the value updating once but not on repeated calls: this sounds like the system takes a "snapshot" of the current hardware state at some point, and you must find a way to refresh that snapshot. Troubleshooting steps would include:
Clear out the data in the array/pointer and see if it's repopulated from the C-side DLL (then the problem is not in your JNA usage; it's in the usage of the DLL).
Check your size variable throughout the process to make sure it remains the value of 10 you expect. It's possible that when you remove the card it returns 0, and if you then try to read a new value (of length 0) you won't overwrite the old array past index 0.
Alternate which card is used first.
Alternate the order of starting the program, loading, and swapping out the cards to collect data on which step of the process seems to cause the value to stick.
Investigate the DLL for methods to "refresh" or "reload" a snapshot of the hardware state.
Try unloading and reloading the DLL in between loops.
Most of these steps are outside of the scope of your question, on using JNA, and would require you to provide more information about the DLL being used for us to help further.
Here is the business logic in the while loop:
while (true) {
    try {
        Thread.sleep(1500);
    } catch (InterruptedException e1) {
        e1.printStackTrace();
    }
    if (demo.cardPresent() == 0 && read == false) {
        demo.ReadCP(id, size);
        try {
            System.out.println(" -- id : " + new String(id.getByteArray(0, 10), "UTF-8"));
            read = true;
            continue;
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    } else if (demo.cardPresent() != 0) {
        read = false;
    }
}

fwrite() in C & readInt() in Java differ in endianness

Native code, writing the number 27 using fwrite():
#include <errno.h>
#include <stdio.h>

int main()
{
    int a = 27;
    FILE *fp;
    fp = fopen("/data/tmp.log", "w");
    if (!fp)
        return -errno;
    fwrite(&a, 4, 1, fp);
    fclose(fp);
    return 0;
}
Reading back the data (27) using DataInputStream.readInt():
public int readIntDataInputStream() throws IOException
{
    String filePath = "/data/tmp.log";
    InputStream is = new FileInputStream(filePath);
    DataInputStream dis = new DataInputStream(is);
    int k = dis.readInt();
    Log.i(TAG, "Size : " + k);
    dis.close();
    return 0;
}
Output:
Size : 452984832
That value in hex is 0x1b000000.
0x1b is 27, but readInt() reads the data as big-endian while my native code writes it as little-endian, so instead of 0x0000001b I get 0x1b000000.
Is my understanding correct? Has anyone come across this problem before?
From the Javadoc for readInt():
This method is suitable for reading bytes written by the writeInt method of interface DataOutput
If you want to read something written by a C program you'll have to do the byte swapping yourself, using the facilities in java.nio. I've never done this, but I believe you would read the data into a ByteBuffer, set the buffer's order to ByteOrder.LITTLE_ENDIAN, and then create an IntBuffer view over the ByteBuffer if you have an array of values, or just use ByteBuffer#getInt() for a single value.
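A minimal sketch of that approach for the single-value case (the file path is taken from the question):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.file.Files;
import java.nio.file.Paths;

byte[] raw = Files.readAllBytes(Paths.get("/data/tmp.log"));
ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN); // match the C side
int k = buf.getInt(); // reads 0x0000001b == 27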
All that aside, I agree with @EJP that the external format for the data should be big-endian for greatest compatibility.
There are multiple issues in your code:
You assume that the size of int is 4, which is not necessarily true; since you want to deal with 32-bit ints, you should use int32_t or uint32_t.
You must open the file in binary mode to write binary data reliably. The above code would fail on Windows for less trivial output. Use fopen("/data/tmp.log", "wb").
You must deal with endianness. You are using the file to exchange data between different platforms that may have different native endianness and/or endian-specific APIs. Java uses big-endian, aka network byte order, so you should convert the values on the C side with htonl() (or htobe32() from <endian.h>). This is unlikely to have a significant impact on performance on the PC side, as the conversion usually expands inline, possibly to a single instruction, and most of the time will be spent waiting for I/O anyway.
Here is a modified version of the code:
#include <endian.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = htobe32(27); /* convert to big-endian (network) byte order */
    FILE *fp = fopen("/data/tmp.log", "wb");
    if (!fp) {
        return errno;
    }
    fwrite(&a, sizeof a, 1, fp);
    fclose(fp);
    return 0;
}

Glib memory allocation error

I am using the library libfprint on Ubuntu, and I am trying to call one of its functions from my Java code.
API_EXPORTED struct fp_img *fpi_img_new(size_t length)
{
    struct fp_img *img = g_malloc(sizeof(*img) + length);
    memset(img, 0, sizeof(*img));
    fp_dbg("length=%zd", length);
    img->length = length;
    return img;
}
I am passing the integer value 5 from my Java code to this function. When I try to execute the function, I get the following error:
GLib-ERROR **: /build/buildd/glib2.0-2.30.0/./glib/gmem.c:170: failed to allocate 3077591024 bytes
I have tried the same code on two different Ubuntu machines, but the error remains the same. I don't know why it is trying to allocate so many bytes for a length of 24+5.
Could anyone suggest a solution?
The source code clearly states:
/* structs that applications are not allowed to peek into */
(...)
struct fp_img;
So, I'm not sure what you did in order to even compile something that needs the size of struct fp_img: you're not supposed to be able to do that, since the structure declaration is opaque.
It looks like you are getting a pointer instead of a size_t.
Try changing your definition to:
API_EXPORTED struct fp_img *fpi_img_new(size_t * length);
You then need to dereference it:
API_EXPORTED struct fp_img *fpi_img_new(size_t * length)
{
    struct fp_img *img = g_malloc(sizeof(*img) + *length);
    memset(img, 0, sizeof(*img));
    fp_dbg("length=%zd", *length);
    img->length = *length;
    return img;
}
Note: it seems that 3077591024 comes from the stack (0x125807FE); this is highly platform dependent, so don't quote me on that.
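If the call is made through JNA, the usual culprit for a garbage size_t argument is a mapping with the wrong width (e.g. a Java int or a pointer where the native side expects a 64-bit size_t). A hedged Java-side sketch of a correct mapping (the interface and library name are assumptions; Native.SIZE_T_SIZE requires a reasonably recent JNA):
import com.sun.jna.IntegerType;
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

public class SizeT extends IntegerType {
    public SizeT() { this(0); }
    public SizeT(long value) { super(Native.SIZE_T_SIZE, value, true); } // unsigned, platform width
}

interface FprintLib extends Library {
    FprintLib INSTANCE = (FprintLib) Native.loadLibrary("fprint", FprintLib.class);
    Pointer fpi_img_new(SizeT length); // struct fp_img is opaque, so return a raw Pointer
}

// usage sketch: Pointer img = FprintLib.INSTANCE.fpi_img_new(new SizeT(5));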
What happens if you replace sizeof(*img) with sizeof(struct fp_img)? I'm thinking this could be your problem, because *img is not initialised to anything at the time you call g_malloc().
ERROR: struct fp_img *img = g_malloc(sizeof(*img) + length);
*img has only just been created, so you cannot use it as sizeof(*img); you could write sizeof(struct fp_img) instead.

Echo/delay algorithm just causes noise/static?

There have been other questions and answers on this site suggesting that, to create an echo or delay effect, you need only add an audio sample to a stored audio sample from the past. As such, I have the following Java class:
public class DelayAMod extends AudioMod {
    private int delay = 500;
    private float decay = 0.1f;
    private boolean feedback = false;
    private int delaySamples;
    private short[] samples;
    private int rrPointer;

    @Override
    public void init() {
        this.setDelay(this.delay);
        this.samples = new short[44100];
        this.rrPointer = 0;
    }

    public void setDecay(final float decay) {
        this.decay = Math.max(0.0f, Math.min(decay, 0.99f));
    }

    public void setDelay(final int msDelay) {
        this.delay = msDelay;
        this.delaySamples = 44100 / (1000/this.delay);
        System.out.println("Delay samples:"+this.delaySamples);
    }

    @Override
    public short process(short sample) {
        System.out.println("Got:"+sample);
        if (this.feedback) {
            //Delay should feed back into the loop:
            sample = (this.samples[this.rrPointer] = this.apply(sample));
        } else {
            //No feedback - store base data, then add echo:
            this.samples[this.rrPointer] = sample;
            sample = this.apply(sample);
        }
        ++this.rrPointer;
        if (this.rrPointer >= this.samples.length) {
            this.rrPointer = 0;
        }
        System.out.println("Returning:"+sample);
        return sample;
    }

    private short apply(short sample) {
        int loc = this.rrPointer - this.delaySamples;
        if (loc < 0) {
            loc += this.samples.length;
        }
        System.out.println("Found:"+this.samples[loc]+" at "+loc);
        System.out.println("Adding:"+(this.samples[loc] * this.decay));
        return (short)Math.max(Short.MIN_VALUE, Math.min(sample + (int)(this.samples[loc] * this.decay), (int)Short.MAX_VALUE));
    }
}
It accepts one 16-bit sample at a time from an input stream, finds an earlier sample, and adds them together accordingly. However, the output is just horrible noisy static, especially when the decay is raised to a level that would actually produce an appreciable echo. Reducing the decay to 0.01 barely allows the original audio to come through, but there's certainly no echo at that point.
Basic troubleshooting facts:
The audio stream sounds fine if this processing is skipped.
The audio stream sounds fine if decay is 0 (nothing to add).
The stored samples are indeed stored and accessed in the proper order and the proper locations.
The stored samples are being decayed and added to the input samples properly.
All numbers from the call of process() to return sample are precisely what I would expect from this algorithm, and remain so even outside this class.
The problem seems to arise from simply adding signed shorts together, and the resulting waveform is an absolute catastrophe. I've seen this specific method implemented in a variety of places - C#, C++, even on microcontrollers - so why is it failing so hard here?
EDIT: It seems I've been going about this entirely wrong. I don't know if it's FFmpeg/avconv, or some other factor, but I am not working with a normal PCM signal here. Through graphing of the waveform, as well as a failed attempt at a tone generator and the resulting analysis, I have determined that this is some version of differential pulse-code modulation; pitch is determined by change from one sample to the next, and halving the intended "volume" multiplier on a pure sine wave actually lowers the pitch and leaves volume the same. (Messing with the volume multiplier on a non-sine sequence creates the same static as this echo algorithm.) As this and other DSP algorithms are intended to work on linear pulse-code modulation, I'm going to need some way to get the proper audio stream first.
It should definitely work unless you have significant clipping.
For example, this is a text file with two columns. The leftmost column is the 16-bit input. The second column is the sum of the first and a version delayed by 4001 samples. The sample rate is 22 kHz.
Each sample in the second column is the result of summing x[k] and x[k-4001] (e.g. y[5000] = x[5000] + x[999] = -13840 + 9181 = -4659). You can clearly hear the echo signal when playing the samples in the second column.
Try this signal with your code and see if you get identical results.
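A self-contained sketch for that comparison, generating a sine and applying y[k] = x[k] + x[k-4001] with the same clipping guard as the question's apply() method (the sample rate and delay are taken from the example above):
// 1 second of a 440 Hz sine at 22 kHz, well below full scale to avoid clipping
short[] x = new short[22050];
for (int k = 0; k < x.length; k++) {
    x[k] = (short) (8000 * Math.sin(2 * Math.PI * 440 * k / 22050.0));
}
short[] y = new short[x.length];
for (int k = 0; k < x.length; k++) {
    int sum = x[k] + (k >= 4001 ? x[k - 4001] : 0);
    y[k] = (short) Math.max(Short.MIN_VALUE, Math.min(sum, Short.MAX_VALUE));
}
// e.g. y[5000] == x[5000] + x[999]; if playing y as 16-bit linear PCM still produces
// static, the input stream is not linear PCM (as the question's EDIT suggests).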
