I'm trying to use the Raspberry Pi's PWM from Java, without WiringPi/Pi4j or any other library. So far I've learned that the only way to control the PWMs is by memory-mapping the /dev/mem file and setting the appropriate registers there. That's how WiringPi does it (see here), and people also ask about it pretty often on the Raspberry Pi Stack Exchange.
I'm facing a problem that seems unrelated to the Raspberry Pi itself, which is why I'm asking on the general SO forum. I reproduced the symptom below on my Ubuntu laptop as well.
So, I'm trying this simple piece of code, which seems to be a minimal example that reproduces the problem, running with sudo:
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class Main {
    public static void main(String[] args) {
        try {
            RandomAccessFile file = new RandomAccessFile("/dev/mem", "rw");
            long size = 12345; // Anything greater than 0 - it seems to be irrelevant.
            MappedByteBuffer out = file.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
            // Here I plan to make some writes to the registers.
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I'm not even trying to read any data or change the internal pointer position. I'm getting the following stack trace, originating from the call to map(...):
java.io.IOException: Invalid argument
at java.base/sun.nio.ch.FileDispatcherImpl.truncate0(Native Method)
at java.base/sun.nio.ch.FileDispatcherImpl.truncate(FileDispatcherImpl.java:86)
at java.base/sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:987)
at Main.main(Main.java:10)
When debugging the code, I noticed this piece in the implementation of map(...) (in sun/nio/ch/FileChannelImpl.java):
filesize = nd.size(fd);
and the file size comes back as 0. I don't know how to debug any deeper, because at that point it's already a native call through JNI.
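For what it's worth, the size the channel reports can be confirmed through the public API as well (a minimal sketch; it only reproduces what the debugger shows, it doesn't fix the mapping):

import java.io.RandomAccessFile;

public class SizeCheck {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/dev/mem", "rw")) {
            // For a character device this prints 0, which is presumably why
            // map(...) then tries to grow the file via truncate, as the stack trace shows.
            System.out.println(file.getChannel().size());
        }
    }
}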
Checking the file from the command line doesn't show a regular size either; it's a character device, so ls shows the major/minor device numbers where the size would normally be:
$ ll | grep mem
crw-r----- 1 root kmem 1, 1 lip 2 13:12 mem
I checked the man page, https://man7.org/linux/man-pages/man4/mem.4.html, but it doesn't seem to address this specific problem.
It got more interesting when I found out that the JDK version matters. It doesn't work on Oracle 11.0.1 or OpenJDK 14.0.1; however, it does work (doesn't crash) on Oracle 8.0.191. I have some problems attaching the sources of that JDK to my debugger, so I cannot debug it as deeply as the other versions. I also cannot use JDK 8 on my Raspberry Pi; I have problems installing it. I figured it's better to understand the problem than to stick to a single JDK.
What am I doing wrong? It seems so easy to map /dev/mem from C/C++ or Python; I see examples all around the RPi forums. I'd really like to stay pure Java for as long as possible. In theory there's also a risk of a bug in the newer JDKs, but I want a second opinion before I file one, especially since I'm not proficient with low-level Linux.
Thanks,
Peter
Related
I am new to Java and DLLs.
I need to access a DLL's methods from Java, so go easy on me.
I have tried using JNA to access the DLL; here is what I have done.
import com.sun.jna.Library;
import com.sun.jna.Native;

public class mapper {

    public interface mtApi extends Library {
        public boolean IsStopped();
    }

    public static void main(String[] args) {
        mtApi lib = (mtApi) Native.loadLibrary("MtApi", mtApi.class);
        boolean test = lib.IsStopped();
        System.out.println(test);
    }
}
When I run the code, I am getting the following error:
Exception in thread "main" java.lang.UnsatisfiedLinkError:Error looking up function 'IsStopped':The specified procedure could not be found.
I understand that this error is saying it cannot find the function, but I have no idea how to fix it.
I am trying to use this API (mt4api), and here is the method I am attempting to access (MQL4).
Can anyone tell me what I am doing wrong?
I have looked at other alternatives, like jni4net, but I cannot get this working either.
If anyone can link me to a tutorial that shows how to set this up, or knows how to, I would be grateful.
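For reference, if MtApi.dll were a plain Win32 DLL exporting __stdcall functions, the JNA declaration would typically extend StdCallLibrary rather than Library. This is only a hedged sketch: it assumes the DLL really exports a C function named IsStopped, and it will not help if the DLL is actually a .NET assembly, which JNA cannot call directly.

import com.sun.jna.Native;
import com.sun.jna.win32.StdCallLibrary;

public class StdCallMapper {

    // Hypothetical declaration for a native Win32 DLL named MtApi.dll
    public interface MtApiStdCall extends StdCallLibrary {
        boolean IsStopped();
    }

    public static void main(String[] args) {
        MtApiStdCall lib = (MtApiStdCall) Native.loadLibrary("MtApi", MtApiStdCall.class);
        System.out.println(lib.IsStopped());
    }
}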
Trading? Hunting for milliseconds to shave off? Go rather into Distributed Processing... Definitely safer than relying on an API!
While your OP was directed at how to bend Java to call .NET DLL functions,
let me sketch a much more future-safe solution.
Using AI/ML-regression-based predictors for FOREX trading, I was hunting in the same forest. The best solution found within roughly the last 12 years, drawing on a few hundred man*years of experience, was set up in the following manner:
Host A executes trades: it operates the MetaTrader Terminal 4, with both a Script and an EA --- the distributed-processing system communicates with it using the ZeroMQ low-latency messaging/signalling framework ( about a few tens of microseconds needed )
Host B executes the AI/ML processing of predictions for a traded instrument ( about a few hundreds of microseconds apply )
Cluster C executes continuous AI/ML predictor re-trainings and HyperParameterSPACE model selections ( many CPU-hours indeed needed; a continuous model self-adapting process running 24/7 )
The signalling/messaging layer built on ZeroMQ has ports and/or bindings available and ready for most mainstream and many niche programming languages, including Java.
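As an illustration only, a Java-side signalling stub with the JeroMQ binding could look like the sketch below; the endpoint address and message layout are placeholders, not anything prescribed by this setup.

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SignalBridge {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // REQ socket talking to the MetaTrader-side Script/EA responder
            ZMQ.Socket socket = ctx.createSocket(SocketType.REQ);
            socket.connect("tcp://hostA:5555");   // placeholder endpoint

            socket.send("PREDICTION|EURUSD|BUY"); // placeholder message layout
            String reply = socket.recvStr();
            System.out.println("Terminal replied: " + reply);
        }
    }
}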
Hidden dangers of going just against a published API:
While the efforts for system integration and testing are immense, API specifications are always vulnerable to specification creep.
That said, add the countless man*months consumed on debugging after a silent change in the MT4 language specification de-rails your previous tools + libraries. Why? Just imagine. Some time ago, MQL4 stopped being MQL4 and was silently shifted towards MQL5, under the name New-MQL4. Among other changes to compilation, there were many small and big nails in the coffin -- string surprisingly ceased to be a string and became an internal struct -- and one can guess what that does to all DLL calls.
So, beware of API creep.
Does it hurt a distributed processing solution?
No.
With a wise message-layout design, there are no adverse effects from MetaTrader Terminal 4 behaviour, and all the logic ( incl. the strategy decisions ) is kept outside this creeping platform.
Doable. Fast and smart. It could also use remote GPU-cluster processing, if your budget allows.
Does it work even in Strategy Tester?
Yes, it does.
If anyone has the guts to rely on the built-in Strategy Tester, the distributed-processing model still works there. Performance depends on the preferred style of modelling; a full one-year, tick-by-tick simulation with quite complex AI/ML components took a few days on common COTS desktop PC systems ( after years of Quant R&D we do not use the Strategy Tester internally at all, but the request was to batch-test the y/y tick data, so it can be commented on here ).
I recently made the switch from LWJGL 2 to LWJGL 3, and after a few hours of gaping at the documentation and assembling a program to use it, I had this code. Note that the code in these methods is all static, and Eclipse gives me no issues with it. Also note that changing allocateDirect to allocate had no effect.
// At the beginning of the class declaration:
public static ByteBuffer mouseXb = ByteBuffer.allocateDirect(8), mouseYb = ByteBuffer.allocateDirect(8);
public static double mouseX = 0, mouseY = 0;

// Then later, in another method in the same class:
glfwPollEvents();
glfwGetCursorPos(window, mouseXb, mouseYb);
mouseX = mouseXb.getDouble();
mouseY = mouseYb.getDouble();
System.out.println(mouseX + ", " + mouseY);
mouseXb.flip();
mouseYb.flip();
Oddly, though, I get values like:
(Also note that they only changed when the mouse moved around in the window, and never when outside of it, nor when the mouse was not moving)
2.0857E-317, 2.604651E-317
3.121831E-317, 2.604651E-317
5.1940924E-317, 2.604651E-317
7.2664804E-317, 2.604651E-317
6.7490474E-317, 2.0865855E-317
4.6771653E-317, 7.785178E-317
5.19561E-317, 5.7129166E-317
Okay, I fixed the issue, but I'll leave this post here in case anyone else has the same problem. It turns out GLFW writes the values in native byte order (little-endian on my machine), while Java ByteBuffers are big-endian by default.
You can solve the issue by adding this (if you were using my code):
mouseXb.order(ByteOrder.LITTLE_ENDIAN);
mouseYb.order(ByteOrder.LITTLE_ENDIAN);
I'm still quite new to LWJGL 3 myself, and had some trouble with the ByteBuffers. After a bit of looking around I found this page: LWJGL - Bindings FAQ.
In the org.lwjgl package there's a class called BufferUtils, which has a static function to generate a ByteBuffer. From the source code I noticed they called the order method too, just like with your fix, albeit with a different parameter:
public static ByteBuffer createByteBuffer(int capacity)
{
    return BUFFER_ALLOCATOR.malloc(capacity).order(ByteOrder.nativeOrder());
}
The FAQ recommends using this option. Apparently your code's working fine now, but I just thought you should know in case you run into any problems in the future.
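For example, the mouse buffers from your snippet could be allocated through BufferUtils instead, or even as DoubleBuffers, which avoids the manual byte-order handling altogether. This is just a sketch based on the variables in your code (window is your GLFW window handle):

import java.nio.DoubleBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.glfw.GLFW.glfwGetCursorPos;

// Replacing the ByteBuffer fields:
public static DoubleBuffer mouseXb = BufferUtils.createDoubleBuffer(1); // native byte order
public static DoubleBuffer mouseYb = BufferUtils.createDoubleBuffer(1);

// Later, after glfwPollEvents():
glfwGetCursorPos(window, mouseXb, mouseYb);
mouseX = mouseXb.get(0);
mouseY = mouseYb.get(0);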
Also, thanks for your code snippet, I didn't know how to work the ByteBuffers, and now I do!
I have a problem reading JPEG images in Java with ImageIO in a multithreaded environment. The problem only arises if several threads try to read images at the same time.
Symptoms vary from incorrect profile loading to an exception:
java.awt.color.CMMException: LCMS error 13: Couldn't link the profiles
It doesn't matter how I read the image, via ImageIO.read or by using an ImageReader directly.
The source data (the image) is completely isolated and immutable.
This problem can be related to:
https://bugs.openjdk.java.net/browse/JDK-8041429 and
https://bugs.openjdk.java.net/browse/JDK-8032243
The question is: is there any other way to read JPEG files with ImageIO from multiple threads? It seems like ImageIO has a problem with shared mutable state in the image color profiles that I have no control over. The only solution I see is completely isolating it at the JVM level, which sounds like a bad idea.
I use Oracle JDK 8u25. Changing the JDK update version has no effect on the issue (I haven't changed the major version), and I can't use JDK 7 without rewriting big chunks of the code.
Code for reference.
ImageInputStream input = new MemoryCacheImageInputStream(inputStream);
Iterator<ImageReader> readers = ImageIO.getImageReaders(input);
if (!readers.hasNext()) {
    throw new IllegalArgumentException("No reader for: " + dataUuid.toString());
}
ImageReader reader = readers.next();
try {
    reader.setInput(input);
    BufferedImage image = reader.read(0, reader.getDefaultReadParam());
    // ...
} finally {
    reader.dispose();
}
Add a hook on JVM start. In the hook, just put:
Class.forName("javax.imageio.ImageIO");
This will force the class loader to load the class and do whatever static initialization it needs. I think your problem is that the class is being loaded on one thread while a second thread is trying to use ImageIO, which causes a clash on the locks (or lack of locks) obtained on color profiles.
Edit: You can add this line to your main too. Make sure it's the first line you call.
ImageIO was not the class responsible for ColorSpace initialization.
Class.forName("java.awt.color.ICC_ColorSpace");
Class.forName("sun.java2d.cmm.lcms.LCMS");
do the trick, though.
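Putting the above together, the eager initialization could be done once at startup, for example in a static block. This is just a sketch; the class names are the ones from the comments above, and sun.java2d.cmm.lcms.LCMS is an internal class whose name may differ between JDK builds:

public class AppMain {

    static {
        try {
            // Force single-threaded initialization of the color-management classes
            // before any worker threads touch ImageIO.
            Class.forName("java.awt.color.ICC_ColorSpace");
            Class.forName("sun.java2d.cmm.lcms.LCMS");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        // ... start the threads that read images here
    }
}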
I had a similar multithreading problem with ImageIO yesterday and spent all day trying to figure out a solution (JDK 8u31, Win64). Even though I was not getting LCMS exceptions, the JPEGs I read with ImageIO would have completely different colors. This happened only with JPEGs with embedded color profiles, and only when using ImageIO from multiple threads. And not always: approximately 50% of the time. If it started normally, it would continue reading all the rest of the images properly, but if it didn't, all the rest would be broken too. So it was definitely a color-profile reading/conversion synchronization issue.
The trick with class loading proposed here did not help in my case.
The final solution was to use https://github.com/haraldk/TwelveMonkeys ImageIO jpeg plugin. I have tested it with thousands of jpegs in multiple threads and so far no issues.
Update
The TwelveMonkeys plugin did not solve the problem completely; I was still getting the exception. What helped was reverting to the old Kodak CMS by setting the system property: System.setProperty("sun.java2d.cmm", "sun.java2d.cmm.kcms.KcmsServiceProvider");
I have been getting the same error since JRE 1.8.0_31 on a single Win7 machine using PDFRenderer. The suggestions with Class.forName did not fix it. However, I found another trick: by placing the following code at the top of the main method, the error vanished:
try {
    ColorSpace.getInstance(ICC_ColorSpace.CS_sRGB).toRGB(new float[]{0, 0, 0});
} catch (Exception e) {
    e.printStackTrace();
}
Hopefully this will help others as well. So far I cannot confirm whether this hack solves the problem in every case. The problem seems to be related to lazy instantiation in the current versions of OpenJDK.
My system has a USB DAC capable of playing 24/96 and 24/192 formats; however, when I try to play them using Java, I get a line-unavailable exception. According to the Java Sound documentation, Java doesn't impose any limitations on sample rates and bit depths; the limitations come from the underlying sound system. So I traced down where the exception comes from, and it looks like the native function
private static native void nGetFormats(int mixerIndex, int deviceID,
boolean isSource, Vector formats);
doesn't fill formats with the corresponding line info for anything above 16/48.
I looked in sources here.
However, to prove that the function really doesn't return the format I need, rather than returning something slightly different, I have to see the actual format list. But I can't figure out how to reach it. The relevant method of DirectDL is private:
private AudioFormat[] getHardwareFormats() {
    return hardwareFormats;
}
So my question has two parts:
1. How can I still get the list of supported formats that Java gets from the hardware? (A sketch of the public-API route is just after this list.) With that list I could just write my own DirectAudioDevice.
2. Is there any other alternative to standard Java Sound for handling higher sample rates from Java? For example Tritonus, but it seems really old and unsupported.
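Regarding the first part, the closest I know of through the public API is asking each mixer what formats it claims to support; a minimal sketch (this reflects what the native layer reported, though not necessarily the exact array DirectDL holds internally):

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.Line;
import javax.sound.sampled.Mixer;

public class ListFormats {
    public static void main(String[] args) {
        for (Mixer.Info mi : AudioSystem.getMixerInfo()) {
            Mixer mixer = AudioSystem.getMixer(mi);
            System.out.println("Mixer: " + mi.getName());
            for (Line.Info li : mixer.getSourceLineInfo()) {
                if (li instanceof DataLine.Info) {
                    // getFormats() lists the formats the implementation claims to support
                    for (AudioFormat f : ((DataLine.Info) li).getFormats()) {
                        System.out.println("  " + f);
                    }
                }
            }
        }
    }
}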
I did my testing on Windows, and I plan to repeat it on Linux using a JRE other than Oracle's. However, I need to find a portable solution anyway.
I found a solution that lets me listen to 24/96 kHz and 24/192 kHz recordings using Java in FLAC, APE, and WavPack formats.
After some debugging in Java Sound, I found that for some reason the Java runtime limits the bit depth to 16 bits, but accepts high sample rates such as 96 kHz and 192 kHz. So I borrowed a downsampling solution from MediaChest. I also got JustFLAC, which provides 24/192 support for FLAC files. Supporting 24/192 directly through the hardware seems impossible without updating the Java runtime's native libraries, which I plan to do soon.
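As an illustration of the symptom, the public API can be asked whether a 24/192 line is supported at all. A sketch, assuming a standard signed-PCM 24-bit/192 kHz stereo format:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;

public class CheckHiRes {
    public static void main(String[] args) {
        // 192 kHz, 24-bit, stereo, signed PCM, little-endian
        AudioFormat fmt = new AudioFormat(192000f, 24, 2, true, false);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, fmt);
        // On the setup described above this is expected to print false,
        // since the runtime caps the bit depth at 16.
        System.out.println("24/192 supported: " + AudioSystem.isLineSupported(info));
    }
}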
Edit: the latest update: I looked into the native part of the Java runtime and found the following:
static INT32 bitsArray[] = { 8, 16};
....
#define BITS_COUNT sizeof(bitsArray)/sizeof(INT32)
....
for (rateIndex = 0; rateIndex < SAMPLERATE_COUNT; rateIndex++) {
    for (channelIndex = 0; channelIndex < CHANNELS_COUNT; channelIndex++) {
        for (bitIndex = 0; bitIndex < BITS_COUNT; bitIndex++) {
            DAUDIO_AddAudioFormat(creator, bitsArray[bitIndex],
So, as you can see, 8 and 16 bits are hardcoded and used to generate the matrix of supported formats. A fix seems easy, just adding two more constants; however, that means maintaining my own copy of the Java runtime, which isn't acceptable. So it looks like I need to start some community process to get my recommendation accepted and included in the next Java runtime updates.
Edit: One more update. The Linux native sound implementation seems better. Its only limitation is a maximum sample size of 24 bits. So if the underlying sound system (ALSA) allows that sample depth, then Java can play the 24-bit/192,000 Hz format. I tested it on a Raspberry Pi using the latest Raspbian and JDK 8 EA. Unfortunately, even the latest Arch and Pidora do not have the ALSA improvement.
For reading and writing the value of a private field, you can use reflection; see The Java Tutorial and this question on Stack Overflow.
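A rough sketch of that reflection approach, under the assumption that line is the DirectAudioDevice-backed line from the question and that the hardwareFormats field quoted there actually exists somewhere in its class hierarchy (internal names can differ between JDK builds):

import java.lang.reflect.Field;
import javax.sound.sampled.AudioFormat;

// Inside a method declared 'throws Exception'.
Field f = null;
for (Class<?> c = line.getClass(); c != null && f == null; c = c.getSuperclass()) {
    try {
        f = c.getDeclaredField("hardwareFormats");
    } catch (NoSuchFieldException ignore) {
        // keep walking up the class hierarchy
    }
}
f.setAccessible(true); // bypass the private modifier
AudioFormat[] formats = (AudioFormat[]) f.get(line);
for (AudioFormat fmt : formats) {
    System.out.println(fmt);
}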
About your second question, I've found a library called jd3lib that seems recent.
I am using URL.openStream() to download many HTML pages for a crawler that I am writing. The method runs great locally on my Mac; however, on my school's Unix server it is extremely slow, but only when downloading the first page.
Here is the method that downloads the page:
public static String download(URL url) throws IOException {
    Long start = System.currentTimeMillis();
    InputStream is = url.openStream();
    System.out.println("\t\tCreated 'is' in "+((System.currentTimeMillis()-start)/(1000.0*60))+"minutes");
    ...
}
And the main method that invokes it:
LinkedList<URL> ll = new LinkedList<URL>();
ll.add(new URL("http://sheldonbrown.org/bicycle.html"));
ll.add(new URL("http://www.trentobike.org/nongeo/index.html"));
ll.add(new URL("http://www.trentobike.org/byauthor/index.html"));
ll.add(new URL("http://www.myra-simon.com/bike/travel/index.html"));

for (URL tmp : ll) {
    System.out.println();
    System.out.println(tmp);
    CrawlerTools.download(tmp);
}
Output locally (Note: all are fast):
http://sheldonbrown.org/bicycle.html
Created 'is' in 0.00475minutes
http://www.trentobike.org/nongeo/index.html
Created 'is' in 0.005083333333333333minutes
http://www.trentobike.org/byauthor/index.html
Created 'is' in 0.0023833333333333332minutes
http://www.myra-simon.com/bike/travel/index.html
Created 'is' in 0.00405minutes
Output on School Machine Server (Note: All are fast except the first one. The first one is slow regardless of what the first site is):
http://sheldonbrown.org/bicycle.html
Created 'is' in 3.2330666666666668minutes
http://www.trentobike.org/nongeo/index.html
Created 'is' in 0.016416666666666666minutes
http://www.trentobike.org/byauthor/index.html
Created 'is' in 0.0022166666666666667minutes
http://www.myra-simon.com/bike/travel/index.html
Created 'is' in 0.009533333333333333minutes
I am not sure if this is a Java issue (i.e., a problem in my Java code) or a server issue. What are my options?
When run on the server this is the output of the time command:
real 3m11.385s
user 0m0.277s
sys 0m0.113s
I am not sure if this is relevant... What should I do to try and isolate my problem?
You've answered your own question. It's not a Java issue, it has to do with your school's network or server.
I'd recommend that you report your timings in milliseconds and see if they're repeatable. Run that test in a loop - 1,000 or 10,000 times - and keep track of all the values you get. Import them into a spreadsheet and calculate some statistics. Look at the distribution of values. You don't know if the one data point that you have is an outlier or the mean value. I'd recommend that you do this for both networks in exactly the same way.
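For example, a repeated-measurement harness along those lines might look like this (a sketch only; the URL and iteration count are placeholders):

import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class TimingTest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://sheldonbrown.org/bicycle.html");
        List<Long> timings = new ArrayList<Long>();
        for (int i = 0; i < 1000; i++) {
            long start = System.currentTimeMillis();
            try (InputStream is = url.openStream()) {
                is.read(); // force the connection to actually be established
            }
            timings.add(System.currentTimeMillis() - start);
        }
        // Dump the raw values; import them into a spreadsheet for statistics
        for (Long t : timings) {
            System.out.println(t);
        }
    }
}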
I'd also recommend using Fiddler or some other tool to watch network traffic as you download. You can get better insight into what's going on and perhaps ferret out the root cause.
But it's not Java. It's your code, your network. If this was a bug in the JDK it would have been fixed a long time ago. Suspect yourself first, last, and always.
UPDATE:
My network admin assured me that this was a bad Java implementation, not a network problem. What do you think?
"Assured" you? What evidence did s/he produce to support this conclusion? What data? What measurements were taken? Sounds like laziness and ignorance to me.
It certainly doesn't explain why all the other requests behave just fine. What changed in Java between the first and subsequent calls? Did the JVM suddenly rewrite itself?
You can accept it if you want, but I'd say shame on your network admin for not being more curious. It would have been more honorable to be honest and say they didn't know, didn't have time, and weren't interested.
By default, Java prefers to use IPv6. My school's firewall drops all IPv6 traffic (with no warning). After 3 minutes 15 seconds, Java falls back to IPv4. It seems strange to me that it takes so long to fall back to IPv4.
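For anyone who hits the same thing, the usual knob for this is the java.net.preferIPv4Stack system property (a sketch; it has to be set before any networking code runs, or passed on the command line as -Djava.net.preferIPv4Stack=true):

public class CrawlerMain {
    public static void main(String[] args) throws Exception {
        // Must be set before the first network call, otherwise it has no effect.
        System.setProperty("java.net.preferIPv4Stack", "true");
        // ... then build the URL list and call CrawlerTools.download(...) as before.
    }
}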
duffymo's answer, essentially "go talk to your network admin", helped me solve the problem; however, I think this was caused by a combination of a strange Java implementation and a strange network configuration.