Android - Bitmap OCR - java

I get this problem:
The type java.awt.image.RenderedImage cannot be resolved. It is indirectly referenced from required .class files
I know it means that there is no reference to it in my build-path and I heard that java.awt.image is not in the Android SDK. So I am trying to figure out how to work around it.
This is my code:
// "data" here is the small thumbnail Bitmap returned by the camera intent
Bitmap image = (Bitmap) data.getExtras().get("data");
String text = new OCR().recognizeEverything(image);
Obviously you can see that I am trying to use an OCR library. If it is impossible to get around this, can anyone point me to a reference on how to make an OCR program, or something along those lines? I pretty much have NO experience with images.
Thanks!

If your OCR library uses java.awt classes internally, you can't use it on Android. Porting it to use Android classes instead is likely to be non-trivial, especially if you have no experience. This library is being actively developed, and is reported to work on Android (I haven't personally used it). You might want to give it a try. Also, searching helps too: this is a fairly frequent question on SO, you might get some other ideas from previous answers.

We are developing a pure Java OCR library here:
http://sourceforge.net/projects/javaocr/
At the moment there is some image processing functionality, invariant-moment-based recognition, and high-performance binarisation. There are also demos showcasing the complete round trip on Android (gathering samples, training the recognizer, performing recognition).
I have already published 2 applications based on it:
http://www.pribluda.de/android/charger/
http://www.pribluda.de/android/ocrcall/
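For context, the binarisation step mentioned above simply maps every pixel to black or white. Below is a minimal sketch using only the Android SDK - it is not the javaocr API, and the fixed threshold of 128 in the usage line is an arbitrary example (a real pipeline would pick it adaptively, e.g. with Otsu's method):

import android.graphics.Bitmap;
import android.graphics.Color;

public final class Binarizer {

    // Converts a Bitmap to black-and-white using a fixed luminance threshold.
    // This is plain Android SDK code, not part of any OCR library.
    public static Bitmap binarize(Bitmap source, int threshold) {
        int width = source.getWidth();
        int height = source.getHeight();
        int[] pixels = new int[width * height];
        source.getPixels(pixels, 0, width, 0, 0, width, height);

        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            // Integer approximation of the pixel's luminance (0-255).
            int luma = (299 * Color.red(p) + 587 * Color.green(p) + 114 * Color.blue(p)) / 1000;
            pixels[i] = luma < threshold ? Color.BLACK : Color.WHITE;
        }

        Bitmap result = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        result.setPixels(pixels, 0, width, 0, 0, width, height);
        return result;
    }
}

Usage would be something like Bitmap bw = Binarizer.binarize(image, 128); before handing the image to whatever recognizer you end up using.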

Related

Use Java or other languages in a Flutter Application

Since I got no answer and not much feedback on this question (Android Flutter Analyze Audio Waveform), and found nothing online about what I'm looking for, I'll ask a broader question. A comment on that answer told me to use native code and a platform channel to connect it to Flutter, but when I asked for clarification I got nothing.
So my question is whether I can do operations in Java (which has been around much longer and thus has far more documentation) and then use the outcome in Flutter.
More precisely, could I do these things in Java and Flutter:
1) Analyse the audio waveform, find peak points in specific frequency bands, and use the timestamps to display them in Flutter (see the peak-detection sketch after this list);
Edit 1:
What are peak points?
This is the waveform of different frequency ranges (the orange one is bass, 80-255 Hz), and the points circled in black are peak points. I should analyze the audio spectrum of a song and find the peak points in certain frequency bands. When I find the peaks I need to save their timestamps, for example 16 seconds in, and so on.
2) Edit 2:
I need to edit some photos into a video, like a video collage, for which each frame of a 30 or 60 fps video is an image.
3) Edit 3:
I need to add basic frame-specific effects to the video, for example a blur that changes from frame to frame, or a glare.
4) Adding music to that video and saving it to mp4, avi, or any other format.
5) Edit 4: Most importantly, I don't want to do this all in real time, but more like an After Effects style render process, in which all the frames are rendered together. The only thing that would be nice is a progress bar telling the user that the render is at, for example, frame 200 of 300, but I don't want to display any of the frames or the video, just render it in the background and then save it to an mp4 video that can be viewed afterwards.
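To make point 1 concrete, here is a minimal sketch in plain Java of the peak-finding step, assuming you already have a per-frame energy curve for the band you care about (e.g. the 80-255 Hz bass band). The bandEnergy array, frame duration and threshold are assumptions; how the band energy is computed (FFT size, hop size, which audio library) is deliberately left open:

import java.util.ArrayList;
import java.util.List;

public final class PeakFinder {

    // Returns the timestamps (in seconds) of local maxima in a per-frame
    // band-energy curve that rise above the given threshold.
    public static List<Double> findPeakTimestamps(double[] bandEnergy,
                                                  double frameDurationSec,
                                                  double threshold) {
        List<Double> timestamps = new ArrayList<>();
        for (int i = 1; i < bandEnergy.length - 1; i++) {
            boolean isLocalMax = bandEnergy[i] > bandEnergy[i - 1]
                    && bandEnergy[i] >= bandEnergy[i + 1];
            if (isLocalMax && bandEnergy[i] >= threshold) {
                timestamps.add(i * frameDurationSec); // e.g. 16.0 means 16 seconds in
            }
        }
        return timestamps;
    }
}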
As you can see, it's a difficult process to do in a language for which, due to its early state, you can hardly find a tutorial even on how to play music. But UIs and some other things are way easier to do in Flutter, and Flutter is also multi-platform. So I would prefer to stick with Flutter.
Edit 5:
I took a look at Qt and JUCE, and found that Qt seems a valid alternative, but from what I understood it is more of a "closed" system: for example, I looked at the multimedia library and, as far as I can tell, you can do basic things like play a video, but not collage frames and save the result (I don't know if I explained myself well). JUCE, on the other hand, looks better but seems aimed more at PC audio VSTs than at mobile applications including video rendering. And another thing is that these two are not free and open source like Flutter is.
Then there is Kivy, which could or could not be the best option: it is a Python port for mobile devices, I have a lot of experience with Python, and I think it's one of the easier languages to learn, but on the other hand it hasn't got that much UI power, and as you mentioned there could be problems using libraries on Android.
You stated I could use C++ or Java with Flutter, but you said that with C++ it's a difficult process. So my question turned out to be: could I write the processing in Java, as in a normal Android application, and then in some way use those functions in a Flutter app?
Edit 6:
I found a possible alternative:
Kha (http://kha.tech/). But again I found nothing on how to use it with Flutter. Could it be a good idea?
I'm mostly asking for confirmation that I could use Java or another language to do what I need in a Flutter application, and if so, how complicated it would be (I'm more or less a beginner). Some tutorials or links to kickstart the code would be helpful as well!
Flutter at this time is great for building UIs but, as you've mentioned, it doesn't have a lot of power or compatibility with libraries yet. A good part of the reason for that is that it doesn't have easy integration with C++, but I won't get into that now.
What you're asking is most likely possible, but it's not going to be simple at all to do. First, it sounds like you want to pull particular frames from a video and display them - that's going to be an additional complication. And don't forget that on a mobile device you have somewhat limited processing power - things will have to be very asynchronous, which can actually cause problems for Flutter unless you're careful.
As to your points:
This is a very general ask. I'd advise looking up Android audio processing libraries. I'm almost sure it's possible, but SO questions are not meant for asking advice on which framework to use. Try https://softwarerecs.stackexchange.com/.
Once again, fairly general and a bit unclear about what you're asking... Try softwarerecs. I assume you want to take several frames and make them into a video?
Some of those effects (i.e. zoom) you could definitely do with Flutter using a Transform. But that would just be while playing in Flutter, rather than adding to the video files themselves. To do that, you'll have to use the video library in Android/Java code.
Once again, the video library should do this.
This should also be part of the video library.
I do know of one audio/video library off the top of my head called Processing that may do what you need, but I'm not sure. It does have an Android SDK, though. OpenCV would be another, but only for video/image processing, and I haven't used it directly from Java so I'm not sure how easy it is to use.
For how you'd actually go about implementing this along with flutter... you're going to need to use Platform Channels. I mentioned them in the comment to your other answer but figured you could look that up yourself. The documentation does do a much better job of explaining how that works and how to set it up than I can. But the TLDR is that essentially, what they allow you to do is to send serialized data from native code (java/kotlin/swift etc) to flutter code (dart) and vice-versa, which gets translated into similar data structures in the target language. You can set up various 'channels' upon which the data flows, and within those channels set up 'methods' which get called at either end, or simply send events back and forth.
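As a rough illustration of the Java side of a platform channel, here is a sketch using the current Android embedding; the channel name, the "findPeaks" method and the "audioPath" argument are made up for this example, and the exact setup varies with the Flutter version you're on:

import androidx.annotation.NonNull;
import io.flutter.embedding.android.FlutterActivity;
import io.flutter.embedding.engine.FlutterEngine;
import io.flutter.plugin.common.MethodChannel;

public class MainActivity extends FlutterActivity {

    // Arbitrary channel name; the Dart side must use the exact same string.
    private static final String CHANNEL = "com.example.app/processing";

    @Override
    public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) {
        super.configureFlutterEngine(flutterEngine);
        new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), CHANNEL)
                .setMethodCallHandler((call, result) -> {
                    if (call.method.equals("findPeaks")) {
                        // Hypothetical argument passed from Dart.
                        String audioPath = call.argument("audioPath");
                        // ... run the Java/native analysis on audioPath here ...
                        result.success(null); // e.g. return a List<Double> of timestamps
                    } else {
                        result.notImplemented();
                    }
                });
    }
}

On the Dart side, a MethodChannel created with the same name would call invokeMethod('findPeaks', ...) and receive whatever serializable result the Java side sends back.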
The complication I mentioned at the beginning is that sending images back and forth across the channels between native code and Dart isn't all that optimal. You most likely won't get a smooth 24/30/60 fps of images being sent from Java to Dart, and it might slow down the rest of the Flutter UI significantly. So for the actual viewport you'll want to use a Texture instead, which simply displays data from the Android side. You'll have to figure out how to write to a texture from Android yourself, but there's lots of information available for that. Controls, the visualization of the audio, etc. can be done directly in Flutter with data that is retrieved from native code.
What you'll have is essentially a remote control written in Dart/Flutter, which sends various commands to an audio/video processing library & wrapper code in Java.
If all that sounds complicated, that's because it is. And as much as Flutter is very nice to build UIs in, I have doubts as to whether it's going to be worth the extra complications if you're only targeting Android.
Not really related to the answer but rather some friendly advice:
There is one other thing I'll mention - I don't know your level of proficiency with programming and with different languages, but video/audio processing and such are generally not done in Java but rather in actual native code (i.e. C/C++). As such, there are actually two levels of abstraction you're going to have to deal with here (to some degree, as it will probably be abstracted somewhat or a lot depending on the library you're using) - C/C++ to Java and Java to Dart.
You may want to cut out the middlemen and work more directly with native code - in that case I'd recommend at least taking a look at Qt or JUCE, as they may be more suitable than Flutter for your particular use case. There's also Kivy (uses Python), which may work well as there are a ton of image/video/audio processing libraries for Python... although they may not all work on Android and still involve the C++ => Python translation to some degree. You'll have to look into licensing etc. though - Qt has a broad enough open-source licence for most Android apps, but JUCE you'd have to pay for unless you're doing open source. I'd recommend Qt slightly more than the others as it actually has native decoding of video frames etc., although you'd probably want to incorporate OpenCV or something for the more complicated effects you're talking about. But it would probably be on the same level of complexity as simply writing the Java code, with a slightly different UI style and easier integration with C++ libraries.

Using the Android Palette class in a desktop Java application

Android developers have created a great class to isolate colors from an image
https://android.googlesource.com/platform/frameworks/support/+/refs/heads/master/v7/palette/src/android/support/v7/graphics/Palette.java
However, I could really use this in a desktop application. I tried ripping the class from the library (as recommended to me during a Devoxx Java conference by one of the Android devs), but he probably didn't realize I would then have to drag half of the Android library with me, e.g. Bitmap and Color.
Does anyone know of a tool that might be able to strip out the classes I need retroactively from the library, or a better way to solve my original problem: a Java class that will give me the dominant colors of an image together with their inverse and adjoining colors? I can't write it from scratch because we just don't have the budget/time to do that :(
I stumbled upon your question, and I also found this lightweight library on GitHub that is a port of one of the JavaScript libraries that Chris Banes references in the blog post mentioned above.
It's pretty good at extracting the primary color from images quickly. I would still love it if someone had a way to extract something like what Android calls the "vibrant" color. Ideally I would love a way to mimic exactly what happens in the Android library so I could have a seamless experience across my Android, desktop, and iOS apps. I still haven't found that one yet.
Another Java port - https://github.com/trickl/palette. I wrote this one myself (I was unaware of the existence of other ports at the time). It's a pretty direct port and includes a bunch of the original tests.
There is this ported Palette library. You can use it in your Gradle-based Java project by adding jcenter() to your repositories and the following dependency to your build.gradle file.
repositories {
    jcenter()
}

dependencies {
    ...
    compile "com.loyalsound:iris:1.1"
}

How to take a picture in panorama mode?

I want to make an application that allows the user to take a picture of text either from the Android device's Gallery or from the Android Camera application in panorama mode, but I cannot find any source or tutorial for this. How can I do this in my application? How do I make an application that takes a picture from the Android camera application in panorama mode?
Thanks in advance.
I don't know if it's still relevant for you, but I hope it will be helpful for someone.
The panorama feature has been implemented in the standard Android camera at least since Android 4.0 (perhaps it was available even earlier, but I'm not sure; you can check), so since the source code is open to everyone, the easiest way might be to just copy the required functionality.
Although you can download the source of the apps from https://android.googlesource.com/ (you want LegacyCamera or Camera), you can't just open the project of any standard app in Eclipse or another IDE. For example, LegacyCamera depends on Gallery2 and other dependencies that might be hard to resolve.
I spent several days trying to move panorama feature to separate project. You can download it from here: https://github.com/yankeppey/PanoramaSample . Several remarks:
The functional core (creating one panorama image from several, progress notifications, etc.) is in the native part.
I used the Java code from LegacyCamera, which was used in Android 4.0-4.1, not 4.2, because it was significantly easier for me. The native part is taken from 4.2; it has only minor changes inside and almost the same JNI interfaces.
This project is just to help you move the panorama feature into your own app; it's not a kind of library, so don't expect clean code without bugs - it's a pretty dirty and buggy project. If I have time I'll try to make it cleaner, but there is no warranty :)

Is there any inbuilt function for image recognition?

I want to use a function for image recognition.
I don't want to write an algorithm myself.
Please suggest a function with which I could compare two images and tell whether they show the same object.
Please help me!
Arbitrary image recognition is something that computers can't yet do (even supercomputers). However, Google Goggles comes close, being able to recognize a wide range of objects. Read about its limitations and see if it suits your purpose.
Yes. There are definitely ways to do this but they all depend on what you are trying to do. If you are more specific about what you want to compare then it will be easier to give a more thorough answer.
There are some excellent libraries out there but it will require some effort on your part to learn and understand how to use them and how to use them on the iPhone.
The most famous algorithms to assist in finding images inside other images are called SIFT and SURF. Unfortunately both are patented and cannot be used commercially in an application.
Consider using OpenCV for most of your image operations.
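As a rough illustration of that, here is a sketch using OpenCV's Java bindings (assuming a 3.x+ style API) with ORB, a patent-free alternative to SIFT/SURF. The feature count and distance threshold are arbitrary starting points, not recommended values:

import org.opencv.core.Core;
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public final class ImageMatcher {

    // Load the native OpenCV library (on Android you would normally go
    // through OpenCVLoader instead).
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Returns the number of "good" ORB matches between two images; a high
    // count suggests the images show the same object.
    public static int countGoodMatches(String pathA, String pathB) {
        Mat imgA = Imgcodecs.imread(pathA, Imgcodecs.IMREAD_GRAYSCALE);
        Mat imgB = Imgcodecs.imread(pathB, Imgcodecs.IMREAD_GRAYSCALE);

        ORB orb = ORB.create(500);
        MatOfKeyPoint kpA = new MatOfKeyPoint(), kpB = new MatOfKeyPoint();
        Mat descA = new Mat(), descB = new Mat();
        orb.detectAndCompute(imgA, new Mat(), kpA, descA);
        orb.detectAndCompute(imgB, new Mat(), kpB, descB);

        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descA, descB, matches);

        int good = 0;
        for (DMatch m : matches.toArray()) {
            if (m.distance < 40) good++; // smaller distance = better match
        }
        return good;
    }
}

How many good matches count as "same object" depends heavily on your images, so treat the threshold as something to tune experimentally.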
Or you could use openFrameworks (Google has tagged it as a phishing site for some reason; I'm sure they'll fix that soon).
You might also consider VXL which has started to become more popular.
Good luck!

Count the number of objects in an Image

I am investigating the possibility of image processing to identify certain objects and also count them in an image.
I will be given a picture and I need to identify the number of boxes present in that image.
Does anybody have any experience with machine vision / image processing libraries like ImageJ, Fiji, JAI, jMagick, or the Java Vision Toolkit? Which do you think is best suited for the job? What do you guys suggest? If the APIs can be used from Java, it would be better. Thank you.
Edit:
I am dealing with warehouse brown boxes. Yes I am talking about regular photos. The source is usually a mobile phone picture.
Edit2:
I am sorry the answer got autoselected. : (
I have never used the libraries you listed, but I have used OpenCV.
OpenCV is a well-supported and proven computer vision library. It has built-in features to count the number of primitive shapes in an image. It is written in C++, but you could create a small wrapper to be invoked via JNI.
RoboRealm is another proven computer vision system used by robotic hobbyists. It is a closed source commercial product that uses a socket based control API.
http://opencv.willowgarage.com/wiki/FullOpenCVWiki
http://www.roborealm.com/index.php
If you must stick to Java, you can still use OpenCV.
If it's just boxes, you can use Hough transforms to detect them (see the contour-based sketch after this list of suggestions).
You can use OpenSURF to detect phones based on source images you feed to it.
Don't think this would be feasible in your case: Haar cascades. You could create a custom Haar classifier, but the training process can be quite time-consuming.
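To make the OpenCV-from-Java suggestion concrete, here is a minimal sketch using OpenCV's Java bindings (assuming a 3.x+ style API). Instead of a raw Hough transform it counts four-cornered contours via Canny edges and polygon approximation; every threshold in it is a guess that would need tuning for real warehouse photos:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public final class BoxCounter {

    // Load the native OpenCV library (on Android you would normally go
    // through OpenCVLoader instead).
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Counts roughly rectangular outlines in a photo.
    public static int countBoxes(String imagePath) {
        Mat src = Imgcodecs.imread(imagePath);
        Mat gray = new Mat();
        Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);

        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 50, 150);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        int boxes = 0;
        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            double perimeter = Imgproc.arcLength(curve, true);
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(curve, approx, 0.02 * perimeter, true);
            // Four corners and a reasonable area: treat it as a box face.
            if (approx.total() == 4 && Imgproc.contourArea(contour) > 1000) {
                boxes++;
            }
        }
        return boxes;
    }
}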
HTH,
George
In Java, there are several projects that extend the Java Advanced Imaging API to provide computer vision:
JavaVis
IPJ (image processing in Java) - computer vision extensions for JAI
Java Vision Toolkit - JVT (EDIT: oops, this is mentioned in the question.)
There is a paper for JavaVis which introduces the library and compares and contrasts it with the other two libraries mentioned.
JavaVis has these features:
handles 2D and 3D images (3D being most relevant in this case)
Has a GUI for inspecting potential results
Matlab image export
Also for Java there is NeatVision. Unlike the others, documentation is clearly visible for this project.
None of these projects will give you a simple turnkey solution. You will need to understand how computer vision works and create a sequence of processing steps on the photos to get the best results from the vision algorithms. To that end, JavaVis may be the most useful, since it is aimed at teaching computer vision.
If you are not talking about real-time image processing, you could write an interface to Amazon Mechanical Turk.
Are you willing to develop your own code for that? There are several techniques that can be applied and tuned to your specific problem, but I never used a packaged library, always developed my own code. I can provide references for that if you're interested.
