I record audio with the Android AudioRecord class successfully, and to make it playable in all players I add AAC ADTS headers and concatenate everything into an MPEG-2 TS (.ts) file; it plays fine in all players (native and VLC).
But unfortunately this file is not ISOMEDIA-compliant for dashing with GPAC. I looked for a solution and found that I need to add PES headers to my elementary stream.
Does anyone know how to add PES headers on top of the ADTS headers in an MPEG-2 TS using Java, in the code below?
private void addADTStoPacket(byte[] packet, int packetLen) {
    int profile = 2;  // AAC LC (39 = MediaCodecInfo.CodecProfileLevel.AACObjectELD)
    int freqIdx = 4;  // 44.1 kHz
    int chanCfg = 2;  // CPE (stereo)
    // fill in ADTS data
    packet[0] = (byte) 0xFF;  // syncword, high byte
    packet[1] = (byte) 0xF1;  // syncword low bits + MPEG-4 ID (use 0xF9 for MPEG-2), no CRC
    packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
    packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
    packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
    packet[6] = (byte) 0xFC;  // 0xFC is also fine if you don't know the buffer fullness value
}
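For context: a PES packet is just the elementary-stream data (here, the ADTS frames) prefixed by a small header (start code prefix, stream id, packet length, flags, and usually a PTS) before the result is sliced into 188-byte TS packets. Below is only a rough sketch of that header layout in Java; the method name, the stream id 0xC0 (first audio stream), and the PTS-only optional header are assumptions, not a complete muxer.

    // Minimal PES header sketch: start code, stream id, length, flags, PTS.
    // Assumes the payload is small enough for the 16-bit PES_packet_length field.
    private static byte[] buildPesHeader(int payloadLength, long pts90kHz) {
        int headerDataLen = 5;                                  // PTS only (5 bytes)
        int pesPacketLen = 3 + headerDataLen + payloadLength;   // bytes following the length field
        byte[] h = new byte[9 + headerDataLen];
        h[0] = 0x00; h[1] = 0x00; h[2] = 0x01;                  // packet_start_code_prefix
        h[3] = (byte) 0xC0;                                     // stream_id: first audio stream
        h[4] = (byte) ((pesPacketLen >> 8) & 0xFF);             // PES_packet_length, high byte
        h[5] = (byte) (pesPacketLen & 0xFF);                    // PES_packet_length, low byte
        h[6] = (byte) 0x80;                                     // '10', no scrambling, no priority
        h[7] = (byte) 0x80;                                     // PTS flag set, DTS flag clear
        h[8] = (byte) headerDataLen;                            // PES_header_data_length
        // PTS: 33 bits spread over 5 bytes with '0010' prefix and marker bits
        h[9]  = (byte) (0x20 | ((pts90kHz >> 29) & 0x0E) | 0x01);
        h[10] = (byte) ((pts90kHz >> 22) & 0xFF);
        h[11] = (byte) (((pts90kHz >> 14) & 0xFE) | 0x01);
        h[12] = (byte) ((pts90kHz >> 7) & 0xFF);
        h[13] = (byte) (((pts90kHz << 1) & 0xFE) | 0x01);
        return h;
    }

The returned bytes would be written in front of one or more ADTS frames before splitting the result into TS packets.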
I'm writing a very basic program in Java, but I get the error "error: cannot find symbol".
It's driving me crazy because I really don't know what I did wrong this time.
Here is the code of the main class:
package glaces.tests;
import geometrie.Point;
import glaces.Iceberg2D;
public class TestIceberg2D {
    public static void main(String[] args) {
        Iceberg2D i1 = new Iceberg2D(new Point(2,3), new Point(6,7));
        Iceberg2D i2 = new Iceberg2D(new Point(3,7), new Point(5,9));
        Iceberg2D i3 = new Iceberg2D(i1, i2);
        System.out.println(i3.toString());
    }
}
Here is the class Iceberg2D:
package glaces;
import geometrie.Point;
import java.lang.Math;

/**
 * A rectangular iceberg.
 * @author Martine Gautier, Université de Lorraine
 */
public class Iceberg2D {
    private Point enBasAGauche;
    private Point enHautADroite;

    /**
     * Construction.
     * @param g the bottom-left corner
     * @param d the top-right corner
     * (positive coordinates only)
     */
    public Iceberg2D(Point g, Point d) {
        this.enBasAGauche = g;
        this.enHautADroite = d;
    }
    /**
     * Construction by merging two icebergs that touch each other.
     * @param i1 first iceberg to merge
     * @param i2 second iceberg to merge
     */
    public Iceberg2D(Iceberg2D i1, Iceberg2D i2) {
        this.enBasAGauche = new Point(Math.min(i1.coinEnBasAGauche().getAbscisse(), i2.coinEnBasAGauche().getAbscisse()),
                                      Math.min(i1.coinEnBasAGauche().getOrdonnee(), i2.coinEnBasAGauche().getOrdonnee()));
        this.enHautADroite = new Point(Math.max(i1.coinEnHautADroite().getAbscisse(), i2.coinEnHautADroite().getAbscisse()),
                                       Math.max(i1.coinEnHautADroite().getOrdonnee(), i2.coinEnHautADroite().getOrdonnee()));
    }
    /**
     * Returns the bottom-left corner.
     * @return the bottom-left corner
     */
    public Point coinEnBasAGauche() {
        return this.enBasAGauche;
    }

    /**
     * Returns the top-right corner.
     * @return the top-right corner
     */
    public Point coinEnHautADroite() {
        return this.enHautADroite;
    }

    /**
     * Returns the height.
     * @return height
     */
    public double hauteur() {
        return Math.abs(enHautADroite.getOrdonnee() - enBasAGauche.getOrdonnee());
    }

    /**
     * Returns the width.
     * @return width
     */
    public double largeur() {
        return Math.abs(enHautADroite.getAbscisse() - enBasAGauche.getAbscisse());
    }

    /**
     * Returns the total surface area.
     * @return total surface area
     */
    public double surface() {
        return hauteur() * largeur();
    }

    /**
     * Returns true if the two icebergs collide.
     * @param i iceberg potentially colliding with this one
     * @return true if the two icebergs collide
     */
    public boolean collision(Iceberg2D i) {
        if (this.enBasAGauche.getAbscisse() + largeur() == i.enBasAGauche.getAbscisse() || i.enBasAGauche.getAbscisse() + i.largeur() == this.enBasAGauche.getAbscisse()) {
            if (this.enBasAGauche.getOrdonnee() + hauteur() >= i.coinEnBasAGauche().getOrdonnee() && i.enBasAGauche.getOrdonnee() + i.hauteur() >= this.coinEnBasAGauche().getOrdonnee()) {
                return true;
            }
        }
        if (this.enBasAGauche.getOrdonnee() + hauteur() == i.enBasAGauche.getOrdonnee() || i.enBasAGauche.getOrdonnee() + i.hauteur() == this.enBasAGauche.getOrdonnee()) {
            if (this.enBasAGauche.getAbscisse() + largeur() >= i.coinEnBasAGauche().getAbscisse() && i.enBasAGauche.getAbscisse() + i.largeur() >= this.coinEnBasAGauche().getAbscisse()) {
                return true;
            }
        }
        return false;
    }
    /**
     * Returns true if this iceberg is bigger than i.
     * @param i iceberg to compare with
     * @return true if this iceberg is bigger than i
     */
    public boolean estPlusGrosQue(Iceberg2D i) {
        return this.surface() > i.surface();
    }

    public String toString() {
        return "Point bas gauche : " + enBasAGauche.toString() + " / Point haut droite : " + enHautADroite.toString();
    }

    /**
     * Returns the center point.
     * @return the point at the center of the iceberg
     */
    public Point centre() {
        return new Point((enBasAGauche.getAbscisse() + enHautADroite.getAbscisse()) / 2, (enBasAGauche.getOrdonnee() + enHautADroite.getOrdonnee()) / 2);
    }

    /**
     * Shrinks the iceberg in all four directions; the center does not move.
     * @param fr reduction factor in ]0..1[
     */
    public void fondre(double fr) {
        fr = fr / 2;
        double hauteur = hauteur();
        double largeur = largeur();
        enBasAGauche.deplacer(largeur * fr, hauteur * fr);
        enHautADroite.deplacer(-largeur * fr, -hauteur * fr);
    }

    /**
     * Breaks off a part on the right.
     * @param fr reduction factor in ]0..1[
     */
    public void casserDroite(double fr) {
        fr = fr / 2;
        enHautADroite.deplacer(-largeur() * fr, 0);
    }

    /**
     * Breaks off a part on the left.
     * @param fr reduction factor in ]0..1[
     */
    public void casserGauche(double fr) {
        fr = fr / 2;
        enBasAGauche.deplacer(largeur() * fr, 0);
    }

    /**
     * Breaks off a part at the top.
     * @param fr reduction factor in ]0..1[
     */
    public void casserHaut(double fr) {
        fr = fr / 2;
        enHautADroite.deplacer(0, -hauteur() * fr);
    }

    /**
     * Breaks off a part at the bottom.
     * @param fr in ]0..1[: defines the percentage removed
     */
    public void casserBas(double fr) {
        fr = fr / 2;
        enBasAGauche.deplacer(0, hauteur() * fr);
    }
}
Here is the error I get:
javac -classpath ../ressourcesBPO/geometrie.jar -encoding "iso-8859-1" glaces/tests/TestIceberg2D.java
glaces/tests/TestIceberg2D.java:3: error: cannot find symbol
import glaces.Iceberg2D;
^
symbol: class Iceberg2D
location: package glaces
glaces/tests/TestIceberg2D.java:9: error: cannot find symbol
Iceberg2D i1 = new Iceberg2D(new Point(2,3), new Point(6,7));
^
symbol: class Iceberg2D
location: class TestIceberg2D
glaces/tests/TestIceberg2D.java:9: error: cannot find symbol
Iceberg2D i1 = new Iceberg2D(new Point(2,3), new Point(6,7));
^
symbol: class Iceberg2D
location: class TestIceberg2D
glaces/tests/TestIceberg2D.java:10: error: cannot find symbol
Iceberg2D i2 = new Iceberg2D(new Point(3,7), new Point(5,9));
^
symbol: class Iceberg2D
location: class TestIceberg2D
glaces/tests/TestIceberg2D.java:10: error: cannot find symbol
Iceberg2D i2 = new Iceberg2D(new Point(3,7), new Point(5,9));
^
symbol: class Iceberg2D
location: class TestIceberg2D
glaces/tests/TestIceberg2D.java:58: error: cannot find symbol
Iceberg2D i3 = new Iceberg2D(i1,i2);
^
symbol: class Iceberg2D
location: class TestIceberg2D
glaces/tests/TestIceberg2D.java:58: error: cannot find symbol
Iceberg2D i3 = new Iceberg2D(i1,i2);
^
symbol: class Iceberg2D
location: class TestIceberg2D
7 errors
This is my folder tree view.
Thanks in advance!
OK, so I found the solution, but I have absolutely no idea why it works.
Here is what I typed in the terminal:
javac -classpath ../ressourcesBPO/geometrie.jar:. -encoding "iso-8859-1" glaces/tests/TestIceberg2D.java
I added :. after the .jar.
Why did it work? I have absolutely no idea, so if someone wants to explain, it would be much appreciated. Thanks a lot, by the way, for trying to help!
Since your test is using the Iceberg2D class, you will first need to compile that and then point javac to its location as part of the classpath, or compile both sources at the same time.
Based on your directory structure, the first would be (assuming you’re in the top-level java directory):
javac -cp ../ressourcesBPO/geometrie.jar -d java java/glaces/Iceberg2D.java
javac -cp ../ressourcesBPO/geometrie.jar:. -d java java/glaces/test/TestIceberg2D.java
The point is that you need to add the top-level path of your *.class files to the classpath (i.e. where Java would search for the compiled file glaces/Iceberg2D.class), and that is the current directory (i.e. .).
However, that’s ending up mixing compiled and source files, and makes everything more complicated than necessary. A more conventional Java project structure would have this outline:
projectname/
├── build/
│   ╰── classes/
│       ├── main/
│       ╰── test/
├── lib/
│   ╰── geometrie.jar
╰── src/
    ├── main/
    │   ╰── java/
    │       ╰── glaces/
    │           ╰── Iceberg2D.java
    ╰── test/
        ╰── java/
            ╰── glaces/
                ╰── TestIceberg2D.java
This simplifies the build command somewhat, and prevents cluttering your source tree:
shopt -s globstar # requires Bash; enables recursive ** globbing
javac -d build/classes/main -cp lib/geometrie.jar src/main/java/**/*.java
javac -d build/classes/test -cp lib/geometrie.jar:build/classes/main src/test/java/**/*.java
Furthermore, this is also the directory structure used by modern Java build systems such as Gradle. Using the latter, you could create a minimal build configuration (using gradle init) and then run gradle test to build your entire main source tree, the test source tree, and then run the tests.
A final note: I know it's extremely common to program "in French" at university in France (been there, done the same), but I strongly recommend consistently using English when writing code. Names matter for code comprehension, and mixing languages when working with other libraries makes everything confusing. It also means that only French-speaking people can read or use your code, which makes asking questions (e.g. here on Stack Overflow) harder, and means you can't usefully distribute your code once you write something cool and want to share it.
Hi, I am just learning Java with a book; in it there is an exercise called "loan". I am following the sample, but when I test the program and type the loan amount (which I named montoPrestamo, in Spanish), NetBeans does nothing: no error, no exception, no next prompt, nothing.
I don't know what is going on or where the mistake is.
Thanks.
Here is my code:
public static void main(String[] args) {
double montoPrestamo, interesAnual,pagoMensual,pagoTotal;
int tiempoPrestamo;
Scanner scanner = new Scanner(System.in);
scanner.useDelimiter(System.getProperty("line.separator"));
System.out.print("Monto del prestamo (Pesos y Centavos): ");
montoPrestamo = scanner.nextDouble();
System.out.print("el valor ingresado es " + montoPrestamo);
System.out.print("Tasa de Interes anual (ejemplo, 9,5): ");
interesAnual=scanner.nextDouble();
System.out.print("Tiempo de periodo del prestamo en años : ");
tiempoPrestamo=scanner.nextInt();
System.out.println("");
System.out.println("Cantidad solicitada $"+montoPrestamo);
System.out.println("la tasa de interes de su prestamo:"+interesAnual+"%");
System.out.println("Tiempo del prestamo en años"+tiempoPrestamo);
System.out.println("\n");
System.out.println("Pago mensual"+pagoMensual);
System.out.println("Pago total"+pagoTotal);
}
It works after removing this line:
scanner.useDelimiter(System.getProperty("line.separator"));
This probably does not work from within an IDE because the IDE uses its own console, which may use a different line separator than your operating system.
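If line-based input is really needed, one option (a small sketch, not part of the original answer; the class name is made up) is to use a delimiter that matches any line break instead of the platform-specific separator. In Java 8+ regex, \R matches \n, \r\n and other Unicode line breaks:

import java.util.Scanner;

public class PrestamoInput {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        // "\\R" matches any line break, so the same code works both in the
        // OS terminal and in the IDE's console.
        scanner.useDelimiter("\\R");
        System.out.print("Monto del prestamo (Pesos y Centavos): ");
        double montoPrestamo = scanner.nextDouble();
        System.out.println("el valor ingresado es " + montoPrestamo);
    }
}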
A few days ago I started developing a Java server to store a bunch of text and identify its language, so I decided to use LingPipe for this task. But I am facing an issue: after training and evaluating the classifier with two languages (English and Spanish), I cannot identify Spanish text, although I got successful results with English and French.
The tutorial I followed in order to complete this task is:
http://alias-i.com/lingpipe/demos/tutorial/langid/read-me.html
And these are the steps I took to complete the task.
Steps followed to train a language classifier:
1. First, place and unpack the English and Spanish metadata inside a folder named leipzig, as follows (note: metadata and sentences are provided at http://wortschatz.uni-leipzig.de/en/download):
leipzig //Main folder
1M sentences //Folder with data of the last trial
eng_news_2015_1M
eng_news_2015_1M.tar.gz
spa-hn_web_2015_1M
spa-hn_web_2015_1M.tar.gz
ClassifyLang.java //Custom program to try the trained code
dist //Folder
eng_news_2015_300K.tar.gz //unpackaged english sentences
spa-hn_web_2015_300K.tar.gz //unpackaged spanish sentences
EvalLanguageId.java
langid-leipzig.classifier //trained code
lingpipe-4.1.2.jar
munged //Folder
eng //folder containing the sentences.txt for english
sentences.txt
spa //folder containing the sentences.txt for spanish
sentences.txt
Munge.java
TrainLanguageId.java
unpacked //Folder
eng_news_2015_300K //Folder with the english metadata
eng_news_2015_300K-co_n.txt
eng_news_2015_300K-co_s.txt
eng_news_2015_300K-import.sql
eng_news_2015_300K-inv_so.txt
eng_news_2015_300K-inv_w.txt
eng_news_2015_300K-sources.txt
eng_news_2015_300K-words.txt
sentences.txt
spa-hn_web_2015_300K //Folder with the spanish metadata
sentences.txt
spa-hn_web_2015_300K-co_n.txt
spa-hn_web_2015_300K-co_s.txt
spa-hn_web_2015_300K-import.sql
spa-hn_web_2015_300K-inv_so.txt
spa-hn_web_2015_300K-inv_w.txt
spa-hn_web_2015_300K-sources.txt
spa-hn_web_2015_300K-words.txt
2. Second, unpack the compressed language metadata into the unpacked folder:
unpacked //Folder
eng_news_2015_300K //Folder with the english metadata
eng_news_2015_300K-co_n.txt
eng_news_2015_300K-co_s.txt
eng_news_2015_300K-import.sql
eng_news_2015_300K-inv_so.txt
eng_news_2015_300K-inv_w.txt
eng_news_2015_300K-sources.txt
eng_news_2015_300K-words.txt
sentences.txt
spa-hn_web_2015_300K //Folder with the spanish metadata
sentences.txt
spa-hn_web_2015_300K-co_n.txt
spa-hn_web_2015_300K-co_s.txt
spa-hn_web_2015_300K-import.sql
spa-hn_web_2015_300K-inv_so.txt
spa-hn_web_2015_300K-inv_w.txt
spa-hn_web_2015_300K-sources.txt
spa-hn_web_2015_300K-words.txt
3. Then munge the sentences of each corpus to remove the line numbers and tabs and to replace line breaks with single space characters. The output is uniformly written using the UTF-8 Unicode encoding (note: Munge.java from the LingPipe site).
/-----------------Command line----------------------------------------------/
javac -cp lingpipe-4.1.2.jar: Munge.java
java -cp lingpipe-4.1.2.jar: Munge /home/samuel/leipzig/unpacked /home/samuel/leipzig/munged
----------------------------------------Results-----------------------------
spa
reading from=/home/samuel/leipzig/unpacked/spa-hn_web_2015_300K/sentences.txt charset=iso-8859-1
writing to=/home/samuel/leipzig/munged/spa/spa.txt charset=utf-8
total length=43267166
eng
reading from=/home/samuel/leipzig/unpacked/eng_news_2015_300K/sentences.txt charset=iso-8859-1
writing to=/home/samuel/leipzig/munged/eng/eng.txt charset=utf-8
total length=35847257
/---------------------------------------------------------------/
<---------------------------------Folder------------------------------------->
munged //Folder
eng //folder containing the sentences.txt for english
sentences.txt
spa //folder containing the sentences.txt for spanish
sentences.txt
<-------------------------------------------------------------------------->
4. Next we train the language model (note: TrainLanguageId.java from the LingPipe LanguageId tutorial).
/---------------Command line--------------------------------------------/
javac -cp lingpipe-4.1.2.jar: TrainLanguageId.java
java -cp lingpipe-4.1.2.jar: TrainLanguageId /home/samuel/leipzig/munged /home/samuel/leipzig/langid-leipzig.classifier 100000 5
-----------------------------------Results-----------------------------------
nGram=100000 numChars=5
Training category=eng
Training category=spa
Compiling model to file=/home/samuel/leipzig/langid-leipzig.classifier
/----------------------------------------------------------------------------/
5. We evaluated the trained model with the following result, showing some issues in the confusion matrix (note: EvalLanguageId.java from the LingPipe LanguageId tutorial).
/------------------------Command line---------------------------------/
javac -cp lingpipe-4.1.2.jar: EvalLanguageId.java
java -cp lingpipe-4.1.2.jar: EvalLanguageId /home/samuel/leipzig/munged /home/samuel/leipzig/langid-leipzig.classifier 100000 50 1000
-------------------------------Results-------------------------------------
Reading classifier from file=/home/samuel/leipzig/langid-leipzig.classifier
Evaluating category=eng
Evaluating category=spa
TEST RESULTS
BASE CLASSIFIER EVALUATION
Categories=[eng, spa]
Total Count=2000
Total Correct=1000
Total Accuracy=0.5
95% Confidence Interval=0.5 +/- 0.02191346617949794
Confusion Matrix
reference \ response
,eng,spa
eng,1000,0 <---------- not diagonal sampling
spa,1000,0
Macro-averaged Precision=NaN
Macro-averaged Recall=0.5
Macro-averaged F=NaN
Micro-averaged Results
the following symmetries are expected:
TP=TN, FN=FP
PosRef=PosResp=NegRef=NegResp
Acc=Prec=Rec=F
Total=4000
True Positive=1000
False Negative=1000
False Positive=1000
True Negative=1000
Positive Reference=2000
Positive Response=2000
Negative Reference=2000
Negative Response=2000
Accuracy=0.5
Recall=0.5
Precision=0.5
Rejection Recall=0.5
Rejection Precision=0.5
F(1)=0.5
Fowlkes-Mallows=2000.0
Jaccard Coefficient=0.3333333333333333
Yule's Q=0.0
Yule's Y=0.0
Reference Likelihood=0.5
Response Likelihood=0.5
Random Accuracy=0.5
Random Accuracy Unbiased=0.5
kappa=0.0
kappa Unbiased=0.0
kappa No Prevalence=0.0
chi Squared=0.0
phi Squared=0.0
Accuracy Deviation=0.007905694150420948
Random Accuracy=0.5
Random Accuracy Unbiased=0.625
kappa=0.0
kappa Unbiased=-0.3333333333333333
kappa No Prevalence =0.0
Reference Entropy=1.0
Response Entropy=NaN
Cross Entropy=Infinity
Joint Entropy=1.0
Conditional Entropy=0.0
Mutual Information=0.0
Kullback-Liebler Divergence=Infinity
chi Squared=NaN
chi-Squared Degrees of Freedom=1
phi Squared=NaN
Cramer's V=NaN
lambda A=0.0
lambda B=NaN
ONE VERSUS ALL EVALUATIONS BY CATEGORY
CATEGORY[0]=eng VERSUS ALL
First-Best Precision/Recall Evaluation
Total=2000
True Positive=1000
False Negative=0
False Positive=1000
True Negative=0
Positive Reference=1000
Positive Response=2000
Negative Reference=1000
Negative Response=0
Accuracy=0.5
Recall=1.0
Precision=0.5
Rejection Recall=0.0
Rejection Precision=NaN
F(1)=0.6666666666666666
Fowlkes-Mallows=1414.2135623730949
Jaccard Coefficient=0.5
Yule's Q=NaN
Yule's Y=NaN
Reference Likelihood=0.5
Response Likelihood=1.0
Random Accuracy=0.5
Random Accuracy Unbiased=0.625
kappa=0.0
kappa Unbiased=-0.3333333333333333
kappa No Prevalence=0.0
chi Squared=NaN
phi Squared=NaN
Accuracy Deviation=0.011180339887498949
CATEGORY[1]=spa VERSUS ALL
First-Best Precision/Recall Evaluation
Total=2000
True Positive=0
False Negative=1000
False Positive=0
True Negative=1000
Positive Reference=1000
Positive Response=0
Negative Reference=1000
Negative Response=2000
Accuracy=0.5
Recall=0.0
Precision=NaN
Rejection Recall=1.0
Rejection Precision=0.5
F(1)=NaN
Fowlkes-Mallows=NaN
Jaccard Coefficient=0.0
Yule's Q=NaN
Yule's Y=NaN
Reference Likelihood=0.5
Response Likelihood=0.0
Random Accuracy=0.5
Random Accuracy Unbiased=0.625
kappa=0.0
kappa Unbiased=-0.3333333333333333
kappa No Prevalence=0.0
chi Squared=NaN
phi Squared=NaN
Accuracy Deviation=0.011180339887498949
/-----------------------------------------------------------------------/
6. Then we tried a real evaluation with Spanish text:
/-------------------Command line----------------------------------/
javac -cp lingpipe-4.1.2.jar: ClassifyLang.java
java -cp lingpipe-4.1.2.jar: ClassifyLang
/-------------------------------------------------------------------------/
<---------------------------------Result------------------------------------>
Text: Yo soy una persona increíble y muy inteligente, me admiro a mi mismo lo que me hace sentir ansiedad de lo que viene, por que es algo grandioso lleno de cosas buenas y de ahora en adelante estaré enfocado y optimista aunque tengo que aclarar que no lo haré por querer algo, sino por que es mi pasión.
Best Language: eng <------------- Wrong Result
<----------------------------------------------------------------------->
Code for ClassifyLang.java:
import com.aliasi.classify.Classification;
import com.aliasi.classify.Classified;
import com.aliasi.classify.ConfusionMatrix;
import com.aliasi.classify.DynamicLMClassifier;
import com.aliasi.classify.JointClassification;
import com.aliasi.classify.JointClassifier;
import com.aliasi.classify.JointClassifierEvaluator;
import com.aliasi.classify.LMClassifier;
import com.aliasi.lm.NGramProcessLM;
import com.aliasi.util.AbstractExternalizable;
import java.io.File;
import java.io.IOException;
import com.aliasi.util.Files;
public class ClassifyLang {
public static String text = "Yo soy una persona increíble y muy inteligente, me admiro a mi mismo"
+ " estoy ansioso de lo que viene, por que es algo grandioso lleno de cosas buenas"
+ " y de ahora en adelante estaré enfocado y optimista"
+ " aunque tengo que aclarar que no lo haré por querer algo, sino por que no es difícil serlo. ";
private static File MODEL_DIR
= new File("/home/samuel/leipzig/langid-leipzig.classifier");
public static void main(String[] args)
throws ClassNotFoundException, IOException {
System.out.println("Text: " + text);
LMClassifier classifier = null;
try {
classifier = (LMClassifier) AbstractExternalizable.readObject(MODEL_DIR);
} catch (IOException | ClassNotFoundException ex) {
// Handle exceptions
System.out.println("Problem with the Model");
}
Classification classification = classifier.classify(text);
String bestCategory = classification.bestCategory();
System.out.println("Best Language: " + bestCategory);
}
}
7. I tried with the 1-million-sentence files, but got the same result, and changing the n-gram number also gave the same results.
I would be very thankful for your help.
Well, after days of working on natural language processing, I found a way to determine the language of a text using OpenNLP.
Here is the sample code:
https://github.com/samuelchapas/languagePredictionOpenNLP/tree/master/TrainingLanguageDecOpenNLP
and here is the training corpus for the model created to make language predictions.
I decided to use OpenNLP for the issue described in this question; this library really has a complete stack of functionality.
Here is the sample data for model training:
https://mega.nz/#F!HHYHGJ4Q!PY2qfbZr-e0w8tg3cUgAXg
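For completeness, a minimal sketch of what language detection with OpenNLP's langdetect module typically looks like (the model file name langdetect.bin, the class name, and the sample text are assumptions, not taken from the repository above):

import java.io.File;
import opennlp.tools.langdetect.Language;
import opennlp.tools.langdetect.LanguageDetectorME;
import opennlp.tools.langdetect.LanguageDetectorModel;

public class DetectLanguage {
    public static void main(String[] args) throws Exception {
        // Load a trained language-detection model (path is an assumption).
        LanguageDetectorModel model = new LanguageDetectorModel(new File("langdetect.bin"));
        LanguageDetectorME detector = new LanguageDetectorME(model);

        String text = "Yo soy una persona increíble y muy inteligente.";
        Language best = detector.predictLanguage(text);
        System.out.println("Best language: " + best.getLang()
                + " (confidence " + best.getConfidence() + ")");
    }
}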
Hi, I started using JavaCV two days ago.
I'm trying to build an open-source ANPR system (in my Git repository) with Java SE and Maven.
I have detected my plate rectangle, and now I'm trying to prepare a good image for OCR reading.
The original image:
Right now I have obtained this image:
Is there any way to turn my plate numbers black using JavaCV? I don't have the slightest idea how to do so using JavaCV functions.
Here are the methods that produce this result.
First I call this after a blur:
public void toB_A_W(JLabel jLabel) {
    Mat rgbImage = Imgcodecs.imread(original);
    Mat destination = new Mat(rgbImage.rows(), rgbImage.cols(), rgbImage.type());
    // the goal is to correct errors from the black-and-white conversion
    int dilation_size = 2;
    // structuring element for the dilation: dilate with a rectangular shape (Imgproc.MORPH_RECT)
    Mat element1 = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(dilation_size + 1, dilation_size + 1));
    // dilate the image
    Imgproc.dilate(rgbImage, destination, element1);
    Mat labImage = new Mat();
    cvtColor(destination, labImage, Imgproc.COLOR_BGR2GRAY);
    Imgcodecs.imwrite(ocrReadFrom, labImage);
    jLabel.setIcon(new ImageIcon(ocrReadFrom));
    JOptionPane.showConfirmDialog(null, "");
}
Then I call this:
public void toB_W(JLabel jLabelBlackAndWhiteImage) {
    // this is the image used for OCR
    smouthedImage = opencv_imgcodecs.cvLoadImage(ocrReadFrom);
    blackAndWhiteImageOCR = opencv_core.IplImage.create(smouthedImage.width(),
            smouthedImage.height(), IPL_DEPTH_8U, 1);
    // the function that performs the black-and-white conversion
    System.out.println("0");
    //cvAdaptiveThreshold(smouthedImage, smouthedImage, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, opencv_imgproc.CV_THRESH_MASK, 15, -2);
    opencv_imgproc.cvSmooth(smouthedImage, smouthedImage);
    System.out.println("1");
    cvCvtColor(smouthedImage, blackAndWhiteImageOCR, CV_BGR2GRAY);
    System.out.println("2");
    cvAdaptiveThreshold(blackAndWhiteImageOCR, blackAndWhiteImageOCR, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY_INV, 17, -4);
    System.out.println("3");
    opencv_imgproc.cvSmooth(blackAndWhiteImageOCR, blackAndWhiteImageOCR);
    // end of the conversion
    cvSaveImage(ocrReadFrom, blackAndWhiteImageOCR);
    ...}
Thanks
If you want to fill the numbers, you could consider performing a binary threshold rather than an adaptive threshold.
I chose a threshold level of 40 to make the numbers distinct.
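For illustration, a minimal sketch of a fixed binary threshold using the OpenCV Java bindings (the class and variable names and the grayscale-conversion step are assumptions; only the threshold level of 40 comes from the answer above):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class PlateThreshold {
    // Returns a black-and-white version of a BGR plate image using a fixed threshold.
    public static Mat binarize(Mat plate) {
        Mat gray = new Mat();
        Imgproc.cvtColor(plate, gray, Imgproc.COLOR_BGR2GRAY);
        Mat binary = new Mat();
        // Pixels at or below 40 become 0 (black), everything brighter becomes 255 (white).
        Imgproc.threshold(gray, binary, 40, 255, Imgproc.THRESH_BINARY);
        return binary;
    }
}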
I'm trying to copy or duplicate an array, without success.
for (int i = 0; i < x * y + y; i++)
{
    tmpInt = br.read();
    // If i % x is 0, increment brakeY by one, to get the number of rows
    if (i % x == 0 && brakeY < y - 1) brakeY++;
    if (tmpX <= x - 1) tmpX++;
    else tmpX = 0;
    // The first time this part runs, the array is read from the text file. The second time,
    // it only reproduces the current state of the map, so that changes can be seen.
    spielFeld[tmpX][brakeY] = (char) tmpInt;
    System.out.print(spielFeld[tmpX][brakeY]);
    //System.out.println("----------");
}
I'm trying to copy the array called spielFeld (German for playing field) in this line: spielFeldT = spielFeld.clone(); (spielFeldT = spielFeld didn't work either), so that I can interact with it globally. The result is:
1xwvutsrqpo
2 ü n
3 !öä m
4 " l
5 K §$% k
789abcdefgh
which is exactly how it's should look like,
which is exactly how it should look, but if I try to print the copied array in exactly the same way as I printed this one, something like this appears:
1 ü �
3 !öä �n
4 " � l
5 K §$%� k
6 � fgh
789abcdefgh
789abcdefgh
You can use the System.arraycopy(...) method.
Here is the signature of the method:
public static void arraycopy(Object src, int srcPos, Object dest, int destPos, int length)
For further information, you may want to take a look at this question.
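Since spielFeld appears to be a two-dimensional array (it is indexed as spielFeld[tmpX][brakeY]), note that clone() or a single arraycopy on the outer array only copies the row references, so both variables would still share the same rows. A small sketch of a row-by-row copy, assuming a char[][] (the element type and the helper name are assumptions):

// Deep-copies a 2D char array row by row.
static char[][] copyOf(char[][] spielFeld) {
    char[][] spielFeldT = new char[spielFeld.length][];
    for (int row = 0; row < spielFeld.length; row++) {
        spielFeldT[row] = new char[spielFeld[row].length];
        // copy each row; cloning only the outer array would share row references
        System.arraycopy(spielFeld[row], 0, spielFeldT[row], 0, spielFeld[row].length);
    }
    return spielFeldT;
}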