I am getting incorrect LineMetrics (java.awt.font.LineMetrics) values (ascent -240, descent 240, and leading 240) when running on RHEL, but correct values when running on Windows (ascent 10.053711, descent 2.1972656, and leading 0.32714844).
JDK Version: jdk1.8.0_51
OS: RHEL / Fedora 7.3 // getting incorrect values here
OS: Windows 10 // getting correct values here
BufferedImage image = new BufferedImage(700, 500, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = image.createGraphics();
Font font = new Font("SansSerif",Font.PLAIN, 10);
LineMetrics metrics = font.getLineMetrics("ABCxyz", g2.getFontRenderContext());
System.out.println("Metrics: ");
System.out.println("\tAscent: " + metrics.getAscent());
System.out.println("\tDescent: " + metrics.getDescent());
System.out.println("\tHeight: " + metrics.getHeight());
System.out.println("\tLeading: " + metrics.getLeading());
The TTF file installed in the underlying OS was corrupt; because of that I was getting incorrect metrics.
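For anyone debugging the same symptom, a quick sanity check (a minimal sketch; the class name is mine) is to list the font families the JVM actually resolved from the OS, to verify that the family backing the logical SansSerif font is present:

import java.awt.GraphicsEnvironment;

public class FontCheck {
    public static void main(String[] args) {
        // Print every font family the JVM picked up from the OS;
        // a corrupt or missing TTF can change what is listed here
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        String[] families = ge.getAvailableFontFamilyNames();
        for (int i = 0; i < families.length; i++) {
            System.out.println(families[i]);
        }
    }
}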
I have created a neural network in Keras using the InceptionV3 pretrained model:
base_model = applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(len(labels_list), activation='sigmoid')(x)
I trained the model successfully and now want to predict the following image: https://imgur.com/a/hoNjDfR. The image is therefore resized to 299x299 and normalized (just divided by 255):
def img_to_array(img, data_format='channels_last', dtype='float32'):
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Unknown data_format: %s' % data_format)
    # Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but original PIL image has format (width, height, channel)
    x = np.asarray(img, dtype=dtype)
    if len(x.shape) == 3:
        if data_format == 'channels_first':
            x = x.transpose(2, 0, 1)
    elif len(x.shape) == 2:
        if data_format == 'channels_first':
            x = x.reshape((1, x.shape[0], x.shape[1]))
        else:
            x = x.reshape((x.shape[0], x.shape[1], 1))
    else:
        raise ValueError('Unsupported image shape: %s' % (x.shape,))
    return x

def load_image_as_array(path):
    if pil_image is not None:
        _PIL_INTERPOLATION_METHODS = {
            'nearest': pil_image.NEAREST,
            'bilinear': pil_image.BILINEAR,
            'bicubic': pil_image.BICUBIC,
        }
        # These methods were only introduced in version 3.4.0 (2016).
        if hasattr(pil_image, 'HAMMING'):
            _PIL_INTERPOLATION_METHODS['hamming'] = pil_image.HAMMING
        if hasattr(pil_image, 'BOX'):
            _PIL_INTERPOLATION_METHODS['box'] = pil_image.BOX
        # This method is new in version 1.1.3 (2013).
        if hasattr(pil_image, 'LANCZOS'):
            _PIL_INTERPOLATION_METHODS['lanczos'] = pil_image.LANCZOS
    with open(path, 'rb') as f:
        img = pil_image.open(io.BytesIO(f.read()))
    width_height_tuple = (IMG_HEIGHT, IMG_WIDTH)
    resample = _PIL_INTERPOLATION_METHODS['nearest']
    img = img.resize(width_height_tuple, resample)
    return img_to_array(img, data_format=K.image_data_format())
img_array = load_image_as_array('https://imgur.com/a/hoNjDfR')
img_array = img_array/255
Then I predict it with the trained model in Keras:
model.predict(img_array.reshape(1, img_array.shape[0], img_array.shape[1], img_array.shape[2]))
The result is the following:
array([[0.02083278, 0.00425783, 0.8858412 , 0.17453966, 0.2628744 ,
0.00428194, 0.2307986 , 0.01038828, 0.07561868, 0.00983179,
0.09568241, 0.03087404, 0.00751176, 0.00651798, 0.03731382,
0.02220723, 0.0187968 , 0.02018479, 0.3416505 , 0.00586909,
0.02030778, 0.01660049, 0.00960067, 0.02457979, 0.9711478 ,
0.00666443, 0.01468313, 0.0035468 , 0.00694743, 0.03057212,
0.00429407, 0.01556832, 0.03173089, 0.01407397, 0.35166138,
0.00734553, 0.0508953 , 0.00336689, 0.0169737 , 0.07512951,
0.00484502, 0.01656419, 0.01643038, 0.02031735, 0.8343202 ,
0.02500874, 0.02459189, 0.01325032, 0.00414564, 0.08371573,
0.00484318]], dtype=float32)
The important point is that it has four values with a value greater than 0.8:
>>> y[y>=0.8]
array([0.9100583 , 0.96635956, 0.91707945, 0.9711707 ], dtype=float32)
Now I have converted my network to a .pb file and imported it into an Android project. I wanted to predict the same image on Android, so I also resize the image and normalize it like I did in Python, using the following code:
// Resize image:
InputStream imageStream = getAssets().open("test3.jpg");
Bitmap bitmap = BitmapFactory.decodeStream(imageStream);
Bitmap resized_image = utils.processBitmap(bitmap,299);
and then normalize by using the following function:
public static float[] normalizeBitmap(Bitmap source, int size) {
    float[] output = new float[size * size * 3];
    int[] intValues = new int[source.getHeight() * source.getWidth()];
    source.getPixels(intValues, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        // note: the channels are written in B,G,R order here
        output[i * 3] = Color.blue(val) / 255.0f;
        output[i * 3 + 1] = Color.green(val) / 255.0f;
        output[i * 3 + 2] = Color.red(val) / 255.0f;
    }
    return output;
}
But in Java I get different values: none of the four indices has a value greater than 0.8; their values are all between 0.1 and 0.4. I have checked my code several times, but I don't understand why I don't get the same values for the same image on Android. Any idea or hint? (One channel-order difference is sketched below.)
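One thing worth ruling out is channel order: the PIL-based Python pipeline produces R,G,B arrays, while normalizeBitmap above writes B,G,R. A variant that matches PIL's order might look like this (a sketch; the function name is mine, and whether the .pb graph actually expects RGB depends on how the model was exported):

// Same as normalizeBitmap, but emits channels in R,G,B order to match
// the PIL-based Python pipeline (assumption: the graph expects RGB input)
public static float[] normalizeBitmapRgb(Bitmap source, int size) {
    float[] output = new float[size * size * 3];
    int[] intValues = new int[source.getHeight() * source.getWidth()];
    source.getPixels(intValues, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        output[i * 3] = Color.red(val) / 255.0f;
        output[i * 3 + 1] = Color.green(val) / 255.0f;
        output[i * 3 + 2] = Color.blue(val) / 255.0f;
    }
    return output;
}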
SOLVED & WHY:
The path to the image contains Unicode characters; I have to say it's a bug.
ORIGINAL POST:
I am new to OpenCV, using Java with OpenCV 3.2.0, 3.1.0, and 2.4.3. I cannot read this image at all: no width or height can be read. My aim is to find the Harris corners; other images work without this problem.
code:
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

public class Test
{
    public static void main(String[] args)
    {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img_object = Highgui.imread("E:/ℤIMAGEℂ/ℤtestℂ.png");
        System.out.println(
            "img_object.width() = " + img_object.width()
            + ",\n img_object.height() = " + img_object.height()
            + ",\n img_object.depth() = " + img_object.depth()
            + ",\n img_object.channels() = " + img_object.channels()
            + ",\n img_object.total() = " + img_object.total()
            + ",\n img_object.type() = " + img_object.type()
        );
    }
}
Image: (the PNG embedded in the original post is omitted here)
error:
img_object.width() = 0,
img_object.height() = 0,
img_object.depth() = 0,
img_object.channels() = 1,
img_object.total() = 0,
img_object.type() = 0
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data
OpenCV Error: Assertion failed (code) in cv::imencode, file ..\..\..\..\opencv\modules\highgui\src\loadsave.cpp, line 430
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: ..\..\..\..\opencv\modules\highgui\src\loadsave.cpp:430: error: (-215) code in function cv::imencode
]
at org.opencv.highgui.Highgui.imencode_1(Native Method)
at org.opencv.highgui.Highgui.imencode(Highgui.java:243)
at Imshow.imshow(Imshow.java:29)
at test.main(Test.java:21)
SOLVED & WHY:
The path to the image contains Unicode characters.
As someone suggested: if you solve your own problem, either delete the question or answer it yourself.
If you try to read an image from a Unicode path, these errors occur, and I didn't find a related solution anywhere, so I'm leaving this here.
OpenCV's handling of Unicode file paths is buggy, sadly. That's a problem with OpenCV, not with my code.
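A workaround that sidesteps OpenCV's path handling entirely is to read the file bytes with Java I/O (which copes with Unicode paths) and decode them in memory (a sketch against the 2.4-era Highgui API used in the post; in OpenCV 3.x the same calls live on Imgcodecs):

import java.nio.file.Files;
import java.nio.file.Paths;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.highgui.Highgui;

public class UnicodePathWorkaround {
    public static void main(String[] args) throws Exception {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Java's own file I/O handles the Unicode path; imdecode never sees it
        byte[] bytes = Files.readAllBytes(Paths.get("E:/ℤIMAGEℂ/ℤtestℂ.png"));
        Mat img = Highgui.imdecode(new MatOfByte(bytes), Highgui.CV_LOAD_IMAGE_COLOR);
        System.out.println("width = " + img.width() + ", height = " + img.height());
    }
}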
After decoding an image this way, I am passing it to ReadFile.readBitmap:
Bitmap croppedBitmap = Bitmap.createBitmap(bitmap, 5, 5, bitmap.getWidth() - 10, bitmap.getHeight() - 10, matrix, true);
System.out.println("The new cropped bitmap>>>>>>>>>>>>>" + croppedBitmap.getWidth() + "<<<>>>>" + croppedBitmap.getHeight());
this.bmp = croppedBitmap;
// note: this converts the original bitmap, not the cropped one
Pix var1 = ReadFile.readBitmap(bitmap);
Now I am processing and binarizing the image this way:
// Otsu adaptive thresholding on the input Pix
Pix var2 = Binarize.otsuAdaptiveThreshold(var1);
// normalize the background of the 8-bit image, then deskew by the angle found on the binarized image
var2 = Rotate.rotate(AdaptiveMap.backgroundNormMorph(Convert.convertTo8(var1), 8, 6, 250), Skew.findSkew(var2));
The above pipeline does a good job in the background service and produces the result, but my present challenge is that it takes a long time (about 40 seconds) to process the image. My target is to process and binarize the image in about 10 seconds, which is what led me to the approach above. So I am asking: is there a way I can decrease the processing time to about 10 seconds? (One downscaling option is sketched below.)
Thanks!
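One knob worth trying (a sketch, not a tested fix: it assumes the tess-two ReadFile wrapper seen above, and accuracy at the lower resolution has to be verified on your documents): binarization and background normalization scale roughly with pixel count, so downscaling the bitmap before converting it to a Pix can cut the processing time substantially.

// Halve each dimension before handing the image to Leptonica;
// a 2x downscale cuts the pixel count (and roughly the work) by 4x
Bitmap smaller = Bitmap.createScaledBitmap(
        croppedBitmap,
        croppedBitmap.getWidth() / 2,
        croppedBitmap.getHeight() / 2,
        true);
Pix var1 = ReadFile.readBitmap(smaller);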
I am learning how to find the local and global maxima in an image. As far as I know, an image has only one global maximum and one global minimum, and I managed to get these values and their corresponding locations in the image. So my questions are:
how to get the local maxima in an image
how to get the local minima in an image
As you can see in the code below, I am using a mask, but at run time I receive the error message mentioned below. So please let me know why we need a mask and how to use it properly.
Update:
Line 32 is: MinMaxLocResult s = Core.minMaxLoc(gsMat, mask);
code:
public static void main(String[] args) {
    MatFactory matFactory = new MatFactory();
    FilePathUtils.addInputPath(path_Obj);
    Mat bgrMat = matFactory.newMat(FilePathUtils.getInputFileFullPathList().get(0));
    Mat gsMat = SysUtils.rgbToGrayScaleMat(bgrMat);
    Log.D(TAG, "main", "gsMat.dump(): \n" + gsMat.dump());
    Mat mask = new Mat(new Size(3, 3), CvType.CV_8U); // which type should I set for the mask?
    MinMaxLocResult s = Core.minMaxLoc(gsMat, mask);
    Log.D(TAG, "main", "s.maxVal: " + s.maxVal); // the global maximum
    Log.D(TAG, "main", "s.minVal: " + s.minVal); // the global minimum
    Log.D(TAG, "main", "s.maxLoc: " + s.maxLoc); // coordinates of the global maximum
    Log.D(TAG, "main", "s.minLoc: " + s.minLoc); // coordinates of the global minimum
}
error message:
OpenCV Error: Assertion failed (A.size == arrays[i0]->size) in cv::NAryMatIterator::init, file ..\..\..\..\opencv\modules\core\src\matrix.cpp, line 3197
Exception in thread "main" CvException [org.opencv.core.CvException: ..\..\..\..\opencv\modules\core\src\matrix.cpp:3197: error: (-215) A.size == arrays[i0]->size in function cv::NAryMatIterator::init
]
at org.opencv.core.Core.n_minMaxLocManual(Native Method)
at org.opencv.core.Core.minMaxLoc(Core.java:7919)
at com.example.globallocalmaxima_00.MainClass.main(MainClass.java:32)
To calculate the global min/max values you don't need a mask at all. If you do pass one, it must be the same size as the input image; the 3x3 mask against a full-size gsMat is exactly what the assertion A.size == arrays[i0]->size is complaining about.
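For example (a minimal sketch reusing the question's gsMat):

// no mask: scan the whole image
MinMaxLocResult s = Core.minMaxLoc(gsMat);

// with a mask: it must be CV_8U and the same size as the image,
// with non-zero pixels marking the region to scan
Mat mask = new Mat(gsMat.size(), CvType.CV_8U, new Scalar(255));
MinMaxLocResult s2 = Core.minMaxLoc(gsMat, mask);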
For the local min/max values you can use a little trick: dilate and erode the image, then compare each pixel with the original. Where the original value equals the eroded value, the pixel is a local minimum; where it equals the dilated value, it is a local maximum.
The code is the following:
Mat eroded = new Mat();
Mat dilated = new Mat();
Imgproc.erode(gsMat, eroded, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5)));
Imgproc.dilate(gsMat, dilated, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5)));
Mat localMin = new Mat(gsMat.size(), CvType.CV_8U, new Scalar(0));
Mat localMax = new Mat(gsMat.size(), CvType.CV_8U, new Scalar(0));
for (int i = 0; i < gsMat.rows(); i++) {
    for (int j = 0; j < gsMat.cols(); j++) {
        // get() returns a double[] of channel values; compare the first channel
        if (gsMat.get(i, j)[0] == eroded.get(i, j)[0])
            localMin.put(i, j, 255);
        if (gsMat.get(i, j)[0] == dilated.get(i, j)[0])
            localMax.put(i, j, 255);
    }
}
Please note, I'm not a Java programmer, so the code is only an illustration of the algorithm.
I have a desktop Java application (Java 1.4.2) that needs to determine information about the two screens in a Linux environment:
# cat /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant Update 7)
# lsb_release
LSB Version:    :core-3.0-ia32:core-3.0-noarch:graphics-3.0-ia32:graphics-3.0-noarch
# cat /proc/version
Linux version 2.6.9-78.ELsmp (brewbuilder#hs20-bc2-3.build.redhat.com)
(gcc version 3.4.6 20060404 (Red Hat 3.4.6-10)) #1 SMP Wed Jul 9 15:39:47 EDT 2008
and the screens are 2048x2048 and 1600x1200.
The code is
GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
GraphicsDevice[] allScreens = env.getScreenDevices();
log("=============================================");
log("Total num. of screen = " + allScreens.length);
for (int i = 0; i < allScreens.length; i++) {
    log("--------------------------------------");
    log(allScreens[i].getIDstring() + " width: " + allScreens[i].getDisplayMode().getWidth()
            + " - height: " + allScreens[i].getDisplayMode().getHeight());
    GraphicsConfiguration dgc = allScreens[i].getDefaultConfiguration();
    Rectangle bounds = dgc.getBounds();
    Insets insets = Toolkit.getDefaultToolkit().getScreenInsets(dgc);
    log("Bounds: " + bounds);
    log("Insets: " + insets);
    log("--------------------------------------");
}
log("=============================================");
but the output is
=============================================
Total num. of screen = 2
--------------------------------------
:0.0 width: 2048 - height: 2048
Bounds: java.awt.Rectangle[x=0,y=0,width=2048,height=2048]
Insets: java.awt.Insets[top=0,left=0,bottom=0,right=0]
--------------------------------------
--------------------------------------
:0.1 width: 2048 - height: 2048
Bounds: java.awt.Rectangle[x=0,y=0,width=1600,height=1200]
Insets: java.awt.Insets[top=0,left=0,bottom=0,right=0]
--------------------------------------
=============================================
The screen :0.1 is reported as 2048x2048 by allScreens[i].getDisplayMode(), but as 1600x1200 by getDefaultConfiguration().getBounds():
why do I get different results?
The API code for getDisplayMode() is
public DisplayMode getDisplayMode() {
    GraphicsConfiguration gc = getDefaultConfiguration();
    Rectangle r = gc.getBounds();
    ColorModel cm = gc.getColorModel();
    return new DisplayMode(r.width, r.height, cm.getPixelSize(), 0);
}
so the values should be the same: why are they different?
Thanks
That is something I also noticed in my own application, which requires the front GUI to fit different monitors in a multi-monitor environment. The problem is related to the video card: with an Intel video card, allScreens[i].getDisplayMode() reports the same width and height for every monitor (basically the primary's), whereas NVIDIA and ATI (AMD) cards return the actual resolution of each monitor from the same call.
So the proper way to get the right resolution of each monitor in a multi-monitor environment, regardless of the video card, is to use getDefaultConfiguration().getBounds().width (or .height).
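In code, that looks something like this (a minimal sketch; the class name is mine, and the indexed loop keeps it compatible with the question's Java 1.4.2):

import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;

public class ScreenBounds {
    public static void main(String[] args) {
        GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice[] screens = env.getScreenDevices();
        for (int i = 0; i < screens.length; i++) {
            // getBounds() reflects the per-monitor resolution even where
            // getDisplayMode() repeats the primary's mode
            Rectangle b = screens[i].getDefaultConfiguration().getBounds();
            System.out.println(screens[i].getIDstring() + ": "
                    + b.width + "x" + b.height
                    + " at (" + b.x + "," + b.y + ")");
        }
    }
}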
Hope it helps.