Box2D throws error when creating fixture with PolygonShape - Java

I work with libGDX.
When I try to create a fixture with a polygon shape in Box2D, I get the following error:
java: ./Box2D/Collision/b2Distance.h:103: const b2Vec2& b2DistanceProxy::GetVertex(int32) const: Assertion `0 <= index && index < m_count' failed.
When I don't call Box2D's world.step(), the error goes away.
So I commented out everything in my WorldContactListener and added world.step() again.
I am still getting the same error.
When I replace the polygon shape with a circle shape, everything works fine.
So here is how I am creating my polygon shape:
PolygonShape shape = new PolygonShape();
float ppm = Game.PixelsPerMeter;
Vector2[] vertices = new Vector2[3];
vertices[0] = new Vector2(0f / ppm, 0f);
vertices[1] = new Vector2(1f / ppm, 1f / ppm);
vertices[2] = new Vector2(0f / ppm, 1f / ppm);
shape.set(vertices);
And here is how I am adding everything to the Box2D world:
float ppm = Game.PixelsPerMeter;
BodyDef bdef = new BodyDef();
bdef.position.set(100 / ppm, 200 / ppm);
bdef.type = BodyDef.BodyType.DynamicBody;
b2dbody = world.createBody(bdef);
FixtureDef mainFdef = new FixtureDef();
mainFdef.shape = shape; // this is the shape from above, of course
b2dbody.createFixture(mainFdef).setUserData(this);
I would be really happy if you could tell me what's wrong!
Thank you!

More of a wild guess, but do you have your ppm conversion going the right way? With ppm = 75 (as you indicate), 1/ppm gives quite a small value (about 0.0133). I didn't dig into the bowels of the Box2D code, but since it works best when objects are defined in meters, creating a polygon with vertices at (0, 0) and (0, 0.0133) (about 1 cm apart) might "confuse" it (meaning some kind of rounding error somewhere in the code keeps it from distinguishing the vertices, so it thinks there are not at least 3 of them).
For example, a bare-bones app with three versions of your vertex code yields runtime exceptions for the first two versions (small values) but no runtime exception with larger values:
/* Version 1 (your code) - Runtime error
vertices[0] = new Vector2(0f / ppm, 0f);
vertices[1] = new Vector2(1f / ppm, 1f / ppm);
vertices[2] = new Vector2(0f / ppm, 1f / ppm);
*/
/* Version 2 (your actual values) - Runtime error
vertices[0] = new Vector2(0f, 0f);
vertices[1] = new Vector2(0.0133f, 0.0133f);
vertices[2] = new Vector2(0f, 0.0133f);
*/
/* Version 3 (larger values) - No error
vertices[0] = new Vector2(0f, 0f);
vertices[1] = new Vector2(1f, 1f);
vertices[2] = new Vector2(0f, 1f);
*/
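If the triangle really needs to look small on screen, one workaround is to keep the shape's vertices at a scale Box2D handles well. A minimal sketch of mine (not from the original post; the pixel size is illustrative) that puts each leg at roughly half a meter in world units:
// Sketch: define the triangle at ~0.5 m per leg instead of ~1 cm.
PolygonShape shape = new PolygonShape();
float ppm = Game.PixelsPerMeter; // 75, per the discussion above
float sizePx = 37.5f;            // 37.5 px / 75 ppm = 0.5 m in world units
Vector2[] vertices = new Vector2[3];
vertices[0] = new Vector2(0f, 0f);
vertices[1] = new Vector2(sizePx / ppm, sizePx / ppm);
vertices[2] = new Vector2(0f, sizePx / ppm);
shape.set(vertices);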

Related

Keras Neural Network output different than Java TensorFlowInferenceInterface output

I have created a neural network in Keras using the InceptionV3 pretrained model:
base_model = applications.inception_v3.InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(2048, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(len(labels_list), activation='sigmoid')(x)
I trained the model successfully and want to predict the following image: https://imgur.com/a/hoNjDfR. Therefore, the image is cropped to 299x299 and normalized (just divided by 255):
def img_to_array(img, data_format='channels_last', dtype='float32'):
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('Unknown data_format: %s' % data_format)
    # Numpy array x has format (height, width, channel)
    # or (channel, height, width)
    # but original PIL image has format (width, height, channel)
    x = np.asarray(img, dtype=dtype)
    if len(x.shape) == 3:
        if data_format == 'channels_first':
            x = x.transpose(2, 0, 1)
    elif len(x.shape) == 2:
        if data_format == 'channels_first':
            x = x.reshape((1, x.shape[0], x.shape[1]))
        else:
            x = x.reshape((x.shape[0], x.shape[1], 1))
    else:
        raise ValueError('Unsupported image shape: %s' % (x.shape,))
    return x
def load_image_as_array(path):
    if pil_image is not None:
        _PIL_INTERPOLATION_METHODS = {
            'nearest': pil_image.NEAREST,
            'bilinear': pil_image.BILINEAR,
            'bicubic': pil_image.BICUBIC,
        }
        # These methods were only introduced in version 3.4.0 (2016).
        if hasattr(pil_image, 'HAMMING'):
            _PIL_INTERPOLATION_METHODS['hamming'] = pil_image.HAMMING
        if hasattr(pil_image, 'BOX'):
            _PIL_INTERPOLATION_METHODS['box'] = pil_image.BOX
        # This method is new in version 1.1.3 (2013).
        if hasattr(pil_image, 'LANCZOS'):
            _PIL_INTERPOLATION_METHODS['lanczos'] = pil_image.LANCZOS
    with open(path, 'rb') as f:
        img = pil_image.open(io.BytesIO(f.read()))
        width_height_tuple = (IMG_HEIGHT, IMG_WIDTH)
        resample = _PIL_INTERPOLATION_METHODS['nearest']
        img = img.resize(width_height_tuple, resample)
    return img_to_array(img, data_format=K.image_data_format())
img_array = load_image_as_array('https://imgur.com/a/hoNjDfR')
img_array = img_array/255
Then I predict it with the trained model in Keras:
model.predict(img_array.reshape(1, img_array.shape[0], img_array.shape[1], img_array.shape[2]))
The result is the following:
array([[0.02083278, 0.00425783, 0.8858412 , 0.17453966, 0.2628744 ,
0.00428194, 0.2307986 , 0.01038828, 0.07561868, 0.00983179,
0.09568241, 0.03087404, 0.00751176, 0.00651798, 0.03731382,
0.02220723, 0.0187968 , 0.02018479, 0.3416505 , 0.00586909,
0.02030778, 0.01660049, 0.00960067, 0.02457979, 0.9711478 ,
0.00666443, 0.01468313, 0.0035468 , 0.00694743, 0.03057212,
0.00429407, 0.01556832, 0.03173089, 0.01407397, 0.35166138,
0.00734553, 0.0508953 , 0.00336689, 0.0169737 , 0.07512951,
0.00484502, 0.01656419, 0.01643038, 0.02031735, 0.8343202 ,
0.02500874, 0.02459189, 0.01325032, 0.00414564, 0.08371573,
0.00484318]], dtype=float32)
The important point is that it has four values with a value greater than 0.8:
>>> y[y >= 0.8]
array([0.9100583 , 0.96635956, 0.91707945, 0.9711707 ], dtype=float32)
Now I have converted my network to .pb and imported it into an Android project. I want to predict the same image in Android, so I also resize and normalize the image like I did in Python, using the following code:
// Resize image:
InputStream imageStream = getAssets().open("test3.jpg");
Bitmap bitmap = BitmapFactory.decodeStream(imageStream);
Bitmap resized_image = utils.processBitmap(bitmap,299);
and then normalize by using the following function:
public static float[] normalizeBitmap(Bitmap source, int size) {
    float[] output = new float[size * size * 3];
    int[] intValues = new int[source.getHeight() * source.getWidth()];
    source.getPixels(intValues, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        output[i * 3] = Color.blue(val) / 255.0f;
        output[i * 3 + 1] = Color.green(val) / 255.0f;
        output[i * 3 + 2] = Color.red(val) / 255.0f;
    }
    return output;
}
But in Java I get different values. None of the four indices has a value greater than 0.8; they are all between 0.1 and 0.4!
I have checked my code several times, but I don't understand why I don't get the same values for the same image on Android. Any idea or hint?
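One difference worth checking (my observation, not a confirmed fix): np.asarray on a PIL image yields channels in RGB order, while normalizeBitmap above writes Color.blue into the first channel, so the network may be seeing swapped channels on Android. The resize filter can also differ (the Python code resizes with 'nearest'). A minimal sketch of an RGB-ordered variant of the same method (the method name is hypothetical):
// Sketch: identical normalization, but RGB channel order to match np.asarray(PIL image).
public static float[] normalizeBitmapRgb(Bitmap source, int size) {
    float[] output = new float[size * size * 3];
    int[] intValues = new int[source.getHeight() * source.getWidth()];
    source.getPixels(intValues, 0, source.getWidth(), 0, 0, source.getWidth(), source.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        output[i * 3] = Color.red(val) / 255.0f;      // R first
        output[i * 3 + 1] = Color.green(val) / 255.0f;
        output[i * 3 + 2] = Color.blue(val) / 255.0f; // B last
    }
    return output;
}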

Heatmap/Contours based on Transportation Time (Reverse Isochronic Contours)

Note: Solutions in R, Python, Java, or, if necessary, C++ or C# are desired.
I am trying to draw contours based on transportation time. To be clearer, I want to cluster the points which have a similar travel time (say, within a 10-minute interval) to a specific point (the destination) and map them as contours or a heatmap.
Right now, the only idea I have is to use the R package gmapsdistance to find the travel time for different origins, then cluster them and draw them on a map. But, as you can tell, this is in no way a robust solution.
This thread on the GIS community and this one for Python illustrate a similar problem, but for the destinations reachable from one origin within a specific time. I want the reverse: to find the origins from which I can travel to the destination within a certain time.
Right now, the code below shows my rudimentary idea (using R):
library(gmapsdistance)
set.api.key("YOUR.API.KEY")
mdestination <- "40.7+-73"
morigin1 <- "40.6+-74.2"
morigin2 <- "40+-74"
gmapsdistance(origin = morigin1,
destination = mdestination,
mode = "transit")
gmapsdistance(origin = morigin2,
destination = mdestination,
mode = "transit")
This map also may help to understand the question:
Using this answer I can get the points I can travel to from a point of origin, but I need to reverse it and find the points whose travel time to my destination is less than or equal to a certain time:
library(httr)
library(googleway)
library(jsonlite)
appId <- "TravelTime_APP_ID"
apiKey <- "TravelTime_API_KEY"
mapKey <- "GOOGLE_MAPS_API_KEY"
location <- c(40, -73)
CommuteTime <- (5 / 6) * 60 * 60
url <- "http://api.traveltimeapp.com/v4/time-map"
requestBody <- paste0('{
"departure_searches" : [
{"id" : "test",
"coords": {"lat":', location[1], ', "lng":', location[2],' },
"transportation" : {"type" : "driving"} ,
"travel_time" : ', CommuteTime, ',
"departure_time" : "2017-05-03T07:20:00z"
}
]
}')
res <- httr::POST(url = url,
httr::add_headers('Content-Type' = 'application/json'),
httr::add_headers('Accept' = 'application/json'),
httr::add_headers('X-Application-Id' = appId),
httr::add_headers('X-Api-Key' = apiKey),
body = requestBody,
encode = "json")
res <- jsonlite::fromJSON(as.character(res))
pl <- lapply(res$results$shapes[[1]]$shell, function(x){
googleway::encode_pl(lat = x[['lat']], lon = x[['lng']])
})
df <- data.frame(polyline = unlist(pl))
df_marker <- data.frame(lat = location[1], lon = location[2])
google_map(key = mapKey) %>%
add_markers(data = df_marker) %>%
add_polylines(data = df, polyline = "polyline")
Moreover, the documentation of the Travel Time Map Platform covers multi-origin searches with an arrival time, which is exactly what I want to do. But I need this for both public transportation and driving (for places with less than an hour of commute time), and since public transport is tricky (it depends on which station you are close to), maybe a heatmap is a better option than contours.
This answer is based on obtaining an origin-destination matrix between a grid of (roughly) equally distant points. This is a computationally intensive operation, not only because it requires a good number of API calls to mapping services, but also because the servers must calculate a matrix for each call. The number of required calls grows quadratically with the number of points in the grid.
To tackle this problem, I would suggest that you consider running a mapping server on your local machine or a local server. Project OSRM offers a relatively simple, free, and open-source solution, enabling you to run an OpenStreetMap server in a Linux Docker container (https://github.com/Project-OSRM/osrm-backend). Having your own local mapping server will allow you to make as many API calls as you desire. R's osrm package allows you to interact with OSRM's APIs, including those served from a local server.
library(raster) # Optional
library(sp)
library(ggmap)
library(tidyverse)
library(osrm)
devtools::install_github("cmartin/ggConvexHull") # Needed to quickly draw the contours
library(ggConvexHull)
I create a grid of 96 roughly equally distant points around the Bruxelles (Belgium) conurbation.
This grid does not take into consideration the earth's curvature, which is negligible at the scale of city distances.
For convenience, I employ the raster package to download a shapefile of Belgium and extract the nodes for Brussels city.
BE <- raster::getData("GADM", country = "BEL", level = 1)
Bruxelles <- BE[BE$NAME_1 == "Bruxelles", ]
df_grid <- makegrid(Bruxelles, cellsize = 0.02) %>%
SpatialPoints() %>%
## I convert the SpatialPoints object into a simple data.frame
as.data.frame() %>%
## create a unique id for each point in the data.frame
rownames_to_column() %>%
## rename variables of the data.frame with more explanatory names.
rename(id = rowname, lat = x2, lon = x1)
## I point osrm.server to the OpenStreet docker running in my Linux machine. ...
### ... Do not run this if you are getting your data from OpenStreet public servers.
options(osrm.server = "http://127.0.0.1:5000/")
## I obtain a list with distances (Origin Destination Matrix in ...
### ... minutes, origins and destinations)
Distance_Tables <- osrmTable(loc = df_grid)
OD_Matrix <- Distance_Tables$durations %>% ## subset the previous list
## convert the Origin Destination Matrix into a tibble
as_data_frame() %>%
rownames_to_column() %>%
## make sure we have an id column for the OD tibble
rename(origin_id = rowname) %>%
## transform the tibble into long/tidy format
gather(key = destination_id, value = distance_time, -origin_id) %>%
left_join(df_grid, by = c("origin_id" = "id")) %>%
## set origin coordinates
rename(origin_lon = lon, origin_lat = lat) %>%
left_join(df_grid, by = c("destination_id" = "id")) %>%
## set destination coordinates
rename(destination_lat = lat, destination_lon = lon)
## Obtain a nice looking road map of Brussels
Brux_map <- get_map(location = "bruxelles, belgique",
zoom = 11,
source = "google",
maptype = "roadmap")
ggmap(Brux_map) +
geom_point(aes(x = origin_lon, y = origin_lat),
data = OD_Matrix %>%
## Here I selected point_id 42 as the desired target, ...
## ... just because it is not far from the City Center.
filter(destination_id == 42),
size = 0.5) +
## Draw a diamond around point_id 42
geom_point(aes(x = origin_lon, y = origin_lat),
data = OD_Matrix %>%
filter(destination_id == 42, origin_id == 42),
shape = 5, size = 3) +
## Contour marking a distance of up to 8 minutes
geom_convexhull(alpha = 0.2,
fill = "blue",
colour = "blue",
data = OD_Matrix %>%
filter(destination_id == 42,
distance_time <= 8),
aes(x = origin_lon, y = origin_lat)) +
## Contour marking a distance of up to 15 minutes
geom_convexhull(alpha = 0.2,
fill = "red",
colour = "red",
data = OD_Matrix %>%
filter(destination_id == 42,
distance_time <= 15),
aes(x = origin_lon, y = origin_lat))
Results
The blue contour represents distances to the city center of up to 8 minutes.
The red contour represents distances of up to 15 minutes.
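Since the question also welcomes Java solutions: the same origin-destination durations can be requested from a local OSRM server over plain HTTP. A hedged sketch of mine (the /table/v1/{profile} endpoint shape follows OSRM v5; the address assumes the local Docker setup described above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class OsrmTableClient {
    public static void main(String[] args) throws Exception {
        // Two illustrative points near Brussels as "lon,lat;lon,lat"; swap in your own grid.
        String coords = "4.35,50.85;4.40,50.87";
        URL url = new URL("http://127.0.0.1:5000/table/v1/driving/" + coords);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            StringBuilder json = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
            // The JSON response carries a "durations" matrix (seconds) between all points.
            System.out.println(json);
        }
    }
}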
I came up with an approach that avoids making numerous API calls.
The idea is to find the places you can reach within a certain time (look at this thread). Traffic can be simulated by changing the departure time from morning to evening. You will end up with an overlapping area which you can reach from both places.
Then you can use Nicolás's answer, map some points within that overlapping area, and draw the heatmap for the destinations you have. This way, you have less area (fewer points) to cover and therefore make far fewer API calls (remember to use an appropriate departure time).
Below, I demonstrate what I mean and get you to the point where you can build the grid mentioned in the other answer to make your estimation more robust.
This shows how to map the intersected area.
library(httr)
library(googleway)
library(jsonlite)
appId <- "Travel.Time.ID"
apiKey <- "Travel.Time.API"
mapKey <- "Google.Map.ID"
locationK <- c(40, -73) #K
locationM <- c(40, -74) #M
CommuteTimeK <- (3 / 4) * 60 * 60
CommuteTimeM <- (0.55) * 60 * 60
url <- "http://api.traveltimeapp.com/v4/time-map"
requestBodyK <- paste0('{
"departure_searches" : [
{"id" : "test",
"coords": {"lat":', locationK[1], ', "lng":', locationK[2],' },
"transportation" : {"type" : "public_transport"} ,
"travel_time" : ', CommuteTimeK, ',
"departure_time" : "2018-06-27T13:00:00z"
}
]
}')
requestBodyM <- paste0('{
"departure_searches" : [
{"id" : "test",
"coords": {"lat":', locationM[1], ', "lng":', locationM[2],' },
"transportation" : {"type" : "driving"} ,
"travel_time" : ', CommuteTimeM, ',
"departure_time" : "2018-06-27T13:00:00z"
}
]
}')
resKi <- httr::POST(url = url,
httr::add_headers('Content-Type' = 'application/json'),
httr::add_headers('Accept' = 'application/json'),
httr::add_headers('X-Application-Id' = appId),
httr::add_headers('X-Api-Key' = apiKey),
body = requestBodyK,
encode = "json")
resMi <- httr::POST(url = url,
httr::add_headers('Content-Type' = 'application/json'),
httr::add_headers('Accept' = 'application/json'),
httr::add_headers('X-Application-Id' = appId),
httr::add_headers('X-Api-Key' = apiKey),
body = requestBodyM,
encode = "json")
resK <- jsonlite::fromJSON(as.character(resKi))
resM <- jsonlite::fromJSON(as.character(resMi))
plK <- lapply(resK$results$shapes[[1]]$shell, function(x){
googleway::encode_pl(lat = x[['lat']], lon = x[['lng']])
})
plM <- lapply(resM$results$shapes[[1]]$shell, function(x){
googleway::encode_pl(lat = x[['lat']], lon = x[['lng']])
})
dfK <- data.frame(polyline = unlist(plK))
dfM <- data.frame(polyline = unlist(plM))
df_markerK <- data.frame(lat = locationK[1], lon = locationK[2], colour = "#green")
df_markerM <- data.frame(lat = locationM[1], lon = locationM[2], colour = "#lavender")
iconK <- "red"
df_markerK$icon <- iconK
iconM <- "blue"
df_markerM$icon <- iconM
google_map(key = mapKey) %>%
add_markers(data = df_markerK,
lat = "lat", lon = "lon",colour = "icon",
mouse_over = "K_K") %>%
add_markers(data = df_markerM,
lat = "lat", lon = "lon", colour = "icon",
mouse_over = "M_M") %>%
add_polygons(data = dfM, polyline = "polyline", stroke_colour = '#461B7E',
fill_colour = '#461B7E', fill_opacity = 0.6) %>%
add_polygons(data = dfK, polyline = "polyline",
stroke_colour = '#F70D1A',
fill_colour = '#FF2400', fill_opacity = 0.4)
You can extract the intersected area like this:
# install.packages(c("rgdal", "sp", "raster","rgeos","maptools"))
library(rgdal)
library(sp)
library(raster)
library(rgeos)
library(maptools)
Kdata <- resK$results$shapes[[1]]$shell
Mdata <- resM$results$shapes[[1]]$shell
xyfunc <- function(mydf) {
  xy <- mydf[, c(2, 1)]
  return(xy)
}
spdf <- function(xy, mydf) {
  sp::SpatialPointsDataFrame(
    coords = xy, data = mydf,
    proj4string = CRS("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0"))
}
for (i in (1:length(Kdata))) {Kdata[[i]] <- xyfunc(Kdata[[i]])}
for (i in (1:length(Mdata))) {Mdata[[i]] <- xyfunc(Mdata[[i]])}
Kshp <- list(); for (i in (1:length(Kdata))) {Kshp[i] <- spdf(Kdata[[i]],Kdata[[i]])}
Mshp <- list(); for (i in (1:length(Mdata))) {Mshp[i] <- spdf(Mdata[[i]],Mdata[[i]])}
Kbind <- do.call(bind, Kshp)
Mbind <- do.call(bind, Mshp)
#plot(Kbind);plot(Mbind)
x <- intersect(Kbind,Mbind)
#plot(x)
xdf <- data.frame(x)
xdf$icon <- "https://i.stack.imgur.com/z7NnE.png"
google_map(key = mapKey,
location = c(mean(latmax,latmin), mean(lngmax,lngmin)), zoom = 8) %>%
add_markers(data = xdf, lat = "lat", lon = "lng", marker_icon = "icon")
This is just an illustration of the intersected area.
Now you can get the coordinates from the xdf data frame and construct your grid around those points to finally come up with a heatmap. To respect the other user who came up with that idea/answer, I am not including it in mine and am just referencing it.
Nicolás Velásquez - Obtaining an Origin-Destination Matrix between a Grid of (Roughly) Equally Distant Points

OpenCV 3.0 SVM always predicts 0 (Java)

I'm new to OpenCV's SVMs. I tried a sample classifier, but it only returns 0 as the predicted label, even though I used the value 5 both for training and for the prediction.
I've changed the values about a hundred times, but I just don't get what's wrong. I'm using OpenCV 3.0 with Java. Here's my code:
Mat labels = new Mat(new Size(1,4),CvType.CV_32SC1);
labels.put(0, 0, 1);
labels.put(1, 0, 1);
labels.put(2, 0, 1);
labels.put(3, 0, 0);
Mat data = new Mat(new Size(1,4),CvType.CV_32FC1);
data.put(0, 0, 5);
data.put(1, 0, 2);
data.put(2, 0, 3);
data.put(3, 0, 8);
Mat testSamples = new Mat(new Size(1,1),CvType.CV_32FC1);
testSamples.put(0,0,5);
SVM svm = SVM.create();
TermCriteria criteria = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER,100,0.1);
svm.setKernel(SVM.LINEAR);
svm.setType(SVM.C_SVC);
svm.setGamma(0.5);
svm.setNu(0.5);
svm.setC(1);
svm.setTermCriteria(criteria);
//data is N x 64 trained data Mat , labels is N x 1 label Mat with integer values;
svm.train(data, Ml.ROW_SAMPLE, labels);
Mat results = new Mat();
int predictedClass = (int) svm.predict(testSamples, results, 0);
Even if I change the labels to 1 and 2, I still get 0.0 as a response. So something has to be absolutely wrong... I just don't know what to do. Please help! :)
I had a similar problem in C++. I'm not sure if it's the same in Java, but in C++ the predictions were saved in the results matrix instead of being returned as a float.
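If the Java binding behaves the same way, the label should be readable from the results Mat rather than from the return value. A minimal sketch (assuming one response per row, which is how OpenCV lays out batch predictions):
// Sketch: read the prediction out of the results Mat instead of the return value.
Mat results = new Mat();
svm.predict(testSamples, results, 0);
double predictedLabel = results.get(0, 0)[0]; // row 0, col 0: response for the first test sample
System.out.println("Predicted label: " + predictedLabel);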

Java: How to capture pen data on a Wacom STU-530 tablet?

I'm trying to capture all the pen data: touch, pressure at a given point, coordinates of the touch...
Any suggestions?
SigCtl sigCtl = new SigCtl();
DynamicCapture dc = new DynamicCapture();
int rc = dc.capture(sigCtl, "who", "why", null, null);
if (rc == 0) {
    System.out.println("signature captured successfully");
    String fileName = "sig1.png";
    SigObj sig = sigCtl.signature();
    sig.extraData("AdditionalData", "CaptureImage.java Additional Data");
    int flags = SigObj.outputFilename | SigObj.color32BPP | SigObj.encodeData;
    sig.renderBitmap(fileName, 200, 150, "image/png", 0.5f, 0xff0000, 0xffffff, 0.0f, 0.0f, flags);
}
I managed to solve the problem using the wgssSTU library. In this jar there is a PenData class, which has the following methods: getPressure, getX, getY, getRdy, getSw...
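For illustration, a minimal sketch of reading those fields. The getters are the ones named above; the exact package path and the callback interface that delivers PenData vary by SDK version, so treat the wiring as an assumption and check the jar's javadoc:
// Sketch: PenData and its getters come from the wgssSTU jar mentioned above.
// Hook this method up through the SDK's tablet handler callback for pen packets.
public class PenDataPrinter {
    public void onPenData(PenData penData) {
        System.out.println("x=" + penData.getX()
                + " y=" + penData.getY()
                + " pressure=" + penData.getPressure()
                + " sw=" + penData.getSw()     // tip switch: pen touching the surface
                + " rdy=" + penData.getRdy()); // pen in range of the tablet
    }
}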

OpenCV Java inpaint image format requirement

Been trying to get inpainting to work on Android,
int height = (int) viewMat.size().height;
int width = (int) viewMat.size().width;
Mat maskMat = new Mat();
maskMat.create(viewMat.size(), CvType.CV_8U);
maskMat.setTo(bColor);
Point r1 = new Point(width/2-width/10, height/2-height/10);
Point r2 = new Point(width/2+width/10, height/2+height/10);
Scalar color = new Scalar(1);
Core.rectangle(maskMat, r1, r2, color, Core.FILLED);
outMat.create(viewMat.size(), CvType.CV_8UC3);
viewMat.convertTo(outMat, CvType.CV_8UC3);
Photo.inpaint(outMat, maskMat, outMat, 1, Photo.INPAINT_TELEA);
Was greeted with,
Caused by: CvException [org.opencv.core.CvException: /home/reports/ci/slave_desktop/50-SDK/opencv/modules/photo/src/inpaint.cpp:744:
error: (-210) Only 8-bit 1-channel and 3-channel input/output images are supported in function void cvInpaint(const CvArr*, const CvArr*, CvArr*, double, int)
in logcat.
Been trying for hours, creating Mats in various ways, but to no avail.
CV_8U = 8 bit per channel, 1 channel. Right?
CV_8UC3 = 8 bit per channel, 3 channels. Right?
So what am I missing? I'm totally stumped.
...
Point r2 = new Point(width/2+width/10, height/2+height/10);
Scalar color = new Scalar(1);
Core.rectangle(maskMat, r1, r2, color, Core.FILLED);
Imgproc.cvtColor(viewMat, outMat, Imgproc.COLOR_RGBA2RGB);
Photo.inpaint(outMat, maskMat, outMat, 1, Photo.INPAINT_TELEA);
...
Turned out it was simply a matter of getting rid of the alpha channel via color conversion.
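Worth spelling out why the original attempt failed (my reading, consistent with the error message): Mat.convertTo only changes the element depth, never the channel count, so a CV_8UC4 camera frame stays 4-channel even when converted "to" CV_8UC3. Imgproc.cvtColor is what actually drops the alpha channel:
// convertTo changes depth only: a CV_8UC4 Mat stays 4-channel afterwards.
viewMat.convertTo(outMat, CvType.CV_8UC3);
// cvtColor changes the channel count: RGBA (4 channels) -> RGB (3 channels).
Imgproc.cvtColor(viewMat, outMat, Imgproc.COLOR_RGBA2RGB);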
Image Inpaint Using OpenCV (Android Studio)
val ImgMat = Mat()
val Maskmat = Mat()
val destmat = Mat()
Utils.bitmapToMat(BitmapImage, ImgMat)
Utils.bitmapToMat(BitmapMask, Maskmat)
Imgproc.cvtColor(ImgMat, ImgMat, Imgproc.COLOR_RGB2XYZ)
Imgproc.cvtColor(Maskmat, Maskmat, Imgproc.COLOR_BGR2GRAY)
Photo.inpaint(ImgMat, Maskmat, destmat, 15.0, Photo.INPAINT_NS)
val InpaintBitmap = Bitmap.createBitmap(BitmapImage.getWidth(),
    BitmapImage.getHeight(),
    Bitmap.Config.ARGB_8888)
Imgproc.cvtColor(destmat, destmat, Imgproc.COLOR_XYZ2RGB)
Utils.matToBitmap(destmat, InpaintBitmap)
Description:
InpaintBitmap is the inpainted color bitmap.
The src image is a 3-channel image, but the mask bitmap is a 1-channel image.
