for (i in 0 until imagesArray.length()) {
    val imageRes = imagesArray.getJSONObject(i).getString("image_url")
    Log.e(TAG, "setImageSlider_466: $imageRes")
    val slide = TextSliderView(this)
        .image(imageRes)
        .setScaleType(BaseSliderView.ScaleType.CenterInside)
        .empty(R.drawable.no_image_placeholder)
    image_slider.addSlider(slide)
    val scaledSlide = DialogTouchImageSlider(this, R.drawable.no_image_placeholder)
        .description("Description")
        .image(imageRes)
        .setScaleType(BaseSliderView.ScaleType.CenterInside)
        .empty(R.drawable.no_image_placeholder)
    dialogSlider.addSlider(scaledSlide)
    slide.setOnSliderClickListener {
        if (disableSliderTouch)
            return@setOnSliderClickListener
        dialogSlider.currentPosition = i
        imageDialog.show()
    }
}
I'm using daimajia's SliderLayout to show images in a SlideView from an API, but some images have "http" URLs and those are not showing; an image only shows when its URL contains "https". I have already tried Glide and Picasso to load these images, with the same result.
I want to show images with http URLs as well.
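One likely cause, assuming the app targets Android 9 (API 28) or later: cleartext HTTP traffic is blocked by default, so plain-http image URLs fail no matter which image library you use. A minimal manifest change to allow it app-wide (a stricter per-domain network security config is also possible):

```xml
<!-- AndroidManifest.xml: opt in to cleartext (http) traffic. -->
<application
    android:usesCleartextTraffic="true"
    ... >
</application>
```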
I'm currently developing an app that acquires a picture using the phone camera and requests results from Google reverse image search.
From what I've researched so far, Google's APIs accept a picture URL as input for the reverse search, but I don't have a URL, since the user takes the picture at that moment and I only have the Bitmap.
How can I pass the picture as input to Google reverse image search using an HTTP POST request?
I'm using Android Studio and Java.
Here's my code:
public void onActivityResult(ActivityResult result) {
    if (result.getResultCode() == Activity.RESULT_OK && result.getData() != null) {
        Bundle bundle = result.getData().getExtras();
        Bitmap bitmap = (Bitmap) bundle.get("data");
        requestQueue = Volley.newRequestQueue(context);
        //getRequest();
        //postRequest();
    }
}
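As a sketch of what the upload body could look like (assuming an endpoint that accepts a multipart/form-data image upload; you would first compress the Bitmap to JPEG bytes with bitmap.compress(...)). The class and names here are illustrative, not part of any Google API:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical helper (not a Google API): wraps raw JPEG bytes in a
// multipart/form-data body suitable for an HTTP POST.
public class MultipartBody {
    public static byte[] build(String boundary, String fieldName,
                               String fileName, byte[] imageBytes) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Part header: boundary line, disposition, and content type.
        String head = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + fieldName
                + "\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: image/jpeg\r\n\r\n";
        out.writeBytes(head.getBytes(StandardCharsets.UTF_8));
        out.writeBytes(imageBytes);
        // Closing boundary terminates the multipart body.
        out.writeBytes(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }
}
```

You would send this as the request body with a `Content-Type: multipart/form-data; boundary=...` header, e.g. via a Volley byte-array request.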
I am attempting to load another site's iframe into my Android app via a WebView. I am able to load other websites properly, but when I load a stream from sportsbay.org, which provides an iframe embed code snippet, the stream goes black and it prints "Sandboxing is not allowed". I have gone through several other questions trying to find an answer to this. My Android project is as follows.
The specific URL that I am passing in as video_url is https://sportsbay.org/embed/45629/1/btn-big-ten-network-live.html. The iframe snippet that sportsbay provides is <iframe allow='encrypted-media' width='640' height='360' marginwidth='0' marginheight='0' scrolling='no' frameborder='0' allowfullscreen='yes' src='//sportsbay.org/embed/45629/1/btn-big-ten-network-live.html'></iframe>. This URL loads two URLs: 1) https://lowend.xyz/stream/45629.html, which is the actual stream, and moments later 2) https://sportsbay.org/live-streams, which redirects you to the sportsbay home page. I have code in MyWebViewClient that prevents the main sportsbay page from loading, which would interrupt the stream I want to play (THIS is where I get the sandboxing message). I have tried replacing loadUrl with loadData and other variations that pass in the iframe HTML string along with the mimeType, but what I have currently is the closest I have come to loading the stream (the others don't get far enough to post the sandboxing message).
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Bring the linear layout into view.
    setContentView(R.layout.webview);
    // Grab the current intent & pull out the video url.
    Intent i = getIntent();
    String video_url = i.getStringExtra("video_url");
    // Removes the app name banner at top. Allows orientation changes without reload.
    getSupportActionBar().hide();
    // Creates the webview object.
    WebView web = findViewById(R.id.webView);
    // Configure settings for the webview.
    WebSettings webSettings = web.getSettings();
    // Enables DOM storage (localStorage) for sites that need it.
    webSettings.setDomStorageEnabled(true);
    // Sets the encoding standard for urls.
    webSettings.setDefaultTextEncodingName("utf-8");
    // Able to zoom.
    webSettings.setSupportZoom(true);
    // Needed for websites to load JavaScript-enabled content (most videos/streams).
    webSettings.setJavaScriptEnabled(true);
    // Attach the webview to MyWebViewClient, which vets incoming urls before loading.
    // Blocks ads / viruses / popups.
    // Also keeps urls from launching in a browser.
    web.setWebViewClient(new MyWebViewClient());
    // Check if the channel is sourced from sportsbay.org.
    if (video_url.contains("sportsbay.org")) {
        // Change the browser user agent, since the Chrome user agent returns a 403 Forbidden.
        webSettings.setUserAgentString("Mozilla/5.0 (platform; rv:geckoversion) Gecko/geckotrail Firefox/firefoxversion");
    }
    web.loadUrl(video_url);
}
public class MyWebViewClient extends WebViewClient {
    public boolean shouldOverrideKeyEvent(WebView view, KeyEvent event) {
        return true;
    }

    @SuppressWarnings("deprecation")
    @Override
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        final Uri uri = Uri.parse(url);
        return handleUri(uri);
    }

    @TargetApi(Build.VERSION_CODES.N)
    @Override
    public boolean shouldOverrideUrlLoading(WebView view, WebResourceRequest request) {
        final Uri uri = request.getUrl();
        return handleUri(uri);
    }

    private boolean handleUri(final Uri uri) {
        Log.i(TAG, "Uri = " + uri);
        final String host = uri.getHost();
        // host can be null (e.g. for data: or about: URIs), so guard before comparing.
        if (host == null) {
            return true;
        }
        // Check the requested URL against known-good hosts.
        if (host.equals("s1-tv.blogspot.com") ||
            host.equals("reddit-tv-streams.blogspot.com") ||
            host.equals("newdmn.icu") ||
            host.equals("lowend.xyz")) {
            // Returning false means the webView itself will load this url.
            return false;
        } else {
            // Do not load the requested URL.
            return true;
        }
    }
}
Great news! I figured it out. The sandboxing message was not caused by the server incorrectly interacting with my app; it was because my app did not have permission to use file storage outside of the Android application (the app sandbox). This was fixed with:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
which needs to be placed in AndroidManifest.xml with the other permissions (directly under the root element).
I am implementing the ML Kit face detection library in a simple application. The application is a facial monitoring system, so I am setting up a preview feed from the front camera and attempting to detect a face. I am using the Camera2 API. In my ImageReader.OnImageAvailableListener, I want to run Firebase face detection on each image that is read in. After creating my FirebaseVisionImage and running the FirebaseVisionFaceDetector, I get an empty faces list; this should contain the detected faces, but I always get a list of size 0 even though a face is in the image.
I have tried other ways of creating my FirebaseVisionImage. Currently, I create it from a byte array, which I built following the ML Kit docs. I have also tried to create a FirebaseVisionImage using the media Image object.
private final ImageReader.OnImageAvailableListener onPreviewImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    /** Get the latest Image and convert it to a byte array. **/
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Get the latest image.
        Image mImage = reader.acquireNextImage();
        if (mImage == null) {
            return;
        }
        byte[] newImg = convertYUV420888ToNV21(mImage);
        FirebaseApp.initializeApp(MonitoringFeedActivity.this);
        FirebaseVisionFaceDetectorOptions highAccuracyOpts =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .build();
        int rotation = getRotationCompensation(frontCameraId, MonitoringFeedActivity.this, getApplicationContext());
        FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
                .setWidth(480)  // 480x360 is typically sufficient for image
                .setHeight(360) // recognition, but must match the actual image size
                .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
                .setRotation(rotation)
                .build();
        FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
        FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
                .getVisionFaceDetector(highAccuracyOpts);
        detector.detectInImage(image)
                .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
                    @Override
                    public void onSuccess(List<FirebaseVisionFace> faces) {
                        // Task completed successfully.
                        if (faces.size() != 0) {
                            Log.i(TAG, String.valueOf(faces.get(0).getSmilingProbability()));
                        }
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception.
                    }
                });
        mImage.close();
    }
};
The aim is to have the resulting faces list contain the detected faces in each processed image.
byte[] newImg = convertYUV420888ToNV21(mImage);
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(newImg, metadata);
These two lines are important. Make sure they are creating a proper FirebaseVisionImage - in particular, the metadata's width and height must match the dimensions of the converted image.
Check out my project for all the functionality:
MLKIT demo
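A frequent cause of an empty faces list is a malformed NV21 buffer. As a rough sketch of the byte layout NV21 expects (this deliberately ignores the row and pixel strides that a real convertYUV420888ToNV21 must handle; the plane arrays are assumed to be already extracted and subsampled):

```java
// Illustrative only: interleaves planar Y, U, V data into NV21
// (full Y plane, then alternating V and U bytes).
public class Nv21Sketch {
    public static byte[] fromPlanes(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        // The Y plane is copied unchanged to the front of the buffer.
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];     // NV21 stores V first...
            nv21[y.length + 2 * i + 1] = u[i]; // ...then U for each chroma pair
        }
        return nv21;
    }
}
```

If the V/U order is swapped or the strides are ignored, the detector receives a scrambled image and silently reports no faces.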
I am new to Android development and I am trying to replicate the example of displaying images in a GridView using AsyncTask, as illustrated on Google's Android Developers page.
My question is: how do I declare or initialize mPlaceHolderBitmap in the following code?
Here is the link to the Google code:
Displaying Bitmaps in Your UI
public void loadBitmap(int resId, ImageView imageView) {
    if (cancelPotentialWork(resId, imageView)) {
        final BitmapWorkerTask task = new BitmapWorkerTask(imageView);
        final AsyncDrawable asyncDrawable =
                new AsyncDrawable(getResources(), mPlaceHolderBitmap, task);
        imageView.setImageDrawable(asyncDrawable);
        task.execute(resId);
    }
}
Converting @Luksprog's comment into an answer:
It represents your own Bitmap to be used as a placeholder while the actual image is loaded. You can download the sample and see the entire code.
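As a concrete sketch of such a declaration (the drawable resource name here is just a placeholder for whatever image you want shown while loading):

```java
// Hypothetical field in your Activity/Fragment; decode any drawable
// you like to serve as the loading placeholder.
Bitmap mPlaceHolderBitmap = BitmapFactory.decodeResource(
        getResources(), R.drawable.image_placeholder);
```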
I'm currently trying to develop an app that visits the following site (http://lulpix.com), parses the HTML, and gets the img src from the following section:
<div class="pic rounded-8" style="overflow:hidden;"><div style="margin:0 0 36px 0;overflow:hidden;border:none;height:474px;"><img src="**http://lulpix.com/images/2012/April/13/4f883cdde3591.jpg**" alt="All clogged up" title="All clogged up" width="319"/></div></div>
It's of course different every time the page is loaded, so I cannot give a direct URL to an asynchronous gallery of images, which is what I intend to do. For instance:
Load Page > Parse img src > download ASync to imageview > Reload lulpix.com > start again
Then place each of these in an image view from which the user can swipe left and right to browse.
So the TL;DR of this is: how can I parse the HTML to retrieve the URL, and has anyone got any experience with libraries for displaying images?
Thank you very much.
Here's an AsyncTask that connects to lulpix and fakes a referrer & user-agent (lulpix apparently tries to block scraping with some pretty lame checks). Start it like this in your Activity:
new ForTheLulz().execute();
The resulting Bitmap is downloaded in a pretty lame way (no caching, and no check whether the image has already been downloaded) and error handling is overall pretty non-existent - but the basic concept should be OK.
class ForTheLulz extends AsyncTask<Void, Void, Bitmap> {
    @Override
    protected Bitmap doInBackground(Void... args) {
        Bitmap result = null;
        try {
            Document doc = Jsoup.connect("http://lulpix.com")
                    .referrer("http://www.google.com")
                    .userAgent("Mozilla/5.0 (Windows; U; WindowsNT 5.1; en-US; rv1.8.1.6) Gecko/20070725 Firefox/2.0.0.6")
                    .get();
            //parse("http://lulpix.com");
            if (doc != null) {
                Elements elems = doc.getElementsByAttributeValue("class", "pic rounded-8");
                if (elems != null && !elems.isEmpty()) {
                    Element elem = elems.first();
                    elems = elem.getElementsByTag("img");
                    if (elems != null && !elems.isEmpty()) {
                        elem = elems.first();
                        String src = elem.attr("src");
                        if (src != null) {
                            URL url = new URL(src);
                            // Just assuming that "src" isn't a relative URL is probably stupid.
                            InputStream is = url.openStream();
                            try {
                                result = BitmapFactory.decodeStream(is);
                            } finally {
                                is.close();
                            }
                        }
                    }
                }
            }
        } catch (IOException e) {
            // Error handling goes here
        }
        return result;
    }

    @Override
    protected void onPostExecute(Bitmap result) {
        ImageView lulz = (ImageView) findViewById(R.id.lulpix);
        if (result != null) {
            lulz.setImageBitmap(result);
        } else {
            // Your fallback drawable resource goes here
            //lulz.setImageResource(R.drawable.nolulzwherehad);
        }
    }
}
I recently used Jsoup to parse invalid HTML and it works well! Do something like:
Document doc = Jsoup.parse(str);
Element img = doc.body().select("div[class=pic rounded-8] img").first();
String src = img.attr("src");
Play with the "selector string" to get it right, but I think the above will work. It first selects the outer div based on the value of its class attribute, and then any descendant img element.
No need to use a WebView now - check this sample project:
https://github.com/meetmehdi/HTMLImageParser.git
In this sample project I parse the HTML and the image tag, then extract the image from the image URL. The image is downloaded and displayed.