How to print a password-protected PDF in Android? - java

I am printing a PDF using the code below. It works fine for normal PDFs but crashes with a password-protected PDF. Is there any way to print a password-protected PDF, or at least to stop the application from crashing? The application crashes even though print() is called inside a try-catch block.
Exception:
java.lang.RuntimeException:
at android.print.PrintManager$PrintDocumentAdapterDelegate$MyHandler.handleMessage (PrintManager.java:1103)
at android.os.Handler.dispatchMessage (Handler.java:106)
at android.os.Looper.loop (Looper.java:246)
at android.app.ActivityThread.main (ActivityThread.java:8506)
at java.lang.reflect.Method.invoke (Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:602)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1139)
Code that causes the exception:
val printManager = this.getSystemService(Context.PRINT_SERVICE) as PrintManager
val jobName = this.getString(R.string.app_name) + " Document"
try {
    printManager.print(jobName, pda, null)
} catch (ex: RuntimeException) {
    Toast.makeText(this, "Can't print pdf file", Toast.LENGTH_SHORT).show()
}
PrintDocumentAdapter.kt
var pda: PrintDocumentAdapter = object : PrintDocumentAdapter() {
    override fun onWrite(
        pages: Array<PageRange>,
        destination: ParcelFileDescriptor,
        cancellationSignal: CancellationSignal,
        callback: WriteResultCallback
    ) {
        var input: InputStream? = null
        var output: OutputStream? = null
        try {
            // Copy the source PDF straight into the print destination.
            input = uri?.let { contentResolver.openInputStream(it) }
            output = FileOutputStream(destination.fileDescriptor)
            val buf = ByteArray(1024)
            var bytesRead: Int
            if (input != null) {
                while (input.read(buf).also { bytesRead = it } > 0) {
                    output.write(buf, 0, bytesRead)
                }
            }
            callback.onWriteFinished(arrayOf(PageRange.ALL_PAGES))
        } catch (ee: FileNotFoundException) {
            // Code to catch FileNotFoundException
        } catch (e: Exception) {
            // Code to catch other exceptions
        } finally {
            try {
                // Safe calls here, so a null stream does not throw a NullPointerException.
                input?.close()
                output?.close()
            } catch (e: IOException) {
                e.printStackTrace()
            }
        }
    }

    override fun onLayout(
        oldAttributes: PrintAttributes,
        newAttributes: PrintAttributes,
        cancellationSignal: CancellationSignal,
        callback: LayoutResultCallback,
        extras: Bundle
    ) {
        if (cancellationSignal.isCanceled) {
            callback.onLayoutCancelled()
            return
        }
        val pdi = PrintDocumentInfo.Builder("Name of file")
            .setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT)
            .build()
        callback.onLayoutFinished(pdi, true)
    }
}
Or, if that is not possible, how can the password be removed from the PDF?

pdfView.fromAsset(String)
    .pages(0, 2, 1, 3, 3, 3) // all pages are displayed by default
    .enableSwipe(true) // allows to block changing pages using swipe
    .swipeHorizontal(false)
    .enableDoubletap(true)
    .defaultPage(0)
    // allows to draw something on the current page, usually visible in the middle of the screen
    .onDraw(onDrawListener)
    // allows to draw something on all pages, separately for every page. Called only for visible pages
    .onDrawAll(onDrawListener)
    .onLoad(onLoadCompleteListener) // called after document is loaded and starts to be rendered
    .onPageChange(onPageChangeListener)
    .onPageScroll(onPageScrollListener)
    .onError(onErrorListener)
    .onPageError(onPageErrorListener)
    .onRender(onRenderListener) // called after document is rendered for the first time
    // called on single tap, return true if handled, false to toggle scroll handle visibility
    .onTap(onTapListener)
    .onLongPress(onLongPressListener)
    .enableAnnotationRendering(false) // render annotations (such as comments, colors or forms)
    .password(null)
    .scrollHandle(null)
    .enableAntialiasing(true) // improve rendering a little bit on low-res screens
    // spacing between pages in dp. To define spacing color, set view background
    .spacing(0)
    .autoSpacing(false) // add dynamic spacing to fit each page on its own on the screen
    .linkHandler(DefaultLinkHandler)
    .pageFitPolicy(FitPolicy.WIDTH) // mode to fit pages in the view
    .fitEachPage(false) // fit each page to the view, else smaller pages are scaled relative to largest page
    .pageSnap(false) // snap pages to screen boundaries
    .pageFling(false) // make a fling change only a single page like ViewPager
    .nightMode(false) // toggle night mode
    .load();
Have you tried updating the ".password" line?
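If the document's open password is known, a minimal sketch (assuming the barteksc AndroidPdfViewer configurator shown above and a hypothetical password string) would be:
pdfView.fromUri(uri)
        // ...same configuration as above...
        .password("thePdfPassword") // hypothetical: the document's open password, if you know it
        .load();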

You do not need to generate a PDF without a password in order to print. As you said, you are using barteksc:android-pdf-viewer for viewing PDFs; it uses Pdfium for rendering, and Pdfium has a method to render a page to a bitmap.
void getBitmaps() {
    ImageView iv = (ImageView) findViewById(R.id.imageView);
    ParcelFileDescriptor fd = ...;
    int pageNum = 0;
    PdfiumCore pdfiumCore = new PdfiumCore(context);
    try {
        // For a protected file, the pdfium-android library also offers a
        // newDocument(fd, password) overload.
        PdfDocument pdfDocument = pdfiumCore.newDocument(fd);
        pdfiumCore.openPage(pdfDocument, pageNum);

        int width = pdfiumCore.getPageWidthPoint(pdfDocument, pageNum);
        int height = pdfiumCore.getPageHeightPoint(pdfDocument, pageNum);

        // ARGB_8888 - best quality, high memory usage, higher possibility of OutOfMemoryError
        // RGB_565 - little worse quality, twice less memory usage
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        pdfiumCore.renderPageBitmap(pdfDocument, bitmap, pageNum, 0, 0, width, height);
        // If you need to render annotations and form fields, you can use
        // the same method above, adding 'true' as the last param.

        iv.setImageBitmap(bitmap);

        printInfo(pdfiumCore, pdfDocument);
        pdfiumCore.closeDocument(pdfDocument); // important!
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
Store these bitmaps in an ArrayList and print them using the Android print framework; one way to do that is sketched below.
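A rough sketch of that last step, assuming a `bitmaps` ArrayList built from the rendering loop above and a placeholder output file. It uses android.graphics.pdf.PdfDocument (not Pdfium's PdfDocument) to assemble an unprotected PDF that the PrintDocumentAdapter from the question can then copy in onWrite():
// Sketch only: rebuild an unprotected PDF from the rendered page bitmaps.
// `bitmaps` and the output location are assumptions, not code from the question.
android.graphics.pdf.PdfDocument doc = new android.graphics.pdf.PdfDocument();
for (int i = 0; i < bitmaps.size(); i++) {
    Bitmap page = bitmaps.get(i);
    android.graphics.pdf.PdfDocument.PageInfo info =
            new android.graphics.pdf.PdfDocument.PageInfo.Builder(
                    page.getWidth(), page.getHeight(), i + 1).create();
    android.graphics.pdf.PdfDocument.Page pdfPage = doc.startPage(info);
    pdfPage.getCanvas().drawBitmap(page, 0f, 0f, null);
    doc.finishPage(pdfPage);
}
try (FileOutputStream out = new FileOutputStream(new File(context.getCacheDir(), "unprotected.pdf"))) {
    doc.writeTo(out);
} catch (IOException e) {
    e.printStackTrace();
} finally {
    doc.close();
}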

Related

GridView: only create Images when they would be visible

Hi there.
So I have the following problem: I have around 1500 images of playing cards, and I want to display them in a "gallery" the user can scroll through. I was able to create a GridView with the ImageGridCell object and to add images to it. My problem is that if I add all the images at once, Java naturally crashes with a heap error. I have the image URLs (local files) in a list. How could I make it load only, say, 15 images, and when I scroll, load the next 15 and unload the old ones? That way only the actually visible images would be loaded, not all 1500. How would I do this? I am completely out of ideas.
The Platform.runLater() is needed because of some issue with ControlsFX.
Here is my code:
public void initialize() {
    GridView<Image> gridView = new GridView<>();
    gridView.setCellFactory(gridView1 -> new ImageGridCell(true));
    Image image = new Image("C:\\Users\\nijog\\Downloads\\cardpictures\\01DE001.png");
    gridView.setCellWidth(340);
    gridView.setCellHeight(512);

    //Platform.runLater(() -> {
    //    for (int i = 0; i < 5000; i++) {
    //        gridView.getItems().add(image);
    //    }
    //});

    Platform.runLater(() -> gridView.getItems().addAll(createImageListFromCardFiles()));
    borderPane.setCenter(gridView);
}
protected List<Image> createImageListFromCardFiles() {
    List<Image> imageViewList = new ArrayList<>();
    App.getCardService().getCardArray().stream()
            //.filter(Card::isCollectible)
            .sorted(Comparator.comparingInt(Card::getCost))
            .sorted(Comparator.comparing(Card::isChampion).reversed())
            .skip(0)
            //.limit(100)
            .forEach(card -> {
                try {
                    String url = String.format(App.pictureFolderPath + "%s.png", card.getCardCode());
                    imageViewList.add(new Image(url));
                } catch (Exception e) {
                    System.out.println("Picture file not found [CardCode = " + card.getCardCode() + "]");
                    App.logger.writeLog(Logger.Operation.EXCEPTION, "Picture file not found [CardCode = " + card.getCardCode() + "]");
                }
            });
    return imageViewList;
}
You might not need to use the strategy you describe. You're displaying the images in cells of size 340x512, which is 174,080 pixels. Image storage is 4 bytes per pixel, so this is 696,320 bytes per image; 1500 of them will consume about 1GB. You just need to make sure you load the image at the size you are displaying it (instead of its native size):
// imageViewList.add(new Image(url));
imageViewList.add(new Image(url, 340, 512, true, true, true));
If you need an image at full size later (e.g. if you want the user to select an image from your grid view and display it in a bigger pane), you'd just need to reload it from the url.
If you do need to implement the strategy you describe, GridView supports that out of the box. Just keep a list of the URLs, instead of the Images, and use a custom GridCell to load the image as needed. This will consume significantly less memory, at the cost of a lot more I/O (loading the images) and CPU (parsing the image format).
Make the items for the GridView the image urls, stored as Strings.
Then you can do something like:
GridView<String> gridView = new GridView<>();
gridView.getItems().addAll(getAllImageURLs());

gridView.setCellFactory(gv -> new GridCell<>() {

    private final ImageView imageView = new ImageView();

    {
        imageView.fitWidthProperty().bind(widthProperty());
        imageView.fitHeightProperty().bind(heightProperty());
        imageView.setPreserveRatio(true);
    }

    @Override
    protected void updateItem(String url, boolean empty) {
        super.updateItem(url, empty);
        if (empty || url == null) {
            setGraphic(null);
        } else {
            double w = getGridView().getCellWidth();
            double h = getGridView().getCellHeight();
            imageView.setImage(new Image(url, w, h, true, true, true));
            setGraphic(imageView);
        }
    }
});
protected List<String> getAllImageURLs() {
    return App.getCardService().getCardArray().stream()
            // isn't the first sort redundant here?
            .sorted(Comparator.comparingInt(Card::getCost))
            .sorted(Comparator.comparing(Card::isChampion).reversed())
            .map(card -> String.format(App.pictureFolderPath + "%s.png", card.getCardCode()))
            .collect(Collectors.toList());
}

Image to text with the same format - ML Kit for Firebase

I'm developing an app to extract the text from nutrition-facts labels. Using ML Kit for Firebase I achieved that, but I have one problem: the text is not output in the same layout as in the image. Here is my code for text recognition.
// ---- TextRecognizer START ----
BitmapDrawable bitmapDrawable = (BitmapDrawable) mPreview.getDrawable();
Bitmap bitmap = bitmapDrawable.getBitmap();
final FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

FirebaseVisionCloudTextRecognizerOptions options = new FirebaseVisionCloudTextRecognizerOptions.Builder()
        .setLanguageHints(Arrays.asList("en", "ar"))
        .build();
// [END set_detector_options_cloud]

// [START get_detector_cloud]
// Or, to change the default settings:
FirebaseVisionTextRecognizer detector = FirebaseVision.getInstance()
        .getCloudTextRecognizer(options);
// [END get_detector_cloud]

// [START run_detector_cloud]
Task<FirebaseVisionText> result2 = detector.processImage(image)
        .addOnSuccessListener(new OnSuccessListener<FirebaseVisionText>() {
            @Override
            public void onSuccess(FirebaseVisionText result) {
                // Task completed successfully
                // [START_EXCLUDE]
                // [START get_text_cloud]
                if (result.getTextBlocks().isEmpty()) {
                    // Check for an empty result before looping; the original check
                    // inside the loop could never run when no text was found.
                    mResultEt.setText("NO Text Found");
                    return;
                }
                StringBuilder sb = new StringBuilder();
                for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
                    Rect boundingBox = block.getBoundingBox();
                    Point[] cornerPoints = block.getCornerPoints();
                    String text = block.getText();
                    for (FirebaseVisionText.Line line : block.getLines()) {
                        sb.append(line.getText());
                        sb.append("\n");
                        for (FirebaseVisionText.Element element : line.getElements()) {
                            // individual words/elements are available here if needed
                        }
                    }
                }
                mResultEt.setText(sb);
                // [END get_text_cloud]
                // [END_EXCLUDE]
            }
        });
This is the image I want to extract the text from, and this is the result, but the layout is not the same as in the image.
I tried different solutions, such as adding a new line or a tab, but the result is the same.
By the way, I want to use the numbers in the image to do some calculations.
If anyone can help me with this, I would appreciate it.
ML Kit does not automatically detect the layout for now, but each detected text element has its own coordinates. One solution is to use those coordinates to decide the layout manually.
Say 'Serving Size' and '1 Can' sit on roughly the same horizontal line; then you may want to group them together, for example as sketched below.
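A minimal sketch of that grouping, assuming the `result` object from the success listener above and an arbitrary 20 px row tolerance you would tune for your images:
// Collect all detected lines, sort them top-to-bottom (then left-to-right),
// and start a new output row whenever the vertical position jumps.
List<FirebaseVisionText.Line> lines = new ArrayList<>();
for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
    for (FirebaseVisionText.Line line : block.getLines()) {
        if (line.getBoundingBox() != null) {
            lines.add(line);
        }
    }
}
Collections.sort(lines, (a, b) -> {
    Rect ra = a.getBoundingBox();
    Rect rb = b.getBoundingBox();
    return (ra.top != rb.top) ? ra.top - rb.top : ra.left - rb.left;
});

StringBuilder sb = new StringBuilder();
int rowTolerancePx = 20; // assumption: tune for your image resolution
int lastTop = Integer.MIN_VALUE;
for (FirebaseVisionText.Line line : lines) {
    Rect box = line.getBoundingBox();
    if (lastTop != Integer.MIN_VALUE) {
        // Same row -> tab-separate; new row -> line break.
        sb.append(Math.abs(box.top - lastTop) <= rowTolerancePx ? "\t" : "\n");
    }
    sb.append(line.getText());
    lastTop = box.top;
}
mResultEt.setText(sb);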

Android camera2 preview image disorder when saved using ImageReader

I am taking a series of pictures using the Android Camera2 API for real-time pose estimation and environment reconstruction (the SLAM problem). Currently I simply save all of these pictures to my SD card for offline processing.
I set up the processing pipeline according to Google's Camera2Basic, using a TextureView as well as an ImageReader, both of which are set as target surfaces for a repeating preview request.
mButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (mIsShooting) {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = false;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            try {
                mCaptureSession.stopRepeating();
                mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
                mIsShooting = true;
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
});
The ImageReader is added/removed when the button is pressed. The ImageReader's OnImageAvailableListener is implemented as follows:
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        if (null == img) {
            return;
        }
        if (img.getTimestamp() <= mLatestFrameTime) {
            Log.i(Tag, "disorder detected!");
            // Close the image before bailing out so the reader's buffer is not leaked.
            img.close();
            return;
        }
        mLatestFrameTime = img.getTimestamp();
        ImageSaver saver = new ImageSaver(img, img.getTimestamp());
        saver.run();
    }
};
I use acquireLatestImage (with buffer size set to 2) to discard old frames and have also checked the image's timestamp to make sure they are monotonously increasing.
The reader does receive images at an acceptable rate (about 25 fps). However, a closer look at the saved image sequence shows they are not always saved in chronological order.
The following pictures come from a long sequence shot by the program (Image 1, Image 2, Image 3; sorry for not being able to post the pictures directly):
Such disorder does not occur very often, but it can occur at any time and does not seem to be an initialization problem. I suppose it has something to do with the ImageReader's buffer size, as with a larger buffer fewer "flashbacks" occur. Does anyone have the same problem?
I finally found that the disorder disappears when the ImageReader's format is set to YUV_420_888 in its constructor. Originally I had set this field to JPEG.
Using the JPEG format incurs not only a large processing delay but also the disorder. I guess the conversion from image sensor data to the desired format uses other hardware, such as a DSP or GPU, which does not guarantee chronological order. A minimal sketch of the change is below.
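Sketch of the constructor change (the width, height, and listener names here are placeholders, not the poster's exact fields):
// Create the ImageReader with YUV_420_888 instead of ImageFormat.JPEG.
ImageReader reader = ImageReader.newInstance(previewWidth, previewHeight,
        ImageFormat.YUV_420_888, /* maxImages= */ 2);
reader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);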
Are you using TEMPLATE_STILL_CAPTURE for the capture requests when you enable the ImageReader, or just TEMPLATE_PREVIEW? What devices are you seeing issues with?
If you're using STILL_CAPTURE, make sure you check if the device supports the ENABLE_ZSL flag, and set it to false. When it is set to true (generally the default on devices that support it, for the STILL_CAPTURE template), images may be returned out of order since there's a zero-shutter-lag queue in place within the camera device.
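A hedged sketch of that check, assuming `characteristics` is the CameraCharacteristics of the opened camera and `mPreviewRequestBuilder` is the request builder used above (CONTROL_ENABLE_ZSL requires API 26):
// Disable the zero-shutter-lag queue so captures come back in order.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O
        && characteristics.getAvailableCaptureRequestKeys().contains(CaptureRequest.CONTROL_ENABLE_ZSL)) {
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_ENABLE_ZSL, false);
}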

Android Face Detection doesn't find faces

I want to find a face in an image taken with the camera, but the detector can't find any faces. My app takes a photo and saves it in a file.
Below is the code that creates the file, starts the camera and, in onActivityResult, tries to detect a face and save the file path to the Room database. The file is saved correctly and shown in the RecyclerView as expected, but the face detector does not find any faces. How can I fix this?
private fun takePhoto() {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    if (takePictureIntent.resolveActivity(activity?.packageManager!!) != null) {
        val photoFile: File
        try {
            photoFile = createImageFile()
        } catch (e: IOException) {
            error { e }
        }
        val photoURI = FileProvider.getUriForFile(activity?.applicationContext!!, "com.nasibov.fakhri.neurelia.fileprovider", photoFile)
        takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
        takePictureIntent.putExtra("android.intent.extras.CAMERA_FACING", 1)
        startActivityForResult(takePictureIntent, PhotoFragment.REQUEST_TAKE_PHOTO)
    }
}

@Suppress("SimpleDateFormat")
private fun createImageFile(): File {
    val date = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    val fileName = "JPEG_$date"
    val filesDir = activity?.getExternalFilesDir(Environment.DIRECTORY_PICTURES)
    val image = File.createTempFile(fileName, ".jpg", filesDir)
    mCurrentImage = image
    return mCurrentImage
}

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
        val bitmap = BitmapFactory.decodeFile(mCurrentImage.absolutePath)
        val frame = Frame.Builder().setBitmap(bitmap).build()
        val detectedFaces = mFaceDetector.detect(frame)
        mViewModel.savePhoto(mCurrentImage)
    }
}
The Android Face Detection API tracks faces in photos and videos using landmarks such as the eyes, nose, ears, cheeks, and mouth.
Rather than detecting individual features, the API detects the face as a whole and then, if configured, detects the landmarks and classifications. Besides, the API can detect faces at various angles.
https://www.journaldev.com/15629/android-face-detection
The Android SDK contains an API for face detection: the android.media.FaceDetector class. This class detects faces in an image. To detect faces, call the findFaces method of the FaceDetector class; it returns the number of detected faces and fills a FaceDetector.Face[] array. Please note that findFaces supports only bitmaps in RGB_565 format at this time.
Each instance of the FaceDetector.Face class contains the following information:
Confidence that it’s actually a face – a float value between 0 and 1.
Distance between the eyes – in pixels.
Position (x, y) of the mid-point between the eyes.
Rotations (X, Y, Z).
Unfortunately, it doesn’t contain a framing rectangle that includes the detected face.
Here is sample source code for face detection. It implements a custom View that shows a saved image from the SD card and draws transparent red circles on the detected faces.
class Face_Detection_View extends View {

    private static final int MAX_FACES = 10;
    private static final String IMAGE_FN = "face.jpg";

    private Bitmap background_image;
    private FaceDetector.Face[] faces;
    private int face_count;

    // preallocate for onDraw(...)
    private PointF tmp_point = new PointF();
    private Paint tmp_paint = new Paint();

    public Face_Detection_View(Context context) {
        super(context);
        // Load an image from the SD card
        updateImage(Environment.getExternalStorageDirectory() + "/" + IMAGE_FN);
    }

    public void updateImage(String image_fn) {
        // Set internal configuration to RGB_565
        BitmapFactory.Options bitmap_options = new BitmapFactory.Options();
        bitmap_options.inPreferredConfig = Bitmap.Config.RGB_565;

        background_image = BitmapFactory.decodeFile(image_fn, bitmap_options);

        FaceDetector face_detector = new FaceDetector(
                background_image.getWidth(), background_image.getHeight(), MAX_FACES);
        faces = new FaceDetector.Face[MAX_FACES];

        // The bitmap must be in 565 format (for now).
        face_count = face_detector.findFaces(background_image, faces);
        Log.d("Face_Detection", "Face Count: " + String.valueOf(face_count));
    }

    public void onDraw(Canvas canvas) {
        canvas.drawBitmap(background_image, 0, 0, null);
        for (int i = 0; i < face_count; i++) {
            FaceDetector.Face face = faces[i];
            tmp_paint.setColor(Color.RED);
            tmp_paint.setAlpha(100);
            face.getMidPoint(tmp_point);
            canvas.drawCircle(tmp_point.x, tmp_point.y, face.eyesDistance(), tmp_paint);
        }
    }
}

BitmapFactory.decodeByteArray returns Bitmap with invalid dimensions?

I am trying to decode a JPEG buffer (from Camera.takePicture) using android.graphics.BitmapFactory. The Android documentation states that decodeByteArray "Returns the decoded bitmap, or null if the image could not be decoded."
No exception is thrown, yet I get a non-null object with invalid width and height:
android.graphics.Bitmap#41c7d960
[
mBuffer = ...
mFinalizer = android.graphics.Bitmap$BitmapFinalizer#41ca20c0
mWidth = -1
mHeight = -1
mDensity = 240
mLayoutBounds = null
mNativeBitmap = 1373749696
...
]
My function call is as follows:
public void Func(byte[] jpegBuffer) {
    try {
        mBitmap = android.graphics.BitmapFactory.decodeByteArray(jpegBuffer, 0, jpegBuffer.length);
    } catch (Exception e) {
        mLog.e("Problem during jpeg decompression: " + e.toString());
    }
}
What is going on? Is the bitmap decoded successfully or not? If yes, why are its dimensions invalid? If not, why am I not receiving a null result?
Relevant question
Basically, the fields mWidth and mHeight (as seen in the Eclipse debugger) are lazily evaluated and not accessible programmatically, meaning they default to -1 until getWidth() and getHeight() are called, respectively.
In conclusion, the android.graphics.Bitmap class may fool the debugging programmer by not updating some private fields that are visible in the debugger; a quick check is sketched below.
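A minimal sketch of that check, assuming the same `jpegBuffer` as in the question: the public accessors force the real values, so they are the reliable way to confirm the decode succeeded rather than reading the private fields in the debugger.
Bitmap bmp = BitmapFactory.decodeByteArray(jpegBuffer, 0, jpegBuffer.length);
if (bmp == null) {
    Log.e("Decode", "decodeByteArray returned null - the buffer is not a valid image");
} else {
    // getWidth()/getHeight() return the actual dimensions even when the debugger shows -1.
    Log.d("Decode", "width=" + bmp.getWidth() + " height=" + bmp.getHeight());
}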
