I have a weird problem.
I have an activity, and within this activity I have a layout that I want to turn into an image in order to share it on social networks.
This layout contains different dynamic images and texts, which is why I can't just store it as a static image and share it on demand. I need to generate the image right when the user touches the share button.
The problem is that I need to adjust the layout before sharing. Here is what I do:
// Move the profile picture by updating its margins
ImageView profilPic = (ImageView) dashboardView.findViewById(R.id.profilePic);
RelativeLayout.LayoutParams params = (RelativeLayout.LayoutParams) profilPic.getLayoutParams();
params.setMargins(10, 62, 0, 0);
profilPic.invalidate();
profilPic.requestLayout();
First I modify the margins of my layout, then I make an image of it:
// Draw the dashboard view into an offscreen bitmap
Bitmap bitmap = null;
try {
    bitmap = Bitmap.createBitmap(dashboardView.getWidth(),
            dashboardView.getHeight(), Bitmap.Config.ARGB_4444);
    dashboardView.draw(new Canvas(bitmap));
} catch (Exception e) {
    // Logger.e(e.toString());
}
// Write the bitmap to external storage as a PNG
FileOutputStream fileOutputStream = null;
File path = Environment.getExternalStorageDirectory();
File file = new File(path, "wayzupDashboard" + ".png");
try {
    fileOutputStream = new FileOutputStream(file);
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
BufferedOutputStream bos = new BufferedOutputStream(fileOutputStream);
bitmap.compress(CompressFormat.PNG, 100, bos);
try {
    bos.flush();
    bos.close();
    fileOutputStream.flush();
    fileOutputStream.close();
} catch (IOException e) {
    e.printStackTrace();
}
And finally I share it.
Basically it works, except that it captures the layout BEFORE it is redrawn with the modified layout params.
I need to capture this layout once, and only when the new layout parameters have been taken into account, after the layout has been redrawn.
If I remove the capture code, it works: when I touch the share button I see the layout moving. But when the capture code is in place, it just captures the layout before the margins are modified.
How can I ensure that the layout is redrawn before capturing it?
For the record, to correctly capture the layout only once the view was set up, I needed to first set up the view and then capture the layout in a delayed thread. This is the only way I found to make it work.
public void onShareButton() {
    dashboardController.setupViewBeforeSharing();
    // Give the layout pass time to run before capturing the view
    new Timer().schedule(new TimerTask() {
        @Override
        public void run() {
            // Draw the dashboard view into an offscreen bitmap
            Bitmap bitmap = null;
            try {
                bitmap = Bitmap.createBitmap(dashboardView.getWidth(),
                        dashboardView.getHeight(), Bitmap.Config.ARGB_4444);
                dashboardView.draw(new Canvas(bitmap));
            } catch (Exception e) {
                // Logger.e(e.toString());
            }
            // Write the bitmap to external storage as a PNG
            FileOutputStream fileOutputStream = null;
            File path = Environment.getExternalStorageDirectory();
            File file = new File(path, "wayzupDashboard" + ".png");
            try {
                fileOutputStream = new FileOutputStream(file);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
            BufferedOutputStream bos = new BufferedOutputStream(fileOutputStream);
            bitmap.compress(CompressFormat.PNG, 100, bos);
            try {
                bos.flush();
                bos.close();
                fileOutputStream.flush();
                fileOutputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            // Share the saved file
            Intent share = new Intent(Intent.ACTION_SEND);
            share.setType("image/png");
            share.putExtra(Intent.EXTRA_TEXT, R.string.addPassengerButton);
            share.putExtra(Intent.EXTRA_STREAM,
                    Uri.parse("file://" + file.getAbsolutePath()));
            startActivity(Intent.createChooser(share, "Share image"));
            // Restore the layout on the UI thread
            MainActivity.this.runOnUiThread(new Runnable() {
                public void run() {
                    dashboardController.setupViewAfterSharing();
                }
            });
        }
    }, 300);
}
An easier way is to set up a listener for the layout pass on the dashboardView view.
Set up the listener using View.addOnLayoutChangeListener() - addOnLayoutChangeListener.
Set the listener before changing the layout parameters and remove it after the image has been saved to persistence.
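A minimal sketch of that approach, assuming the same dashboardView, profilPic and params as in the question, and a hypothetical captureAndShare() helper that runs the existing capture-and-share code:
dashboardView.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
    @Override
    public void onLayoutChange(View v, int left, int top, int right, int bottom,
                               int oldLeft, int oldTop, int oldRight, int oldBottom) {
        // The layout pass with the new margins has finished: stop listening and capture
        v.removeOnLayoutChangeListener(this);
        captureAndShare();
    }
});
// Now trigger the layout change; the listener fires once it has been applied
params.setMargins(10, 62, 0, 0);
profilPic.requestLayout();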
Hope it helps.
Related
Hello guys, I'm trying to resize a bitmap using Bitmap.createScaledBitmap, but that command creates a new image on my device. I need a solution that resizes the bitmap without creating a new image.
My code:
if (requestCode == 0) {
    Bitmap bitmap = null;
    try {
        // Load the picked image, scale it down and show it
        uri_image_boi = data.getData();
        bitmap = MediaStore.Images.Media.getBitmap(getActivity().getContentResolver(), uri_image_boi);
        Bitmap resizedBitmap = Bitmap.createScaledBitmap(bitmap, 600, 400, false);
        image_boi.setImageDrawable(new BitmapDrawable(resizedBitmap));
        botao_adicionar_foto.setAlpha(0);
        resize = getImageUri(this, resizedBitmap);
    } catch (Exception e) {
        // Swallowing the exception hides failures; at least log it
        e.printStackTrace();
    }
}
}
I'm using JavaCV. My program needs to take a webcam photo and save it to a folder on the desktop.
Here is the path to the folder:
public static String webcamPath = System.getProperty("user.home") + "/Desktop/folder/webcam.png";
This is how I save the image:
FrameGrabber grabber = new VideoInputFrameGrabber(0);
try {
    grabber.start();
    Thread.sleep(1000);
    while (true) {
        IplImage img = grabber.grab();
        if (img != null) {
            cvSaveImage(webcamPath, img);
            grabber.stop();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
But when the webcam starts working, it can't save the image and I'm getting this exception:
com.googlecode.javacv.FrameGrabber$Exception: videoInput is null. (Has start() been called?)
So is there any way to save the IplImage to a folder on the desktop?
Thanks.
In your code, this is the flow:
start the FrameGrabber
start loop
grab
stop the grabber
try to grab again <<< exception occurs because the grabber was not opened again
My guess is to place the start() call inside the loop:
FrameGrabber grabber = new VideoInputFrameGrabber(0);
try {
    while (true) {
        grabber.start();
        Thread.sleep(1000);
        IplImage img = grabber.grab();
        if (img != null) {
            cvSaveImage(webcamPath, img);
            grabber.stop();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
Save your file to a specific directory by adding:
cvSaveImage(<path>\\<imagename>, img);
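For example, a minimal sketch of saving to a folder on the desktop, as in the question; the folder is assumed to already exist, and cvSaveImage is the same call used in the question's code:
// Build an absolute path to a folder on the desktop and save the grabbed frame there
String desktopFolder = System.getProperty("user.home") + File.separator + "Desktop" + File.separator + "folder";
String imagePath = desktopFolder + File.separator + "webcam.png";
cvSaveImage(imagePath, img);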
I have a JPanel with 3 JButtons, and I need only two of them to be captured...
public static void grabScreenShot(JPanel panel) {
    // Paint the panel into an offscreen image
    BufferedImage image = (BufferedImage) panel.createImage(
            panel.getSize().width, panel.getSize().height);
    panel.paint(image.getGraphics());
    // Make sure the output directory exists
    File file = new File("Customers");
    if (!file.exists()) {
        file.mkdir();
    }
    try {
        // Save the screenshot with a timestamped file name
        file = new File("Customers" + File.separator
                + String.valueOf(System.currentTimeMillis()));
        ImageIO.write(image, "png", file);
        System.out.println("Image was created");
    } catch (IOException e) {
        System.out.println("Had trouble writing the image.");
        e.printStackTrace();
    }
}
How can I avoid capturing the unnecessary components?
You can try to override paintComponent() of the buttons and introduce a flag, needPaint, which is true by default:
if (needPaint) {
super.paintComponent(g);
}
In your grabScreenShot(), set the flag to false for the button that should be hidden, and reset it after the panel.paint(image.getGraphics()) call.
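A minimal sketch of such a button (the class name HideableButton and its setter are just placeholders, not part of the original code):
import java.awt.Graphics;
import javax.swing.JButton;

// Button that can be temporarily excluded from painting (and thus from the screenshot)
class HideableButton extends JButton {
    private boolean needPaint = true; // painted by default

    HideableButton(String text) {
        super(text);
    }

    void setNeedPaint(boolean needPaint) {
        this.needPaint = needPaint;
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        if (needPaint) {
            super.paintComponent(g);
        }
    }
}
// In grabScreenShot():
// hiddenButton.setNeedPaint(false);
// panel.paint(image.getGraphics());
// hiddenButton.setNeedPaint(true);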
I have a program that uses OpenCV to take a picture with your webcam. It works like a charm on Windows, yet it doesn't work on OS X. The frame where the webcam view should appear stays empty, and when I take a picture, it just shows a black void, as if it couldn't find the webcam.
public void run(){
try {
grabber = new VideoInputFrameGrabber(0);
grabber.start();
while (active) {
IplImage originalImage = grabber.grab();
Label.setIcon(new ImageIcon( originalImage.getBufferedImage() ));
}
grabber.stop();
grabber.flush();
} catch (Exception ex) {
//Logger.getLogger(ChPanel.class.getName()).log(Level.SEVERE, null, ex);
}
}
public BufferedImage saveImage(){
IplImage img;
try {
//capture image
img = grabber.grab();
// save to file
File outputfile = new File(Project.getInstance().getFileURLStr() + " capture" + fotoCount++ + ".jpg");
ImageIO.write(img.getBufferedImage(), "jpg", outputfile);
//get file and set it in the project library
BufferedImage ImportFile = ImageIO.read(outputfile);
Project p = Project.getInstance();
MainScreen ms = MainScreen.getInstance();
ImageIcon takenPhoto = new ImageIcon(ImportFile);
p.setNextImage(takenPhoto);
ms.setPanels();
return ImportFile;
} catch (com.googlecode.javacv.FrameGrabber.Exception e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
Does anyone know how to solve this? I suspect it has something to do with the rights to use the webcam, or something like that.
grabber = new VideoInputFrameGrabber(0);
Here 0 specifies capture device number 0.
Maybe device number 0 is not available for video capture.
Use this code to get the list of devices and their respective numbers:
import com.googlecode.javacv.cpp.videoInputLib.videoInput;
class Main {
    public static void main(String[] args) {
        int n = videoInput.listDevices();
        for (int i = 0; i < n; i++) {
            System.out.println(i + " = " + videoInput.getDeviceName(i));
        }
    }
}
And then specify the number for that device:
grabber = new VideoInputFrameGrabber(1); // 0 or 1 or 2
To interact with the webcam I use the webcam-capture library; you can easily add the OpenCV dependency with Maven. This is a great library.
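For reference, a minimal sketch with webcam-capture (assuming the com.github.sarxos webcam-capture dependency is on the classpath; the class name WebcamSnapshot is just a placeholder):
import java.io.File;
import javax.imageio.ImageIO;
import com.github.sarxos.webcam.Webcam;

public class WebcamSnapshot {
    public static void main(String[] args) throws Exception {
        // Open the default webcam, grab one frame and save it as a PNG
        Webcam webcam = Webcam.getDefault();
        webcam.open();
        ImageIO.write(webcam.getImage(), "PNG", new File("capture.png"));
        webcam.close();
    }
}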
Can someone tell me how I can use the ZXing library for an augmented reality app? I know the easiest way to use ZXing is via an Intent, but I need the camera view, so I can't use the Barcode Scanner app.
I have a SurfaceHolder.Callback which is added to the main activity and overrides the following method:
@Override
public void surfaceCreated(SurfaceHolder holder) {
mCamera = Camera.open();
try {
mCamera.setPreviewDisplay(holder);
} catch (IOException e) {
Log.d(TAG, "Can not set surface holder");
}
mCamera.startPreview();
Parameters parameters = mCamera.getParameters();
parameters.setPreviewSize(1280, 720);
parameters.setPictureSize(1280, 720);
mCamera.setParameters(parameters);
QrCodeReader reader = new QrCodeReader();
mCamera.setPreviewCallback(reader);
}
The picture size I set should be available, because it is in the list returned by parameters.getSupportedPictureSizes().
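If in doubt, the supported preview sizes can be checked as well, since onPreviewFrame() receives frames in the preview size; a small sketch using the same parameters and TAG as above:
// Log every preview size the camera reports, so an unsupported 1280x720 shows up immediately
for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
    Log.d(TAG, "Supported preview size: " + size.width + "x" + size.height);
}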
And this method in the QrCodeReader class which implements PreviewCallback:
private Result result;
private MultiFormatReader reader = new MultiFormatReader();
private boolean init = false;
public QrCodeReader(){
Hashtable<DecodeHintType, Object> hints = new Hashtable<DecodeHintType, Object>();
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
reader.setHints(hints);
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
PlanarYUVLuminanceSource source = new PlanarYUVLuminanceSource(data,
1280, 720, 0, 0, 1280, 720, true);
HybridBinarizer hybBin = new HybridBinarizer(source);
BinaryBitmap bitmap = new BinaryBitmap(hybBin);
try {
result = reader.decodeWithState(bitmap);
Log.d("Result", "Result found!");
} catch (NotFoundException e) {
Log.d(TAG, "NotFoundException");
} finally {
reader.reset();
}
}
The Logcat only shows NotFoundException.
NotFoundException is normal. If the frame doesn't have a barcode, that's the result. This doesn't mean anything is wrong per se. Keep scanning.
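In other words, treat NotFoundException as "no barcode in this frame" and only react when decodeWithState() returns a result; a small sketch building on the onPreviewFrame() above (stopping the callback once a code is found is optional):
try {
    result = reader.decodeWithState(bitmap);
    Log.d("Result", "Result found: " + result.getText());
    // Optional: stop receiving preview frames once a barcode has been decoded
    camera.setPreviewCallback(null);
} catch (NotFoundException e) {
    // No barcode in this frame; just wait for the next one
} finally {
    reader.reset();
}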