Why does Kernel32 OpenProcess function return null? - java

I'm trying to make an application that reads the memory of another (non-Java, 32-bit) application using JNA. So far I know how to find the process ID and the base addresses of its modules. Right before reading the memory I need to open the process, but the OpenProcess function simply returns null. I'm using Windows 10.
// process id (pid) is known
final int PROCESS_VM_READ = 0x0010;
final int PROCESS_QUERY_INFORMATION = 0x0400;
WinNT.HANDLE processHandle = Kernel32.INSTANCE.OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, true, pid);
How can I get a process handle?

You need to enable the debug privilege for your current process in order to query information about processes owned by anyone other than the current user. The link shows the code in C, but you can port that code to JNA.
This is a one-time method call when your program starts up.
Here's how I do it (hat tip to @RbMm for improvements):
/**
 * Enables debug privileges for this process, required for OpenProcess() to get
 * processes other than the current user
 *
 * @return {@code true} if debug privileges were successfully enabled.
 */
private static boolean enableDebugPrivilege() {
    HANDLEByReference hToken = new HANDLEByReference();
    boolean success = Advapi32.INSTANCE.OpenProcessToken(Kernel32.INSTANCE.GetCurrentProcess(),
            WinNT.TOKEN_QUERY | WinNT.TOKEN_ADJUST_PRIVILEGES, hToken);
    if (!success) {
        LOG.error("OpenProcessToken failed. Error: {}", Native.getLastError());
        return false;
    }
    try {
        WinNT.LUID luid = new WinNT.LUID();
        success = Advapi32.INSTANCE.LookupPrivilegeValue(null, WinNT.SE_DEBUG_NAME, luid);
        if (!success) {
            LOG.error("LookupPrivilegeValue failed. Error: {}", Native.getLastError());
            return false;
        }
        WinNT.TOKEN_PRIVILEGES tkp = new WinNT.TOKEN_PRIVILEGES(1);
        tkp.Privileges[0] = new WinNT.LUID_AND_ATTRIBUTES(luid, new DWORD(WinNT.SE_PRIVILEGE_ENABLED));
        success = Advapi32.INSTANCE.AdjustTokenPrivileges(hToken.getValue(), false, tkp, 0, null, null);
        int err = Native.getLastError();
        if (!success) {
            LOG.error("AdjustTokenPrivileges failed. Error: {}", err);
            return false;
        } else if (err == WinError.ERROR_NOT_ALL_ASSIGNED) {
            LOG.debug("Debug privileges not enabled.");
            return false;
        }
    } finally {
        Kernel32.INSTANCE.CloseHandle(hToken.getValue());
    }
    return true;
}
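Continuing with the JNA classes already used above (Kernel32, WinNT, Native), here is a minimal usage sketch of the call sequence at startup; pid is assumed to be the target process id from the question, and the error codes in the comment are just the usual suspects, not something specific to this case:
// One-time setup at program start: without SeDebugPrivilege, OpenProcess on
// other users' (or protected) processes fails with access denied.
if (!enableDebugPrivilege()) {
    LOG.warn("Running without SeDebugPrivilege; some processes will not be readable.");
}

final int PROCESS_VM_READ = 0x0010;
final int PROCESS_QUERY_INFORMATION = 0x0400;
WinNT.HANDLE processHandle = Kernel32.INSTANCE.OpenProcess(
        PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, true, pid);
if (processHandle == null) {
    // Native.getLastError() tells you why the call failed,
    // e.g. 5 (ERROR_ACCESS_DENIED) or 87 (ERROR_INVALID_PARAMETER).
    LOG.error("OpenProcess failed. Error: {}", Native.getLastError());
}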

Related

Android Rename File using Google Drive REST API v3?

How do I rename a file using the Google Drive REST API on Android? I have searched the internet as best I could, but I can't find out how to do this.
I am trying to write a sync method that moves and renames the cloud files if it detects that the local copy has been moved or renamed:
void syncMetadataOnly (com.google.api.services.drive.model.File cloud,
                       java.io.File local) throws IOException {
    Workspace.FINF fileInfo = Workspace.getFileInfo (this, local); // Just my metadata object.
    Map<String, String> appProperties = cloud.getAppProperties ();
    // We keep track of the last rename and move in our private app properties:
    long cloudLastRename = appProperties.containsKey ("last-rename") ?
        Long.valueOf (appProperties.get ("last-rename")) : 0;
    long cloudLastMove = appProperties.containsKey ("last-move") ?
        Long.valueOf (appProperties.get ("last-move")) : 0;
    boolean needUpdate = false;
    boolean needName = false;
    boolean needMove = false;
    if (fileInfo.lastRename > cloudLastRename) {
        // The file was renamed locally since the last sync.
        needName = true;
        needUpdate = true;
    } else fileInfo.title = cloud.getName ();
    String oldRecognizedParent = null;
    if (fileInfo.lastMove > cloudLastMove) {
        // The file was moved to a different folder locally since the last sync.
        oldRecognizedParent = getFirstKnownParentId (cloud); // May be null, if not found.
        needMove = true;
        needUpdate = true;
    }
    if (needUpdate) {
        cloud.setAppProperties (appProperties);
        Drive.Files.Update update = mDriveService.files ().update (fileInfo.driveResourceId, null);
        if (needName) update.set /// How do I rename the file?
        if (needMove) {
            if (oldRecognizedParent != null)
                update.setRemoveParents (oldRecognizedParent);
            update.setAddParents (fileInfo.driveParentId); // Set to the NEW parent's ID.
        }
        com.google.api.services.drive.model.File result = update.execute ();
    }
}
The closest answer I have found is this, but do I really have to use raw HTTP for this?
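Assuming the standard Drive v3 Java client (com.google.api.services.drive), a rename is usually done by passing a File body that carries only the new name to files().update(), instead of passing null as the second argument. A rough sketch using the question's own variables; newName is a hypothetical placeholder:
// Sketch, not verified against the poster's setup: rename by sending a
// metadata-only File body to files().update().
com.google.api.services.drive.model.File body =
        new com.google.api.services.drive.model.File();
body.setName(newName); // hypothetical new title for the file

Drive.Files.Update update = mDriveService.files().update(fileInfo.driveResourceId, body);
if (oldRecognizedParent != null)
    update.setRemoveParents(oldRecognizedParent);   // optional: drop the old parent
update.setAddParents(fileInfo.driveParentId);       // optional: move at the same time
com.google.api.services.drive.model.File renamed = update.execute();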

Amazon Java SDK - Upload to S3

I am using the Amazon Java SDK to upload files to Amazon S3.
While using version 1.10.62 of the aws-java-sdk artifact, the following code worked perfectly (note that all the wiring behind the scenes works):
public boolean uploadInputStream(String destinationBucketName, InputStream inputStream, Integer numberOfBytes, String destinationFileKey, Boolean isPublic) {
    boolean result = false; // declared here so the snippet is self-contained
    try {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(numberOfBytes);
        PutObjectRequest putObjectRequest = new PutObjectRequest(destinationBucketName, destinationFileKey, inputStream, metadata);
        if (isPublic) {
            putObjectRequest.withCannedAcl(CannedAccessControlList.PublicRead);
        } else {
            putObjectRequest.withCannedAcl(CannedAccessControlList.AuthenticatedRead);
        }
        final Upload myUpload = amazonTransferManager.upload(putObjectRequest);
        myUpload.addProgressListener(new ProgressListener() {
            // This method is called periodically as your transfer progresses
            public void progressChanged(ProgressEvent progressEvent) {
                LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
                LOG.info("progressEvent.getEventCode():" + progressEvent.getEventCode());
                if (progressEvent.getEventCode() == ProgressEvent.COMPLETED_EVENT_CODE) {
                    LOG.info("Upload complete!!!");
                }
            }
        });
        long uploadStartTime = System.currentTimeMillis();
        long startTimeInMillis = System.currentTimeMillis();
        long logGap = 1000 * loggingIntervalInSeconds;
        while (!myUpload.isDone()) {
            if (System.currentTimeMillis() - startTimeInMillis >= logGap) {
                logUploadStatistics(myUpload, Long.valueOf(numberOfBytes));
                startTimeInMillis = System.currentTimeMillis();
            }
        }
        long totalUploadDuration = System.currentTimeMillis() - uploadStartTime;
        float totalUploadDurationSeconds = Float.valueOf(totalUploadDuration) / 1000;
        String uploadedPercentageStr = getFormattedUploadPercentage(myUpload);
        boolean isUploadDone = myUpload.isDone();
        if (isUploadDone) {
            Object[] params = new Object[]{destinationFileKey, totalUploadDuration, totalUploadDurationSeconds};
            LOG.info("Successfully uploaded file {} to Amazon. The upload took {} milliseconds ({} seconds)", params);
            result = true;
        }
        LOG.debug("Post put the inputStream to the location {}", destinationFileKey);
    } catch (AmazonServiceException e) {
        LOG.error("AmazonServiceException:{}", e);
        result = false;
    } catch (AmazonClientException e) {
        LOG.error("AmazonClientException:{}", e);
        result = false;
    }
    LOG.debug("Exiting uploadInputStream - result:{}", result);
    return result;
}
Since I migrated to version 1.11.31 of the aws-java-sdk, this code has stopped working.
All classes remain intact and there are no warnings in my IDE.
However, I do see the following logged to my console:
[2016-09-06 22:21:58,920] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.requestId - x-amzn-RequestId: not available
[2016-09-06 22:21:58,931] [s3-transfer-manager-worker-1] [DEBUG] com.amazonaws.request - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Moved Permanently (Service: null; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: D67813C8A11842AE), S3 Extended Request ID: 3CBHeq6fWSzwoLSt3J7D4AUlOaoi1JhfxAfcN1vF8I4tO1aiOAjqB63sac9Oyrq3VZ4x3koEC5I=
The upload still continues, but the progress listener reports event code 8, which stands for a failed transfer.
Does anyone have any idea what I need to do to get this chunk of code working again?
Thank you
Damien
Try changing it to this:
public void progressChanged(ProgressEvent progressEvent) {
    LOG.info(myUpload.getProgress().getPercentTransferred() + "%");
    LOG.info("progressEvent.getEventType():" + progressEvent.getEventType());
    if (progressEvent.getEventType() == ProgressEventType.TRANSFER_COMPLETED_EVENT) {
        LOG.info("Upload complete!!!");
    }
}
It looks like you are running some deprecated code.
In com.amazonaws.event.ProgressEventType, value 8 refers to HTTP_REQUEST_COMPLETED_EVENT
COMPLETED_EVENT_CODE is deprecated
getEventCode is deprecated
refer to this -> https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/java/com/amazonaws/event/ProgressEvent.java
I updated my version of the S3 library, generated new access keys and also created a new bucket.
Now everything works as expected.
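A side note, offered as an assumption rather than something the SDK log states outright: a 301 Moved Permanently from S3 is the response you typically get when the client is pointed at an endpoint in the wrong region for the bucket, which would also explain why creating a new bucket made the problem disappear. If that is the cause, pinning the client to the bucket's region should help. A rough sketch using the pre-builder constructors available in 1.11.x; the region and credentials provider are placeholders:
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;

// Sketch: assume the bucket lives in eu-west-1 (adjust to your bucket's region).
AmazonS3Client s3Client = new AmazonS3Client(new ProfileCredentialsProvider());
s3Client.setRegion(Region.getRegion(Regions.EU_WEST_1));

// Hand the region-aware client to the TransferManager used as amazonTransferManager above.
TransferManager amazonTransferManager = new TransferManager(s3Client);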

logCat in separate process creates right number of logs but ignores file size

In our Android app we wish to capture log messages.
We kick off a Process which invokes LogCat, asking for rolling logs.
LogCat starts. We get rolling logs. We get the right number of files. In fact, all the arguments we specify are honored except for the individual file size argument, -r. No matter what value we specify for -r we get individual 16K files. We've tried -r 10000, -r 250, -r 100 and -r 64. Always 16K files.
EDIT: -r 12 (a smaller value) is also ignored - we still get 16K files.
EDIT: As of a few months ago, LogCat spontaneously started to honor our size argument. It must have been a bug that they fixed. Thanks all.
What are we doing wrong?
Here's the call we make:
logcat -f /storage/emulated/0/Android/data/com.example.my.app/files/MyAppLogs.txt -v time -n 20 -r 64 CookActivity:D CookingService:D destroy:W distcooked:D finishedSpeaking:D freeMealReceived:D gotFocus:D keptMenu:D settingNextWakeAlarm:D settingReTryAlarm:D speakBroadcast:D speakerDestroyed:D stopped:D timeupdate:D dalvikvm:W dalvikvm-heap:W MyImageCache:D LogCatManager:D MainActivity:I MainActivity:D *:W *:E
If you're curious:
We take the code structure and file system checks from this and other StackOverflow pages:
Android unable to run logcat from application
We take the call to exec() from this page and others:
http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html#exec(java.lang.String[])
From other places we learned that we have to request the app permissions for WRITE_EXTERNAL_STORAGE and READ_LOGS
EDIT: The class we use to manage logCat:
package com.example.my.app.main;

import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;

import android.app.Activity;
import android.os.Environment;
import android.os.SystemClock;
import android.util.Log;

/**
 * Manages starting the external logCat process and shutting it down.
 *
 * When this is asked to start an instance of logCat for this application and one is already running,
 * a second one is not started.
 *
 * When this application is shut down gracefully, it tries to shut down its instance of logCat.
 *
 */
public class LogCatManager {
    private static final String TAG = "LogCatManager";

    //The process for logCat - set up initially, tear down when we're done.
    Process logSaveProcess = null;

    // logcat -f /storage/emulated/0/Android/data/com.example.my.app/files/MyAppLogs.txt -v time -n 20 -r 64 CookActivity:D CookingService:D destroy:W distcooked:D finishedSpeaking:D freeMealReceived:D gotFocus:D keptMenu:D settingNextWakeAlarm:D settingReTryAlarm:D speakBroadcast:D speakerDestroyed:D stopped:D timeupdate:D dalvikvm:W dalvikvm-heap:W MyImageCache:D LogCatManager:D MainActivity:I MainActivity:D *:W *:E

    //The index to which the output file name string should be assigned.
    static final int logCatOutputFileIndex = 2;
    //The index to which the number-of-Kbytes-in-each-logfile threshold should be assigned.
    static final int logCatLogKSizeIndex = 8;
    //The index to which the number-of-logfiles should be assigned
    static final int logCatLogNLogsIndex = 6;

    String logCatTokenArray[] = {
        "logcat"
        ,"-f"
        ,"--THE_OUTPUT_FILE_NAME_GOES_HERE--"
        ,"-v"
        ,"time"
        ,"-n"
        ,"--THE_NUMBER_OF_LOGFILES_GOES_HERE--"
        ,"-r"
        ,"--THE_MAX_LOGFILE_SIZE_GOES_HERE--"
        /*\
         * entries after this are search criteria - logCat entries matching these are included in the saved file
        \*/
        ,"CookActivity:D"
        ,"CookingService:D"
        ,"destroy:W"
        ,"distcooked:D"
        ,"finishedSpeaking:D"
        ,"freeMealReceived:D"
        ,"gotFocus:D"
        ,"keptMenu:D"
        ,"settingNextWakeAlarm:D"
        ,"settingReTryAlarm:D"
        ,"speakBroadcast:D"
        ,"speakerDestroyed:D"
        ,"stopped:D"
        ,"timeupdate:D"
        ,"dalvikvm:W"
        ,"dalvikvm-heap:W"
        ,"MyImageCache:D"
        ,"LogCatManager:D"
        ,"MainActivity:I"
        ,"MainActivity:D"
        /*\
         * these mean 'anything with W or E, include'.
        \*/
        ,"*:W"
        ,"*:E"
    };

    /** Turns on and off saving logCat data to a file. **/
    public void setLogSaving(Activity act, boolean shouldStart)
    {
        try
        {
            if (shouldStart)
            {
                if (logSaveProcess == null)
                {
                    Log.i(TAG, "logCat starting LogSaving");
                    startLogSaving(act);
                } else {
                    Log.e(TAG, "logCat asked to start LogSaving - but saving already happening (process reference not null)! request ignored");
                }
            }
            else
            {
                if (logSaveProcess == null)
                {
                    Log.e(TAG, "could not simply destroy logCat process - have no reference");
                }
                else
                {
                    Log.i(TAG, "have reference to logCat process object - stopping LogSaving");
                    logSaveProcess.destroy();
                    logSaveProcess = null;
                }
            }
        }
        catch (Exception e)
        {
            Log.e(TAG, "exception in setLogSaving(" + shouldStart + ") - " + e.toString());
        }
    }

    //10-18-2014 current experience with current software and logCat arguments
    // has RM being able to record about 140 minutes of log data in a megabyte.
    //The settings below attempt to give us more than 10 hours on average before rollover.
    public final static String SAVINGLOGFILENAME = "MyAppLogs.txt";
    public final static String SAVINGLOGMAXSIZE = "12"; //... "64", "100", "250", "10000" (10Meg) too big.
    public final static String SAVINGLOGMAXNLOGS = "20"; //10 has also been honored...

    // Took structure and file system checks from this and other StackOverflow pages:
    // https://stackoverflow.com/questions/6219345/android-unable-to-run-logcat-from-application?rq=1
    //
    // Took call to exec() from this and others:
    // http://docs.oracle.com/javase/6/docs/api/java/lang/Runtime.html#exec(java.lang.String[])
    //
    // ** make sure your app has the WRITE_EXTERNAL_STORAGE and READ_LOGS permissions **
    //
    private void startLogSaving(Activity act) {
        String filename = null;
        String directory = null;
        String fullPath = null;
        String externalStorageState = null;
        // The directory will depend on if we have external storage available to us or not
        try {
            filename = SAVINGLOGFILENAME;
            externalStorageState = Environment.getExternalStorageState();
            if (externalStorageState.equals(Environment.MEDIA_MOUNTED)) {
                if (android.os.Build.VERSION.SDK_INT <= 7) {
                    directory = Environment.getExternalStorageDirectory().getAbsolutePath();
                } else {
                    directory = act.getExternalFilesDir(null).getAbsolutePath();
                }
            } else {
                directory = act.getFilesDir().getAbsolutePath();
            }
            fullPath = directory + File.separator + filename;
            // Finish token array by contributing the output file name.
            logCatTokenArray[logCatOutputFileIndex] = fullPath;
            // ...and the max size.
            logCatTokenArray[logCatLogKSizeIndex] = SAVINGLOGMAXSIZE;
            // ...and the max number of log files (rotating).
            logCatTokenArray[logCatLogNLogsIndex] = SAVINGLOGMAXNLOGS;
            Log.w(TAG, "About to start LogCat - " + strJoin(logCatTokenArray, " "));
            logSaveProcess = Runtime.getRuntime().exec(logCatTokenArray);
        } catch (Exception e) {
            Log.e(TAG, "startLogSaving, exception: " + e.getMessage());
        }
    }

    // from https://stackoverflow.com/questions/1978933/a-quick-and-easy-way-to-join-array-elements-with-a-separator-the-oposite-of-spl
    private static String strJoin(String[] aArr, String sSep) {
        StringBuilder sbStr = new StringBuilder();
        for (int i = 0, il = aArr.length; i < il; i++) {
            if (i > 0)
                sbStr.append(sSep);
            sbStr.append(aArr[i]);
        }
        return sbStr.toString();
    }
}
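For context, here is a sketch of how a class like this might be driven from an Activity lifecycle. The MainActivity wiring below is an illustration of mine, not the app's actual code; only setLogSaving() comes from the class above:
import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    // Hypothetical field holding the manager shown above.
    private final LogCatManager mLogCatManager = new LogCatManager();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mLogCatManager.setLogSaving(this, true);   // start the rolling logcat capture
    }

    @Override
    protected void onDestroy() {
        mLogCatManager.setLogSaving(this, false);  // stop the external logcat process
        super.onDestroy();
    }
}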

How to call C/C++ asynchronous callback (in DLL) from JAVA

I'd like to monitor Informatica ETL workflows from a custom Java program, via the Informatica Development Platform (LMapi), v9.1.
I already have a C program that works fine, but it would be great to port it to Java.
We have a lot of C DLLs with asynchronous functions, e.g. JavaLMApi.dll with INFA_LMMonitorServerA (which offers detailed event-log possibilities).
In the header we can see:
PMLM_API_EXTSPEC INFA_API_STATUS
INFA_LMMonitorServerA
(
    INFA_UINT32 connectionId,
    struct INFA_LMAPI_MONITOR_SERVER_REQUEST_PARAMS *request,
    void *clientContext,
    INFA_UINT32 *requestId
);
There is no documentation for this problem; the signature above is the only information I have to work with.
The question is: how do I call/use INFA_LMMonitorServerA from Java? (Loading the DLL with JNA/JNI is not the problem, only the callback.)
static INFA_UINT32 nConnectionId = 0;

/* C "skeleton": */
void GetSrvDetailcallback(struct INFA_API_REPLY_CONTEXT* GetSrvDetailReplyCtxt)
{
    INFA_API_STATUS apiRet;
    struct INFA_LMAPI_SERVER_DETAILS *serverDetails = NULL;
    char *serverStatus = NULL;

    /* Check if the return status is Acknowledgement */
    if (GetSrvDetailReplyCtxt->returnStatus == INFA_REQUEST_ACKNOWLEDGED)
    {
        fprintf(stdout, "\nINFA REQUEST ACKNOWLEDGED \n\n");
        return;
    }

    apiRet = INFA_LMGetServerDetailsResults(GetSrvDetailReplyCtxt, &serverDetails);
    /* Check the return code if it is an error */
    if (INFA_SUCCESS != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMGetServerDetailsResults returns %d\n", apiRet);
        return;
    }

    printResults(serverDetails);
}

static void myServer()
{
    struct INFA_LMAPI_CONNECT_SERVER_REQUEST_PARAMS_EX connectParamsex;
    INFA_API_STATUS apiRet;
    struct INFA_LMAPI_LOGIN_REQUEST_PARAMS loginparams;

    apiRet = INFA_LMLogin(nConnectionId, &loginparams, NULL);
    if (INFA_SUCCESS != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMLogin returns %d\n", apiRet);
        return;
    }

    struct INFA_LMAPI_MONITOR_SERVER_REQUEST_PARAMS strMonitorRequestParams;
    //Only Running Tasks
    strMonitorRequestParams.monitorMode = INFA_LMAPI_MONITOR_RUNNING;
    strMonitorRequestParams.continuous = INFA_TRUE;

    /* Get Server Details */
    INFA_UINT32 GetSrvDetailsrequestId = 0;

    /* Register a callback function. */
    INFA_LMRegisterCallback(INFA_LMAPI_MONITOR_SERVER, &GetSrvDetailcallback);

    apiRet = INFA_LMMonitorServerA(nConnectionId, &strMonitorRequestParams, NULL, &GetSrvDetailsrequestId);
    if (INFA_SUCCESS != apiRet && INFA_REQUEST_PENDING != apiRet)
    {
        fprintf(stderr, "Error: INFA_LMMonitorServerA returns %d\n", apiRet);
        return;
    }
}
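On the callback part specifically: JNA maps C function pointers to Java interfaces that extend com.sun.jna.Callback, and you pass an instance of such an interface wherever the native API expects a function pointer. Below is a rough sketch based only on the header excerpt above; the DLL name, the int mapping of INFA_API_STATUS/INFA_UINT32, and the decision to map the structs as opaque Pointers are my assumptions, not the real LMapi binding:
import com.sun.jna.Callback;
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.ptr.IntByReference;

public interface JavaLMApi extends Library {
    // Assumed DLL name; use Native.loadLibrary(...) instead on JNA 4.x.
    JavaLMApi INSTANCE = Native.load("JavaLMApi", JavaLMApi.class);

    // C: void (*callback)(struct INFA_API_REPLY_CONTEXT *replyCtxt)
    interface MonitorCallback extends Callback {
        void invoke(Pointer replyContext); // struct mapped as an opaque pointer for now
    }

    // Signatures guessed from the C skeleton; structs simplified to Pointer.
    int INFA_LMRegisterCallback(int apiId, MonitorCallback callback);
    int INFA_LMMonitorServerA(int connectionId, Pointer request,
                              Pointer clientContext, IntByReference requestId);
}
One thing to keep in mind with JNA callbacks: hold a strong reference to the callback instance (e.g. in a field) for as long as the native side may invoke it, otherwise it can be garbage-collected and the native call will crash.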

Couchbase: net.spy.memcached.internal.CheckedOperationTimeoutException

I'm loading a local Couchbase instance with application-specific JSON objects.
The relevant code is:
CouchbaseClient getCouchbaseClient()
{
    List<URI> uris = new LinkedList<URI>();
    uris.add(URI.create("http://localhost:8091/pools"));
    CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
    cfb.setFailureMode(FailureMode.Retry);
    cfb.setMaxReconnectDelay(1500);
    cfb.setOpTimeout(10000);          // wait up to 10 seconds for an operation to succeed
    cfb.setOpQueueMaxBlockTime(5000); // wait up to 5 seconds when trying to enqueue an operation
    return new CouchbaseClient(cfb.buildCouchbaseConnection(uris, "my-app-bucket", ""));
}
The method to store an entry (I'm using suggestions from Bulk Load and Exponential Backoff):
void continuosSet(CouchbaseClient cache, String key, int exp, Object value, int tries)
{
    OperationFuture<Boolean> result = null;
    OperationStatus status = null;
    int backoffexp = 0;
    do
    {
        if (backoffexp > tries)
        {
            throw new RuntimeException(MessageFormat.format("Could not perform a set after {0} tries.", tries));
        }
        result = cache.set(key, exp, value);
        try
        {
            if (result.get())
            {
                break;
            }
            else
            {
                status = result.getStatus();
                LOG.warn(MessageFormat.format("Set failed with status \"{0}\" ... retrying.", status.getMessage()));
                if (backoffexp > 0)
                {
                    double backoffMillis = Math.pow(2, backoffexp);
                    backoffMillis = Math.min(1000, backoffMillis); // 1 sec max
                    Thread.sleep((int) backoffMillis);
                    LOG.warn("Backing off, tries so far: " + tries);
                }
                backoffexp++;
            }
        }
        catch (ExecutionException e)
        {
            LOG.error("ExecutionException while doing set: " + e.getMessage());
        }
        catch (InterruptedException e)
        {
            LOG.error("InterruptedException while doing set: " + e.getMessage());
        }
    }
    while (status != null && status.getMessage() != null && status.getMessage().indexOf("Temporary failure") > -1);
}
When the continuosSet method is called for a large number of objects to store (single thread), e.g.
CouchbaseClient cache = getCouchbaseClient();
do
{
    SerializableData data = queue.poll();
    if (data != null)
    {
        final String key = data.getClass().getSimpleName() + data.getId();
        continuosSet(cache, key, 0, gson.toJson(data, data.getClass()), 100);
        ...
it generates a CheckedOperationTimeoutException inside the continuosSet method, in the result.get() call:
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: 127.0.0.1/127.0.0.1:11210
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:160) ~[spymemcached-2.8.12.jar:2.8.12]
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:133) ~[spymemcached-2.8.12.jar:2.8.12]
Can someone shed some light on how to overcome and recover from this situation? Is there a good technique/workaround for bulk loading with the Java client for Couchbase? I have already explored the documentation on Performing a Bulk Set, which is unfortunately for the PHP Couchbase client.
My suspicion is that you may be running this in a JVM spawned from the command line that doesn't have that much memory. If that's the case, you could hit longer GC pauses which could cause the timeout you're mentioning.
I think the best thing to do is to try a couple of things. First, raise the -Xmx argument to the JVM to use more memory. See if the timeout happens later or goes away. If so, then my suspicion about memory is correct.
If that doesn't work, raise the setOpTimeout() and see if that reduces the error or makes it go away.
Also, make sure you're using the latest client.
By the way, I don't think this is directly bulk loading related. It may happen owing to a lot of resource consumption during bulk loading, but it looks like the regular backoff must be working or you're not ever hitting it.
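To make those two suggestions concrete, here is a small sketch of what the tuning could look like; the 30-second timeout and the 1 GB heap are arbitrary example values to experiment with, not recommendations from the Couchbase documentation:
// Example only: give each operation more time before spymemcached reports a timeout.
CouchbaseConnectionFactoryBuilder cfb = new CouchbaseConnectionFactoryBuilder();
cfb.setOpTimeout(30000);            // was 10000 in the question; raise and observe
cfb.setOpQueueMaxBlockTime(10000);  // allow longer waits when the op queue is full

// And give the JVM more headroom so GC pauses are less likely to eat the timeout budget:
//   java -Xmx1g -jar my-loader.jar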
