What is the best way to go about creating a program that would change the desktop wallpaper periodically? I would also like to create a GUI around the program. I am a Computer Science student, and as such I know basic programming in Java and C++, among others. This will be done on Windows 7.
What would be the best language to use for a project like this?
Ideally I would like to use the system clock to trigger the change. Is this possible?
Am I in over my head?
Any answers will be very much appreciated. Thank you.
In Java:
public class changer
{
    // Native declaration for SystemParametersInfo from user32.dll.
    // Note: a plain native declaration like this needs a JNI stub whose exported
    // name matches this class; in practice a binding layer such as JNA (or your
    // own JNI wrapper DLL) is required for this call to actually resolve.
    public static native int SystemParametersInfo(int uiAction, int uiParam, String pvParam, int fWinIni);

    static
    {
        System.loadLibrary("user32");
    }

    public int Change(String path)
    {
        // 20 == SPI_SETDESKWALLPAPER; the last argument can be SPIF_UPDATEINIFILE (1)
        // if you want the change to persist across sessions.
        return SystemParametersInfo(20, 0, path, 0);
    }

    public static void main(String[] args)
    {
        String wallpaper_file = "c:\\wallpaper.jpg";
        changer mychanger = new changer();
        mychanger.Change(wallpaper_file);
    }
}
In Win32 C++, you can use SetTimer to trigger the change:
#define STRICT 1
#include <windows.h>
#include <iostream>

using std::cout;

VOID CALLBACK TimerProc(HWND hWnd, UINT nMsg, UINT_PTR nIDEvent, DWORD dwTime)
{
    // Use the wide-character API explicitly, since we pass a wide string.
    const wchar_t* wallpaper_file = L"C:\\Wallpapers\\wallpaper.png";
    BOOL ok = SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0,
                                    (PVOID)wallpaper_file, SPIF_UPDATEINIFILE);
    cout << "Programmatically change the desktop wallpaper periodically: "
         << dwTime << " (success: " << ok << ")\n";
    cout.flush();
}

int main(int argc, char *argv[], char *envp[])
{
    int Counter = 0;
    MSG Msg;
    UINT_PTR TimerId = SetTimer(NULL, 0, 2000, &TimerProc); // 2000 milliseconds
    cout << "TimerId: " << TimerId << '\n';
    if (!TimerId)
        return 16;

    while (GetMessage(&Msg, NULL, 0, 0))
    {
        ++Counter;
        if (Msg.message == WM_TIMER)
            cout << "Counter: " << Counter << "; timer message\n";
        else
            cout << "Counter: " << Counter << "; message: " << Msg.message << '\n';
        DispatchMessage(&Msg);
    }

    KillTimer(NULL, TimerId);
    return 0;
}
This is a reasonably straightforward project, and can be done easily with any language that can call Win32 API functions (C++, for example). The non-obvious function to change the wallpaper is SystemParametersInfo with the SPI_SETDESKWALLPAPER flag. You give it a file name of a new image, and the wallpaper changes.
The picture linked below shows the specific exception I'm getting. I'm not quite sure why I'm having this particular issue, as I've built everything in the same directory, so the library file is there. From what I understand, this has something to do with what I'm returning to my main method from my C++ function.
What I'm essentially trying to do is pass the name (printId) of the recognized person, as a string, from my C++ function to Java.
Picture of command line:
Here's my C++ code:
#include <jni.h>
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
#include "opencv2/objdetect.hpp"
#include "opencv2/face.hpp"
#include "opencv2/face/facerec.hpp"
#include <vector>
#include <string>
#include "recognitionJNI.h"
#include <fstream>
#include <sstream>
using namespace cv;
using namespace std;
String face_cascade_name = "/Users/greg/Downloads/opencv-3.4.2/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;
String fn_csv = "/Users/greg/Desktop/faceBuild/faceRecognition/faceRecognition/csv.txt";
//User Defined Function for reading csv
static void read_csv(const string& filename, vector<Mat>& images, vector<int>& labels, char separator = ';') {
ifstream file(filename.c_str(), ifstream::in); //opens file for reading
if(!file) {
cout << "ERROR: There was a problem loading the csv file" << endl;
}
string line, path, classlabel;
while(getline(file,line)) {
stringstream liness(line);
getline(liness, path, separator); //read stream object up to the semicolon
getline(liness, classlabel); //read the rest of stream object up to null terminated character
//make sure that the filepath and userID are not empty
if(!path.empty() && !classlabel.empty()) {
images.push_back(imread(path,0)); //appends grayscale image to images vector
labels.push_back(atoi(classlabel.c_str())); //appends userID to labels vector
}
}
}
JNIEXPORT jstring JNICALL Java_testJNIString_userName(JNIEnv *env, jobject thisObj, jstring inJNIStr) {
const char *inCStr = env->GetStringUTFChars(inJNIStr, NULL);
if (NULL == inCStr) return NULL;
string outCppStr;
cout << "In C++, the received string is: " << inCStr << endl;
env->ReleaseStringUTFChars(inJNIStr, inCStr);
string printId;
vector<Mat> images; //This vector will hold the images
vector<int> labels; //This vector will hold the userID
//read the csv file containing image paths and userIDs
try {
read_csv(fn_csv, images, labels);
} catch (Exception& e) {
cerr << "Error opening file\"" << fn_csv << "\". Reason: " << e.msg << endl;
exit(1);
}
//we'll need to resize all the images to a common size
//These two lines capture the width and height of the first Mat object
int im_width = images[0].cols;
int im_height = images[0].rows;
for(int j=0; j < images.size(); j++) {
resize(images[j],images[j],Size(im_width, im_height),1.0,1.0,INTER_CUBIC);
}
//int numComponents = 2;
//double threshold = 10.0;
//creates a FaceRecognizer to train with the given images
Ptr<cv::face::FisherFaceRecognizer> model = cv::face::FisherFaceRecognizer::create();
model->train(images, labels);
string camera_msg = "No camera found";
Mat webcam; // creates a Mat object to store frames
VideoCapture cap(0); // opens default webcam
if(!cap.isOpened()) {
return env->NewStringUTF(camera_msg.c_str());
}
face_cascade.load(face_cascade_name); //loads xml file into classifier
//load capture device into Mat object
while (cap.read(webcam)) {
vector<Rect> faces;
Mat frame_gray; //will be used to store grayscale copy of webcam
cvtColor(webcam, frame_gray, COLOR_BGR2GRAY); //converts the frame to grayscale
equalizeHist(frame_gray, frame_gray); //maps the input distribution to a more uniform distribution
//locate the faces in the frame
face_cascade.detectMultiScale(frame_gray, faces, 1.1, 5, 0|CASCADE_SCALE_IMAGE,Size(30,30));
for(size_t i=0; i < faces.size(); i++) {
Rect face_i = faces[i]; //process faces by frame
Mat face = frame_gray(face_i); //takes the face from the live images
//resize faces for prediction
Mat face_resized;
resize(face,face_resized,Size(im_width, im_height),1.0,1.0,INTER_CUBIC);
int prediction = model->predict(face_resized); //predict based on resize faces
if(prediction == 1 ) {
printId = "Matthew";
}
else if (prediction == 2) {
printId = "Greg";
return env->NewStringUTF(printId.c_str());
}
else { //any other prediction value is treated as unknown
printId = "Unknown";
}
rectangle(webcam, face_i, CV_RGB(0,255,0), 1); //draws a rectangle around the face
string box_text = "Prediction = " + printId;
int pos_x = std::max(face_i.tl().x - 10, 0);
int pos_y = std::max(face_i.tl().y - 10, 0);
putText(webcam, box_text, Point(pos_x,pos_y), FONT_HERSHEY_PLAIN, 1.0, CV_RGB(0,255,0), 1);
}
imshow("Webcam", webcam);
waitKey(1);
destroyAllWindows();
}
return env->NewStringUTF(printId.c_str());
}
Here's my Java code:
public class recognitionJNI{
static {
System.loadLibrary("recogjni");
}
private native String userName(String msg);
public static void main(String args[]) {
String result = new recognitionJNI().userName("Pass arg from c++ function");
System.out.println(result);
}
}
Try regenerating the header file; it looks like you changed your class name in the meantime, and the exported name is no longer up to date. The name I get from that Java class is:
Java_recognitionJNI_userName
But you have
Java_testJNIString_userName
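For reference, here is a rough sketch of what the regenerated declaration should look like for the Java class you posted (assuming it is in the default package). With a modern JDK you can regenerate the header with "javac -h . recognitionJNI.java" (older JDKs used javah), and the exported symbol has to match it exactly:

#include <jni.h>
// recognitionJNI.h (regenerated) -- the symbol encodes package + class + method
extern "C" JNIEXPORT jstring JNICALL
Java_recognitionJNI_userName(JNIEnv *env, jobject thisObj, jstring inJNIStr);

The definition in your .cpp file must use this exact name as well; a mismatch typically surfaces as a java.lang.UnsatisfiedLinkError at the first native call.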
File "~/workspace/Test.txt" does exist, but fd always returns -1. Can somebody please give a hint as to what is wrong with the code? Thanks.
int fd = open("~/workspace/Test.txt", O_RDONLY);
cout << "fd is "<<fd<<endl;
if (fd < 0) {
cout << "did not find file"<<endl;
return false;
}
(Assuming your OS is something POSIX-like, such as Linux.)
The ~ has to be expanded. Usually the shell expands it, but open wants a real file path.
You could try:
#include <fcntl.h>    // open
#include <cerrno>
#include <cstdlib>    // getenv
#include <cstring>    // strerror
#include <iostream>
#include <string>

std::string fname (getenv("HOME"));   // assumes HOME is set
fname += "/workspace/Test.txt";
int fd = open(fname.c_str(), O_RDONLY);
if (fd < 0) {
    std::cerr << "failed to open " << fname
              << " : " << strerror(errno) << std::endl;
    return false;
}
See glob(7), wordexp(3), getenv(3), strerror(3), open(2), environ(7)
Read Advanced Linux Programming
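If you want full shell-style expansion (~, ~user, $VARS) rather than just prepending $HOME, here is a rough sketch using wordexp(3), one of the man pages listed above (untested):

#include <wordexp.h>
#include <fcntl.h>

// Expand a pattern the way the shell would (command substitution disabled),
// then open the first resulting word read-only. Returns -1 on failure.
// usage: int fd = open_expanded("~/workspace/Test.txt");
int open_expanded(const char* pattern)
{
    wordexp_t we;
    if (wordexp(pattern, &we, WRDE_NOCMD) != 0 || we.we_wordc == 0)
        return -1;
    int fd = open(we.we_wordv[0], O_RDONLY);
    wordfree(&we);
    return fd;
}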
public byte[] toBytes() {
    size = 12;
    ByteBuffer buf = ByteBuffer.allocate(size);
    buf.putInt(type.ordinal()); // type is an enum
    buf.putInt(id);
    buf.putInt(size);
    return buf.array();
}

@Override
public void fromBytes(byte[] data) {
    ByteBuffer buf = ByteBuffer.allocate(data.length);
    buf.put(data);
    buf.rewind();
    type = MessageType.values()[buf.getInt()];
    id = buf.getInt();
    size = buf.getInt();
}
I have two Java methods and want to write the equivalent Objective-C methods.
For the first method, I wrote the Objective-C code like this:
- (NSMutableData *) toBytes {
    size = 12;
    NSMutableData *buf = [[NSMutableData alloc] initWithCapacity:size];
    NSData *dataType = [NSData dataWithBytes:&type length:sizeof(type)];
    NSData *dataId = [NSData dataWithBytes:&msgId length:sizeof(msgId)];
    NSData *dataSize = [NSData dataWithBytes:&size length:sizeof(size)];
    [buf appendData:dataType];
    [buf appendData:dataId];
    [buf appendData:dataSize];
    // dataWithBytes:length: returns autoreleased objects, so they must not be
    // released here; releasing them was an over-release in the original code.
    return [buf autorelease];
}
But I'm not sure how to read it back...
It would have been easier if I had put only one piece of data into the buffer, but I added three, so I don't know how to read them back.
Thanks in advance.
Note to LCYSoft: I'm making this a community wiki; please correct any issues. I didn't compile this. Since you posted one direction and really want an answer, I provided one. Sorry, I'm kind of busy at the moment.
This demonstrates both directions and expands on the OP's code:
typedef enum t_mon_enum_type {
    MONEnum_Edno = 1,
    MONEnum_Dve = 2,
    MONEnum_Tre = 3
} t_mon_enum_type;

@interface MONObject : NSObject
{
    t_mon_enum_type type;
    int msgId;
    int size;
}
@end

@implementation MONObject

/* ... */

- (NSMutableData *)dataRepresentation
{
    const int typeAsInt = (int)type;
    const size_t capacity = sizeof(typeAsInt) + sizeof(msgId) + sizeof(size);
    NSMutableData * data = [[NSMutableData alloc] initWithCapacity:capacity];
    [data appendBytes:&typeAsInt length:sizeof(typeAsInt)];
    [data appendBytes:&msgId length:sizeof(msgId)];
    [data appendBytes:&size length:sizeof(size)];
    return [data autorelease];
}

- (BOOL)isDataRepresentationValid:(NSData *)data
{
    /* TODO: check any other invariants you care about */
    return [data length] >= sizeof(int) * 3;
}

- (BOOL)restoreFromDataRepresentation:(NSData *)data
{
    if (![self isDataRepresentationValid:data]) {
        return NO;
    }
    NSRange range = { 0, 0 };
    int tmp = 0;

    /* restore `type` */
    range.length = sizeof(tmp);
    [data getBytes:&tmp range:range];
    type = (t_mon_enum_type)tmp;
    /* advance read position */
    range.location += range.length;

    /* restore `msgId` */
    range.length = sizeof(msgId);
    [data getBytes:&msgId range:range];
    /* advance read position */
    range.location += range.length;

    /*
      setting the length here is redundant in this case, but it's how we
      write it when dealing with more complex pod types.
    */
    range.length = sizeof(size);
    [data getBytes:&size range:range];
    return YES;
}

@end
I'm not going to rewrite the program for you, but I'll provide a tip:
You can use C++ in Objective-C programs. Specifically, you can compile as C (.c), Objective-C (.m), C++ (.cpp), and Objective-C++ (.mm); one common extension follows each language, and the compiler will (by default) compile using the language implied by the file extension.
Many Java programs more closely resemble C++ programs, so if you're porting a program, also consider writing it in C++, since the result will often be closer to the Java original.
For Objective-C, you'd probably use CF/NS-MutableData.
For C++, you can use std::vector (rough sketch below).
Good luck.
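As a minimal illustration of the std::vector route (untested; the struct and field names are just assumptions matching the snippets above, and remember that Java's ByteBuffer is big-endian by default while memcpy uses host byte order, so the two sides must agree):

#include <cstdint>
#include <cstring>
#include <vector>

struct MONMessage {
    int32_t type;   // enum stored as its integer value
    int32_t msgId;
    int32_t size;

    // Pack the three fields, in order, into a byte buffer.
    std::vector<uint8_t> toBytes() const {
        std::vector<uint8_t> buf(sizeof(type) + sizeof(msgId) + sizeof(size));
        uint8_t* p = buf.data();
        std::memcpy(p, &type, sizeof(type));   p += sizeof(type);
        std::memcpy(p, &msgId, sizeof(msgId)); p += sizeof(msgId);
        std::memcpy(p, &size, sizeof(size));
        return buf;
    }

    // Read the fields back in the same order.
    void fromBytes(const std::vector<uint8_t>& buf) {
        const uint8_t* p = buf.data();
        std::memcpy(&type, p, sizeof(type));   p += sizeof(type);
        std::memcpy(&msgId, p, sizeof(msgId)); p += sizeof(msgId);
        std::memcpy(&size, p, sizeof(size));
    }
};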
I'm trying to read / write multiple Protocol Buffers messages from files, in both C++ and Java. Google suggests writing length prefixes before the messages, but there's no way to do that by default (that I could see).
However, the Java API in version 2.1.0 received a set of "Delimited" I/O functions which apparently do that job:
parseDelimitedFrom
mergeDelimitedFrom
writeDelimitedTo
Are there C++ equivalents? And if not, what's the wire format for the size prefixes the Java API attaches, so I can parse those messages in C++?
Update:
These now exist in google/protobuf/util/delimited_message_util.h as of v3.3.0.
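For anyone on protobuf 3.3 or later, a rough sketch of using those helpers directly (untested; MyMessage stands in for a type generated from your .proto):

#include <fstream>
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>

// Append one message to an ostream, prefixed with its varint-encoded size.
bool appendDelimited(const MyMessage& msg, std::ostream* out) {
    return google::protobuf::util::SerializeDelimitedToOstream(msg, out);
}

// Read one size-prefixed message; *clean_eof reports whether the stream
// ended exactly on a message boundary.
bool readDelimited(google::protobuf::io::ZeroCopyInputStream* in,
                   MyMessage* msg, bool* clean_eof) {
    return google::protobuf::util::ParseDelimitedFromZeroCopyStream(msg, in, clean_eof);
}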
I'm a bit late to the party here, but the below implementations include some optimizations missing from the other answers and will not fail after 64MB of input (though it still enforces the 64MB limit on each individual message, just not on the whole stream).
(I am the author of the C++ and Java protobuf libraries, but I no longer work for Google. Sorry that this code never made it into the official lib. This is what it would look like if it had.)
bool writeDelimitedTo(
const google::protobuf::MessageLite& message,
google::protobuf::io::ZeroCopyOutputStream* rawOutput) {
// We create a new coded stream for each message. Don't worry, this is fast.
google::protobuf::io::CodedOutputStream output(rawOutput);
// Write the size.
const int size = message.ByteSize();
output.WriteVarint32(size);
uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
if (buffer != NULL) {
// Optimization: The message fits in one buffer, so use the faster
// direct-to-array serialization path.
message.SerializeWithCachedSizesToArray(buffer);
} else {
// Slightly-slower path when the message is multiple buffers.
message.SerializeWithCachedSizes(&output);
if (output.HadError()) return false;
}
return true;
}
bool readDelimitedFrom(
google::protobuf::io::ZeroCopyInputStream* rawInput,
google::protobuf::MessageLite* message) {
// We create a new coded stream for each message. Don't worry, this is fast,
// and it makes sure the 64MB total size limit is imposed per-message rather
// than on the whole stream. (See the CodedInputStream interface for more
// info on this limit.)
google::protobuf::io::CodedInputStream input(rawInput);
// Read the size.
uint32_t size;
if (!input.ReadVarint32(&size)) return false;
// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
input.PushLimit(size);
// Parse the message.
if (!message->MergeFromCodedStream(&input)) return false;
if (!input.ConsumedEntireMessage()) return false;
// Release the limit.
input.PopLimit(limit);
return true;
}
Okay, so I haven't been able to find top-level C++ functions implementing what I need, but some spelunking through the Java API reference turned up the following, inside the MessageLite interface:
void writeDelimitedTo(OutputStream output)
/* Like writeTo(OutputStream), but writes the size of
the message as a varint before writing the data. */
So the Java size prefix is a (Protocol Buffers) varint!
Armed with that information, I went digging through the C++ API and found the CodedStream header, which has these:
bool CodedInputStream::ReadVarint32(uint32 * value)
void CodedOutputStream::WriteVarint32(uint32 value)
Using those, I should be able to roll my own C++ functions that do the job.
They should really add this to the main Message API though; it's missing functionality considering Java has it, and so does Marc Gravell's excellent protobuf-net C# port (via SerializeWithLengthPrefix and DeserializeWithLengthPrefix).
I solved the same problem using CodedOutputStream/ArrayOutputStream to write the message (with the size) and CodedInputStream/ArrayInputStream to read the message (with the size).
For example, the following pseudo-code writes the message size followed by the message itself:
const unsigned bufLength = 256;
unsigned char buffer[bufLength];
Message protoMessage;
google::protobuf::io::ArrayOutputStream arrayOutput(buffer, bufLength);
google::protobuf::io::CodedOutputStream codedOutput(&arrayOutput);
codedOutput.WriteLittleEndian32(protoMessage.ByteSize());
protoMessage.SerializeToCodedStream(&codedOutput);
When writing you should also check that your buffer is large enough to fit the message (including the size). And when reading, you should check that your buffer contains a whole message (including the size).
It definitely would be handy if they added convenience methods to C++ API similar to those provided by the Java API.
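For symmetry, here is a rough sketch of the corresponding read side under the same assumptions (the buffer already holds one whole size-prefixed message); this is an illustration, not part of the original answer:

// Read back one message written as: 4-byte little-endian size, then the body.
bool readOneMessage(const unsigned char* buffer, unsigned bufLength,
                    google::protobuf::Message* protoMessage) {
    google::protobuf::io::ArrayInputStream arrayInput(buffer, bufLength);
    google::protobuf::io::CodedInputStream codedInput(&arrayInput);

    google::protobuf::uint32 messageSize = 0;
    if (!codedInput.ReadLittleEndian32(&messageSize)) return false;

    // Don't let the parser read past the end of this message.
    google::protobuf::io::CodedInputStream::Limit limit = codedInput.PushLimit(messageSize);
    bool ok = protoMessage->ParseFromCodedStream(&codedInput);
    codedInput.PopLimit(limit);
    return ok;
}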
IstreamInputStream is very fragile in the face of EOFs and other errors that easily occur when it is used together with std::istream. After such an error the protobuf streams are permanently damaged and any already-buffered data is destroyed. There is proper support for reading from traditional streams in protobuf.
Implement google::protobuf::io::CopyingInputStream and use it together with CopyingInputStreamAdaptor. Do the same for the output variants.
In practice a parsing call ends up in google::protobuf::io::CopyingInputStream::Read(void* buffer, int size), where a buffer is given. The only thing left to do is read into it somehow.
Here's an example for use with Asio synchronized streams (SyncReadStream/SyncWriteStream):
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>
using namespace google::protobuf::io;
template <typename SyncReadStream>
class AsioInputStream : public CopyingInputStream {
public:
AsioInputStream(SyncReadStream& sock);
int Read(void* buffer, int size);
private:
SyncReadStream& m_Socket;
};
template <typename SyncReadStream>
AsioInputStream<SyncReadStream>::AsioInputStream(SyncReadStream& sock) :
m_Socket(sock) {}
template <typename SyncReadStream>
int
AsioInputStream<SyncReadStream>::Read(void* buffer, int size)
{
std::size_t bytes_read;
boost::system::error_code ec;
bytes_read = m_Socket.read_some(boost::asio::buffer(buffer, size), ec);
if(!ec) {
return bytes_read;
} else if (ec == boost::asio::error::eof) {
return 0;
} else {
return -1;
}
}
template <typename SyncWriteStream>
class AsioOutputStream : public CopyingOutputStream {
public:
AsioOutputStream(SyncWriteStream& sock);
bool Write(const void* buffer, int size);
private:
SyncWriteStream& m_Socket;
};
template <typename SyncWriteStream>
AsioOutputStream<SyncWriteStream>::AsioOutputStream(SyncWriteStream& sock) :
m_Socket(sock) {}
template <typename SyncWriteStream>
bool
AsioOutputStream<SyncWriteStream>::Write(const void* buffer, int size)
{
boost::system::error_code ec;
m_Socket.write_some(boost::asio::buffer(buffer, size), ec);
return !ec;
}
Usage:
AsioInputStream<boost::asio::ip::tcp::socket> ais(m_Socket); // Where m_Socket is an instance of boost::asio::ip::tcp::socket
CopyingInputStreamAdaptor cis_adp(&ais);
CodedInputStream cis(&cis_adp);

Message protoMessage;
uint32_t msg_size;

/* Read message size */
if(!cis.ReadVarint32(&msg_size)) {
    // Handle error
}

/* Make sure not to read beyond the limit of this message */
CodedInputStream::Limit msg_limit = cis.PushLimit(msg_size);
if(!protoMessage.ParseFromCodedStream(&cis)) {
    // Handle error
}

/* Remove limit */
cis.PopLimit(msg_limit);
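For completeness, a rough sketch of the write side using the AsioOutputStream defined above (untested, not part of the original answer):

AsioOutputStream<boost::asio::ip::tcp::socket> aos(m_Socket); // Same socket as above
CopyingOutputStreamAdaptor cos_adp(&aos);
{
    CodedOutputStream cos(&cos_adp);
    /* Write the message size, then the message itself */
    cos.WriteVarint32(protoMessage.ByteSize());
    protoMessage.SerializeToCodedStream(&cos);
}   // Destroying the CodedOutputStream returns unused buffer space to the adaptor
cos_adp.Flush(); // Push the buffered bytes out to the socket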
Here you go:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/io/coded_stream.h>
#include <cassert>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
using namespace google::protobuf::io;
class FASWriter
{
std::ofstream mFs;
OstreamOutputStream *_OstreamOutputStream;
CodedOutputStream *_CodedOutputStream;
public:
FASWriter(const std::string &file) : mFs(file,std::ios::out | std::ios::binary)
{
assert(mFs.good());
_OstreamOutputStream = new OstreamOutputStream(&mFs);
_CodedOutputStream = new CodedOutputStream(_OstreamOutputStream);
}
inline void operator()(const ::google::protobuf::Message &msg)
{
_CodedOutputStream->WriteVarint32(msg.ByteSize());
if ( !msg.SerializeToCodedStream(_CodedOutputStream) )
std::cout << "SerializeToCodedStream error " << std::endl;
}
~FASWriter()
{
delete _CodedOutputStream;
delete _OstreamOutputStream;
mFs.close();
}
};
class FASReader
{
std::ifstream mFs;
IstreamInputStream *_IstreamInputStream;
CodedInputStream *_CodedInputStream;
public:
FASReader(const std::string &file) : mFs(file,std::ios::in | std::ios::binary)
{
assert(mFs.good());
_IstreamInputStream = new IstreamInputStream(&mFs);
_CodedInputStream = new CodedInputStream(_IstreamInputStream);
}
template<class T>
bool ReadNext()
{
T msg;
uint32_t size;
bool ret;
if ( ret = _CodedInputStream->ReadVarint32(&size) )
{
CodedInputStream::Limit msgLimit = _CodedInputStream->PushLimit(size);
if ( ret = msg.ParseFromCodedStream(_CodedInputStream) )
{
_CodedInputStream->PopLimit(msgLimit);
std::cout << "FASReader ReadNext: " << msg.DebugString() << std::endl;
}
}
return ret;
}
~FASReader()
{
delete _CodedInputStream;
delete _IstreamInputStream;
mFs.close();
}
};
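A rough usage sketch for the two helpers above (untested; MyMessage is a placeholder for a generated message type):

int main()
{
    {
        FASWriter writer("messages.fas");
        MyMessage msg;
        // ... fill in msg fields ...
        writer(msg);   // appends one varint-delimited message
    }                  // writer flushes and closes when it goes out of scope

    FASReader reader("messages.fas");
    while (reader.ReadNext<MyMessage>()) {
        // each call parses and prints the next message via DebugString()
    }
    return 0;
}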
I ran into the same issue in both C++ and Python.
For the C++ version, I used a mix of the code Kenton Varda posted on this thread and the code from the pull request he sent to the protobuf team (because the version posted here doesn't handle EOF while the one he sent to github does).
#include <google/protobuf/message_lite.h>
#include <google/protobuf/io/zero_copy_stream.h>
#include <google/protobuf/io/coded_stream.h>
bool writeDelimitedTo(const google::protobuf::MessageLite& message,
google::protobuf::io::ZeroCopyOutputStream* rawOutput)
{
// We create a new coded stream for each message. Don't worry, this is fast.
google::protobuf::io::CodedOutputStream output(rawOutput);
// Write the size.
const int size = message.ByteSize();
output.WriteVarint32(size);
uint8_t* buffer = output.GetDirectBufferForNBytesAndAdvance(size);
if (buffer != NULL)
{
// Optimization: The message fits in one buffer, so use the faster
// direct-to-array serialization path.
message.SerializeWithCachedSizesToArray(buffer);
}
else
{
// Slightly-slower path when the message is multiple buffers.
message.SerializeWithCachedSizes(&output);
if (output.HadError())
return false;
}
return true;
}
bool readDelimitedFrom(google::protobuf::io::ZeroCopyInputStream* rawInput, google::protobuf::MessageLite* message, bool* clean_eof)
{
// We create a new coded stream for each message. Don't worry, this is fast,
// and it makes sure the 64MB total size limit is imposed per-message rather
// than on the whole stream. (See the CodedInputStream interface for more
// info on this limit.)
google::protobuf::io::CodedInputStream input(rawInput);
const int start = input.CurrentPosition();
if (clean_eof)
*clean_eof = false;
// Read the size.
uint32_t size;
if (!input.ReadVarint32(&size))
{
if (clean_eof)
*clean_eof = input.CurrentPosition() == start;
return false;
}
// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit = input.PushLimit(size);
// Parse the message.
if (!message->MergeFromCodedStream(&input)) return false;
if (!input.ConsumedEntireMessage()) return false;
// Release the limit.
input.PopLimit(limit);
return true;
}
And here is my python2 implementation:
from google.protobuf.internal import encoder
from google.protobuf.internal import decoder

#I had to implement this because the tools in google.protobuf.internal.decoder
#read from a buffer, not from a file-like object
def readRawVarint32(stream):
    mask = 0x80 # (1 << 7)
    raw_varint32 = []
    while 1:
        b = stream.read(1)
        #eof
        if b == "":
            break
        raw_varint32.append(b)
        if not (ord(b) & mask):
            #we found a byte starting with a 0, which means it's the last byte of this varint
            break
    return raw_varint32

def writeDelimitedTo(message, stream):
    message_str = message.SerializeToString()
    delimiter = encoder._VarintBytes(len(message_str))
    stream.write(delimiter + message_str)

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None
    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)
        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")
        message = MessageType()
        message.ParseFromString(data)
    return message

#In place version that takes an already built protobuf object
#In my tests, this is around 20% faster than the other version
#of readDelimitedFrom()
def readDelimitedFrom_inplace(message, stream):
    raw_varint32 = readRawVarint32(stream)
    if raw_varint32:
        size, _ = decoder._DecodeVarint32(raw_varint32, 0)
        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")
        message.ParseFromString(data)
        return message
    else:
        return None
It might not be the best looking code and I'm sure it can be refactored a fair bit, but at least that should show you one way to do it.
Now the big problem: It's SLOW.
Even when using the C++ implementation of python-protobuf, it's one order of magnitude slower than in pure C++. I have a benchmark where I read 10M protobuf messages of ~30 bytes each from a file. It takes ~0.9s in C++, and 35s in python.
One way to make it a bit faster would be to re-implement the varint decoder to make it read from a file and decode in one go, instead of reading from a file and then decoding as this code currently does. (profiling shows that a significant amount of time is spent in the varint encoder/decoder). But needless to say that alone is not enough to close the gap between the python version and the C++ version.
Any idea to make it faster is very welcome :)
Just for completeness, I post here an up-to-date version that works with the master version of protobuf and Python 3.
For the C++ version it is sufficient to use the utils in delimited_message_util.h; here is a MWE:
#include <google/protobuf/io/zero_copy_stream_impl.h>
#include <google/protobuf/util/delimited_message_util.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <deque>
#include <iostream>
#include <string>
template <typename T>
bool writeManyToFile(std::deque<T> messages, std::string filename) {
int outfd = open(filename.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644); // mode is required with O_CREAT
google::protobuf::io::FileOutputStream fout(outfd);
bool success;
for (auto msg: messages) {
success = google::protobuf::util::SerializeDelimitedToZeroCopyStream(
msg, &fout);
if (! success) {
std::cout << "Writing Failed" << std::endl;
break;
}
}
fout.Close();
close(outfd);
return success;
}
template <typename T>
std::deque<T> readManyFromFile(std::string filename) {
int infd = open(filename.c_str(), O_RDONLY);
google::protobuf::io::FileInputStream fin(infd);
bool keep = true;
bool clean_eof = true;
std::deque<T> out;
while (keep) {
T msg;
keep = google::protobuf::util::ParseDelimitedFromZeroCopyStream(
&msg, &fin, &clean_eof);
if (keep)
out.push_back(msg);
}
fin.Close();
close(infd);
return out;
}
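A quick usage sketch for the two templates above (untested; MyMessage is a placeholder for a generated type):

#include <cassert>

void roundTrip(const std::deque<MyMessage>& batch)
{
    writeManyToFile(batch, "messages.bin");
    std::deque<MyMessage> loaded = readManyFromFile<MyMessage>("messages.bin");
    assert(loaded.size() == batch.size());   // sanity check on the round trip
}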
For the Python 3 version, building on @fireboot's answer, the only thing that needed modification is the decoding of raw_varint32:
def getSize(raw_varint32):
    result = 0
    shift = 0
    #readRawVarint32() returns a list of single-byte reads; decode them in order
    for b in raw_varint32:
        result |= (ord(b) & 0x7f) << shift
        shift += 7
    return result

def readDelimitedFrom(MessageType, stream):
    raw_varint32 = readRawVarint32(stream)
    message = None
    if raw_varint32:
        size = getSize(raw_varint32)
        data = stream.read(size)
        if len(data) < size:
            raise Exception("Unexpected end of file")
        message = MessageType()
        message.ParseFromString(data)
    return message
I was also looking for a solution to this. Here's the core of our solution, assuming some Java code wrote many MyRecord messages with writeDelimitedTo into a file. Open the file and loop, doing:
if(someCodedInputStream->ReadVarint32(&bytes)) {
CodedInputStream::Limit msgLimit = someCodedInputStream->PushLimit(bytes);
if(myRecord->ParseFromCodedStream(someCodedInputStream)) {
//do your stuff with the parsed MyRecord instance
} else {
//handle parse error
}
someCodedInputStream->PopLimit(msgLimit);
} else {
//maybe end of file
}
Hope it helps.
Working with an Objective-C version of protocol buffers, I ran into this exact issue. On sending from the iOS client to a Java-based server that uses parseDelimitedFrom, which expects a varint-encoded length prefix (for messages shorter than 128 bytes that is a single byte), I needed to call writeRawByte to the CodedOutputStream first. Posting here to hopefully help others that run into this issue. While working through this, one would think that Google protobufs would come with a simple flag which does this for you...
Request* request = [rBuild build];
[self sendMessage:request];
}
- (void) sendMessage:(Request *) request {
//** get length
NSData* n = [request data];
uint8_t len = [n length];
PBCodedOutputStream* os = [PBCodedOutputStream streamWithOutputStream:outputStream];
//** prepend it to message, such that Request.parseDelimitedFrom(in) can parse it properly
[os writeRawByte:len];
[request writeToCodedOutputStream:os];
[os flush];
}
Since I'm not allowed to write this as a comment on Kenton Varda's answer above: I believe there is a bug in the code he posted (as well as in other answers which have been provided). The following code:
...
google::protobuf::io::CodedInputStream input(rawInput);
// Read the size.
uint32_t size;
if (!input.ReadVarint32(&size)) return false;
// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
input.PushLimit(size);
...
sets an incorrect limit because it does not take into account the size of the varint32 which has already been read from input. This can result in data loss/corruption as additional bytes are read from the stream which may be part of the next message. The usual way of handling this correctly is to delete the CodedInputStream used to read the size and create a new one for reading the payload:
...
uint32_t size;
{
google::protobuf::io::CodedInputStream input(rawInput);
// Read the size.
if (!input.ReadVarint32(&size)) return false;
}
google::protobuf::io::CodedInputStream input(rawInput);
// Tell the stream not to read beyond that size.
google::protobuf::io::CodedInputStream::Limit limit =
input.PushLimit(size);
...
You can use getline for reading a string from a stream, using the specified delimiter:
istream& getline ( istream& is, string& str, char delim );
(defined in the <string> header)
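For instance, a small sketch of splitting a stream on ';' with that overload (just an illustration, not specific to protobuf):

#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::istringstream input("alpha;beta;gamma");
    std::string field;
    while (std::getline(input, field, ';')) {  // ';' is the delimiter
        std::cout << field << '\n';
    }
    return 0;
}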
I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active.
Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake.
Any ideas?
I use this code to keep my workstation from locking. It's currently set to move the mouse once every minute, but you could easily adjust that.
It's a hack, not an elegant solution.
import java.awt.*;
import java.util.*;

public class Hal {
    public static void main(String[] args) throws Exception {
        Robot hal = new Robot();
        Random random = new Random();
        while (true) {
            hal.delay(1000 * 60);
            // Pick a random on-screen position (nextInt(bound) avoids negative coordinates).
            int x = random.nextInt(640);
            int y = random.nextInt(480);
            hal.mouseMove(x, y);
        }
    }
}
On Windows, use the SystemParametersInfo function. It's a Swiss army-style function that lets you get/set all sorts of system settings.
To disable the screen shutting off, for instance:
SystemParametersInfo( SPI_SETPOWEROFFACTIVE, 0, NULL, 0 );
Just be sure to set it back when you're done...
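Since the question asks for a very minimal native helper that the Java app can spawn, here is a rough sketch built around that call (untested; on newer Windows versions SetThreadExecutionState is the more usual API for this, so treat this as illustrative only):

// keep_awake.cpp -- tiny helper a Java app could spawn.
// Build (MSVC): cl keep_awake.cpp user32.lib
#include <windows.h>
#include <cstring>

int main(int argc, char* argv[])
{
    // Pass "restore" to turn the power-off timer back on; anything else disables it.
    const BOOL activate = (argc > 1 && std::strcmp(argv[1], "restore") == 0) ? TRUE : FALSE;
    // SPI_SETPOWEROFFACTIVE toggles the monitor power-off timer, as described above.
    SystemParametersInfo(SPI_SETPOWEROFFACTIVE, activate, NULL, 0);
    return 0;
}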
A much cleaner solution is to use JNA to tap into the native OS API. Check your platform at runtime, and if it happens to be Windows then the following will work:
import com.sun.jna.Native;
import com.sun.jna.Structure;
import com.sun.jna.Structure.FieldOrder;
import com.sun.jna.platform.win32.WTypes.LPWSTR;
import com.sun.jna.platform.win32.WinBase;
import com.sun.jna.platform.win32.WinDef.DWORD;
import com.sun.jna.platform.win32.WinDef.ULONG;
import com.sun.jna.platform.win32.WinNT.HANDLE;
import com.sun.jna.win32.StdCallLibrary;
/**
* Power management.
*
* @see https://stackoverflow.com/a/20996135/14731
*/
public enum PowerManagement
{
INSTANCE;
@FieldOrder({"version", "flags", "simpleReasonString"})
public static class REASON_CONTEXT extends Structure
{
public static class ByReference extends REASON_CONTEXT implements Structure.ByReference
{
}
public ULONG version;
public DWORD flags;
public LPWSTR simpleReasonString;
}
private interface Kernel32 extends StdCallLibrary
{
HANDLE PowerCreateRequest(REASON_CONTEXT.ByReference context);
/**
* @param powerRequestHandle the handle returned by {@link #PowerCreateRequest(REASON_CONTEXT.ByReference)}
* @param requestType requestType is the ordinal value of {@link PowerRequestType}
* @return true on success
*/
boolean PowerSetRequest(HANDLE powerRequestHandle, int requestType);
/**
* @param powerRequestHandle the handle returned by {@link #PowerCreateRequest(REASON_CONTEXT.ByReference)}
* @param requestType requestType is the ordinal value of {@link PowerRequestType}
* @return true on success
*/
boolean PowerClearRequest(HANDLE powerRequestHandle, int requestType);
enum PowerRequestType
{
PowerRequestDisplayRequired,
PowerRequestSystemRequired,
PowerRequestAwayModeRequired,
PowerRequestMaximum
}
}
private final Kernel32 kernel32;
private HANDLE handle = null;
PowerManagement()
{
// Found in winnt.h
ULONG POWER_REQUEST_CONTEXT_VERSION = new ULONG(0);
DWORD POWER_REQUEST_CONTEXT_SIMPLE_STRING = new DWORD(0x1);
kernel32 = Native.load("kernel32", Kernel32.class);
REASON_CONTEXT.ByReference context = new REASON_CONTEXT.ByReference();
context.version = POWER_REQUEST_CONTEXT_VERSION;
context.flags = POWER_REQUEST_CONTEXT_SIMPLE_STRING;
context.simpleReasonString = new LPWSTR("Your reason for changing the power setting");
handle = kernel32.PowerCreateRequest(context);
if (handle == WinBase.INVALID_HANDLE_VALUE)
throw new AssertionError(Native.getLastError());
}
/**
* Prevent the computer from going to sleep while the application is running.
*/
public void preventSleep()
{
if (!kernel32.PowerSetRequest(handle, Kernel32.PowerRequestType.PowerRequestSystemRequired.ordinal()))
throw new AssertionError("PowerSetRequest() failed");
}
/**
* Allow the computer to go to sleep.
*/
public void allowSleep()
{
if (!kernel32.PowerClearRequest(handle, Kernel32.PowerRequestType.PowerRequestSystemRequired.ordinal()))
throw new AssertionError("PowerClearRequest() failed");
}
}
Then when the user runs powercfg /requests they see:
SYSTEM:
[PROCESS] \Device\HarddiskVolume1\Users\Gili\.jdks\openjdk-15.0.2\bin\java.exe
Your reason for changing the power setting
You should be able to do something similar for macOS and Linux.
Adding to scarcher2's code snippet above, but moving the mouse by only 1 pixel. I move the mouse twice so that some change occurs even when the pointer is at an extreme of the screen:
while(true){
hal.delay(1000 * 30);
Point pObj = MouseInfo.getPointerInfo().getLocation();
System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y);
hal.mouseMove(pObj.x + 1, pObj.y + 1);
hal.mouseMove(pObj.x - 1, pObj.y - 1);
pObj = MouseInfo.getPointerInfo().getLocation();
System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y);
}
I have a very brute-force technique of moving the mouse 1 point in the x direction and then back every 3 minutes.
There may be a more elegant solution, but it's a quick fix.
Wouldn't all the suggestions moving the mouse back and forth drive the user crazy? I know I'd remove any app that would do that as soon as I can isolate it.
Here is a complete batch file that generates the Java code, compiles it, cleans up the generated files, and runs the program in the background (a JDK is required on your machine).
Just save and run this as a .bat file (somefilename.bat). ;)
#echo off
setlocal
rem If JAVA_HOME is set, run from the :start_app labeled section below; else the program exits through the :end labeled section.
if not "[%JAVA_HOME%]"=="[]" goto start_app
echo. JAVA_HOME not set. Application will not run!
goto end
:start_app
echo. Using java in %JAVA_HOME%
rem writes below code to Energy.java file.
#echo import java.awt.MouseInfo; > Energy.java
#echo import java.awt.Point; >> Energy.java
#echo import java.awt.Robot; >> Energy.java
#echo //Mouse Movement Simulation >> Energy.java
#echo public class Energy { >> Energy.java
#echo public static void main(String[] args) throws Exception { >> Energy.java
#echo Robot energy = new Robot(); >> Energy.java
#echo while (true) { >> Energy.java
#echo energy.delay(1000 * 60); >> Energy.java
#echo Point pObj = MouseInfo.getPointerInfo().getLocation(); >> Energy.java
#echo Point pObj2 = pObj; >> Energy.java
#echo System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y); >> Energy.java
#echo energy.mouseMove(pObj.x + 10, pObj.y + 10); >> Energy.java
#echo energy.mouseMove(pObj.x - 10, pObj.y - 10); >> Energy.java
#echo energy.mouseMove(pObj2.x, pObj.y); >> Energy.java
#echo pObj = MouseInfo.getPointerInfo().getLocation(); >> Energy.java
#echo System.out.println(pObj.toString() + "x>>" + pObj.x + " y>>" + pObj.y); >> Energy.java
#echo } >> Energy.java
#echo } >> Energy.java
#echo } >> Energy.java
rem compile java code.
javac Energy.java
rem run java application in background.
start javaw Energy
echo. Your Secret Energy program is running...
goto end
:end
rem clean if files are created.
pause
del "Energy.class"
del "Energy.java"
I've been using pmset to control sleep mode on my Mac for a while now, and it's pretty easy to integrate. Here's a rough example of how you could call that program from Java to disable/enable sleep mode. Note that you need root privileges to run pmset, and therefore you'll need them to run this program.
import java.io.BufferedInputStream;
import java.io.IOException;
/**
* Disable sleep mode (record current setting beforehand), and re-enable sleep
* mode. Works with Mac OS X using the "pmset" command.
*/
public class SleepSwitch {
private int sleepTime = -1;
public void disableSleep() throws IOException {
if (sleepTime != -1) {
// sleep time is already recorded, assume sleep is disabled
return;
}
// query pmset for the current setting
Process proc = Runtime.getRuntime().exec("pmset -g");
BufferedInputStream is = new BufferedInputStream(proc.getInputStream());
StringBuffer output = new StringBuffer();
int c;
while ((c = is.read()) != -1) {
output.append((char) c);
}
is.close();
// parse the current setting and store the sleep time
String outString = output.toString();
String setting = outString.substring(outString.indexOf(" sleep\t")).trim();
setting = setting.substring(7, setting.indexOf(" ")).trim();
sleepTime = Integer.parseInt(setting);
// set the sleep time to zero (disable sleep)
Runtime.getRuntime().exec("pmset sleep 0");
}
public void enableSleep() throws IOException {
if (sleepTime == -1) {
// sleep time is not recorded, assume sleep is enabled
return;
}
// set the sleep time to the previously stored value
Runtime.getRuntime().exec("pmset sleep " + sleepTime);
// reset the stored sleep time
sleepTime = -1;
}
}
You can use the program Caffeine to keep your workstation awake. You could launch it via the open command on OS X.
On OS X, just spawn caffeinate. This will prevent the system from sleeping until caffeinate is terminated.
In Visual Studio create a simple form.
From the toolbar, drag a Timer control onto the form.
In the Init code, set the timer interval to 60 seconds (60000 ms.).
Implement the timer callback with the following code "SendKeys.Send("{F15}");"
Run the new program.
No mouse movement needed.
Edit: At least on my Army workstation, simply generating mouse and key messages programmatically isn't enough to keep my workstation logged in and awake. The early posters using the Java Robot class are on the right track: the Java Robot works at or below the OS's HAL (Hardware Abstraction Layer). However, I recreated and tested the Java/Robot solution and it did not work - until I added a Robot.keyPress(123) to the code.
To go with the solution provided by user Gili for Windows using JNA, here's the JNA solution for MacOS.
First, the JNA library interface:
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.platform.mac.CoreFoundation;
import com.sun.jna.ptr.IntByReference;
public interface ExampleIOKit extends Library {
ExampleIOKit INSTANCE = Native.load("IOKit", ExampleIOKit.class);
CoreFoundation.CFStringRef kIOPMAssertPreventUserIdleSystemSleep = CoreFoundation.CFStringRef.createCFString("PreventUserIdleSystemSleep");
CoreFoundation.CFStringRef kIOPMAssertPreventUserIdleDisplaySleep = CoreFoundation.CFStringRef.createCFString("PreventUserIdleDisplaySleep");
int kIOReturnSuccess = 0;
int kIOPMAssertionLevelOff = 0;
int kIOPMAssertionLevelOn = 255;
int IOPMAssertionCreateWithName(CoreFoundation.CFStringRef assertionType,
int assertionLevel,
CoreFoundation.CFStringRef reasonForActivity,
IntByReference assertionId);
int IOPMAssertionRelease(int assertionId);
}
Here's an example of invoking the JNA method to turn sleep prevention on or off:
public class Example {
private static final Logger _log = LoggerFactory.getLogger(Example.class);
private int sleepPreventionAssertionId = 0;
public void updateSleepPrevention(final boolean isEnabled) {
if (isEnabled) {
if (sleepPreventionAssertionId == 0) {
final var assertionIdRef = new IntByReference(0);
final var reason = CoreFoundation.CFStringRef.createCFString(
"Example preventing display sleep");
final int result = ExampleIOKit.INSTANCE.IOPMAssertionCreateWithName(
ExampleIOKit.kIOPMAssertPreventUserIdleDisplaySleep,
ExampleIOKit.kIOPMAssertionLevelOn, reason, assertionIdRef);
if (result == ExampleIOKit.kIOReturnSuccess) {
_log.info("Display sleep prevention enabled");
sleepPreventionAssertionId = assertionIdRef.getValue();
}
else {
_log.error("IOPMAssertionCreateWithName returned {}", result);
}
}
}
else {
if (sleepPreventionAssertionId != 0) {
final int result = ExampleIOKit.INSTANCE.IOPMAssertionRelease(sleepPreventionAssertionId);
if (result == ExampleIOKit.kIOReturnSuccess) {
_log.info("Display sleep prevention disabled");
}
else {
_log.error("IOPMAssertionRelease returned {}", result);
}
sleepPreventionAssertionId = 0;
}
}
}
}
Wouldn't it be easier to disable the power management on the server? It might be argued that servers shouldn't go into powersave mode?
This code moves the pointer to the same location where it already is so the user doesn't notice any difference.
while (true) {
Thread.sleep(180000);//this is how long before it moves
Point mouseLoc = MouseInfo.getPointerInfo().getLocation();
Robot rob = new Robot();
rob.mouseMove(mouseLoc.x, mouseLoc.y);
}
Run a command inside a timer, like pinging the server.
I'd just do a function (or download a freebie app) that moves the mouse around. Inelegant, but easy.
This will work:
public class Utils {
public static void main(String[] args) throws AWTException {
Robot rob = new Robot();
PointerInfo ptr = null;
while (true) {
rob.delay(4000); // Mouse moves every 4 seconds
ptr = MouseInfo.getPointerInfo();
rob.mouseMove((int) ptr.getLocation().getX() + 1, (int) ptr.getLocation().getY() + 1);
}
}
}
One simple way I use to avoid the Windows desktop auto-lock is to toggle Num Lock every 6 seconds.
Here is a Java program to toggle Num Lock:
import java.util.*;
import java.awt.*;
import java.awt.event.*;
public class NumLock extends Thread {
public void run() {
try {
boolean flag = true;
do {
flag = !flag;
Thread.sleep(6000);
Toolkit.getDefaultToolkit().setLockingKeyState(KeyEvent.VK_NUM_LOCK, flag);
}
while(true);
}
catch(Exception e) {}
}
public static void main(String[] args) throws Exception {
new NumLock().start();
}
}
Run this Java program in a separate command prompt. :-)