I'm building a Java application that will run on a battery-powered, cellular-enabled device (not a mobile phone, by the way) and that needs to send commands to a server.
These commands are in the form of JSON objects, so they can easily be serialized and deserialized.
As internet connectivity may not be completely reliable, and the device's battery may run out (which could, in some cases, cut power without warning), I need a way of saving my commands to disk.
The commands can be 'worth' a few euros apiece, so it's important that I take every precaution (within certain bounds, of course) to make sure no commands are lost. Sending a command twice is not a problem, as every command is tagged with a GUID and my server will make sure duplicates are ignored. The queue may contain up to a thousand commands, but most of the time it will be empty.
What I'm actually looking for is a Queue-like (FIFO) object with a backing file store that is made to survive an instant crash. I need to be able to peek at the next in line, and remove it after processing is finished.
Up to now, I've been working with MapDB 3.0, but the documentation is a bit confusing as to how to create a queue-like object, and besides, it seems like overkill for what I'm trying to achieve.
You could have a directory of files, one file per message. The file name could be a timestamp or a name that records the order. A directory with 1000 files should still perform OK.
Once you close the file, it should be persisted to disk, although exactly how safe any operation is depends on the device and how its storage is implemented.
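A minimal sketch of that approach, assuming java.nio.file is available on the device (the class and the file-naming scheme are invented for illustration, not a library API):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.Optional;
import java.util.UUID;
import java.util.stream.Stream;

// One file per command; lexically sortable names give FIFO order.
public class FileCommandQueue {
    private final Path dir;

    public FileCommandQueue(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Persist one command; SYNC forces the bytes to disk so a sudden
    // power loss cannot take them with it.
    public void add(String json) throws IOException {
        String name = String.format("%013d-%s.json",
                System.currentTimeMillis(), UUID.randomUUID());
        Path tmp = dir.resolve(name + ".tmp");
        Files.write(tmp, json.getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE_NEW, StandardOpenOption.SYNC);
        // Atomic rename: a half-written file never appears under its final name.
        Files.move(tmp, dir.resolve(name), StandardCopyOption.ATOMIC_MOVE);
    }

    // Look at the oldest command without removing it.
    public Optional<Path> peek() throws IOException {
        try (Stream<Path> files = Files.list(dir)) {
            return files.filter(p -> p.toString().endsWith(".json"))
                        .sorted().findFirst();
        }
    }

    // Delete a command once the server has acknowledged it.
    public void remove(Path command) throws IOException {
        Files.deleteIfExists(command);
    }
}

On startup, any leftover .tmp files are garbage from an interrupted write and can simply be deleted.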
I've searched StackOverflow and elsewhere and haven't found anything appropriate for my needs.
I don't want a polling solution; it needs to be event-driven, receiving every such event in (near) real time. The only data I need from the event object is the path/name of the modified (or created, or deleted) file.
Please note, I don't want to receive events for a specific file or directory; I want to receive events for an entire volume (e.g. "C:"; network drives need not be supported!).
Ideally I'm looking for a Java API, but I suspect none exists, so I'm happy to write wrappers to C/C++.
NB, if this is possible from just the Windows command prompt or WMI, that would be great too!
You need to use FindFirstChangeNotification. Specify the root of the volume, and specify that you want to be notified of changes to subdirectories too. Then wait on the notification handle, and read the changes with ReadDirectoryChangesW.
See this example from Microsoft:
Obtaining Directory Change Notifications
(Note, I found that with 10 seconds of searching for "windows file watcher"; you may find some of the other links there helpful if you want a non-C/C++ solution.)
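If you would rather stay in Java, java.nio.file.WatchService (Java 7+) is built on ReadDirectoryChangesW on Windows, and the JDK-internal, Windows-only modifier com.sun.nio.file.ExtendedWatchEventModifier.FILE_TREE makes a single registration cover the whole subtree. A rough sketch, assuming an Oracle/OpenJDK runtime:

import com.sun.nio.file.ExtendedWatchEventModifier;
import java.nio.file.*;

public class VolumeWatcher {
    public static void main(String[] args) throws Exception {
        Path root = Paths.get("C:\\");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        // FILE_TREE makes the registration recursive; internally it maps to
        // ReadDirectoryChangesW with bWatchSubtree = TRUE.
        root.register(watcher,
                new WatchEvent.Kind<?>[] {
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY,
                        StandardWatchEventKinds.ENTRY_DELETE },
                ExtendedWatchEventModifier.FILE_TREE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until events arrive
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                    continue; // internal buffer overflowed; some events were lost
                }
                // context() is the path relative to the registered root
                System.out.println(event.kind() + ": "
                        + root.resolve((Path) event.context()));
            }
            key.reset();
        }
    }
}

Be aware that under heavy churn the buffer can overflow (you get OVERFLOW events), so this gives near-real-time notification rather than a guaranteed complete journal; the NTFS USN change journal is the heavier-duty alternative.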
Call SHChangeNotifyRegister with the last parameter set to NULL.
Does anybody know where I can get the current plain-and-simple, no-nonsense UTC time accurately and really quickly? Our system clocks are variable due to a number of factors, but we need a guiding light to run one of our applications. I was wondering if there was a free service where I can get the time via HTTP without much overhead (i.e. I'd prefer not to scrape it off somewhere like a Google search results page full of other data, because the application would be looking it up quite often). Does anyone know a reliable service for this?
It depends on how accurate you want to be and how much you trust the source.
Possibilities include:
http://www.timeapi.org/utc/now
http://timezonedb.com/api
http://json-time.appspot.com/time.json
It really depends on how accurate you want the time to be. If it is to synchronise apps/servers, then an HTTP request to an external server may not be the best approach, as you have to take network latency into account; i.e. the time returned may be in the past if the round trip is slowed down by the network, especially if you're going via a proxy. And if the apps are running on different machines with variable network latency, the times will not be synced.
An alternative approach is to ensure that the machines you are running on have their system clocks synced. A common solution for this is NTP (Network Time Protocol), which lets servers keep their clocks accurate.
Here is a resource for NTP configuration on Linux; I am sure Google will find you more.
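If you want to query NTP from inside the application rather than configure the OS, a library such as Apache Commons Net can do it. A minimal sketch, assuming Commons Net 3.x on the classpath (the server name and timeout are arbitrary choices):

import java.net.InetAddress;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class NtpClock {
    public static void main(String[] args) throws Exception {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout(3000); // milliseconds
        try {
            client.open();
            TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
            info.computeDetails(); // computes clock offset and round-trip delay
            Long offsetMs = info.getOffset(); // how far the local clock is off
            long utcNow = System.currentTimeMillis()
                    + (offsetMs == null ? 0 : offsetMs);
            System.out.println("Estimated UTC millis: " + utcNow);
        } finally {
            client.close();
        }
    }
}

Unlike a bare HTTP fetch, the offset calculation compensates for the round-trip latency the answer above warns about.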
I am not sure if this is the correct forum to ask this, but I am not sure where else to ask either. So here is my question:
What does "deep ping" mean? I tried Google but still did not get any information about it. Also, what does "deep ping" mean in a web servlet's context? Thanks.
I'm not sure it's the "official" definition, if there is such a thing, but I've heard "deep ping" used for functionality that allows you to (in contrast to a regular ping) send a message to the server that passes through as much of the web stack as possible before returning an "ok" response.
As an example, you can make a ping transaction that passes from the network straight down the stack to the database, where it does a dummy select to read an "ok" from a dummy table and returns that result. That allows you to (in contrast to a "normal" ping that tests only the network) have confidence that all layers of the application, including the database, are actually alive.
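In servlet terms, that could look something like the hypothetical health-check endpoint below. The DataSource wiring and the SELECT 1 probe are assumptions (Oracle, for instance, would need SELECT 1 FROM DUAL):

import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Returns 200 only if the request makes it through the stack to the database.
public class DeepPingServlet extends HttpServlet {
    private DataSource dataSource; // assume this is looked up via JNDI in init()

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) { // dummy query
            if (rs.next()) {
                resp.setStatus(HttpServletResponse.SC_OK);
                resp.getWriter().print("ok");
                return;
            }
        } catch (Exception e) {
            // fall through and report the stack as unhealthy
        }
        resp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
        resp.getWriter().print("fail");
    }
}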
- Ping is one of the most basic and useful network commands. It sends a request to a networked computer and waits for a response. It's easy to ping a single PC, but it's a pain to ping dozens (or even hundreds) of them.
- The process of pinging an entire subnet, which can contain any number of PCs, is known as a deep ping. Network scanners are usually used to do this.
We have a website which generates megabytes to terabytes of data that needs to be mined. What technologies should we use to process terabytes of data in real time? Hadoop and Cassandra are good for batch processing, but not for real time.
Real-time here means processing the data as it happens and showing reports on it.
Any ideas or suggestions?
Have you looked into the Storm project? It's used by Twitter; it's like real-time Hadoop.
We use it for one of our stream-processing projects, and it's awesome: documentation, development, deployment, and scalability are all excellent. We recently ran it at 20K messages/sec with processing (storing in Cassandra, modifying and broadcasting, calculating means), and it worked reliably, like magic. Definitely worth giving it a shot. The mailing list is very friendly, though I rarely had to ask a question.
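For a flavour of what a topology looks like, here is a rough sketch against the Storm 1.x Java API; the spout, bolt, and field names are invented for illustration:

import java.util.Map;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class SketchTopology {

    // Hypothetical spout: in reality it would read from the site's event stream.
    public static class EventSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
            this.collector = collector;
        }
        public void nextTuple() {
            collector.emit(new Values("pageview", System.currentTimeMillis()));
        }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            d.declare(new Fields("event", "ts"));
        }
    }

    // Hypothetical bolt: counts events as they stream past.
    public static class CountBolt extends BaseBasicBolt {
        private long count;
        public void execute(Tuple input, BasicOutputCollector collector) {
            count++; // a real topology would aggregate and emit downstream
        }
        public void declareOutputFields(OutputFieldsDeclarer d) {
            // terminal bolt: emits nothing
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("events", new EventSpout(), 1);
        builder.setBolt("count", new CountBolt(), 4).shuffleGrouping("events");
        // LocalCluster runs the whole topology in-process for development.
        new LocalCluster().submitTopology("sketch", new Config(), builder.createTopology());
    }
}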
You can process TBs of data with the same technologies as you can process 1 MB of data, but it will take longer.
I don't see how you intend to use the data in "real time", and I suspect you mean "real world".
If you mean quickly, then you need to summarise the data for human consumption; you can only present kilobytes or megabytes of information to the user at once.
Using memory-mapped files can make this more efficient if you need to load the data all at once. This can be used to process tens of millions of records per second.
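A sketch of the memory-mapped approach in plain Java; the file name and record layout are invented, and note that a single mapping is limited to about 2 GB, so a TB-scale file needs a series of mapped windows:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedScan {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("events.bin", "r");
             FileChannel ch = raf.getChannel()) {
            // The OS pages the file in on demand: no read() calls,
            // no heap-sized buffers.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long sum = 0;
            while (buf.remaining() >= 8) {
                sum += buf.getLong(); // assume a flat sequence of 64-bit records
            }
            System.out.println("sum = " + sum);
        }
    }
}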
Check this page: http://hadoop.apache.org/
It lists related frameworks/libraries for working with large amounts of data in a distributed environment.
I am looking for a library that can run on Java ME (Foundation Profile 1.1, CDC) and allows me to basically do something along the lines of
FILE OF type;
in Pascal.
Background: I need a largish (approx. 100 MB) set of around 500,000 records for quick lookups by a known index value. Do I really have to write this myself? Databases like Derby are way too big and bring lots of features (stored procedures, anyone?) I do not need.
Ideally I would just like to define a class with a few fields based on primitive types and Strings as a value-holder object, and persist these in a file that I could manually recover should the need arise. That's why I am not too keen on serialization. In the past I have fought several occasions of corrupted binary data files which could not be recovered at all.
Your biggest problem here is establishing a correspondence between field names and columns in the file, as you really shouldn't assume that the class layout matches the field ordering in the source file.
If the file were to contain a header row then it's a simple matter of using reflection/introspection and shouldn't take more than a day to implement yourself.
Alternatively, you'll have to use an annotation of some sort to specify, for each field, where it appears in the file.
Have you instead considered alternative text serialization methods, such as CSV, JSON, or XML via XStream? These avoid the risks of binary corruption and would get you up and running faster, but might also impose a higher memory footprint, which could be an issue as you're targeting a mobile device.
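For what it's worth, Pascal's FILE OF can also be approximated directly with java.io.RandomAccessFile and fixed-length records, which keeps the on-disk layout simple enough to inspect and repair by hand. A rough sketch with an invented three-field record (RandomAccessFile is in the CDC java.io subset, but verify the details on your VM):

import java.io.IOException;
import java.io.RandomAccessFile;

// Pascal-style "FILE OF record": fixed-length records addressed by index.
public class RecordFile {
    // Invented layout: int id (4) + long value (8) + 20-byte name = 32 bytes
    private static final int RECORD_SIZE = 32;
    private static final int NAME_BYTES = 20;
    private final RandomAccessFile file;

    public RecordFile(String path) throws IOException {
        file = new RandomAccessFile(path, "rw");
    }

    public void write(long index, int id, long value, String name) throws IOException {
        file.seek(index * RECORD_SIZE);
        file.writeInt(id);
        file.writeLong(value);
        byte[] fixed = new byte[NAME_BYTES]; // pad/truncate to fixed width
        byte[] src = name.getBytes("US-ASCII");
        System.arraycopy(src, 0, fixed, 0, Math.min(src.length, NAME_BYTES));
        file.write(fixed);
    }

    public String readName(long index) throws IOException {
        file.seek(index * RECORD_SIZE + 12); // skip id and value
        byte[] fixed = new byte[NAME_BYTES];
        file.readFully(fixed);
        return new String(fixed, "US-ASCII").trim();
    }

    public void close() throws IOException {
        file.close();
    }
}

Because every record sits at index * RECORD_SIZE, a lookup by a known index is a single seek, and a corrupted record damages only its own 32 bytes.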
After looking around for quite some time, I finally settled on xBaseJ from SourceForge. It relies on java.nio, which is normally not included in the Java ME CDC profile, but we had a contractor port the relevant parts to the mobile J9 VM. Armed with this, we are now building our application on top of dBASE III compatible files. Apart from being reasonably fast, even on the mobile platform, this gives us access to a plethora of tools that can handle the format, without having to teach non-tech folks a JDBC-based DB admin tool they do not feel comfortable with.
A whole eBook, "The Minimum You Need To Know About xBaseJ", has also recently been released and is available for free from the project's website.