We tried using the JavaMail API for a while, but we sporadically hit the ArrayIndexOutOfBoundsException documented in this bug report:
http://kenai.com/bugzilla/show_bug.cgi?id=3155
It was marked as fixed, but we continued to see the error. In addition, JavaMail's performance when retrieving attachments was consistently very slow (on the order of 10 KB/s, even on the same network as the Exchange server), so we decided to try a different approach.
We found that we got significantly better performance using a MAPI client called MoonRug: http://www.moonrug.com/features.html
This worked substantially better than JavaMail, but we still see intermittent errors connecting to Exchange and downloading attachments (of sizes varying from 3 KB to 20 MB).
I am starting to question whether having Java call Exchange directly is even the right approach for us. Does anyone have a recommendation for a better way to do this? We are primarily a Java shop, but if there is a different tool or technology that can take emails from Exchange and put them on the file system somewhere for a Java process to read and process, that is an option for us.
The goal of this application is to take incoming mail in a certain inbox and save the messages and their attachments to a back-end system that users can interact with. We currently use a pull process, polling the inbox every minute. If there is some way to make this a push process, by integrating something into Exchange Server directly that takes email and automatically exports it onto a file system, that would also be an option for us.
You may want to take a look at DavMail. It's Java based, and it does access Exchange. It may or may not provide you with code you can use. Otherwise, it may allow you to use it as a gateway between your JavaMail-based app and Exchange.
Related
For my studies, I have to complete a project involving programming in Java. I have been learning Java for a few months and would like to make something interesting (not an application for a bank, library, car rental, etc.). I'm wondering whether it is possible to create a real-time web game/application where you can type something and your friend on another laptop sees the message and can send you a response (using the internet/Bluetooth)? If yes, what should I look for to find information about this type of application?
Yes, creating something like this is definitely possible. It really just depends on exactly how you want to implement it (it sounds like you're still not sure EXACTLY what you want, as your description is vague).
What I mean by that is: what do you want as your medium? Would you like the two users to be on their laptops, communicating through their web browsers? Or would you rather have a standalone application that accomplishes this? If so, what operating systems will you support? Will it have a graphical user interface, or will it run on the command line?
Let's assume that you want to develop a standalone Windows application that allows the users to exchange messages. Keep in mind that doing this gracefully would involve users logging into your system with authentication, a fairly sophisticated GUI, and lots of encryption for privacy reasons. That being said, a very basic implementation of this could probably be as follows:
You'd have an app that runs locally on the user's machine, and also some sort of database backend that your app communicates with. I'd recommend using a MySQL database hosted on Amazon's RDS (here's a tutorial that got me using Java's JDBC library with an Amazon RDS database: https://www.youtube.com/watch?v=2i4t-SL1VsU).
Rather than worry about a GUI, I'd suggest getting your prototype working on the command line first. Your app could perform the following steps when booting up:
Ask the user to input the word SEND followed by a message to send messages ("SEND %MESSAGE%"), or "RECEIVE" to receive messages.
If "SEND %MESSAGE%" is input, add the message to the database.
If "RECEIVE" is input, query the database for all message entries and output them to the user.
You can see that this would accomplish a very crude version of what you asked for, and the devil is in the details. I'd suggest building something very simple like this, and then adding functionality by tweaking and improving features one at a time.
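The loop described above can be sketched in a few lines. This is a minimal illustration, with an in-memory list standing in for the MySQL/RDS table; a real prototype would issue JDBC INSERT/SELECT statements instead, as the comments note.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Scanner;

// Minimal sketch of the SEND/RECEIVE loop described above. The in-memory
// list is a stand-in for the database table; swap it for JDBC calls once
// the command parsing works.
public class ChatPrototype {
    // Stand-in for the messages table.
    static final List<String> messages = new ArrayList<>();

    // Parses one line of user input and returns the text to print back.
    static String handle(String input) {
        if (input.toUpperCase(Locale.ROOT).startsWith("SEND ")) {
            messages.add(input.substring(5));   // would be: INSERT INTO messages ...
            return "stored";
        } else if (input.equalsIgnoreCase("RECEIVE")) {
            return String.join("\n", messages); // would be: SELECT body FROM messages
        }
        return "unknown command";
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Type SEND <message> or RECEIVE:");
        while (in.hasNextLine()) {
            System.out.println(handle(in.nextLine()));
        }
    }
}
```

Once this works on the command line, replacing the list with a JDBC-backed table is a localized change inside handle().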
I'm implementing a web based chat client using Openfire API with Java.
Things are setup and running smoothly, however, I'm a little confused as to where to store chat history.
From what I've observed, Desktop based clients typically store chat history on the client side filesystem using formats such as xml, txt etc.
On the server side, I have the following options:
Using plain text file
Using JSON in a document store (MongoDB, HBase, etc.)
Using database
But I would like to know which of the above options (or any others you can suggest) is best in terms of speed and performance.
Thanks.
As mentioned in isnot2bad's comment, you can add server-side message archiving through the Openfire Monitoring Plugin. Once you have that set up, you can try using XEP-0136 to fetch archived one-to-one chat messages over XMPP.
Openfire Plugins
Unfortunately, I have had nothing but trouble when trying to get messages out of the archive using the stanzas defined in XEP-0136. If you look around the OF support forum, you will find other people also running into problems with this plugin. For example, the plugin will not return the list of conversations in the correct order, it will not filter the list of conversations or messages by the date specified in the start attribute, etc. To say the least, the plugin could use some work. As a workaround, I've left the plugin in place to take care of inserting the messages into the database, but I've written a custom AJAX solution for retrieving the archived messages: I just pull them directly out of OF's database and return them in a JSON object to my client-side JavaScript.
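As an illustration of that workaround, here is a rough sketch of turning rows pulled from Openfire's archive into a JSON string by hand. The table and column names in the comment (ofMessageArchive, fromJID, toJID, body) are assumptions based on the Monitoring Plugin's schema, so verify them against your own database, and prefer a real JSON library (Jackson, Gson) in production.

```java
import java.util.List;

// Sketch: read archived messages straight out of Openfire's database and
// hand them to the browser as a JSON array. The SQL in the comment below
// is illustrative; check the plugin's actual schema on your install.
public class ArchiveToJson {

    // Turns one row into a JSON object. A real servlet would loop over a
    // JDBC ResultSet from something like:
    //   SELECT fromJID, toJID, body FROM ofMessageArchive WHERE ...
    static String rowToJson(String from, String to, String body) {
        return "{\"from\":\"" + escape(from) + "\",\"to\":\"" + escape(to)
                + "\",\"body\":\"" + escape(body) + "\"}";
    }

    // Joins rows (each {from, to, body}) into a JSON array string.
    static String toJsonArray(List<String[]> rows) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < rows.size(); i++) {
            if (i > 0) sb.append(",");
            String[] r = rows.get(i);
            sb.append(rowToJson(r[0], r[1], r[2]));
        }
        return sb.append("]").toString();
    }

    // Minimal JSON string escaping; a proper library handles more cases.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```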
XEP-0136 is due for replacement; it has always been overly complicated. XEP-0313 seeks to replace it, but I haven't found any implementations for OF. Good luck.
I'm well aware of how I can communicate with an outside server using Android. Currently I'm using the relatively new AppEngine Connected Android project to do it, and everything works fairly well. The only thing that I'm concerned with is handling server downtime (if any) and internet loss on the client side.
With that in mind, I have the following questions on an implementation:
What is the standard technique for caching values in a SQLite database on Android while still trying to constantly receive data from the web application?
How can I be sure that I have the most up-to-date information when that information is available?
How would I wrap this logic (determining which source to pull from and whether the data is recent) into a ContentProvider?
Is caching the data even a good idea? Or should I simply assume that if the user isn't connected to the internet, the information isn't readily available?
Good news! There's an Android construct built just for this kind of thing: it's called a SyncAdapter. This is how all the Google apps do database syncing. Plus, there's a great Google I/O video all about using it! It's actually one of my favorites. It gives you a nice, really high-level overview of how to keep remote resources synced with your server using something called REST.
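The core decision a sync layer makes can be sketched without any Android APIs: prefer a fresh network result, and fall back to the local cache when the fetch fails. In this framework-free sketch, a Map stands in for the SQLite table and a Function for the HTTP call; it illustrates the idea only, not the actual SyncAdapter API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Framework-free sketch of the cache-fallback idea behind syncing:
// take fresh data when the network delivers, serve the cached copy
// when it doesn't. The Map and Function are placeholders for the
// SQLite table and the HTTP client, respectively.
public class CachedFetcher {
    final Map<String, String> cache = new HashMap<>();
    final Function<String, Optional<String>> network;

    CachedFetcher(Function<String, Optional<String>> network) {
        this.network = network;
    }

    // Returns the freshest value available for the key, or empty if we
    // have neither network access nor a cached copy.
    Optional<String> fetch(String key) {
        Optional<String> fresh = network.apply(key);
        if (fresh.isPresent()) {
            cache.put(key, fresh.get());            // keep the cache up to date
            return fresh;
        }
        return Optional.ofNullable(cache.get(key)); // offline: serve stale data
    }
}
```

A SyncAdapter does the "keep the cache up to date" half for you on a schedule, so the UI only ever reads the local store.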
We have a service running on 'n' hosts behind a VIP. When a fault occurs for a specific request, we may want to find the reason by looking at the logs on the host where the fault occurred. Since the request could go to any host, when it comes to tracking down logs we need to know which host the fault originated from.
One solution is to store the host name in the database of our service along with other information.
The alternative is pushing the logs onto a common store and tracing them there.
I personally feel that if we go with the first approach, we might end up adding many such debugging-related attributes to the application database, thereby polluting it. However, the second option is also not easy to implement and incurs some overhead. Moreover, knowing which host the fault occurred on does not help much, except when the fault is due to some hardware-specific issue.
What do you guys suggest?
Without knowing more about your infrastructure, it's hard to be precise, but here are some general points of view.
I don't like using databases for storing application logs; if the database falls over, you wouldn't be able to log it! It's also not really relational data, and you can't use the monitoring tools that are available for other solutions.
My recommendation is to use your operating system's built in event logging solution; most logging frameworks support this out of the box. On Windows, that's the event log; on *nix there's the syslog system. Logging should be quick, cheap, and bullet proof - that's what you get from the OS tools.
The second question is then how you use those logs for troubleshooting and monitoring. There are lots of tools for doing this, though mostly aimed at system administrators rather than developers. Microsoft has MOM, and there's Tivoli and Big Brother, as well as a whole bunch of open-source tools. I'd use those rather than build your own solution.
The key point is: logging should be fast, cheap, and robust; the analysis and monitoring stuff should be entirely separate from your application logic, so you can reuse the tools and processes across multiple projects.
Storing the hostname should be quite cheap, I guess. I understand you are appending logs to a DB?
You could also store the PID of each process; that helps when you have multiple processes running on the same host. The combination hostname/PID/timestamp will let you uniquely identify a process.
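As a sketch, a hostname/PID/timestamp tag like the one suggested above could be built as follows. The "host/pid/millis" layout is just an example format, and ProcessHandle requires Java 9 or later.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Builds a hostname/pid/timestamp tag suitable for prefixing log lines,
// so a fault can be traced back to the exact process that produced it.
public class ProcessTag {

    // Pure formatting helper; the slash-separated layout is arbitrary.
    static String tag(String host, long pid, long timestampMillis) {
        return host + "/" + pid + "/" + timestampMillis;
    }

    // Gathers the live values for the current process.
    static String currentTag() {
        String host;
        try {
            host = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            host = "unknown-host";
        }
        long pid = ProcessHandle.current().pid(); // Java 9+
        return tag(host, pid, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        System.out.println(currentTag());
    }
}
```

Most logging frameworks can inject such a tag automatically (e.g. via a pattern layout), which avoids touching the application database at all.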
I have Googled my butt off, and I can't find anything on this topic.
I am trying to create a download client using Java, and I have figured out how to download files with Java, but I want to accelerate the download speed. I know how this works (opening several connections to the download server), but how can I achieve this?
I am looking for either some detailed explanation of such an algorithm or some code examples.
This is only possible if the server supports range requests. You can determine that by sending a HEAD request and checking whether the HTTP response headers contain Accept-Ranges: bytes. If that is the case, then you can spawn several threads, each of which downloads a part of the file using the Range header. URLConnection and ExecutorService are helpful for this.
Keep in mind that you also need to take the limitations of your own machine's thread count and network bandwidth into account.
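A rough sketch of the approach described above: probe the server with a HEAD request, and if it advertises Accept-Ranges: bytes, split the file into byte ranges and fetch them on a thread pool. Error handling and the actual writing of part files are omitted, and the connection code is a sketch rather than a finished downloader.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of a multi-connection download: check range support, split the
// content length into Range header values, fetch parts concurrently.
public class RangeDownloader {

    // Splits [0, contentLength) into `parts` Range header values.
    static List<String> splitRanges(long contentLength, int parts) {
        List<String> ranges = new ArrayList<>();
        long chunk = contentLength / parts;
        for (int i = 0; i < parts; i++) {
            long start = i * chunk;
            long end = (i == parts - 1) ? contentLength - 1 : start + chunk - 1;
            ranges.add("bytes=" + start + "-" + end);
        }
        return ranges;
    }

    // HEAD probe: does the server advertise byte-range support?
    static boolean supportsRanges(URL url) throws IOException {
        HttpURLConnection head = (HttpURLConnection) url.openConnection();
        head.setRequestMethod("HEAD");
        return "bytes".equalsIgnoreCase(head.getHeaderField("Accept-Ranges"));
    }

    // Fetches each range on its own pool thread; part assembly omitted.
    static void download(URL url, long length, int parts) {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        for (String range : splitRanges(length, parts)) {
            pool.submit(() -> {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestProperty("Range", range);
                // ... read conn.getInputStream() and write this part to disk
                return null;
            });
        }
        pool.shutdown();
    }
}
```

The part files then need to be concatenated in range order, and you should fall back to a single plain GET when supportsRanges() returns false.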
Related questions:
Reading first part of file using HTTP
How to use URLConnection to fire and handle HTTP requests
Make simultaneous web requests in Java
BalusC described the trick; here are references to some source code you can review and start with:
JDownLoader[Java]: http://svn.jdownloader.org/projects/show/jd
Free Download Manager[CPP]: http://freedownload.svn.sourceforge.net/viewvc/freedownload/
@BalusC Nice work!
I'm a bit unclear: are you writing a Java client that will talk to a server (perhaps a Java servlet?), so that you control both sides of the data transfer? If so, you can do nearly anything you want. Java has java.util.zip, which has functions to do the compression.
If you want to download four (or N) files at once, just start up N threads and pass the HTTP requests to the server in parallel. This may not actually improve things, depending on link speed, network congestion, etc.
Writing your own client and making it properly thread-safe is a whole lot of work, which is why people just use the Apache HTTP client code. It's rock solid.