I have a Document Management System that stores documents in a database. I'm looking for a simple way (nothing too big or complicated to implement protocol-wise) to expose the database as a drive in Windows (so it can be browsed and manipulated by any Windows program, like Explorer or Office).
What I have in mind is providing some kind of network share that can be mounted as a drive in Windows. Unfortunately, all the candidate network protocols for file sharing seem to require substantial effort to implement.
I first considered CIFS, but after reading up on it I quickly decided that it's BY FAR too complicated for me to implement. My next thought was NFS, but it's not supported natively by Windows (XP) and also seems quite complicated to implement.
FTP might be an option, but implementing an FTP server is again much more complicated than I naively expected.
There might be a simpler protocol I haven't thought of.
Is there anything I can (ab)use easily for this purpose?
Ideally I want some kind of (pure Java) premade server where I could easily strip out the part that accesses a local file system and replace it with my own code accessing the database, OR a protocol simple enough that I can implement it myself reasonably quickly and, more importantly, in a compatible and reliable way.
First, you need to make the correct bindings between your DMS and the database, and define/write an API in your program that provides easy access to the needed resources. Writing an API will let you maximize the interoperability of your solution with other services or plugins you might want to add later.
After that, you should expose these services to clients through a permissive and robust protocol such as WebDAV, which stands for Web-based Distributed Authoring and Versioning. It is natively supported by Windows, so you can interact with every service implementing WebDAV (Windows, most web browsers ...), and you can of course also mount this kind of service as a virtual drive. Plus, it is also supported on Linux and Mac OS X, natively I believe, but I'm not sure. In fact, WebDAV is an extension to HTTP, and is described in RFC 4918.
Basically, any HTTP library for Java that handles server-side request/response processing could be used to implement WebDAV, if you have some time and want to do it yourself.
To implement WebDAV with an acceptable amount of effort, I searched for some Java libraries on the web and found these; it's now up to you to decide which one really fits your needs:
Milton http://milton.ettrema.com/index.html
A list of some of the existing implementations of the WebDAV protocol in open source projects on WebDAV.org (you can find some pretty amazing projects there, such as the Jakarta Slide project, although I think it is no longer maintained, and other projects/libraries that show how important WebDAV is today). http://www.webdav.org/projects/
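Whichever library you choose, the work on your side mostly boils down to replacing the "read/write a local file" callbacks with database access. A rough sketch of what such an adapter could look like (the class name and the documents table schema below are invented for illustration and are NOT Milton's API; plain JDBC is assumed):

    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical adapter: a WebDAV framework will essentially ask you for
    // callbacks like these instead of touching the local file system.
    public class DatabaseDocumentStore {

        private final Connection db; // connection to your existing DMS database

        public DatabaseDocumentStore(Connection db) {
            this.db = db;
        }

        // List the documents in a folder, i.e. the members of a WebDAV collection.
        public List<String> listChildren(String folderPath) throws Exception {
            List<String> names = new ArrayList<String>();
            PreparedStatement ps = db.prepareStatement(
                    "SELECT name FROM documents WHERE folder = ?"); // assumed schema
            try {
                ps.setString(1, folderPath);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
            } finally {
                ps.close();
            }
            return names;
        }

        // Stream a document's bytes, i.e. what a WebDAV GET would call.
        public void sendContent(String folderPath, String name, OutputStream out) throws Exception {
            PreparedStatement ps = db.prepareStatement(
                    "SELECT content FROM documents WHERE folder = ? AND name = ?");
            try {
                ps.setString(1, folderPath);
                ps.setString(2, name);
                ResultSet rs = ps.executeQuery();
                if (rs.next()) {
                    out.write(rs.getBytes("content"));
                }
            } finally {
                ps.close();
            }
        }
    }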
Firstly, cheers to all PROGRAMMERS [ Today = Programmers' Day :) ]
Secondly,
I'm working on a project where the specifications require using a server as a front end and an application as the back end. The project is an advanced smart home system. The server will handle commands coming from the client over the internet (say, like a remote control from outside the house) and send them (through a communication channel) to the application (planning on using a Java application), which will handle the main logic like controlling hardware (lights ...), reading from a microphone (local mic), and accessing a database to act as a speech recognition system (offline).
Now I'm still in the planning phase and I'm not sure which technologies are best for this project. I'm thinking of using Node.js or Apache as the server, a Java application as the back end, and any SQL database for the application's SRS.
I hope this illustration demonstrates clearly how the system works:
The main question is:
What is the best way to make the Java application communicate with the server (the communication channel [must be bidirectional])?
And do you recommend a specific server other than the mentioned ones for this job?
What crossed my mind so far:
1- JSP and servlets (making the server the application too). But I don't want the server to handle the offline stuff, and I'm not sure whether Java servlets can access hardware interfaces. I also want the server kept separate from making critical decisions (a different layer for security reasons, and since it won't be used as frequently as the local [offline] system).
2- Communication channel :
A- A shared file, but it's a bad idea since I don't want the application to have to check from time to time whether the file contents changed (command received) or not (excessive operations).
B- Inter-process communication through a port (socket communication) seems like the best solution, but I don't know how that would turn out in terms of operation cost and communication errors.
OS used : Linux Raspbian
EDIT:
I'm sure ZMQ+Apache is good enough for this task, but how does it compare to web services (like SOAP)? Would web services be a better solution in terms of standard implementation and security?
All related suggestions are welcomed, TQ
ZeroMQ is great for internal communications, and for similar communication problems in general.
For your case specifically, I can see that ZeroMQ would be the best fit.
Reasons:
Your offline server has to be agnostic of the web solution.
Communication can be reliable and bidirectional, and other patterns are possible as well (pub/sub, req/rep, etc.).
Restarting either side does not require restarting the sockets (connection) on the other side, as messages are queued.
It is possible to scale not just on the same hardware, but also across the local area network or even over the internet.
Big community of support. It might look a bit hard to get into, but in reality it is dead simple: just go through the examples, and once the concept is understood it is very easy and neat to work with.
ZeroMQ has bindings for most popular languages, including Java and Node.js.
Considerations:
You need to think through the packets and the data that will be sent, so popular data formats like XML or JSON are a good way to go.
Responsibilities of the different services - make sure they are not too dependent on each other. Or, if the main offline server is the core of the system, make sure it does not depend on the web-facing service, so that the web face can be removed/replaced/improved, etc.
A few more points to think about:
Why Java, and what about a modular approach? For example, if you want to expand and scale - add more sensors to the smart home solution - then having one giant application means changing it every time, which is harder to maintain, as is serving different clients with their own needs. Think in a modular way: some core functionality for the offline stuff, plus many aggregator processes that talk to the different sensors. This makes it easier to support different setups and environments, as well as to maintain the system as a whole by improving independent components.
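To make that concrete, here is a minimal REP (reply) socket sketch for the offline Java side, using the ZeroMQ Java binding (JeroMQ/jzmq); the tcp endpoint and the JSON command format are placeholders for whatever protocol you define:

    import org.zeromq.ZMQ;

    // Offline Java service holding a REP socket; the web-facing layer (Node.js or
    // something behind Apache) holds the matching REQ socket and sends one JSON
    // command per request. Endpoint and message format are placeholders.
    public class SmartHomeBackend {
        public static void main(String[] args) {
            ZMQ.Context context = ZMQ.context(1);
            ZMQ.Socket socket = context.socket(ZMQ.REP);
            socket.bind("tcp://*:5555");

            while (!Thread.currentThread().isInterrupted()) {
                String command = socket.recvStr();   // e.g. {"device":"light1","action":"on"}
                String reply = handle(command);      // your hardware/DB logic goes here
                socket.send(reply);                  // response travels back to the web layer
            }

            socket.close();
            context.term();
        }

        private static String handle(String command) {
            // Placeholder: parse the JSON, drive the hardware, query the local database, etc.
            return "{\"status\":\"ok\"}";
        }
    }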
I've been doing a lot of research on web platforms (mainly .Net vs Java), and have found that both seem to serve a lot of purposes. What I'd like to know is, does ASP.Net provide enough control, flexibility, and customization in terms of how the server hosts and runs the website, or does Java with, say, Tomcat and Swing or Struts2 offer more flexibility?
Since Tomcat is from Apache, I'd imagine that they implemented the same design and methodology which came from Apache (which I do like). I question whether or not IIS and Windows Server actually provide this sort of thing. Is my assumption correct?
Here's my own personal analysis of a comparison of the two frameworks having worked on .NET for 4 years and Java for around 5 years.
.NET has a cookie-cutter-like development stack; with Java you have a bit more freedom. This can be a positive or a negative depending upon developer preference. In other words, if you want to do something in .NET there is typically one standard way of doing so. This is not the case with Java, where you can choose from a myriad of libraries and/or strategies to complete your task. In my opinion, Java tends to cater to a more skilled community (especially since almost every major university teaches on Java) that has a bit more confidence and ability when designing/building applications. That is not a generalization of all Java EE vs. .NET developers, as I've met equal talents in skill and ability on both platforms; I'm making a generalization of the larger community as a whole. At the end of the day, it is a harder environment to set up and run, but with that comes the added flexibility.
As far as server environments go, you can host Java EE apps on a number of servers (Tomcat, Glassfish/Apache, JBoss, etc.). Most of them are open source, so if you're skilled enough, you can dig down into that code, figure out exactly what you're getting, and modify it if necessary. This is absolutely not the case with the Microsoft environment. You basically have a Windows server running IIS and that's pretty much it. IIS is a decent web hosting tool in my opinion. It's easy to learn the basics; however, out of the box it does not have anywhere near the customization and configuration that you can set up with Apache. For example, you have to install Helicon (or some other tool) as an add-on to IIS if you want to create complex rewrite rules for your site. Rewrite rules are easily implemented in Apache as standard.
In conclusion, a Java web environment is harder to setup and support, but you'll get that added flexibility with your language and your server environment as you gain more knowledge.
Choosing a platform should really be about your available expertise - your own, or whatever resources you (will) have.
If you know the platform you can make it fly. As #iamkrillin stated, I too have yet to hit a roadblock with IIS/.Net platform. This is true whether or not I am hosting the site myself (colo - I own hardware/OS, etc) or via a hosting provider. On Windows hosting though, choose wisely if you host. If you know you need some bare metal access then make sure you get that privilege from whomever you choose to host with. A good practice is to set your application to medium-trust while developing - unless you will go for a dedicated server, this will likely be your hosting provider's application security setting on shared/cloud environments.
As far as routing is concerned (brought up by #GeorgeMcDowd), IIS now has it bolted in, or, if you prefer to do this at the (ASP.Net) application level (instead of IIS), you can do that too (RouteTables). I don't know how complex you envision your routes will be, so I can't tell whether you will run into some limitation either of these options offer.
As far as "standard" or "cookie cutter" is concerned, I'm not actually sure what that means. You have a (massive and growing) .Net base library (from Microsoft). If you need something very specialized (and not offered by the base lib), you can scour Codeplex and other sources (too) for libraries that you can use in your application. If you use Visual Studio, you can use NuGet to do this with a few clicks.
Don't take Microsoft's stewardship of .Net lightly either. It's consistently being improved and updated by Microsoft.
There is one area where Microsoft is critically lagging though - and that is mobile. Against Android and iOS and their respective devices, it's a tough climb for Windows (Phone, Windows 8 tablets, etc.). There are, however, tools that allow you to develop in .Net and push to either device. I am only personally beginning to get into that, so I cannot say how perfect or disastrous it is.
Just about the only thing left is cost. It's still cheaper to host on non-Windows platforms. If you already have expertise in non-Windows platforms, then it's a no-brainer. However, the hosting cost difference shouldn't be your deciding factor if you have a learning curve (that is commonly the hidden cost).
Without a specific example of what you are referring to configuration-wise, I feel pretty comfortable saying that there are a variety of ways to configure a website and how it is hosted/run, etc., from within the IIS control panel. To that end, I have never run into a situation where I thought "I wish I could do X with my website" and was not able to make IIS do it.
I am developing a client-server application where the client gets updates every second (let's say 1000 fields). I also need to draw waveforms on the client side. The server already exists.
For this type of application, which will be better in terms of performance and difficulty to code: GWT with an intermediate server, or Java Web Start connecting directly to the existing server?
I don't see that much difference between them. I'd even say they are separate things. Web Start is how your client will get the app: from some site.
Web Start is a bit easier to maintain, since your client will get it every time it starts.
Deploying a stand-alone app can be a bit harder, depending on your infrastructure.
Performance: just the "download" part of Web Start can be a bit heavier. I think performance is almost the same once it begins to execute.
Difficulty to code: it's just a matter of experience. Your code in both of them will be almost the same, since they'll do the same things.
Maintain/upgrade: easier to maintain and upgrade a Web Start app than installing a client on each machine.
Consider using a JFreeChart DynamicTimeSeriesCollection, seen here, distributed via java-web-start. A thousand fields in a scroll pane is possible, but JList or JTable would be considerably more efficient.
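A rough sketch of that approach, assuming JFreeChart 1.0.x; the window size, update rate and the placeholder sample source would need to be adapted to your server feed:

    import javax.swing.Timer;
    import org.jfree.chart.ChartFactory;
    import org.jfree.chart.ChartPanel;
    import org.jfree.chart.JFreeChart;
    import org.jfree.data.time.DynamicTimeSeriesCollection;
    import org.jfree.data.time.Second;

    // Rolling waveform: one series, a window of 200 samples, advanced once per second.
    public class WaveformDemo {

        public static ChartPanel createWaveformPanel() {
            final DynamicTimeSeriesCollection dataset =
                    new DynamicTimeSeriesCollection(1, 200, new Second());
            dataset.setTimeBase(new Second());
            dataset.addSeries(new float[] { 0f }, 0, "Channel 1");

            JFreeChart chart = ChartFactory.createTimeSeriesChart(
                    "Waveform", "Time", "Value", dataset, true, true, false);

            // Append one new sample per second; replace nextSample() with the value
            // received from your existing server.
            new Timer(1000, e -> {
                dataset.advanceTime();
                dataset.appendData(new float[] { nextSample() });
            }).start();

            return new ChartPanel(chart);
        }

        private static float nextSample() {
            return (float) Math.sin(System.currentTimeMillis() / 1000.0); // placeholder data
        }
    }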
If the server already exists, the issue is more about the best way to connect it to the client, than whether or not to use GWT. For example, if you want server-push rather than client-pull for the updates, that changes things somewhat. However, assuming you need to do some vector graphics, and pull information from a server (I'll assume that you can use either JSON or XML to get server information), you could use several different JavaScript toolkits to do this directly, without Java or GWT needed at all.
For this type of application, Dojo would be one fair option. It has fairly good portable vector graphics, it's pure JavaScript and it is finally at a stage where documentation is OK. GWT would be a useful bet if the server didn't already exist, and where you wanted a decent set of controls usable on the client side. But for rich graphics, I'd look at JavaScript options like Dojo, Raphael, or even jQuery. Dojo does support line charts, and that might be a good basis for waveforms.
Some of this depends on the nature of the server. If it uses a different protocol from HTTP, or doesn't really provide easy JSON or XML access, you're probably better looking at a client-server package that does make the bridge between client and server simple.
GWT might be an option here, but it is designed more for robustness than for very fast development. And if you are fitting with an existing protocol, it could be a fair amount of work.
We have a C++ application on Windows that starts a java process. These two apps need to communicate with each other (via snippets of xml).
What interprocess communication method would you choose, and why?
Methods on the table for us are: a shared file(s), pipes and sockets (although I think this has some security concerns). I'm open to other methods.
I'm not sure why you think socket-based communication would have security concerns (use SSL). It is often a very good approach as it is language agnostic, assuming that you have a well-defined communication protocol. Have a look at Google's protocol buffers, for example - they generate the required Java classes and streams.
In my experience, file systems (especially network file systems) are not well suited to such communication as they are not necessarily tuned for messaging (I've seen caching issues result in files being not picked up by the target process for example).
Another option is a messaging layer (AMQ or Tibco for example) although this will likely involve a greater administrative overhead (plus expertise) to set up.
Personally I would opt for a pure-socket approach because of its flexibility and simplicity. You will be in complete control.
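To illustrate the "well-defined communication protocol" part: the simplest workable framing for XML snippets over a raw socket is a length prefix agreed on by both sides. A sketch of the Java end (the port number and the 4-byte big-endian prefix are assumptions the C++ side would have to match):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Java side of a length-prefixed XML exchange: each message is a 4-byte
    // big-endian length followed by that many bytes of UTF-8 XML. The C++ side
    // must use the same framing. Port 5000 is just a placeholder.
    public class XmlSocketServer {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(5000);
            Socket client = server.accept();
            DataInputStream in = new DataInputStream(client.getInputStream());
            DataOutputStream out = new DataOutputStream(client.getOutputStream());

            while (true) {
                int length = in.readInt();          // 4-byte big-endian length prefix
                byte[] body = new byte[length];
                in.readFully(body);                 // blocks until the whole message arrives
                String xml = new String(body, "UTF-8");
                // parse/dispatch the request XML here, then answer with the same framing
                byte[] reply = "<ack/>".getBytes("UTF-8");
                out.writeInt(reply.length);
                out.write(reply);
                out.flush();
            }
        }
    }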
I've used named pipes for communication between C# and a cross-platform C++ app and had nothing but good results. Barring that, sockets are definitely the way to go.
Sockets are nice. They give you the ability to very easily create a blackbox testing layer around each component, as well as run each component on its own machine.
Security is definitely a concern, but there are a good range of options depending on how important it is. You can use SSL, custom handshaking, password protected logins and firewalls to help secure it.
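On the SSL point: in Java, switching the channel to SSL only changes how the server socket is created; whatever message framing you use on top stays the same. A sketch, assuming the keystore/truststore are supplied through the standard javax.net.ssl system properties:

    import javax.net.ssl.SSLServerSocket;
    import javax.net.ssl.SSLServerSocketFactory;
    import javax.net.ssl.SSLSocket;

    // Same idea as a plain ServerSocket, but the JVM handles the TLS handshake.
    // Assumes -Djavax.net.ssl.keyStore=... and -Djavax.net.ssl.keyStorePassword=...
    // are set; the C++ peer would use OpenSSL or similar on its side.
    public class SecureXmlListener {
        public static void main(String[] args) throws Exception {
            SSLServerSocketFactory factory =
                    (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
            SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(9443);
            server.setNeedClientAuth(true); // optional: reject peers without a trusted client cert

            SSLSocket client = (SSLSocket) server.accept();
            // hand client.getInputStream()/getOutputStream() to the same framing code
            // you would use over a plain socket
        }
    }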
Edit:
Not something I'd recommend, but there's also shared memory using JNI. Just thought I'd mention it because it's not on your list.
Ice (ZeroC's Internet Communications Engine) is pretty cool :)
I am implementing a website using PHP for the front end and a Java service as the back end. The two parts are as follows:
The PHP front end listens for HTTP requests and interacts with the database.
The Java back end runs continuously and responds to calls from the front end.
More specifically, the back end is a daemon that connects to and maintains links to several IM services (AOL, MSN, Yahoo, Jabber...).
Both layers will be deployed on the same system (a CentOS box, I suppose), and introducing a middle layer (for instance, using XML-RPC) would reduce performance (resources are also rather limited).
Question: Is there a way to link the two layers directly? (no more web services in between)
Since this is communication between two separate running processes, a "direct" call (as in JNI) is not possible. The easiest ways to do such interprocess communication are probably named pipes and network sockets. In both cases, you'll have to define a communication protocol and implement it on both sides. Using a standard protocol such as XML-RPC makes this easier, but it is not strictly necessary.
There are generally four patterns for application integration:
via the filesystem, i.e. one producer writes data to a directory monitored by the consumer
via a database, i.e. two applications share a schema or table and use it to swap data
via RMI/RPC/web service/any blocking, synchronous call from one app to another. For PHP to Java you can pick from the various integration libraries listed in the other answers, or use a web services standard like SOAP.
via messaging/any non-blocking, async operation where one app sends a message to another app.
Each of these patterns has pros and cons, but a good rule of thumb is to pick the one with the loosest coupling that you can get away with. For example, if you selected #4 your Java app could crash without also taking down your PHP app.
I'd suggest before looking at specific libraries or technologies listed in the answers here that you pick the right pattern for you, then investigate your specific options.
I have tried the PHP-Java bridge (php-java-bridge.sourceforge.net/pjb/) and it works quite well. Basically, you need to run a jar file (JavaBridge.jar) which listens on a port (there are several options available, like a local socket, port 8080, and so on). Your Java class files must be available to the JavaBridge in the classpath. You need to include the file Java.inc in your PHP, and then you can access the Java classes.
Sure, there are lots of ways, but you mentioned the limited resources...
IMHO, define your own lightweight RPC-like protocol and use sockets over TCP/IP to communicate. Actually, in this case there's no need to use the full machinery of RPC, etc. You only need to define an API for this particular case and implement it on both sides. That way you can serialize your packets quite compactly. You can even assign a kind of GUID to your remote methods and use them to save traffic and speed up your intercommunication.
The advantage of using sockets is that your solution will be pretty scalable.
You could try the PHP/Java integration.
Also, if the communication is one-way (something like "sendmail for IM"), you could write out the PHP requests to a file and monitor that in your Java app.
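If you go the file route, Java 7's WatchService avoids the polling problem mentioned elsewhere: the daemon simply blocks until PHP drops a new file. A sketch, assuming a "one request per file" spool directory (the path and the convention are placeholders, and PHP should write to a temp name and rename so files never appear half-written):

    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;

    // Watches a spool directory for request files written by PHP.
    public class RequestSpoolWatcher {
        public static void main(String[] args) throws Exception {
            Path spool = Paths.get("/var/spool/im-requests");
            WatchService watcher = FileSystems.getDefault().newWatchService();
            spool.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            while (true) {
                WatchKey key = watcher.take();                 // blocks; no polling
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path file = spool.resolve((Path) event.context());
                    String request = new String(Files.readAllBytes(file), "UTF-8");
                    // parse the request and hand it to the IM connections here
                    Files.delete(file);                        // consume the request
                }
                key.reset();
            }
        }
    }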
I was also faced with this problem recently. The Resin solution above is actually a complete re-write of PHP in Java along the lines of JRuby, Jython and Rhino. It is called Quercus. But I'm guessing for you as it was for me, tossing out your Apache/PHP setup isn't really an option.
And there are more problems with Quercus besides: the free version is GPL, which is tricky if you're developing commercial software (though not as tricky as Resin would like you to believe (but IANAL)), and on top of that the free version doesn't support compiling to bytecode, so it's basically an interpreter written in Java.
What I decided on in the end was to just exchange simple messages over HTTP. I used PHP's json_encode()/json_decode() and Java's json-lib to encode the messages in JSON (simple, text-based, good match for data model).
Another interesting and lightweight option would be to have Java generate PHP code and then use PHP's include() directive to fetch that over HTTP and execute it. I haven't tried this, though.
If it's the actual HTTP calls you're concerned about (for performance), neither of these solutions will help there. All I can say is that I haven't had problems with PHP and Java on the same LAN. My feeling is that it won't be a problem for the vast majority of applications, as long as you keep your RPC calls fairly coarse-grained (which you really should do anyway).
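For what it's worth, the Java side of that kind of JSON-over-HTTP exchange doesn't even need a servlet container; the JDK's built-in com.sun.net.httpserver is enough. A sketch (the /send path, port and response body are placeholders, and a real version would parse the request with json-lib or whichever JSON library you prefer):

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Minimal HTTP endpoint for the Java daemon: PHP POSTs a JSON body (via curl or
    // file_get_contents) and gets a JSON reply.
    public class ImDaemonHttp {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/send", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    byte[] request = readAll(exchange.getRequestBody());
                    // decode 'request' with json-lib (or any JSON parser) and act on it here
                    byte[] response = "{\"status\":\"queued\"}".getBytes("UTF-8");
                    exchange.getResponseHeaders().set("Content-Type", "application/json");
                    exchange.sendResponseHeaders(200, response.length);
                    OutputStream out = exchange.getResponseBody();
                    out.write(response);
                    out.close();
                }
            });
            server.start();
        }

        private static byte[] readAll(InputStream in) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            return buffer.toByteArray();
        }
    }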
Sorry, this is a bit of a quick answer, but: I heard the Resin app server has support for integrating Java and PHP.
They claim they can smash php and java together: http://www.caucho.com/resin-3.0/quercus/
I've used Resin for serving J2EE applications, but not for its PHP support.
I'd be interested to hear of such adventures.
Why not use a web service?
Make a Java layer and expose it via a WS framework (Axis, Spring WS, etc.), and have the PHP side access the Java layer using a WS client.
I think it's simple and useful.
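If you do go down the web service road, JAX-WS (bundled with Java SE 6+) can publish a SOAP endpoint in a few lines; Axis or Spring WS would add more configuration but follow the same idea. The operation name and URL below are placeholders:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Publishes a SOAP endpoint whose WSDL is served at the URL with ?wsdl appended.
    // The operation and address are placeholders for your own API.
    @WebService
    public class ImGateway {

        public String sendMessage(String network, String recipient, String body) {
            // hand the message over to the IM daemon's logic here
            return "queued";
        }

        public static void main(String[] args) {
            Endpoint.publish("http://localhost:9090/im", new ImGateway());
        }
    }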
I've come across this page, which introduces a means to link the two layers. However, it still requires a middle layer (TCP/IP). Moreover, other services may be able to exploit the Java service as well, because it accepts all incoming connections.
http://www.devx.com/Java/Article/20509
[Researching...]