I am working on a Java web application and I need to implement an autocomplete feature across the whole site.
Irrespective of the form or page, the autocomplete should suggest all the data available in my database (even from outside sources, if possible).
The basic idea is to make the user type less. I know of some plugins which implement autocomplete, but with them we have to preprocess the data, provide it in JSON format, and apply it to an HTML element.
I am looking for some great tool which reads (keeps in memory) the whole database, builds some buffer list, and gives suggestions whenever the user types something in a form/input element.
I am using the following technologies:
Java
Hibernate
Spring
jQuery
Bootstrap
MySQL
Maven
... some great tool which reads (keeps in memory) whole database and makes some buffer list and gives suggestions...
Believe me, keeping the whole DB in memory is a really bad idea. But to get better performance you can always cache the required data and keep that in memory. This will improve performance and also keep the application simple to configure and maintain.
Have a look at a framework such as Apache Lucene: http://lucene.apache.org/core/
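To illustrate the caching idea with the stack listed above, here is a rough sketch of a Spring MVC endpoint that serves suggestions to jQuery UI autocomplete from an in-memory cache instead of hitting MySQL on every keystroke. The class name, URL, refresh interval and the DAO call are assumptions, not a ready-made solution:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.ResponseBody;

    @Controller
    public class SuggestController {

        // cached suggestion terms; replaced wholesale on each refresh
        private volatile List<String> cachedTerms = Collections.emptyList();

        // Requires @EnableScheduling (or <task:annotation-driven/>) in your Spring config.
        @Scheduled(fixedDelay = 300000) // refresh every 5 minutes
        public void refreshCache() {
            // Load only the columns you actually want to suggest from, e.g. via a DAO:
            // cachedTerms = suggestionDao.loadSuggestableTerms();
        }

        @RequestMapping("/suggest")
        @ResponseBody // rendered as JSON when a JSON converter (e.g. Jackson) is on the classpath
        public List<String> suggest(@RequestParam("term") String term) {
            String prefix = term.toLowerCase();
            List<String> matches = new ArrayList<String>();
            for (String candidate : cachedTerms) {
                if (candidate.toLowerCase().startsWith(prefix)) {
                    matches.add(candidate);
                    if (matches.size() == 10) { // cap the response size
                        break;
                    }
                }
            }
            return matches;
        }
    }

On the client side you would then point jQuery UI's autocomplete at the /suggest URL as its source. Something like Lucene becomes useful once plain prefix matching over a cached list is no longer enough.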
In our application, one of our microservices queries the DB, gets the result (100k rows) and generates an Excel file using Apache POI. A couple of other services do the same thing (fetch DB rows and generate Excel). Since the Excel generation step is common, is it a good design to extract it into a separate microservice and use it from all the other services?
The challenge is passing the data (100k rows) between microservices over HTTP.
How can we achieve this?
I personally never put the export feature in a separate service.
When providing such table-based data, I offer a table view of the data with paging, and also give an export function that streams the data as an octet stream without the paging limit. Export can be seen as just another type of view.
I've used the Apache POI library for report rendering before, but only for small pages and complex shapes. POI also provides streaming versions of the workbook classes, such as SXSSFWorkbook.
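A rough sketch of what a streaming export with SXSSFWorkbook can look like; the file name, sheet name and the row loop are placeholders, and in a real service the rows would come from the DB query:

    import java.io.FileOutputStream;
    import java.io.OutputStream;

    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.xssf.streaming.SXSSFWorkbook;

    public class StreamingExcelExport {

        public static void main(String[] args) throws Exception {
            // keep only 100 rows in memory; older rows are flushed to temporary files
            SXSSFWorkbook workbook = new SXSSFWorkbook(100);
            Sheet sheet = workbook.createSheet("data");

            // placeholder loop; in practice iterate over the DB result set here
            for (int rowNum = 0; rowNum < 100000; rowNum++) {
                Row row = sheet.createRow(rowNum);
                Cell cell = row.createCell(0);
                cell.setCellValue("row-" + rowNum);
            }

            try (OutputStream out = new FileOutputStream("export.xlsx")) {
                workbook.write(out);
            }
            workbook.dispose(); // delete the temporary flush files
        }
    }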
To be a microservice, it should have a proper reason to be an external system. If the system only provides an export of something, then no: it's too simple, and a separate service would be overkill. If you're considering adding versioning, permissions, distribution, folder zipping, or storage management, well, that could be an option.
By the way, when exporting such big data to a file, keep in mind that Excel has a maximum row limit of about 1M rows, so you may hit the limit if your data grows.
Why not just use the CSV format? Easy to use, easy to jump around in, easy to process.
You need to ask yourself what defines a service. Does reading chunks of data from a file really come under a service?
When I think about separating my services, I think along multiple lines: what does this module need to do, who will be using it, what dependencies do I have, how will I need to scale it in the future and, above all, which business team will be taking care of it. I tend to divide the modules based on the answers I get to these questions.
Here, in your case, I see this less as a service and more as a utility function that can be put in a JAR and shared across services. A new service would be more along the lines of, say, a reporting service reading legacy Excel files to create reports, or a migration service which uses a utility to read Excel.
Also, there is no final answer; you need to keep questioning your design until you are happy with it.
I am new to Java and I am working on a project called "in-memory database server". In this project I am supposed to build the database structures for the tables and the relations between them (I am not going to use any DB language; I should build the structures myself), and then save these structures to XML files on the server (it's a fixed schema of three tables). Next, I am supposed to handle the CRUD operations on the saved data sent from clients to the DB server over sockets (using TCP). I must also use a caching method so the data is accessed quickly from memory instead of the HDD.
When I think about the project as a whole it seems very complex and I don't know where to start. Should I start with the client or the server?
I tried to divide the problem into smaller problems, so I have these questions that I need answers to in order to find a starting point:
How can I build the tables and save them in XML files?
After building the table structures, how can I make the relations between the tables (primary keys, foreign keys, and so on)?
How will the client and server communicate to handle the CRUD operations?
What is the best caching method and how can I implement it?
I want to start by building the "Users" table; the client has a GUI login form that will send the username and password to the server, and the server will check them and log the user in.
I know it's a lot of questions, but I need help understanding how the work should be done. I need useful topics and videos; any related link may help me.
I would start by trying to define one of the tables in XML format. Probably starting with an XML Schema definition about the format the XML will take in the "table definition" file(s). Once I had that ready, I would start unit testing the heck out of that/those XML. Only after I had everything working to my satisfaction in the unit testing stage would I introduce any complexity of the web client/server variety. Baby steps win.
EDIT: (Useful links: Google this: "XML Schema definition example")
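As an illustration of "define one of the tables in XML format", here is a minimal sketch that maps a made-up Users table to an XML file using JAXB; the class names, fields and file name are all assumptions, and in the real project the structure would follow your fixed three-table schema:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    public class XmlTableDemo {

        @XmlRootElement(name = "table")
        public static class Table {
            @XmlAttribute
            public String name;
            @XmlElement(name = "row")
            public List<UserRow> rows = new ArrayList<UserRow>();
        }

        public static class UserRow {
            @XmlAttribute
            public int id;          // acts as the primary key
            @XmlElement
            public String username;
            @XmlElement
            public String password;
        }

        public static void main(String[] args) throws Exception {
            Table users = new Table();
            users.name = "Users";

            UserRow row = new UserRow();
            row.id = 1;
            row.username = "alice";
            row.password = "secret";
            users.rows.add(row);

            // write the table structure and its rows to users.xml
            Marshaller marshaller = JAXBContext.newInstance(Table.class).createMarshaller();
            marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
            marshaller.marshal(users, new File("users.xml"));
        }
    }

An XML Schema (XSD) describing this layout is what you would then validate the file against, and an Unmarshaller reads it back the same way.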
You're new to Java, and you haven't written a database management system before. You've got quite a learning curve ahead of you. Breaking the problem down into manageable chunks is very wise. I'd start reading about principles of DBMS (independent of any implementation language), and then maybe download some open source Java DBMS and study how others have solved the problem.
There is a getUpdated() method available in the partner API to get the list of updated records for each object in Salesforce, but is there a way to do the same thing for the schema (metadata) changes?
http://boards.developerforce.com/t5/Apex-Code-Development/How-to-I-get-the-last-modified-date-of-a-custom-object-or-field/td-p/237501/highlight/false
The above link exactly explains my problem :( no solution though!
Interesting question!
You could write some script around the Ant-based "Migration Tool" that would periodically download the whole metadata and commit the changes to SVN. Read about the "retrieveCode" operation (not the best name in the world, but since it accepts package.xml as its config source, I believe you can rely on it to retrieve objects too).
Automate access to https://instance.salesforce.com/setup/build/webservices.apexp. The Enterprise WSDL in particular ( https://instance.salesforce.com/soap/wsdl.jsp?type=* ) should be good for data model changes. And again, you'd need something to detect differences between the files.
A really weird idea: can you try to automate screen-scraping of the Setup Audit Trail page? Or manually download the CSV and then feed it to a small processing program (even an Excel sheet should do). This one will actually list the changes for you, but the format isn't well suited to parsing out what was changed...
Safe harbor blah blah blah
Let's hope the upcoming Tooling API will deal with this better. There are rumours about releasing an improved API, rewriting the Eclipse "Force.com IDE" plugin and releasing it to the community...
I was planning to use XML to store the data for a Java DVD database application I'm writing. I know that the word "database" is right there in the title, but XML just seemed so much more portable, is human-readable and (I assumed, before looking into it) simpler to implement.
Parsing XML seems to be the easiest thing in the world... even creating a new XML file isn't much trouble, but changing, inserting or deleting records I can only see doing by writing out a fresh XML file.
Am I missing something? Or is what I'm missing that I should switch over to a database format (unless there's some wonderful database format I've not heard of that's totally portable and doesn't make users install something separate :) )?
The most popular way to use a file as a database is probably SQLite ( http://www.sqlite.org/ ), and that's what I would use if I were solving your problem (it's pretty much a standard SQL database, but uses just one file as storage). Another, pure-Java option is Apache Derby ( http://db.apache.org/derby/ ).
However, pure XML databases do exist (and were quite fashionable about 10 years ago - the "NoSQL" of their time). The associated standards are XPath ( http://en.wikipedia.org/wiki/XPath ) and XQuery ( http://en.wikipedia.org/wiki/Xquery ). I haven't used it, but it seems like BaseX ( http://basex.org/open-source/ ) is an open-source implementation that you could use (and it does claim to provide ACID guarantees - http://basex.org/products/ ).
If you're more familiar with XML than SQL, I don't see any great harm in using an XML database for a small project. Just structure your code so that most of the program doesn't care what the storage is (i.e. by providing a neutral interface - see the sketch below). Then, if XML doesn't work out, you can switch to SQL by re-implementing just that interface and leaving the rest of your program alone (and if it does work out, post back here saying so - it would be interesting to know).
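A rough sketch of that neutral interface idea; all the names here are invented and the XML implementation is only a stub:

    import java.util.Collections;
    import java.util.List;

    // The rest of the program depends only on this interface, not on the storage.
    interface DvdStore {
        void save(Dvd dvd);
        Dvd findByTitle(String title);
        List<Dvd> findAll();
        void delete(String title);
    }

    class Dvd {
        String title;
        int year;

        Dvd(String title, int year) {
            this.title = title;
            this.year = year;
        }
    }

    // First implementation: persists the collection to an XML file.
    class XmlDvdStore implements DvdStore {
        public void save(Dvd dvd) { /* rewrite the XML file */ }
        public Dvd findByTitle(String title) { /* parse and search */ return null; }
        public List<Dvd> findAll() { /* parse the whole file */ return Collections.emptyList(); }
        public void delete(String title) { /* rewrite without this record */ }
    }

    // Later, if XML becomes painful, only one new class is needed:
    // class SqliteDvdStore implements DvdStore { ... }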
If you're going to have a web-based front end, it seems that a regular database is the way to go as the back end. I don't believe your users would have a need to download anything new, since that's all taken care of server-side. A real database also has the ACID advantage over a pseudobase; it should be atomic, consistent, isolated, and durable, and I can't imagine XML would be a good substitute in those respects.
I am developing a Java-based desktop application. Some data generated from the application's object model needs to be persisted (preferably to a file). There is also a requirement to protect the persisted file so that others can't derive the object model details from the data. What's the best strategy for doing this? I was under the impression that these requirements are very common for desktop apps; however, I haven't been able to find much useful info on them. Any suggestion appreciated.
Your question has two parts. First: how to persist the data? Second: how to protect it?
There are many ways to persist data, from simple XML or Java serialization to your own data format. There is no way to prevent reverse engineering of data stored as plain text; you can only make it harder, not impossible. To make it practically impossible you need strong encryption, and here comes the problem: how do you encrypt the data without revealing the secret key? If you distribute the key with your application, it is just a matter of time before someone finds it and the protection is broken. So entering a secret key during installation is not an option. If the user has to authenticate to use the application that can help, but it runs into the same problem. The next option is to use a custom, protected bijective algorithm to obfuscate the data. The last option is to do nothing special: just keep the data format private, don't publish it, and obfuscate your application to hinder reverse engineering.
The best value for the effort is simple obfuscation of the data (e.g. XOR with a prime number) combined with a custom data format and an obfuscated application.
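A toy illustration of that XOR-style obfuscation; the key constant is arbitrary, and this is not real security - it only defeats casual inspection of the file:

    public class XorObfuscation {

        private static final byte KEY = 0x5F; // arbitrary constant chosen for this example

        public static byte[] obfuscate(byte[] data) {
            byte[] out = new byte[data.length];
            for (int i = 0; i < data.length; i++) {
                out[i] = (byte) (data[i] ^ KEY);
            }
            return out;
        }

        // XOR is its own inverse, so the same operation restores the original bytes
        public static byte[] deobfuscate(byte[] data) {
            return obfuscate(data);
        }

        public static void main(String[] args) {
            byte[] hidden = obfuscate("model data".getBytes());
            System.out.println(new String(deobfuscate(hidden))); // prints "model data"
        }
    }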
If you don't need to modify this file, you can serialize the object graph to a file. The contents are binary, and they can only be read back using the classes with which they were written.
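For example, a minimal round trip with plain Java serialization might look like this; the AppData class is just a placeholder for your own model:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // placeholder model class; every class in the graph must implement Serializable
    class AppData implements Serializable {
        private static final long serialVersionUID = 1L;
        String name = "example";
    }

    public class SerializeDemo {
        public static void main(String[] args) throws Exception {
            // write the object graph to a binary file
            try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("app.dat"))) {
                out.writeObject(new AppData());
            }

            // read it back; requires the same classes on the classpath
            try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("app.dat"))) {
                AppData restored = (AppData) in.readObject();
                System.out.println(restored.name);
            }
        }
    }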
You can also use Java DB (shipped with Java since 1.5, I think) and an ORM tool for that, such as Hibernate.
EDIT
It is bundled since 1.6 http://developers.sun.com/javadb/
XStream works well if you want to do simple XML reading and writing to a file. XStream allows you to take any Java object and write it to, and read it from, your file.
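For instance, a minimal XStream round trip could look like this; the Settings class is made up for the example:

    import com.thoughtworks.xstream.XStream;

    public class XStreamDemo {

        // placeholder class just for the demonstration
        static class Settings {
            String theme = "dark";
            int fontSize = 12;
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            // recent XStream versions require whitelisting types for deserialization
            xstream.allowTypes(new Class[] { Settings.class });

            String xml = xstream.toXML(new Settings());          // object -> XML
            Settings restored = (Settings) xstream.fromXML(xml); // XML -> object

            System.out.println(xml);
            System.out.println(restored.theme);
        }
    }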
I think "serialization" is the word:
http://java.sun.com/developer/technicalArticles/Programming/serialization/
If you really need the security implied in your statement ("...protect the persisted file so that others can't derive the object model details from the data."), I'd serialize the data in memory (to Java serialized form, XML, or whatever) and then encrypt that byte-stream to a file.
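A rough sketch of that approach, assuming plain Java serialization plus AES from the JCE; how and where to store the key is the hard part and is glossed over here, and a real application should also prefer an authenticated cipher mode such as GCM with an IV rather than the provider default:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class EncryptedPersistence {

        public static void main(String[] args) throws Exception {
            // generated here only for illustration; key management is the real problem
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            String model = "whatever object graph you need to persist"; // placeholder payload

            // 1. serialize in memory
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
                oos.writeObject(model);
            }

            // 2. encrypt the byte stream and write it to disk
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            Files.write(Paths.get("model.bin"), cipher.doFinal(bytes.toByteArray()));

            // 3. to load: decrypt first, then deserialize
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] plain = cipher.doFinal(Files.readAllBytes(Paths.get("model.bin")));
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(plain))) {
                System.out.println(ois.readObject());
            }
        }
    }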
You can try using an embedded database like Berkeley DB Java Edition ( http://www.oracle.com/database/berkeley-db/je/index.html ). Its Direct Persistence Layer API will most likely suit your needs. The database contents are synced to files on disk, and just by looking at the files directly it's not easy to figure out the object model from the data. I've had good experiences with it; it's lightning fast and works well with desktop applications.
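A minimal sketch of the Direct Persistence Layer style; the Customer entity and the dbdir directory are assumptions made for the example:

    import java.io.File;

    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;

    public class BerkeleyDbDemo {

        // example entity; the annotated field is the key, the rest is persisted data
        @Entity
        static class Customer {
            @PrimaryKey
            long id;
            String name;
        }

        public static void main(String[] args) throws Exception {
            new File("dbdir").mkdirs(); // environment directory must exist

            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("dbdir"), envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            EntityStore store = new EntityStore(env, "customers", storeConfig);

            PrimaryIndex<Long, Customer> byId =
                    store.getPrimaryIndex(Long.class, Customer.class);

            Customer c = new Customer();
            c.id = 1L;
            c.name = "Alice";
            byId.put(c); // write the entity to the store

            System.out.println(byId.get(1L).name);

            store.close();
            env.close();
        }
    }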