I'm having a problem. I would like to create a Document object with a user property of type com.google.appengine.api.users.User (GAE's docs say we should use this object instead of an email address or anything else, because it will probably be enhanced to be unique). But the object can't be compiled by GWT, because I don't have the source for that class.
How can I resolve the problem?
I was searching for documentation about DTOs, but I realized that maybe that's not the best pattern to use.
What do you recommend?
Many thanks for your help!
Regards,
Bálint Kriván
To avoid DTOs for objects that contain com.google.appengine.api.users.User, you can probably use the work from
http://www.resmarksystems.com/code/
He has built wrappers for the core GAE data types (Key, Text, ShortBlob, Blob, Link, User). I've tested it with datastore.Text and it worked well.
There is a lot of debate about whether you should be able to reuse objects from the server on the client. However, the reuse rarely works out well in real applications, so I generally recommend creating plain Java objects that you copy your data into before sending it to the client. This lets you tailor the data to what the client needs and avoids pitfalls where you accidentally send sensitive information over the wire.
So in this case, I would recommend creating a separate object to send over the wire. BTW, if you have the App Engine SDK for Java (http://code.google.com/appengine/downloads.html), it includes a demo application I wrote (sticky) that demonstrates this technique.
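As a rough sketch of what such a wire object can look like (class and field names here are hypothetical, not taken from the sticky demo):

    // Server-side entity holding the GWT-untranslatable GAE User object.
    public class Document {
        private com.google.appengine.api.users.User user;
        private String title;
        public com.google.appengine.api.users.User getUser() { return user; }
        public String getTitle() { return title; }
    }

    // Plain serializable DTO in a GWT-visible package; only GWT-safe types.
    public class DocumentDTO implements java.io.Serializable {
        private String userEmail;  // from User.getEmail(); User.getUserId() also exists if you want the opaque id
        private String title;

        public DocumentDTO() { }  // zero-arg constructor required by GWT-RPC

        public static DocumentDTO from(Document doc) {
            DocumentDTO dto = new DocumentDTO();
            dto.userEmail = doc.getUser().getEmail();
            dto.title = doc.getTitle();
            return dto;
        }
    }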
This question also addresses the issue:
It links to a semi-workable solution for automatically making your persistent objects GWT-RPC compatible.
I had the same question. Your answer is interesting, but I'm always reluctant to copy the same data twice... Plus, when your DAO fetches the data, you have to iterate over all the results to copy them into the plain Java objects, don't you? That seems like a heavy operation. What's your opinion on these questions?
I have a collection of 350 locations in the United States with each containing about 25 subcategories. The data structure looks something like this:
Location (ex: Albany, NY)
--> Things to do
--> Population
... 23 More
Which of the following would be best for loading this data into the app: JSON, XML, or SQLite? Just to clarify, I don't need to edit this data in any way. I simply need to read it so that the information can be loaded into TextViews.
Edit:
I'm attempting to implement Room and XML and so far the XML seems to be the simplest to implement. Is it bad practice to use the XML solution? It doesn't seem to be using too many resources and it isn't running slow at all when tested on a few devices. Would it still be a better practice to implement the Room solution?
Undoubtedly, among all of these, an RDB is the most efficient one, both in terms of storage and query response. I personally do not see any point in using XML or JSON, as these have traditionally been used for data exchange and are inefficient for storage and queries.
I would suggest that you evaluate the following:
a) How are you going to store the data: a single file vs. multiple files (for example, by subject)?
b) Are you going to be doing updates on the strings or just appending? (SQL is better suited for updates, but if you are just reading data after a batch process, flat files might be better suited.)
c) How complex are the queries you want to implement? XML and SQL are better suited than JSON for queries that address metadata (date stored, original location address, etc.).
Once you determine what you want to optimize for (adding metadata, fast updates, fast querying, ease of storage, fast retrieval of subject files, etc.), you can decide on the tradeoffs against the less important goals. In this specific instance the devil is very much in the details.
In most cases it would be better to use a database, because it improves readability and maintainability, especially if you want to show this information inside some kind of list view. If you use JSON or XML you'll have to parse it and write a lot of code to switch between things or load them with good performance. Consider using Room, LiveData, and a RecyclerView: this will reduce the code you need and improve (a lot) the performance and readability of your app code. By the way, you should provide more information about how you want to use this information and where you want to show it. XML (or the Android resource system) should be used if you plan to use the resource system itself, with its qualifiers, to reduce your work. Most of the time JSON is used to communicate with the outside or with another app in an easy way, or for REST requests/responses.
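To give a feel for how little code the Room route needs, here is a minimal sketch (entity and field names are hypothetical, split across two files):

    import java.util.List;
    import androidx.lifecycle.LiveData;
    import androidx.room.Dao;
    import androidx.room.Entity;
    import androidx.room.PrimaryKey;
    import androidx.room.Query;

    @Entity
    public class Location {
        @PrimaryKey
        public int id;
        public String name;        // e.g. "Albany, NY"
        public String thingsToDo;  // one of the ~25 subcategories
        public int population;
    }

    @Dao
    public interface LocationDao {
        // Returning LiveData keeps an observing RecyclerView adapter
        // up to date without extra plumbing.
        @Query("SELECT * FROM Location ORDER BY name")
        LiveData<List<Location>> getAll();
    }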
The one option that wouldn't make sense at all for your use case is SQLite. Unless you plan on running specific queries on the data for preprocessing before loading it into your view, it isn't worth the overhead (though I don't imagine the overhead is large with 350 locations).
XML and JSON serve the same use case without much difference; read up on their specifics on this website: https://www.json.org/xml.html
I would personally go for JSON due to the simplicity of the format.
Edit:
@simo-r's argument is also a valid one in regards to the readability of your code. While there are libraries that can make reading JSON/XML easier, Android has really good SQLite support by default, so it might make sense to use it. Ultimately it comes down to your personal preference and where you see the project growing.
I have a collection of 350 locations in the United States with each containing about 25 subcategories.
The main issue is scalability
Will you, in the next few years, keep just a few hundred locations, or do you imagine that, if your software becomes successful, your data could grow to many thousands of locations?
If yes: choose SQLite, because it can store many records efficiently. Don't forget to design a good database schema with appropriate indexes. See this and read about database normalization. Also, an SQLite database can later be migrated (with effort) to PostgreSQL.
If no (your data is just a few megabytes): keep JSON or XML. The data will sit in the page cache anyway.
Consider also YAML, and sometimes a mixed approach.
Don't forget to document how your data is organized and accessed.
See also the data persistence chapter of this draft report
If you're simply going to bind data into text views, you can just store the text in strings.xml. As simple as that.
Go with JSON.
Advantages:
Low overhead (vs. SQLite).
Lightweight parsers like Jackson are available, with which you can easily convert your data into custom objects or data structures if you need to (see the sketch after this list).
Maintainable, as most developers understand the format.
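As a sketch of the Jackson point above (Location is a hypothetical POJO whose fields mirror the JSON keys):

    import java.io.InputStream;
    import java.util.List;
    import com.fasterxml.jackson.core.type.TypeReference;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class LocationLoader {
        // 'Location' is a hypothetical POJO matching the JSON structure.
        public static List<Location> load(InputStream json) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // One call turns the whole 350-entry file into a list of objects.
            return mapper.readValue(json, new TypeReference<List<Location>>() {});
        }
    }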
I would suggest using JSON. Reasons below.
JSON vs XML
JSON is more lightweight than XML and takes fewer resources (network and storage), so the app's performance improves.
JSON parsing is easy and, as mentioned above, trivial.
JSON is friendly to JavaScript, in case that's required.
JSON vs SQLite
A data set of 350 records with about 25 attributes each can easily be managed with JSON. An RDBMS is not required.
SQLite becomes an overhead. It's an extra layer, and a layer comes with a cost, especially if the application is containerized: the architecture becomes more complicated and you need to deal with volume mapping, etc. With JSON you can keep the data as part of the application code.
Importantly, since the data is static, keep the application stateless by keeping the data alongside the codebase. This makes a lot more sense from an architectural perspective.
Problem
You have a fixed set of information with a simple structure that you wish to deliver to clients.
Questions to Reflect On
Do I expect this information to be significantly changed or modified, ever?
Do I expect to increase the amount of information available?
What kind of help do I have? Do they have a background in software engineering, or is it someone from a different profession who has to wear a lot of hats?
What is the scale of the project? Are you expecting a large amount of users or just people interested in a very niche application?
JSON or XML
JSON and XML provide similar services: they are both data-interchange formats. If the information is not expected to grow, both might be a great option. If it's public information, just serve these files statically over nginx. You can point a worker with limited software engineering experience at these files to update them; they're just files in a folder presented in a human-readable format... it's extremely simple to do. These updates should be minor and infrequent.
JavaScript Object Notation (JSON) Pros
solid browser and backend support
small size and fast parsing by the JavaScript engine
very human readable, easy for the untrained eye to make changes
Extensible Markup Language (XML) Pros
standard meta-data option
supports namespaces
solid backend support and is often baked into frameworks
This article explains the differences between XML and JSON really well (as of 2020), if these highlights were not sufficient for your investigation.
Database System
There are a plethora of database systems out there. Their job is to efficiently retrieve specific information from a large volume of data stored. The key reason to use databases is scalability. Scalability means a number of things; I view it as adapting to drastic change. If you expect this information to frequently change or grow, go with a database.
Object Relational Mapping (ORM)
Databases can be cumbersome to use, so I would recommend using an ORM on top of them. An ORM encapsulates the database and makes it more user-friendly (and language-specific). Room makes sense in your use case, especially for Java Android development. Encapsulation also lets you migrate to another database later without changing your code. Here's a good article that discusses Room and SQLite!
Miscellaneous
"Is it bad practice to use an XML solution?"
No. The important thing is that it works, is understandable, and runs efficiently. Just keep in mind that XML and JSON are data-interchange formats, and they do THAT job well. This Stack Overflow discussion may help you gain a better picture of what that means; be sure to read more than just the accepted answer.
"It doesn't seem to be using too many resources and it isn't running slow at all when tested on a few devices."
Although testing for functionality is great, keep in mind that your test is not a load test and does not verify what you're trying to confirm. I would explore load testing; Wikipedia is a good place to start!
What is the right way to persist data defined using protobuf3? I am using Golang and Java, both with ORM support: Hibernate in Java and GORM in Go. In both places I need to convert the generated code to a corresponding entity model. I find it painful to maintain the same object structure twice just so the ORM can understand it. Is there any database that I can use with protobuf objects as-is? Or can I define the relations between objects in the protobuf itself?
Any help is really appreciated.
There is a solution to this problem, though not a straightforward one.
Protobuf 3 standardises JSON mapping for the messages. Once you serialise your message to JSON, you have multiple options for storing it in a database.
The following (and many more) databases can store JSON data:
MariaDB
PostgreSQL
MongoDB
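In Java, for example, the protobuf-java-util artifact's JsonFormat class handles the proto-to-JSON round trip (the Person message type below is a hypothetical stand-in for your generated class):

    import com.google.protobuf.InvalidProtocolBufferException;
    import com.google.protobuf.util.JsonFormat;

    // 'Person' stands in for any generated proto3 message type.
    static String toJson(Person person) throws InvalidProtocolBufferException {
        // Produces the standard proto3 JSON mapping; store it in a JSON/JSONB column.
        return JsonFormat.printer().print(person);
    }

    static Person fromJson(String json) throws InvalidProtocolBufferException {
        Person.Builder builder = Person.newBuilder();
        JsonFormat.parser().merge(json, builder);
        return builder.build();
    }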
Your ORM is dealing with objects, by definition. It should not know or care about serialization on the network. I'd suggest deserializing the protobuf message into objects that your ORM is used to and letting it persist them. There's no good reason to couple your persistence tier to the network protocol.
It might make sense to store the protobuf serialization directly if you get rid of JPA and go with a document-based solution.
You have to decide how much value JPA is providing for you.
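As a rough sketch of the suggestion above (all class and field names here are hypothetical), the copy happens once at the service boundary and the ORM only ever sees plain entities:

    // Generated proto3 message arrives over the wire...
    PersonProto proto = PersonProto.parseFrom(requestBytes);

    // ...and is copied into a plain JPA entity the ORM already understands.
    PersonEntity entity = new PersonEntity();
    entity.setName(proto.getName());
    entity.setEmail(proto.getEmail());
    entityManager.persist(entity);  // persistence tier stays decoupled from the wire format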
Although this question is quite old, things have happened since then: the FoundationDB Record Layer, released by Apple in 2018, stores Protocol Buffers natively.
In Go, I don't know about GORM, but it seems that with Ent (a competing ORM) protobufs can be deserialized into exactly the same objects that are used for DB tables/relations. See Ent's official tutorial for that.
The caveat is that you specify your protobuf via Ent's Golang structures, not via the standard proto3 language.
I want to transfer Hibernate objects with GWT-RPC to the frontend. Of course I cannot transfer the annotated class, because the annotations cannot be compiled to JavaScript. So I did the Hibernate mapping purely in the .hbm.xml file. This worked fine for very simple objects. But as soon as I add more complex things, like a one-to-many relationship realized with e.g. a Set, the compiler complains about serialization issues with the Set (even though the objects in the Set are serializable as well).
I guess it doesn't work because Hibernate creates some kind of special Set that cannot be interpreted by GWT?
Is there any way to get around this, or do I need another approach to get my objects to the frontend?
Edit: It seems that my approach is not possible with RPC, because Hibernate changes the objects (see the answer from thanos). There is a newer approach from Google for transferring objects to the frontend: RequestFactory. It looks really good and I will try it now.
Edit2: RequestFactory works perfectly and is much more convenient than RPC!
This is a quote from the GWT documentation. It says that Hibernate changes the object from its original form in order to make it persistent.
What this means for GWT RPC is that by the time the object is ready to be transferred over the wire, it actually isn't the same object that the compiler thought was going to be transferred, so when trying to deserialize, the GWT RPC mechanism no longer knows what the type is and refuses to deserialize it.
Unfortunately the only way to implement the solution is by making DTOs and their appropriate converters.
Using Gilead is a cleaner approach (no need for all this DTO code), but DTOs are more lightweight and thus produce less traffic over the wire.
Anyhow, there is also Dozer, which will generate the DTOs for you, so there will not be much need for you to actually write the code.
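For illustration, the Dozer mapping call itself is tiny (the entity and DTO names here are hypothetical, assuming matching field names):

    import org.dozer.DozerBeanMapper;

    DozerBeanMapper mapper = new DozerBeanMapper();
    // Copies same-named fields from the Hibernate entity into a plain DTO.
    DocumentDTO dto = mapper.map(hibernateDocument, DocumentDTO.class);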
Either way, as mchq08 said, the link he provided will answer many of your questions.
I would also make another suggestion: separate the projects. Create a new project as the model for your application and include the JAR in the GWT project. This way your GWT project will be almost entirely GUI, and the JAR library can be reused for other projects too.
When I created my RPC-to-Hibernate layer I used this example as a framework. I would recommend downloading their source code and reading the section called "Integration Strategies", since I felt the "Basic" section did not justify DTOs. One thing this tutorial does not cover as well is the receiving and sending part on the web page (which is compiled to JS), so that's why I recommend downloading their source code and looking at how they send and receive the DTOs.
Post the stack trace and some code that you believe will be useful for solving this error.
Google's GWT & Hibernate
Reading this (and the source code) can take some time, but it really helps in understanding their logic.
I used the following approach: for each Hibernate entity class I had a client-side replica without any Hibernate stuff, plus a mechanism to copy data between the client and server classes.
This worked, but I believe the current GWT version should work with Hibernate-annotated classes...
On a client project, I use Moo (which I wrote) to translate Hibernate-enhanced domain objects into DTOs relatively painlessly.
I am using XStream as part of my application for serializing objects. For one of the use cases, I have to serialize some objects that implement the Externalizable interface, and for my use case I would like XStream to serialize them the same way it serializes plain objects, rather than via their Externalizable implementation.
I found a link on the internet, http://old.nabble.com/How-to-remove-Externalizable-Converter-td22747484.html, which helped me address the issue; following it, I started using the reflection converter for Externalizable objects.
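For reference, the registration that thread describes looks roughly like this (MyExternalizableModel is a hypothetical stand-in for your Externalizable class):

    import com.thoughtworks.xstream.XStream;
    import com.thoughtworks.xstream.converters.reflection.ReflectionConverter;

    XStream xstream = new XStream();
    // Override the built-in ExternalizableConverter for this one type so its
    // fields are written reflectively, like any plain object.
    xstream.registerConverter(new ReflectionConverter(
            xstream.getMapper(), xstream.getReflectionProvider()) {
        public boolean canConvert(Class type) {
            return MyExternalizableModel.class == type;  // hypothetical class
        }
    }, XStream.PRIORITY_NORMAL);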
When testing the application, I see that it spends a lot of time (tens of seconds) in converter code under highly concurrent access. I can see that the problem is in the buildMap method of FieldDictionary.
I was wondering if there is a better way to address my original issue? Is the performance of the reflection converter expected to be this bad in a highly concurrent environment?
To give some additional context on the environment: it is a web application, the serialization happens during request processing, and the application can have hundreds of concurrent threads.
I really appreciate any help/advice regarding this.
This is technically not an answer... but I hope it helps anyway.
While creating a Java Swing based desktop app used for bio-molecular research modeling, we serialized very complicated and interconnected object graphs to disk for performance reasons.
Even after working our way through Externalizable- and Serializable-related issues, we had to abandon the whole approach and start fresh, because Java serialization is very sensitive to object structure, names, etc., which meant innocent refactoring of the model led to major crashes in production when users tried to load old serialized models.
Eventually we created a data-store-friendly object structure (no strong inter-references to other nodes in the graph) and serialized that structure instead. This was much simpler, less error-prone, and much faster than serializing and deserializing the original graph. It also meant that we could refactor or modify our domain graph objects at will, as long as the adapters (components that converted domain objects to data store objects) were kept properly updated.
I am developing a Java based desktop application. There is some data generated from the application's object model that I need to persist (preferably to a file). There is also a requirement to protect the persisted file so that others can't derive the object model details from the data. What's the best strategy for doing this? I was under the impression that these requirements are very common for desktop apps, but I haven't been able to find much useful info on them. Any suggestion appreciated.
Your question has two parts. First: how to persist the data? Second: how to protect it?
There are a lot of ways to persist data, from simple XML or Java serialization to your own data format. There is no way to prevent reverse engineering of data stored as plain text; you can only make it harder, not impossible. To make it practically impossible you need strong encryption, and here comes the problem: how to encrypt the data without revealing the secret token. If you distribute the secure token with your application, it is just a matter of time before someone finds it, and the protection is broken. So embedding a secure token at installation time is not an option either. If the user has to authenticate to use the application, that could help, but it runs into the same problem. The next option is to use a custom, protected bijective algorithm to obfuscate the data. And the last option is to do nothing special: keep the data format private, don't publish it, and obfuscate your application to hinder reverse engineering.
The best value for the effort is simple obfuscation of the data (e.g. XOR with a prime number) combined with a custom data format and an obfuscated application.
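A toy sketch of that XOR idea (the key value is arbitrary; this deters casual inspection only, it is not encryption):

    // Toy obfuscation: XOR every byte with a fixed key. Running the same
    // method again restores the original data.
    public static byte[] xorObfuscate(byte[] data) {
        final byte key = 97;  // arbitrary constant (a small prime)
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key);
        }
        return out;
    }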
If you don't need to modify this file, you can serialize the object graph to a file. The contents are binary and can only be read using the classes with which they were written.
You can also use Java DB (shipped with Java since 1.5, I think) and an ORM tool such as Hibernate.
EDIT
It has been bundled since 1.6: http://developers.sun.com/javadb/
XStream works if you want to do simple XML reading and writing to a file. XStream allows you to take any Java object and write it to, and read it from, your file.
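The round trip is just a couple of calls; a minimal sketch (MyModel and the file name are hypothetical):

    import java.io.FileReader;
    import java.io.FileWriter;
    import com.thoughtworks.xstream.XStream;

    XStream xstream = new XStream();
    // Write the object graph out as XML...
    try (FileWriter out = new FileWriter("model.xml")) {
        xstream.toXML(model, out);
    }
    // ...and read it back later.
    MyModel restored = (MyModel) xstream.fromXML(new FileReader("model.xml"));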
I think "serialization" is the word:
http://java.sun.com/developer/technicalArticles/Programming/serialization/
If you really need the security implied in your statement ("...protect the persisted file so that others can't derive the object model details from the data."), I'd serialize the data in memory (to Java serialized form, XML, or whatever) and then encrypt that byte stream to a file.
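A minimal sketch of that idea using Java serialization and the standard javax.crypto API (key management, the hard part, is assumed to happen elsewhere):

    import java.io.FileOutputStream;
    import java.io.ObjectOutputStream;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.SecretKey;

    public static void saveEncrypted(Object model, SecretKey key, String path)
            throws Exception {
        // "AES" falls back to provider defaults; real code should pick an
        // explicit mode (e.g. AES/GCM/NoPadding) and manage the key carefully.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        // The serialized bytes never touch the disk unencrypted.
        try (ObjectOutputStream out = new ObjectOutputStream(
                new CipherOutputStream(new FileOutputStream(path), cipher))) {
            out.writeObject(model);
        }
    }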
You can try using an embedded database like Berkeley DB Java Edition (http://www.oracle.com/database/berkeley-db/je/index.html). Its Direct Persistence Layer API will most likely suit your needs. The database contents are synced to files on disk, and from just looking at the files directly it's not easy to figure out the object model from the data. I've had good experiences with it; it's lightning fast and works well with desktop applications.