I'm looking for a free (if possible) solution where:
You can 'record' all the kinds of objects created during normal (production-like) usage of a webapp on an EJB application server (like WebLogic).
Afterwards you can 'replay' each page separately on a non-EJB application server like Tomcat.
Moreover, it would be fantastic if:
One could easily create test-suites from all the recorded scenarios and replay them in different browsers (to test regressions).
One could modify or substitute single recorded objects (for example changing the fields of a map, etc.). Changed objects could be saved and added to a test suite.
It would be possible to integrate the solution with an IDE, ideally Eclipse.
I've tried googling and found Replay Solutions. Their product seems well suited to my requirements, but
unfortunately it is a commercial product and they don't even put pricing directly on the webpage (it's only available after contacting the company).
Does anybody know such a solution or am I demanding too much?
My employer has currently given me a project that has me scratching my head about synchronization.
I'm going to first talk about the situation I'm in:
I've been asked to create a PDF report/quotation tool that takes its data from CSV files. The actual database sits behind old IBM software and, for reasons unknown, direct access to it is not allowed; instead of copying the data to another database, they apparently found it perfectly fine to just fill a folder on the server with loads and loads of CSV files. The software has to load that data into the application, query it, transform it where needed, do calculations, and finally return a PDF file to the end user.
The problem is that getting, querying, and calculating all of this takes a fair amount of time. The other constraint: it has to be a web app, because the business team does not want to install any new software and has been moving everything online since the start of the pandemic. Being a web app means that every computation, and the data loading, has to be done server-side.
My question: Is each call to a servlet by a separate user handled by a separate servlet instance, so that I only need to synchronize the methods in the business logic (getting and using the data)? Or should I write something that sits in front of the servlet, receives a user id (as a reference), runs the business logic in a synchronized fashion, and then returns the resulting PDF file?
(I hope you get the gist of it...)
Everything will run on Apache Tomcat 8, if that helps. The build is Java 11 LTS.
Sorry, no code yet. But I've made some drawings.
With Java web applications, the usual pattern is for components not to hold conversational state (meaning information specific to a single user's requests). If you need to keep state for a user on the server, you can use the HTTP session. With a SPA or Ajax application it's often easier to keep much of that state in the browser. The less state you keep on the server, the easier things get as your application scales: you don't have to pin sessions to servers (which messes up load balancing) or copy lots of session state across a cluster.
For simple (non-reactive) web apps that do blocking I/O, each request-response cycle gets its own dedicated thread from Tomcat's pool. That thread delivers the HTTP request to the servlet, handles the business logic and blocks while talking to the database, then carries the HTTP response.
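To make that concrete, here is a minimal sketch of a servlet in that style; CsvReportService, ReportData and PdfWriter are invented placeholder names, not a real API. Everything the request needs lives in local variables, so it is confined to the request's thread and needs no synchronization:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class QuotationServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Local variables: each request thread gets its own copies, so two
        // users hitting this servlet concurrently cannot interfere here.
        String customerId = req.getParameter("customerId");

        // Blocking work (loading CSVs, querying, calculating) happens on
        // this request thread; CsvReportService stands in for your own
        // business logic.
        ReportData data = new CsvReportService().loadAndCalculate(customerId);

        resp.setContentType("application/pdf");
        new PdfWriter().write(data, resp.getOutputStream());
    }
}
```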
(Reactive webapps are more complex to build; you need a non-blocking database driver and you have fewer choices of databases, so I would steer clear of those, at least for your first web application.)
The thread pool used by Tomcat has to protect itself from concurrent access, but that doesn't impact your code. Likewise there are third-party middle-tier caching libraries that have to deal with concurrency, but you can avoid dealing with it directly. All of your logic is confined to one thread, so it doesn't interfere with processing done by other threads unless there are shared mutable data structures. Those data structures are the one part of the application where synchronization might be one of several possible solutions.
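If you do end up with one shared structure - say, a cache of parsed CSV data - a concurrent collection is often simpler than synchronized methods. A minimal sketch, where CsvCache, ReportData and parseCsv are invented names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CsvCache {
    // One map shared by all request threads. ConcurrentHashMap does the
    // locking internally, and computeIfAbsent parses each file at most
    // once even when two requests ask for the same file simultaneously.
    private static final Map<String, ReportData> CACHE = new ConcurrentHashMap<>();

    public static ReportData get(String csvFile) {
        return CACHE.computeIfAbsent(csvFile, CsvCache::parseCsv);
    }

    private static ReportData parseCsv(String csvFile) {
        // Placeholder for the real CSV parsing and calculation.
        return new ReportData(csvFile);
    }
}
```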
Synchronization and other locking schemes are local to one instance of the application. If you want to stand up multiple instances of this application, be aware that each one locks separately from the others. So for some things it's better to do the locking in the database, since that is shared across webapp instances.
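To make "locking in the database" concrete, here is a hedged JDBC sketch; the quotation_counter table and its column are invented for illustration. The row lock taken by SELECT ... FOR UPDATE is held by the database itself, so it coordinates all webapp instances, not just the threads of one JVM:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class QuotationNumbers {

    // Reserves the next quotation number across all webapp instances.
    // Error handling and rollback are omitted for brevity.
    public static long nextNumber(Connection con) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement select = con.prepareStatement(
                 "SELECT last_number FROM quotation_counter FOR UPDATE");
             ResultSet rs = select.executeQuery()) {
            rs.next(); // assumes the single counter row exists
            long next = rs.getLong(1) + 1;
            try (PreparedStatement update = con.prepareStatement(
                     "UPDATE quotation_counter SET last_number = ?")) {
                update.setLong(1, next);
                update.executeUpdate();
            }
            con.commit(); // releases the row lock
            return next;
        }
    }
}
```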
If you can use a database to store your data, so that you can rely on it for caching and indexing, then your application should be able to avoid doing much locking at all.
If you want examples, there are a lot of small ones for building web apps with Spring at https://spring.io/guides. These are self-hosted Spring Boot applications, so you can put them together quickly and run them right away.
Going rogue and standing up your own database may not be the best course, since databases need looking after by DBAs. My advice is to put together two project plans, one using a database and one using the flat files. The flat-file plan has to allow for handling caching, indexing the data, replicating data from the legacy database, and doing without the standard tools that generate PDFs from SQL queries. The database plan should involve a lot less infrastructure work and a shorter time until you can get down to cranking out reports.
Our team works with a well-known OSGi-based COTS product that runs as a standalone service (it does not interact with multiple instances of itself). The product exposes an API which allows developers to build additional functionality on top of it. It stores what can be large jars (1-5 MB) in ZooKeeper along with other configuration data, and it bundles a lot of open source (Tomcat, ZooKeeper, many other Apache products, etc.). Thanks to the product being written in Java, I have a good understanding of its design and source code.
Our instance of the product has been having trouble starting up correctly at times, and according to the vendor the product is failing to correctly write to or read from ZooKeeper, either while stopping or while starting (the vendor does not yet know for sure). This problem only appeared once we started adding these large jars to the product's ./deploy folder.
I do not believe that the node or path cache use cases apply to this product: https://github.com/Netflix/curator/wiki/Recipes
Full disclosure: I currently have only a shallow understanding of ZooKeeper and have been trying, without success, to find a recipe/use case where one would use it to store large binary jars. I also recognize that I may be asking the wrong question for this audience.
Is the above scenario a common use case for zookeeper?
ZooKeeper is a consensus store that allows multiple processes to share a common view of a shared resource; it is not a blob store and should not be used as one.
Firstly, ZooKeeper is a poor choice for storing data for a standalone instance. If you have no need for distributed consensus between multiple readers/writers, then ZooKeeper is complete overkill.
Secondly, ZooKeeper nodes are designed to hold small pieces of data that change frequently, potentially with many readers watching for changes. The jar files you are adding fit neither criterion: there aren't many readers (the product is a standalone instance) and the jars are large.
The default ZooKeeper configuration puts a hard limit of 1 MB per znode, and ideally you should store far less than that. The limit can be raised, but doing so is not advised. I would strongly recommend looking into a proper file store (or even just the file system, since your node is standalone) for these jar files.
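If some pointer to the jars must live in ZooKeeper for coordination, a common pattern is to store only a small reference in the znode (a path or URL plus a checksum) and keep the bytes on a file store. A sketch using Apache Curator; the znode path and the reference format are invented:

```java
import java.nio.charset.StandardCharsets;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class JarReference {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Store a small pointer (well under the 1 MB znode limit)
        // instead of the multi-megabyte jar itself.
        String ref = "file:/opt/product/deploy/extension.jar#sha256=<checksum>";
        client.create().creatingParentsIfNeeded()
              .forPath("/deploy/extension", ref.getBytes(StandardCharsets.UTF_8));

        client.close();
    }
}
```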
Within our company it's kind of a standard to create repositories for data which is originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
Our web infrastructure consists of a few independent web applications within Tomcat 7 for printing, product descriptions, product orders (these are not persisted in the database!), category descriptions, etc.
They are all built on the Servlet 2 API.
So each instance/implementation of a repository holds a specialised kind of data, represented by serializable classes, and the instances of these serializable classes are set up/filled by a periodically executed database query (for every result row the setters of the fields are called; this reminds me of domain-oriented entity beans with CMP).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances).
Each context has its own connection to the Oracle database (set up via a resource description file on deployment).
All the data is read only, we never need to write back to the database.
Because we need some of these data types in more than one web application (context), and some even in more than one servlet within the same web context, repositories with an identical data type are instantiated several times - e.g. four times, twice within the same application.
In the end some of the data is duplicated, and I'm not sure this is as clever and efficient as it should be. It should be possible to share the same repository object across more than one application (JNDI?), but at the very least it must be possible to share it between several servlets within the same application context.
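For the same-context case, what I have in mind is a ServletContext attribute published once from a listener, so that every servlet sees the same instance - a sketch, with ProductRepository as a placeholder name (@WebListener needs Servlet 3.0; on the Servlet 2 API the listener would be declared in web.xml instead):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class RepositoryBootstrap implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Built once per web application instead of once per servlet;
        // any servlet can fetch it via getServletContext().getAttribute(...).
        ProductRepository repo = new ProductRepository();
        sce.getServletContext().setAttribute("productRepository", repo);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        sce.getServletContext().removeAttribute("productRepository");
    }
}
```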
Besides that, I'm irritated by the idea of using a self-built repository instead of a well-tested, openly developed cache (Ehcache, JCS, ...), especially since some of these caches also provide options for distributed caching (so it should also work across contexts within the same container).
When certain entries are searched for, the search algorithm iterates over all entries in the repository (see the link above). For every search pattern there is a specialised function that is called directly from the business logic classes using the "entity beans"; there is no specification object or interface.
In the end the application server as a whole does not perform well, and it uses a hell of a lot of RAM (for only approximately 10,000 DB entries); in my opinion this is most probably correlated with the use of serializable XSD-to-JAXB-generated classes.
Additionally, every time an application is deployed for testing you have to wait at least two minutes until all entries have been loaded from the database into the repositories - and when deploying to live there is a clearly noticeable out-of-service phase on context/servlet start-up.
I tend to think all of this is closely related to the solutions I described above.
Because I haven't got any experience in this field and I'm new to the company, I don't want to be too obtrusive.
Maybe you can help me to evaluate ideas for a better setup:
Is it better for performance and memory to unify all the repositories into one "repository servlet" and request objects from there via HTTP (I don't think so, though it seems quite modular/distributed-system friendly), or should I try JNDI (I've never done that before) and connect to the repository similarly to a JDBC database?
Wouldn't it be even more sensible, faster, and more efficient to have only one single connection pool for the whole Tomcat instance (and reference this pool from within each web app's deployment descriptor)? Or might that slow down connections or limit them in some other way?
I was told that the cache system (Ehcache) didn't work well (at least not with the performance of the self-written solution - though I can't believe that). I imagine that repositories backed by a cache distributed across all contexts and used by all web applications would not only reduce the memory footprint significantly but also wouldn't be significantly slower - indeed I believe it would be faster, have shorter start-up times, and remove the need to redeploy as often.
I'm very grateful for every tip or hint and your thoughts. It would be marvellous to get a peer review of my ideas based on practical experience.
So thank you very much in advance!
Is it better to hold a repository for every web application (context), or is it better to share a common instance via JNDI or a similar technique?
Unless someone proves me otherwise, I would say there is no way to do it in a standard way - meaning as defined in the Servlet Spec or in the rest of the Java EE spec canon.
There are technical ways to do it which probably depend on a specific application server implementation, but this cannot be "better" in any universal sense.
If you have two applications that operate on the same data, I wonder whether the partitioning of the applications is useful. Maybe all functionality operating on some kind of data needs to be in the same application?
Within our company it's kind of a standard to create repositories for data which is originally stored in the database, as described for example in https://thinkinginobjects.com/2012/08/26/dont-use-dao-use-repository/.
I looked up Evans on our bookshelf. The blog post is quite weird. A repository and a DAO are basically the same thing: both provide CRUD operations for an object or for a tree of objects (Evans says only for the aggregate roots).
The repositories are initialized in the servlets' init sequences (so every servlet keeps its own set of instances). Each context has its own connection to the Oracle database (set up via a resource description file on deployment). [ ... ]
In the end the application server as a whole does not perform well, and it uses a hell of a lot of RAM
When something performs badly, it's best to profile, e.g. with YourKit, or with perf and flame graphs if you are on Linux. If your applications need a lot of RAM, analyze the heap, e.g. with Eclipse MAT. There is no way anybody can give you a recommendation or a best-practice hint without seeing a single line of code.
A general answer would include anything from performance tuning for Oracle DBs, JDBC, Java collections and concurrent programming, to networking and operating systems.
I was told that the cache system (Ehcache) didn't work well (at least not with the performance of the self-written solution - though I can't believe that)
I can. Ehcache is between 10 and 20 times slower than a simple HashMap; see the cache benchmarks. You only need a map when you do a complete preload and don't have any mutations.
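To illustrate "you only need a map": if the repository is fully preloaded at startup and never mutated, an unmodifiable HashMap gives lock-free lookups with no cache library at all. A sketch; ProductRepository and Product are placeholder names:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class ProductRepository {

    private final Map<String, Product> byId;

    public ProductRepository(Iterable<Product> preloaded) {
        Map<String, Product> m = new HashMap<>();
        for (Product p : preloaded) {
            m.put(p.getId(), p);
        }
        // No mutation after construction: once safely published, the map
        // can be read from any thread without synchronization.
        this.byId = Collections.unmodifiableMap(m);
    }

    public Product findById(String id) {
        return byId.get(id);
    }
}
```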
I imagine that repositories backed by a cache distributed across all contexts and used by all web applications would not only reduce the memory footprint significantly but also wouldn't be significantly slower
Distributed caches need to go over the network and add serialization/deserialization overhead. That's probably another factor of 30 slower. And when is the distributed cache updated?
I'm very grateful for every tip or hint and your thoughts.
Wrap up:
Do the normal software engineering homework: profile, analyze, and spend the tuning effort in the right places
Ask specific questions on one topic on Stack Overflow and share your code and performance data. Ask about one thing at a time, and read https://stackoverflow.com/help/on-topic
You may also come to the conclusion that there is nothing to tune. There are applications out there that need a day to build up an in-memory data structure from persistent data. Maybe it's just a lot of data? If you don't like the downtime, use blue-green deployment. Also use smaller data sets for development and testing
We are looking at our current Liferay deployment for improvement in the near future. We'd like to move to where we use instances more.
We currently have a few sites in the default instance. The model we'd like to follow in the future is to use the default instance for super/global administrator use only, then create instances with different domains for different audiences/users, so that they can be administered by different administrators/groups separately.
Does anyone know how we can move our sites and their related data to another Liferay instance? Is this easy and doable? Are there clearly defined steps and options for this process? Are there any risks in moving the data?
Thanks for the insight.
Instances mean that there's nothing shared between the sites. Well, there actually is something: since you're running on the same app server, you're sharing all the code and all the plugins.
For this reason you might not be as isolated as you want to be. Administration typically is a bit more tricky, as every single instance needs its own virtual host and its own user database (or LDAP connection). The shared plugins might limit you as to what customization you can do with Liferay.
Quite a few times I've seen the expectation that an instance can be "restarted", which seems legitimate, as the customer is the only one on that instance. However, this quickly leads to accumulated maintenance windows for all instances, which are not clearly visible to the other customers.
In general you can use a site's import and export, e.g. export to a LAR file and import it into a new site on a new server in a different instance. You'll find the Import/Export UI in the site administration UI, along with the page administration.
Our framework is Grails. Say domain.com hosts an application currently used by some client. If we want to onboard another client with the same functionality, but with a separation of the two clients' data so they can't mix, how do we do this? And whenever we want to add n clients to this application, what is the best approach, so that with little or no configuration we can share the common WAR file between these clients while separating the databases?
How does real-world web development handle these situations?
And one more point: how do we make client1.domain.com work for client1 and client2.domain.com work for client2? How do we make the WAR file (in Java/Grails) behave like this? Otherwise we have to programmatically gate every feature per client inside the project, or maintain a separate WAR file for each client, which would be a waste of resources.
You're describing multitenancy: create one table for N 'tenants' instead of N identical (or nearly identical) tables, partition it with a tenant_id column, and use that column to filter results in SQL WHERE clauses.
For example, the generated code for findByUsername would be something like select * from person where username='foo' and tenant_id=3 - the same code as a regular call, but with the tenant_id column restricting results to that tenant's data.
Note that previously simple things like unique constraints are now harder, because you want uniqueness within a tenant but still allow a value to be reused across tenants. In this case changing the unique constraint to cover the combination of username and tenant_id works, and lets the database do the heavy lifting.
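In plain JPA terms (Grails/GORM has its own constraint DSL, so take this as an illustration rather than the Grails idiom), the combined constraint looks like this:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

@Entity
@Table(name = "person",
       uniqueConstraints = @UniqueConstraint(columnNames = {"username", "tenant_id"}))
public class Person {

    @Id
    private Long id;

    private String username;

    // Every row carries its tenant; username is unique only within a
    // tenant, so different tenants can both use "foo".
    @Column(name = "tenant_id")
    private Long tenantId;
}
```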
For a while there were several related plugins, but they relied on tweaking internal APIs, and some features broke in newer Hibernate versions. I believe http://grails.org/plugin/multi-tenant-single-db is still active; it was last updated over a year ago, but it is being used. Contact the authors if it looks like what you need, to be sure it's still maintained. Note that it only works with Hibernate 3.x.
Hibernate 4 added support for multitenancy, but I haven't heard much about its use in Grails (which is expected, since it's not that common a requirement). It's not well documented, but this bug report highlights some of the potential pitfalls and should still be a working example (the test app is still on GitHub): https://jira.grails.org/browse/GPHIB-6.
I'd like to ensure that this is working and continues to work, so please let me know via email if you have issues later. It's a great feature and having it in Hibernate core makes things a lot easier for us. But we need to make it easy to use and well-documented, and that will happen a lot faster when it's being used in a real project.