I have yet to find a good benchmark on JSF performance. I know what you are thinking: it depends on your code/design/database setup, etc. I know that you have to optimize many things before you need to optimize your presentation layer (for instance, you should check how good your database schema is), but let's say, for the sake of argument, that we have reached the point where we must review our presentation layer.
JSF is session-intensive. I've read a number of times that this can be a drawback when it comes to writing scalable applications. Having big user sessions in a clustered environment can be problematic. Is there any article on this? I'd hate to go to production only to find that the great JSF lifecycle and CDI integration have a huge performance cost.
For high performance, session stickiness must be implemented, regardless of framework or language. How that's done depends on your setup; for example, hardware load balancers usually have this feature. Then you don't really have to worry about inter-server network latency.
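Independent of stickiness, much of JSF's session weight comes from what you put in session scope. A minimal sketch, assuming JSF 2.2+ with CDI, of preferring view scope so state is reclaimed when the user leaves the page rather than sitting in the session until timeout (the bean and field names are illustrative):

    // A hedged sketch: @ViewScoped (javax.faces.view, JSF 2.2+) ties state to a
    // single view, while @SessionScoped state lives until logout or timeout.
    import java.io.Serializable;
    import javax.faces.view.ViewScoped;
    import javax.inject.Named;

    @Named
    @ViewScoped // reclaimed with the view; @SessionScoped would outlive the page
    public class SearchForm implements Serializable {
        private String query; // hypothetical form field

        public String getQuery() { return query; }
        public void setQuery(String query) { this.query = query; }
    }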
However, JSF+CDI performance on a single machine is also very important. Suppose the overhead is 300 ms; that means a 4-core server can only handle roughly 13 requests per second (4 cores / 0.3 s per request). Not too bad, but not in the high-performance class. (This is usually not a problem for companies on the JEE bandwagon; they are usually enterprise-scale, not internet-scale, and they have cash to burn for lots of servers.)
I don't really have the performance numbers though; it would be interesting if someone reported some CDI+JSF stats, for example how long it takes to handle a typical page with a moderately sized form.
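In the absence of published numbers, it's easy to collect your own. A minimal sketch of a servlet Filter that times each request end to end (the class name is an illustrative assumption; the whole JSF lifecycle runs inside chain.doFilter):

    // A minimal sketch of per-request timing with a standard servlet Filter.
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.annotation.WebFilter;

    @WebFilter("/*") // Servlet 3.0+; alternatively register in web.xml
    public class TimingFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(req, res); // the JSF lifecycle executes in here
            } finally {
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("request took " + micros + " us");
            }
        }

        @Override public void init(FilterConfig cfg) {}
        @Override public void destroy() {}
    }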
I don't know if there is any truth in the assertion that JSF is heavy on session data. However, I'd have thought that the way to address scalability issues due to large amounts of session data would be to:
replicate the front-end servers (which you have to do anyway beyond a certain point of scaling), and
dispatch requests to the front-ends based on the session token, so that the session data is likely to already be available in memory (a sketch of this follows below).
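A minimal sketch of that dispatch idea: hash the session token to pick a backend, so the same session keeps landing on the same server (the server list and method names are illustrative assumptions):

    // A minimal sketch of session-token-based routing.
    import java.util.List;

    public class StickyRouter {
        private final List<String> backends; // e.g. ["app1:8080", "app2:8080"]

        public StickyRouter(List<String> backends) {
            this.backends = backends;
        }

        public String backendFor(String sessionId) {
            // floorMod keeps the index non-negative for negative hash codes
            int index = Math.floorMod(sessionId.hashCode(), backends.size());
            return backends.get(index);
        }
    }

Note that plain modulo reshuffles most sessions whenever the server list changes; consistent hashing is the usual refinement if servers come and go.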
The presentation layer is an instance of an embarrassingly parallel application. In principle you can scale it by adding hardware; the extreme would be one hyper-thread per user at the moment of your site's peak user count. So scalability is not the problem here. What can be a problem is a page that has to be rendered sequentially and takes a long time even in single-user mode: if your JSF page takes a minute to render for one user, it will take a minute in multi-user mode too, and if you cannot render it in multiple pieces in parallel, then that time is simply unavoidable.
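If a slow page can be split into independent fragments, they can be rendered concurrently. A minimal sketch with CompletableFuture, assuming the fragments really are independent (the render methods are hypothetical stand-ins):

    // A minimal sketch of rendering independent page fragments in parallel;
    // total time is roughly max(fragment times) instead of their sum.
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelRender {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            CompletableFuture<String> header =
                CompletableFuture.supplyAsync(ParallelRender::renderHeader, pool);
            CompletableFuture<String> body =
                CompletableFuture.supplyAsync(ParallelRender::renderBody, pool);
            CompletableFuture<String> sidebar =
                CompletableFuture.supplyAsync(ParallelRender::renderSidebar, pool);

            String page = header.join() + body.join() + sidebar.join();
            System.out.println(page);
            pool.shutdown();
        }

        static String renderHeader()  { return "<header/>"; }
        static String renderBody()    { return "<body/>"; }
        static String renderSidebar() { return "<aside/>"; }
    }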
I have a somewhat unusual question, and I know it is controversial, but here it comes.
I have developed a few JSF applications in the past, but these all limit the number of users that can be served to about 5-6. This was partly because of a license-based policy. I performed some tests with 20+ users and Selenium, and the applications became really slow. The problem probably originated from the server's performance, but still, I can't help asking the following question:
Can a JSF application support a large number of users? My bet would be that the framework should allow it, however I can't think of any commercial website that uses JSF and can support thousands of users at a time. (If you could show me some that would be great!)
I ask this because I have been asked to develop a larger system, and I would love to use JSF because I like it very much; however, the recent performance tests gave me doubts. The lead programmer said it is only the server machine's performance that is the issue, but in that case, what kind of machine can support thousands of users logged in at the same time? The lead programmer is not the best of their kind, which is why I want to hear a second opinion from Stack Overflow, if you don't mind.
If there is any framework more suitable for extreme use, please let me know which one it is; the only real constraint I have is that it should be Java-based on the server side.
Again my apologies for the unconstructive question.
these all limit the number of users that can be served to about 5-6
Not sure what the app's load or design are, but that sounds unbelievably low. JSF should be able to handle many hundreds of users if designed right, or even thousands with the right infrastructure. JSF runs on top of servlets and Facelets; the framework is standard code on top of these that has been optimized over time and gets JIT-compiled at runtime.
E.g. with IBM WebSphere Portal Server and Oracle Portal, the standard way to build customer portals and apps is JSF, and these are used in massive installations.
Sounds like your past app(s) have some problem. I don't think you can blame that performance on JSF.
If you want an extreme number of connections in Java, you might consider http://netty.io/. It is designed for, and has been tested with, 100,000+ connections.
I suspect the bottleneck is not the number of connections you have but how efficiently you serve up pages, i.e. your JSF rendering is particularly slow. If you optimise that, I suspect you can handle more connections.
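For reference, a minimal sketch of a Netty server, assuming the Netty 4 API (an echo handler stands in for real work):

    // A minimal sketch of a Netty 4 echo server: non-blocking I/O, a small
    // number of event-loop threads serving many thousands of connections.
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class EchoServer {
        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(boss, workers)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                             @Override
                             public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                 ctx.writeAndFlush(msg); // echo bytes straight back
                             }
                         });
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }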
I need to call a method in another JVM running on the same machine. The method needs to be called very many times with Java/native-like performance. It is a small-input, small-output method.
What is the fastest way to make this call and retrieve the result from this other JVM running "nearby"?
Some options are probably RMI, pipes, sockets, JMS, some optimized same-machine inter-JVM communication mechanism, or some low-level hack in the JVM. Any idea is welcome, regardless of how specialized it is.
The fastest way to communicate between JVMs on the same machine is shared memory, e.g. via memory-mapped files. This is as much as 100x faster than using a socket over loopback, e.g. a 200 ns round-trip time vs a 10-20 microsecond round-trip time for sockets.
One implementation is Java Chronicle. BTW, the 100 ns latency includes persistence of the messages.
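A minimal, hand-rolled sketch of the memory-mapped-file idea (this is not Chronicle's actual API; the offsets and flag protocol are illustrative assumptions, and production code needs properly ordered writes, e.g. via Unsafe or VarHandles):

    // Writer JVM: map a small file and publish a value behind a flag.
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SharedMemoryWriter {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc.dat", "rw");
                 FileChannel channel = file.getChannel()) {
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                buf.putLong(8, 42L); // payload at offset 8
                buf.putInt(0, 1);    // publish flag at offset 0, written last
                // The reader JVM maps the same file and spins on the flag:
                //   while (buf.getInt(0) == 0) { Thread.onSpinWait(); }
                //   long value = buf.getLong(8);
            }
        }
    }

Cross-process visibility through mmap is OS- and JVM-dependent, which is exactly the kind of detail a library like Chronicle handles for you.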
Whether you need either of these solutions isn't something you should take for granted. Often when people say they have to have the "fastest", they really mean they don't know how fast it needs to be, so they figure picking the fastest must be the right solution. This is usually not correct, because taking the fastest solution often means making compromises in the design and implementation which, it may turn out, were never needed if only you had known what the requirements really were.
In short, unless you have specific, measurable latency and/or throughput requirements you should assume that the simplest solution is what you really want. This can be replaced with something faster should it turn out it is not suitable when you have a better understanding of what is required.
Another possibility is 0MQ (ZeroMQ), though it depends what you mean by "fastest": ZeroMQ is excellent for throughput, but if you absolutely must have the lowest possible latency, it may not be optimal.
ZeroMQ may be overkill for just two JVMs, but has the advantage that if you later want to move one of the JVMs to another machine, or communicate with non-Java processes, ZeroMQ will still work just fine - and it scales to larger-scale, more complex communications too.
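A minimal sketch of the request/reply pattern on the reply side, assuming the classic org.zeromq.ZMQ API from the JeroMQ/jzmq bindings (endpoint and payloads are illustrative):

    // One JVM replies; the other would create a ZMQ.REQ socket, connect to the
    // same endpoint, then alternate send()/recv().
    import org.zeromq.ZMQ;

    public class ReplyServer {
        public static void main(String[] args) {
            ZMQ.Context context = ZMQ.context(1);
            ZMQ.Socket responder = context.socket(ZMQ.REP);
            responder.bind("tcp://127.0.0.1:5555"); // loopback between the two JVMs

            while (!Thread.currentThread().isInterrupted()) {
                byte[] request = responder.recv(0);   // blocks for a request
                responder.send("pong".getBytes(), 0); // small reply
            }
            responder.close();
            context.term();
        }
    }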
This is not going to be a request for a general comparison:
The Play! framework is Java-based, which means the code is compiled to bytecode and then JIT-compiled to native code by the JVM at runtime. Ruby, on the other hand, is a dynamic language, which (in the standard implementation) means the code is interpreted at runtime. This much is obvious to every programmer.
Another aspect is the development process and the ease of the language (static typing vs dynamic typing).
Currently I'm developing a new website using Play!
So, for the questions:
Performance of the HTTP server (Play! runs on the JVM, Ruby is dynamic): does it really matter for a website? Would you see a significant difference?
I feel RoR has a much larger community, more sources, tutorials, etc., and that bothers me a little. Should it?
Well, it depends.
Ruby's not a particularly fast language, but language execution speed is likely not to be your bottleneck—in my experience, ruby's relative slowness is often just a drop in the ocean of external service calls (e.g. databases), algorithmic problems (e.g. synchronous, blocking subroutines), and design choices that are just generally inappropriate for the problem domain. Keep your whole technology stack in perspective.
Community's important, and Ruby/Rails has an extremely active one. AFAIK Play's smaller, but in my own experience Java and Scala (and the myriad other languages that have JVM implementations (including Ruby)) also have good communities.
All of this depends on the specific needs of your app (and you!). If Ruby's too slow, it's too slow. If you absolutely need some library that only exists in Java, use Java. Choose the tool to fit the task. But keep the entire task (and your own needs for completing that task) in perspective.
There are many differences between these two models. As for performance, my opinion on Java-based frameworks vs. RoR:
1. A Java-based website (running on one of several Java application servers) has unique advantages, such as the multi-threaded model (fastest access to local data), global memory, easy resource pooling, and plenty of efficient client libraries for connecting to all kinds of third-party OSS tools...
2. In the RoR (and PHP) model of HTTP-server connections, requests need to be "proxied" to the application tier. The multi-process model increases inter-process communication, and as a "dynamic language", its raw performance is lower.
But nowadays web programming depends on other tools for a performance boost. The widespread use of caches, NoSQL stores (Memcached, Redis, TT/TC), and IPC/RPC frameworks (Netty, Akka, ...) shifts the bottleneck. I know both of the above models have been used in large-scale social networking games.
I've been doing software development (in Java) since 1998 and have worked with many different teams solving various problems. Never during that time has any team I worked with used parallel programming methods. Even though multi-core processors have been around for a while, I find that parallel programming models are still largely ignored in the real world. So, why is parallel programming not used more often? It seems like a good way to make things more scalable and efficient, and generally to improve the performance of programs.
Because getting parallel programming right in a multithreaded shared-memory environment like Java is really, really hard. You're practically guaranteed to make mistakes that are very hard to diagnose. And of course it's extra effort.
Furthermore, the kind of programs that most people work on are workflow systems. Desktop versions of those aren't performance-critical, and webapps / server components are trivial to parallelize by having each request be served by its own thread. This has the advantage that developers don't really have to deal with the parallel aspect.
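A minimal sketch of that thread-per-request model, stripped of any framework (port, pool size, and response are illustrative):

    // Each accepted connection is handled on its own pooled thread, so the
    // handler itself stays ordinary single-threaded code.
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(100);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();
                    pool.submit(() -> handle(client)); // parallelism per request
                }
            }
        }

        static void handle(Socket client) {
            try (Socket c = client; OutputStream out = c.getOutputStream()) {
                out.write("HTTP/1.0 200 OK\r\n\r\nhello\r\n".getBytes());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }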
Because parallel programming is not applicable to every possible problem and because writing a correct concurrent program is hard. Especially making sure that threads are synchronized correctly without unnecessary locking is just not easy. Also, bugs that happen depending on the timing of thread scheduling are very hard to reproduce, find and fix. It's very nasty if you have a concurrency bug that happens once every 100,000 transactions and it happens on production and not on your development system (I've been there...).
Read the book Java Concurrency in Practice, the best book about concurrent programming in Java.
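As an illustration of how easy it is to get wrong, a minimal sketch of the classic lost-update bug and its lock-free fix (counts are illustrative):

    // counter++ is a read-modify-write, not atomic: two threads interleave and
    // lose updates. AtomicLong fixes it without explicit locking.
    import java.util.concurrent.atomic.AtomicLong;

    public class LostUpdateDemo {
        static long plain = 0;                              // broken under contention
        static final AtomicLong atomic = new AtomicLong();  // correct, lock-free

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    plain++;                  // racy: typically ends up < 2,000,000
                    atomic.incrementAndGet(); // always exactly 2,000,000
                }
            };
            Thread a = new Thread(task), b = new Thread(task);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println("plain=" + plain + " atomic=" + atomic.get());
        }
    }

Worse, the racy counter will often print the correct value on a lightly loaded machine, which is exactly the "works in development, fails in production" behaviour described above.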
parallel programming models are still largely ignored in the real world
I think it is being used where it solves problems. But it doesn't come for free, so it's indeed better not to do anything in parallel (asynchronously) when a simpler serial (synchronous) solution works well enough.
Real parallel programming (where you split up a problem over several cores) is mostly interesting for long-running algorithms. Most real-life applications are event processors instead, and the events often already run in parallel (think of a web server using several threads to process requests). Where programming is used to solve computational problems, parallelism is used more often (think optimization, data analysis, etc.).
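For those computational cases, the fork/join machinery is already packaged up for you; a minimal sketch using Java 8 parallel streams (the workload is illustrative):

    // Splitting an arithmetic problem across cores with a parallel stream,
    // which runs on the common fork/join pool.
    import java.util.stream.LongStream;

    public class ParallelSum {
        public static void main(String[] args) {
            long sum = LongStream.rangeClosed(1, 1_000_000_000L)
                                 .parallel() // split the range across cores
                                 .sum();
            System.out.println(sum); // 500000000500000000
        }
    }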
I think your premise is wrong; for example, when you have a server-side application, the application server handles each request on its own thread (possibly taken from a thread pool).
I guess that anywhere you need a parallel (multi-threading) model, you have to program in parallel.
There is useful information in the following link:
http://www.asjava.com/core-java/why-we-need-to-use-multiple-threads/
Are there any good books on the subject worth reading and still up-to-date with current technologies?
I'm mostly interested in back-end architecture and the things I should consider when choosing a clustering and database solution, as I plan to use GWT for the front-end and therefore won't be able to control much there.
I'm looking for a book which will answer questions like: How to choose a load-balancing strategy? What DB model to choose? How to scale data? How to scale request handling? What are common problems when building a web application able to handle huge traffic?
About GWT: Google Web Toolkit Applications.
In general, Even Faster Web Sites and Building Scalable Web Sites are very nice.
I have heard good words about The Art of Capacity Planning too, but I don't have it, so I cannot speak from first-hand experience.
Check out O'Reilly's books. Here's one on High Performance Web Sites.
Don't know about books, but if you want information regarding real world, up to the bleeding edge, scalable web applications and architecture, then highscalability is a must read.
Performance Analysis for Java Web Sites by Stacey Joines et al.?
My take is that Ajax doesn't fundamentally affect the overall approach to scalability. It may place even greater emphasis on the intelligent use of caching, but overall, everything we knew about scalability remains true.