I need to call a method in another JVM running on the same machine. The method takes small inputs and returns small outputs, and it needs to be called very many times with Java/native-like performance.
What is the fastest way to make this call and retrieve the result from this other JVM running "nearby"?
Some options are probably RMI, pipes, sockets, JMS, some optimized same-machine inter-JVM communication mechanism, or some low-level hack in the JVM. Any idea is welcome, no matter how specialized.
The fastest way to communicate between JVMs on the same machine is shared memory, e.g. via memory-mapped files. This can be as much as 100x faster than using a socket over loopback: e.g. a 200 ns round-trip time vs a 10-20 microsecond round-trip time for sockets.
One implementation is Java Chronicle. BTW, that latency figure includes persistence of the messages.
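For illustration, here is a minimal sketch of the memory-mapped-file approach; the file path, flag protocol, and payload layout are invented for the example, and a real implementation would also need proper memory-ordering guarantees (which libraries like Chronicle handle for you):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryClient {
    public static void main(String[] args) throws Exception {
        // Both JVMs map the same file; writes by one become visible to the other.
        try (RandomAccessFile file = new RandomAccessFile("/tmp/ipc.dat", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(4, 42);        // request payload at offset 4
            buf.put(0, (byte) 1);     // flag byte: 1 = request pending
            while (buf.get(0) != 2) { // 2 = response ready, set by the other JVM
                // busy-spin; this is what buys the sub-microsecond round trip
            }
            System.out.println("response: " + buf.getInt(4));
        }
    }
}
```

The responding JVM maps the same file, spins on the flag byte, reads the request, writes its result, and sets the flag to 2.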
Whether you need either of these solutions isn't something you should take for granted. Often when people say they must have the "fastest", they really mean they don't know how fast it needs to be, so they assume that picking the fastest must be the right choice. This is usually not correct, because taking the fastest solution often means making compromises in the design and implementation which, it may turn out, were never needed, if only you had known what the requirements really were.
In short: unless you have specific, measurable latency and/or throughput requirements, you should assume that the simplest solution is what you really want. It can be replaced with something faster later, should it prove unsuitable once you have a better understanding of what is required.
Another possibility is 0MQ (ZeroMQ), though it depends what you mean by "fastest": ZeroMQ is excellent for throughput, but if you absolutely must have the lowest possible latency it may not be optimal.
ZeroMQ may be overkill for just two JVMs, but it has the advantage that if you later want to move one of the JVMs to another machine, or to communicate with non-Java processes, ZeroMQ will still work just fine; it also scales to larger, more complex communication patterns.
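As a rough sketch of what that looks like from Java, assuming the JeroMQ bindings (the endpoint address and echo behaviour are purely illustrative):

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ZmqEchoServer {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket socket = ctx.createSocket(SocketType.REP);
            // TCP over loopback; native libzmq also offers an ipc:// transport
            socket.bind("tcp://127.0.0.1:5555");
            while (!Thread.currentThread().isInterrupted()) {
                byte[] request = socket.recv(0); // blocks until a request arrives
                socket.send(request, 0);         // echo it straight back
            }
        }
    }
}
```

The other JVM would create a matching REQ socket, connect to the same endpoint, and send/receive in lockstep.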
I have been watching node.js and its apps for a year now, and I would love to replace bigger parts of my good old Java code with node.js.
The problems I have noticed with node.js are that:
it looks chaotic; from version to version things stop working
the documentation is bad, really bad
I have no idea which libraries have been ported or will be ported any time soon
multi-core management: does it have any?
it uses 100% of the CPU regardless of what it actually does (i.e. pauses in loops). That's not green, and this is important to us.
Regarding security concerns, I would put it behind a reverse proxy, and only my old and real Java server would be able to use it.
Update: funny that this question gets closed because it's not constructive. How can the question be constructive when I don't have any clue? That's why I am asking here! You moderators really suck sometimes.
However, would you rather suggest waiting before moving to node? Or do you think it's time to move over?
I'm writing loads of Java server-side code, and I would start by building my own base framework and then port piece by piece!?
Even if the question gets closed:
Actually, it has been pretty stable and backwards compatible so far.
Are you for real? --> http://nodejs.org/api/
Again: http://www.nodejs.org
Node doesn't scale across CPUs or cores by itself; you let the OS scale node processes by just launching multiple instances.
That is just wrong.
Even if Node is still "young" in comparison to other server-side languages, it has already found its place in a lot of spots. It can easily deal with huge numbers of users, it's an excellent web-socket server counterpart, it's lightning fast when it comes to dispatching network traffic to a lot of active connections, and it's ECMAScript, the sweetest sugar language ever made (that last statement is personal opinion).
There are probably hundreds of valid use cases for Node.js. Obviously there is no specific task where it is a "must use", but that's most likely true of any language. It's fun, it's fast, dig into it.
Simple enough, really. I have a horrible amount of JSON to process, 100 GB in total. This 100 GB is split across files which are typically 1 MB each.
So this left me wondering: typically speaking, would it be quicker to parse a JSON file in JavaScript, or would I get similar results processing the files using one of Java's JSON jars?
Now obviously I'd have to multi-thread all of this and so on.
Use whatever technology you're most adept at; the odds of a massive performance difference are low. V8 (Google's JavaScript engine, best known from the Chrome browser and from NodeJS in non-browser environments, but which can also be run standalone) is freaky fast, as is Sun/Oracle's JVM with its excellent HotSpot optimization technology. You could even use JavaScript on the JVM if you like (Rhino).
Now obviously I'd have to multi-thread all of this and so on.
It's not obvious at all. If the process is I/O bound (and if you're reading a hundred thousand 1 MB files, it sounds like it probably will be, depending on what you're doing with them), adding multiple threads won't help you.
I think it'd be easier, faster, and more easily scalable (ThreadPoolExecutor) to process it in Java.
How were you planning to do it with JavaScript? Standalone V8?
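For what the Java route could look like, here is a minimal sketch using a fixed-size thread pool; the directory path is made up, and Jackson's ObjectMapper stands in for whichever JSON jar you pick:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.*;
import java.util.concurrent.*;
import java.util.stream.Stream;

public class JsonBatchProcessor {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper(); // thread-safe once configured
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try (Stream<Path> files = Files.list(Paths.get("/data/json"))) {
            files.forEach(path -> pool.submit(() -> {
                try {
                    JsonNode root = mapper.readTree(path.toFile());
                    // ... whatever per-file processing is actually needed ...
                } catch (Exception e) {
                    System.err.println("failed on " + path + ": " + e);
                }
            }));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```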
If you know it, I'd use Node.js. It's better to handle JSON objects in an environment built on JavaScript.
Both languages run in a virtual execution environment, so execution speed will depend more on the VM you use, and recent VMs have become really fast, especially on recent hardware.
To my knowledge, JavaScript doesn't have "native" support for threading; multithreading was implemented as "time-shared" execution to prevent lockups. This seems to no longer be the case with web workers, however. You could also just split your files across different processes that each independently process their share, but this will generate a lot of concurrent disk access, which will most probably be your bottleneck when processing the files.
So I'd suggest you go with the language that you are the most comfortable with.
By the way, mind telling us what kind of processing you will be doing on the JSON files?
If I were to implement this: to limit concurrent I/O, I'd have a first thread that prefetches one file at a time, reads it into memory, and queues a worker to process that file (if the processing is heavy, a thread pool will certainly improve processing speed). See the sketch below.
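A rough sketch of that single-reader, pooled-worker layout; the directory path and the poison-pill shutdown are my own illustrative choices:

```java
import java.nio.file.*;
import java.util.concurrent.*;

public class PrefetchPipeline {
    private static final byte[] POISON = new byte[0]; // sentinel: "no more files"

    public static void main(String[] args) throws Exception {
        int workers = Runtime.getRuntime().availableProcessors();
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(8); // small buffer caps memory use
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Workers do the CPU-bound parsing, pulling file contents off the queue.
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                while (true) {
                    byte[] content = queue.take();      // blocks until a file is ready
                    if (content == POISON) return null; // clean shutdown
                    // ... parse and process the JSON bytes here ...
                }
            });
        }

        // A single reader keeps disk access sequential, avoiding seek thrashing.
        try (DirectoryStream<Path> files = Files.newDirectoryStream(Paths.get("/data/json"))) {
            for (Path p : files) {
                queue.put(Files.readAllBytes(p)); // blocks when workers fall behind
            }
        }
        for (int i = 0; i < workers; i++) {
            queue.put(POISON); // one pill per worker
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```

The bounded queue applies backpressure: the reader blocks when the workers fall behind, so memory use stays flat.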
I've been doing software development (in Java) since 1998 and have worked with many different teams solving various problems. Never during that time has any team used parallel programming methods. Even though multi-core processors have been around for a while, I find that parallel programming models are still largely ignored in the real world. So why is parallel programming not used more often? It seems like a good way to make things more scalable and efficient, and generally to improve the performance of programs.
Because getting parallel programming right in a multithreaded shared-memory environment like Java is really, really hard. You're practically guaranteed to make mistakes that are very hard to diagnose. And of course it's extra effort.
Furthermore, the kind of programs that most people work on are workflow systems. Desktop versions of those aren't performance-critical, and webapps / server components are trivial to parallelize by having each request be served by its own thread. This has the advantage that developers don't really have to deal with the parallel aspect.
Because parallel programming is not applicable to every possible problem, and because writing a correct concurrent program is hard. In particular, making sure that threads are synchronized correctly without unnecessary locking is just not easy. Also, bugs that depend on the timing of thread scheduling are very hard to reproduce, find, and fix. It's very nasty when you have a concurrency bug that happens once every 100,000 transactions, and it happens in production but not on your development system (I've been there...).
Read the book Java Concurrency in Practice, the best book about concurrent programming in Java.
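As a tiny, made-up demonstration of why this is hard, the unsynchronized counter below loses updates nondeterministically, and the result changes from run to run, which is exactly the timing-dependent behaviour described above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RacyCounter {
    static int count = 0; // shared mutable state with no synchronization

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 100_000; j++) {
                    count++; // read-modify-write is not atomic; updates get lost
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // Expected 400000, but this almost always prints less,
        // and a different number on every run.
        System.out.println(count);
    }
}
```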
parallel programming models are still largely ignored in the real world to solve problems
I think it is being used where it solves problems. But it doesn't come for free, so it's indeed better not to do anything in parallel (asynchronously) when a simpler serial (synchronous) solution works well enough.
Real parallel programming (where you split a problem over several cores) is mostly interesting for long-running algorithms. Most real-life applications are event processors, and often these events already run in parallel (think of a web server with several threads processing requests). Where a program is used to solve a numeric problem (think optimization, data analysis, etc.), parallelism is used more often.
I think your premise is wrong. For example, when you have a server-side application, the application server handles each request with a thread (possibly taken from a thread pool).
I'd guess that anywhere a parallel (multi-threading) model is needed, you end up doing parallel programming.
There is useful information at the following link:
http://www.asjava.com/core-java/why-we-need-to-use-multiple-threads/
I have been writing C/C++ code for years. Recently I started doing a lot of Java too, because some of the very fine products that I use to solve my computing problems are all written in Java (examples: Lucene/Solr, Hadoop, Neo4j, OpenNLP, etc.).
Over the last 3-4 years I have seen this change: Java has become very popular, at least in Internet algorithms (clustering, search, Big Data and so on), even though there are C++ counterparts to the products I mentioned above (for search, Sphinx, written in C++, is a great option; Google has its MapReduce written in C++; etc.).
I am just curious to know what factors and strengths are making Java so popular these days, especially in the information retrieval and Big Data domain. Is it just the platform-independence thing?
I would argue that Java and C++ perform at a similar level outside of the arbitrary, contrived situations which are so often used to prove that X is faster than Y.
Once you factor in network round-trip times and other, real world delays, I can't see a C++ application offering a measurable advantage over a Java application simply due to being C++ as opposed to Java. You will, however, see a measurable difference between a well-written application and a poorly-written application.
Platform independence is a nice feature, but it doesn't always work in Java; it depends on what you do.
Java gets its popularity from the fact that it's safer than C++:
you cannot use pointer arithmetic, and you cannot manage memory allocation on your own;
if something goes terribly wrong, you get an exception or an error, or the program just crashes, but in Java you are relatively sure not to continue doing things you definitely don't want to do.
Yes, you can do all of that in C++, but that's not the question, is it?
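To illustrate the fail-fast behaviour being described, a toy example: the same out-of-bounds write that silently corrupts neighbouring memory in C or C++ is caught immediately by the JVM:

```java
public class SafetyDemo {
    public static void main(String[] args) {
        int[] data = new int[4];
        // The JVM checks every array access and fails fast
        // instead of quietly scribbling over adjacent memory.
        data[10] = 42; // throws ArrayIndexOutOfBoundsException
    }
}
```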
I have yet to find a good benchmark on JSF performance. I know what you are thinking: it depends on your code/design/database setup, etc. I know that you have to optimize many things before you need to optimize your presentation layer (for instance, you should check how good your database schema is), but let's say for the sake of argument that we have reached the point at which we must review our presentation layer.
JSF is session-intensive. I've read a bunch of times that this can be a drawback when it comes to writing scalable applications. Having big user sessions in a clustered environment can be problematic. Is there any article on this? I'd hate to go to production just to find that the great JSF lifecycle and CDI integration have a huge performance cost.
For high performance, session stickiness must be implemented, regardless of framework or language. How that's done depends on your setup; for example, hardware load balancers usually have this feature. Then you don't really have to worry about inter-server network latency.
However, JSF+CDI performance on a single machine also matters. Suppose the overhead is 300 ms per request: a 4-core server can then handle only about 13 requests per second (4 cores / 0.3 s per request). Not too bad, but not in the high-performance class. (This is usually not a problem for companies on the JEE bandwagon; they are typically enterprise-scale, not internet-scale, and they have cash to burn on lots of servers.)
I don't really have performance numbers, though; it would be interesting if someone reported some CDI+JSF stats, for example how long it takes to handle a typical page with a moderately sized form.
I don't know if there is any truth in the assertion that JSF is heavy on session data. However, I'd have thought that the way to address scalability issues due to large amounts of session data would be to:
replicate the front-end servers (which you have to do anyway beyond a certain point of scaling), and
dispatch requests to the front-end based on the session token, so that the session data is likely to already be available in memory (see the sketch below).
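A toy illustration of that session-token dispatch, assuming a fixed backend list (the class and method names are invented):

```java
import java.util.List;

public class StickyRouter {
    private final List<String> backends;

    public StickyRouter(List<String> backends) {
        this.backends = backends;
    }

    // The same session id always maps to the same backend, so that
    // backend's in-memory copy of the session data stays warm.
    public String backendFor(String sessionId) {
        int index = Math.floorMod(sessionId.hashCode(), backends.size());
        return backends.get(index);
    }
}
```

Hashing the session id needs no shared routing table; real load balancers typically use cookies or consistent hashing instead, so routing survives backend additions and removals.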
The presentation layer is an instance of an embarrassingly parallel application. In principle you can scale it by adding hardware; an extreme would be one hardware thread per user at the minute of your site's maximum user count. So scalability is not the problem here. What might be a problem is pages that have to be rendered sequentially and take a long time to render even in single-user mode: if your JSF page takes a minute to render in single-user mode, then it will in multi-user mode too, and if you cannot render it in multiple pieces in parallel, that time is simply unavoidable.