After coming across the concept of coroutines in Lua, I feel they are a much better model for programming concurrent software, and I'm wondering why they were not used in Java.
Coroutines let the developer write code that jumps between multiple functions, progressing each a few steps at a time, which provides the illusion of concurrent execution in much the same way as the CPU time-slices between multiple threads in Java. The difference is that coroutines let the developer decide when to jump out of one function and start executing another. The developer therefore controls how fine-grained the steps are (i.e. the degree of concurrency) and when each context switch occurs, which could prevent costly context switches when latency is critical.
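To make the idea concrete, here is a minimal sketch of coroutine-style cooperative scheduling in plain Java. Java has no native coroutines, so the `Step` interface and the round-robin scheduler below are invented for illustration: each "coroutine" does one small chunk of work per call, and the scheduler switches between tasks only at those explicit points.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: coroutine-style cooperative scheduling in plain Java.
// Each "coroutine" performs one small step per call and reports whether
// it has more work; the scheduler round-robins between them, so the
// programmer decides exactly where the switch points are.
public class Cooperative {
    interface Step { boolean runStep(); } // returns true while unfinished

    static void runAll(Deque<Step> tasks) {
        while (!tasks.isEmpty()) {
            Step t = tasks.poll();
            if (t.runStep()) tasks.offer(t); // re-queue unfinished tasks
        }
    }

    static StringBuilder log = new StringBuilder();

    static Step counter(String name, int steps) {
        int[] i = {0};
        return () -> {
            log.append(name).append(i[0]).append(' '); // record which task ran
            return ++i[0] < steps;
        };
    }

    public static void main(String[] args) {
        Deque<Step> tasks = new ArrayDeque<>();
        tasks.offer(counter("a", 2));
        tasks.offer(counter("b", 2));
        runAll(tasks);
        System.out.println(log.toString().trim()); // interleaved: a0 b0 a1 b1
    }
}
```

The interleaving happens on a single thread, with switches only at the boundaries the code itself chooses, which is the property the question describes.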
Coroutines are powerful, but they will not replace full-fledged multithreaded applications, because coroutines run on a single thread. As such, they cannot make use of multiple cores for CPU-intensive tasks. I see them as a complementary paradigm rather than a competing one. Functional programming is making headway into Java, as it already has on the .NET platform, and coroutines will eventually follow suit. I suggest you look at the Java roadmap for more information.
See Processes, threads, green threads, protothreads, fibers, coroutines: what's the difference? for a more elaborate answer that covers coroutines and related concepts.
See also Throughput differences when using coroutines vs threading, which discusses implementing the producer-consumer problem with coroutines instead of multiple threads.
For the past few weeks I've been studying concurrency (multithreading) in Java. I find it difficult and rather different from anything I've encountered in the Java language so far (or in programming in general). Often I have to reread a passage over and over until I start to fully understand a small concept.
It's frustrating, and I've wondered why this part of the Java programming language has given me so much trouble.
Usually when I look at the code of a single-threaded program, I start at the main method and step through the whole execution in my mind (like a debugger). Throughout this process I try to keep EVERYTHING in mind: the variables and their states (values) at every point in the execution. Often I even stop at certain points and think about how the execution would change in different scenarios. If I can go through a program from start to finish like that, I feel I've fully understood the code and the material.
The problem, I suppose, is that when I try to apply this method to a concurrent application, there are so many things happening at once (sleep(), synchronized methods, acquiring intrinsic locks, guarded blocks using wait(), etc.), and so much uncertainty about when something will execute, that it becomes nearly impossible for me to keep up with everything. That's what frustrates me: I want the feeling of "I have control over what's happening", but with concurrency that seems impossible.
Any help would be appreciated!!!
Concurrency is really a simple concept: you have several separate paths of execution, which can interact with each other. The things you mentioned, like synchronization, guarded blocks, waits and so on, are technical details: tools.
I would suggest trying to do some coding :-) Come up with an idea for a multithreaded program and write it. At some point you will need one of the tools you listed, and it will all begin to fall into place. This is NOT a concept you should try to understand only in theory ^^
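As a concrete starting exercise, here is a minimal sketch that uses several of the tools the question lists (synchronized methods, a guarded block with wait(), and notifyAll()). The `Buffer` class and its capacity are invented for illustration; it is a tiny bounded producer-consumer buffer, not a production pattern:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: a minimal bounded buffer using synchronized methods,
// guarded blocks with wait(), and notifyAll().
public class Buffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 2;

    public synchronized void put(int x) throws InterruptedException {
        while (items.size() == capacity) wait(); // guarded block: wait for space
        items.offer(x);
        notifyAll(); // wake any waiting consumer
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) wait(); // guarded block: wait for data
        int x = items.poll();
        notifyAll(); // wake any waiting producer
        return x;
    }

    public static void main(String[] args) throws Exception {
        Buffer b = new Buffer();
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 5; i++) b.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += b.take(); // main thread is the consumer
        producer.join();
        System.out.println(sum); // 0+1+2+3+4 = 10
    }
}
```

Tracing this in your head, the way you described doing for single-threaded code, is exactly where the "loss of control" feeling comes from: the interleaving of put and take is up to the scheduler, and the while-loops around wait() are what make the code correct regardless of that interleaving.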
More than a science, concurrent programming is an art.
Before going into Java concurrency, PLEASE DO go through the conceptual things first: What are the major problems in concurrency? What is a lock? What is a semaphore? What is a barrier? Why do we use them? How can we use them for different purposes, like protecting shared variables, synchronization, etc.?
That way you gain some very important background before getting into language-specific usage.
For a person who has followed sequential programming all along and is looking at concurrency for the first time, it is definitely harder to understand these things at once. But I am sure that, after some time, you can reach the same level in concurrent programming that you have in sequential programming. :))
I would like to understand the differences between Akka and Netty. I understand you can use both from Scala and Java. I am more interested in knowing where Netty is better (if anywhere) and where Akka is better (if anywhere). Where do they overlap? In other words, in what areas can I use Akka and not Netty, and vice versa?
Akka is a concurrency framework built around the notion of actors and composable futures; it was inspired by Erlang, which was built from the ground up around the actor paradigm. It is typically used to replace blocking locks such as synchronized, read-write locks and the like with higher-level asynchronous abstractions.
Netty is an asynchronous network library used to make Java NIO easier to use.
Notice that they both embrace asynchronous approaches, and that one could use the two together, or entirely separately.
Where they overlap is that Akka has an IO abstraction too, and Akka can be used to create computing clusters that pass messages between actors on different machines. From this point of view, Akka is a higher-level abstraction that could (and does) make use of Netty under the hood.
Wikipedia defines an execution unit as:
"In computer engineering, an execution unit (also called a functional unit) is a part of a CPU that performs the operations and calculations called for by the computer program."
Now, is it a logical or conceptual thing that performs the operations of the program? Or is it a physical (hardware) structure in the CPU that performs the tasks the program calls for (e.g. shutting down the computer, changing colors, etc.)?
And I have read that "In concurrent programming, there are two units of execution i.e. processes and threads."
Now, the concept I have formed in my mind is that a unit of execution is, let's say, a package of related classes together with the system resources they use, e.g. the system's memory and other resources.
Please tell me to what extent I am right.
NOTE: Please keep your language (i.e. jargon and terminology you might use) simple enough for a beginner to understand.
Thank you in advance.
It appears to me that an execution unit refers to hardware: specifically, a portion of the computer's brain that can work on one task at the same time as other parts work on different tasks. It seems to allow simple multitasking, as implied by Wikipedia's article on the execution unit. The article explains that a superscalar architecture uses multiple execution units that fetch and execute instructions at the same time.
An execution unit is like a worker: it has a job and does it until it is finished, then asks its boss what to do next and works on that. When you have multiple workers, you get more work done faster. An execution unit does low-level tasks like 1+1.
Moving on to unit of execution: it appears this is more about how the software runs, as evidenced by this Microsoft article. A unit of execution, such as a thread, manages high-level tasks involving many small steps, like conquerTheWorld().
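The distinction can be seen directly in Java. A sketch, with invented thread names: each Thread below is a software-level unit of execution within one process, while the processor count is the closest thing Java exposes to the hardware side of the picture:

```java
// Sketch: in Java, a thread is the software "unit of execution" from the
// second quote; the hardware execution units live inside each processor
// core that the OS schedules those threads onto.
public class Units {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
            Thread.currentThread().getName() + " is a separate unit of execution");
        Thread a = new Thread(task, "worker-1");
        Thread b = new Thread(task, "worker-2");
        a.start(); // two units of execution within one process
        b.start();
        a.join();
        b.join();
        System.out.println(Runtime.getRuntime().availableProcessors()
            + " hardware processors available"); // the hardware-side counterpart
    }
}
```

So the package-of-classes mental model is not quite right: classes and objects are data and code, while a unit of execution (process or thread) is a path that runs through them.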
I know that similar questions have been asked before, but...
We want to develop (or at least hope to) an indie game, but still a game with high-quality graphics, with hundreds if not thousands of moving objects on screen, so we expect a very high polygon count and requirements for hit testing and perhaps some AI.
I know the basic problem with Java is garbage collection. But that shouldn't be an issue: we plan to allocate all of the required memory before the game starts, and for transient objects we will use pooling (so the new keyword will never appear in the game loop). And we plan to use every technique mentioned in Google I/O 2009 - Writing Real-Time Games for Android.
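The pooling strategy described above can be sketched like this. The `Bullet` class, pool size, and exhaustion policy are invented for illustration; real game pools often grow or recycle the oldest object instead of failing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of object pooling: all objects are allocated up front, and the
// game loop only borrows and returns them, so no garbage is created
// per frame and the GC has nothing to collect mid-game.
public class BulletPool {
    static class Bullet { float x, y; } // stand-in for a transient game object

    private final Deque<Bullet> free = new ArrayDeque<>();

    public BulletPool(int size) {
        for (int i = 0; i < size; i++) free.push(new Bullet()); // allocate before the game starts
    }

    public Bullet obtain() {
        // A real game would grow the pool or drop the oldest; here we fail fast.
        if (free.isEmpty()) throw new IllegalStateException("pool exhausted");
        return free.pop();
    }

    public void release(Bullet b) {
        b.x = 0; // reset state before reuse
        b.y = 0;
        free.push(b);
    }

    public int available() { return free.size(); }
}
```

Inside the game loop, `obtain()` and `release()` replace `new` entirely, which is what keeps GC pauses out of the frame budget.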
The main reason we insist on Java is deployment, and we only want to develop for Android (for now, at least).
So can the same performance be achieved in a game with Java (even if that means ugly/non-idiomatic code) as if we wrote it in C++? If not, what are the specifics? Or, if it is possible but very impractical, what are the reasons?
(For example, I read something about Java Buffers and OpenGL not being the best pairing, but I don't remember the specifics; maybe an expert can comment.)
You're going to be paying a fixed additional cost per call to use OpenGL from Java source code. Android provides Java-language wrappers around the calls. For example, if you call glDrawArrays, you're calling a native method declared in GLES20.java, which is defined in android_opengl_GLES20.cpp. You can see from the code that it's just forwarding the call with minimal overhead.
Nosing around in the file, you can see other calls that perform additional checks and hence are slightly more expensive.
The bottom line, as far as unavoidable costs go, is that the price of making lots of GLES calls is higher with Java source than native source. (While looking at the performance of Android Breakout with systrace I noticed that there was a lot of CPU overhead in the driver because I was doing a lot of redundant state updates. The cost of doing so from native code would have been lower, but the cost of doing zero work is less than the cost of doing less work.)
The deeper question has to do with whether you need to write your code so differently (e.g. to avoid allocations) that you simply can't get the same level of performance. You do have to work with direct ByteBuffer objects rather than simple arrays, which may require a bit more management on your side. But aside from the current speed differences between compute-intensive native and Java code on Android, I'm not aware of anything that fundamentally prevents good performance from strictly-Java implementations.
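The direct ByteBuffer workflow mentioned above looks like this. A minimal sketch, with an invented helper name: Android's GLES bindings take Buffer arguments, so vertex data goes into a direct, native-ordered buffer instead of a plain float[]:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch: preparing vertex data in a direct ByteBuffer, the extra
// management step Java requires compared with passing a float[] in C++.
public class VertexData {
    static FloatBuffer makeVertexBuffer(float[] coords) {
        FloatBuffer fb = ByteBuffer
            .allocateDirect(coords.length * 4)   // 4 bytes per float
            .order(ByteOrder.nativeOrder())      // match the native byte order
            .asFloatBuffer();
        fb.put(coords).position(0);              // fill once, rewind for the driver
        return fb;
    }

    public static void main(String[] args) {
        float[] triangle = { 0f, 1f, -1f, -1f, 1f, -1f };
        FloatBuffer fb = makeVertexBuffer(triangle);
        System.out.println(fb.isDirect() + " " + fb.remaining()); // true 6
    }
}
```

Done once at load time (like the pooling above), this is bookkeeping rather than a per-frame cost; the buffer would then be handed to a call such as glVertexAttribPointer.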
I am researching the possibility of starting a data mining project which will involve intensive calculations and transformations on data, and which should be relatively easy to scale.
In your experience, is the choice of programming language critical for said project?
For example, if I am already working on a JVM environment, should I prefer Clojure over plain Java? Does the functional environment guarantee easier scalability? Better performance?
Put aside other factors such as familiarity with the language, toolchain, etc. In your experience, is the choice of language a critical one?
There are a few good reasons for choosing functional programming for data mining projects:

- Data mining projects usually involve more algorithmics and mathematics (than other types of systems), which can be more easily expressed in functional programming.
- Data mining projects would involve aggregate functions, which are better in functional programming, say Clojure.
- Data mining programs would also be more suitable to parallelism (definitely data parallelism, and possibly even task parallelism), again a forte of functional programming.
- Functional languages like Clojure can interface with Java anyway for I/O and file reads and writes.

I think one can learn the toolchain easily; it is not that different, so that shouldn't be a factor.
I was asking myself the same question and came up with a big Yes for Clojure; I am still thinking through how to include R in the mix.
Use the most powerful language you are comfortable with.
In any case, if you want scalability you need a map-reduce implementation, which lets you parallelize the work and collect the results.
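The map-reduce shape this refers to is available on the JVM without leaving Java, e.g. via parallel streams. A minimal sketch (the `WordCount` class and word-count example are invented for illustration): map a transformation over the records in parallel, then reduce by collecting:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch: map-reduce with Java parallel streams.
// Map: split each line into words, in parallel.
// Reduce: collect the words into per-word counts.
public class WordCount {
    static Map<String, Long> count(List<String> lines) {
        return lines.parallelStream()                       // parallelize the map step
            .flatMap(line -> Stream.of(line.split("\\s+"))) // map: line -> words
            .collect(Collectors.groupingBy(                 // reduce: word -> count
                Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("to be or not", "to be");
        System.out.println(count(lines).get("to")); // 2
    }
}
```

The same shape scales out to frameworks like Hadoop when one machine is not enough; the language choice mainly affects how naturally the map and reduce functions are expressed.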
No particular reason. Pick whatever language you feel most comfortable with.
See my answer to a similar question about natural language processing. I think that some of the features that make people think obscure languages are suited to AI are actually counterproductive.
Often, functional programming solutions are more scalable.