Related
All the operating systems to date have been written in C/C++, and there is none in Java. There are tonnes of Java applications, but not an OS. Why?
Because we have operating systems already, mainly. Java isn't designed to run on bare metal, but that's not as big of a hurdle as it might seem at first. Just as C compilers provide intrinsic functions that compile to specific instructions, a Java compiler (or JIT, the distinction isn't meaningful in this context) could do the same thing. Handling the interaction of GC and the memory manager would also be somewhat tricky. But it could be done. The result is a kernel that's 95% Java and ready to run jars. What's next?
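For a sense of what such intrinsics look like from the Java side, here is a minimal sketch (my own illustration, not part of the answer above): HotSpot already treats a number of JDK methods as intrinsics and typically compiles them down to single machine instructions on x86-64, which is exactly the kind of hook a bare-metal Java compiler would also need for privileged instructions.

// Minimal sketch: JDK methods HotSpot commonly treats as intrinsics.
// On typical x86-64 JITs these compile down to single instructions
// (POPCNT, LZCNT) rather than ordinary method calls.
public class IntrinsicsDemo {
    public static void main(String[] args) {
        long word = 0xDEADBEEFL;
        System.out.println(Long.bitCount(word));              // population count
        System.out.println(Long.numberOfLeadingZeros(word));  // leading-zero count
    }
}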
Now it's time to write an operating system. Device drivers, a filesystem, a network stack, all the other components that make it possible to do things with a computer. The Java standard library normally leans heavily on system calls to do the heavy lifting, both because it has to and because running a computer is a pain in the ass. Writing a file, for example, involves the following layers (at least, I'm not an OS guy so I've surely missed stuff):
The filesystem, which has to find space for the file, update its directory structure, handle journaling, and finally decide what disk blocks need to be written and in what order.
The block layer, which has to schedule concurrent writes and reads to maximize throughput while maintaining fairness.
The device driver, which has to keep the device happy and poke it in the right places to make things happen. And of course every device is broken in its own special way, requiring its own driver.
And all this has to work fine and remain performant with a dozen threads accessing the disk, because a disk is essentially an enormous pile of shared mutable state.
At the end, you've got Linux, except it doesn't work as well because it doesn't have nearly as much effort invested in functionality and performance, and it only runs Java. Possibly you gain performance from having a single address space and no kernel/userspace distinction, but the gain isn't worth the effort involved.
There is one place where a language-specific OS makes sense: VMs. Let the underlying OS handle the hard parts of running a computer, and the tenant OS handles turning a VM into an execution environment. BareMetal and MirageOS follow this model. Why would you bother doing this instead of using Docker? That's a good question.
Indeed, there is a JavaOS: http://en.wikipedia.org/wiki/JavaOS
And here is a discussion about why there are not many OSes written in Java: Is it possible to make an operating system using java?
In short, Java needs to run on a JVM, and the JVM needs to run on an OS, so writing an OS in Java is not a good choice.
An OS needs to deal with hardware, which is not doable in Java (except via JNI). That is because the JVM only provides a limited instruction set: add, call a method, and so on. Dealing with hardware requires instructions that operate on registers, memory, the CPU and hardware drivers directly. These are not supported by the JVM, so JNI is needed - which takes us back to the start: you still end up writing the OS parts in C/assembly.
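To make the JNI point concrete, here is a rough sketch of what the Java side of such a boundary might look like. The class, library and method names are invented for illustration; all the actual register poking would still live in the native C implementation.

// Hypothetical sketch of the Java side of a JNI boundary for hardware access.
// The names are made up; the real work happens in the C code behind these methods.
public class MmioDevice {
    static {
        System.loadLibrary("mmio");   // loads the native library (libmmio.so / mmio.dll)
    }
    // Implemented in C via JNI; pure Java has no way to touch physical registers.
    public static native void writeRegister(long physicalAddress, int value);
    public static native int readRegister(long physicalAddress);
}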
Hope this helps.
One of the main benefits of using Java is that it abstracts away a lot of low-level details that you usually don't really need to care about. It's those details which are required when you build an OS. So while you could work around this to write an OS in Java, it would have a lot of limitations, and you'd spend a lot of time fighting with the language and its initial design principles.
For operating systems you need to work really low-level. And that is a pain in Java. You do need e.g. unsigned data types, and Java only has signed data types. You need struct objects that have exactly the memory alignment the driver expects (and no object header like Java adds to every object).
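As a small illustration of the unsigned-types point, this is roughly what driver-style Java code has to do by hand to treat a signed byte as the unsigned 8-bit value a device or wire protocol actually means (a sketch of the common idiom, not taken from the answer above):

public class UnsignedDemo {
    public static void main(String[] args) {
        byte raw = (byte) 0xF0;                       // Java sees -16
        int unsigned = raw & 0xFF;                    // 240, the value the device meant
        System.out.println(unsigned);
        System.out.println(Byte.toUnsignedInt(raw));  // same idiom, available since Java 8
    }
}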
Even key components of Java itself are no longer written in Java.
And this is by no means a temporary thing. More and more gets rewritten in native code to get better performance. The HotSpot VM adds "intrinsics" for performance-critical native code, and there is work underway to reduce the overall cost of native calls.
Take JavaFX, for example: the reason it is much faster than AWT/Swing ever were is that it contains/uses a huge amount of native code. It relies on native code for rendering, and if you add the "webview" browser component, it is actually the native WebKit library providing the browser.
There are a number of things Java does really well. It is a nicely structured language with a fantastic toolchain. Python is much more compact to write, but its toolchain is a mess; its refactoring tools, for example, are disappointing. Where Java shines is at optimizing polymorphism at run time: where a C++ compiler has to emit an expensive virtual call because the implementation isn't known at compile time, HotSpot can aggressively inline the code and get better performance. But for operating systems you do not need this much; you can afford to manually optimize call sites and inlining.
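Here is a small sketch of the kind of call site HotSpot can devirtualize; the interface and class names are made up for illustration. Once runtime profiling shows that only one implementation is ever used, the JIT can inline the call - something a C++ compiler cannot do for a virtual call it cannot resolve at compile time.

// Illustrative sketch: a virtual call HotSpot can devirtualize and inline
// once profiling shows only one receiver type at this call site.
interface Codec { int encode(int x); }

class Identity implements Codec {
    public int encode(int x) { return x; }
}

public class InlineDemo {
    public static void main(String[] args) {
        Codec c = new Identity();        // the only Codec the JIT ever observes
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += c.encode(i);          // hot, monomorphic call site: inlined by HotSpot
        }
        System.out.println(sum);
        // Run with -XX:+PrintCompilation to watch the method get compiled.
    }
}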
This answer does not mean to be exhaustive in any way, but I'd like to share my thoughts on the (very vast) topic.
Although it is theoretically possible to write an OS in pure Java, there are practical matters that make this task really difficult. The main problem is that there is no (currently up-to-date and reliable) Java compiler able to compile Java to native machine code. So there is no existing tool to make writing a whole OS from the ground up feasible in Java, at least as far as my knowledge goes.
Java was designed to run in some implementation of the Java virtual machine. There exist implementations for Windows, Mac, Linux, Android, etc. The design of the language is strongly based on the assumption that the JVM exists and will do some magic for you at runtime (think garbage collection, JIT compiler, reflection, etc.). This is most likely part of the reason why such a compiler does not exist: where would all this functionality go? Compiled down along with your program? It's possible, but at this point I believe it would be difficult to do. Even Android, whose SDK is purely Java based, runs Dalvik (a virtual machine that supports a subset of the language) on a Linux kernel.
I was wondering whether Java gets assembled, and in my reading I found that the compiler creates bytecode which is then run on the Java Virtual Machine. Does the JVM interpret the bytecode and execute it?
This is why I'm confused. Today in class the prof said "the compiler takes a high level language, creates assembly language, then the assembler takes the assembly language and creates machine language (binary) which can be run". So if Java compiles to bytecode, how can it be run?
There is a standard compiler setup, such as would be used for the C language, and then there is Java, which is significantly different.
The standard C compiler compiles (through several internal phases) into "machine instructions" which are directly understood by the x86 processor or whatever.
The Java compiler, on the other hand, compiles to what are sometimes called "bytecodes". These are machine instructions, but for an imaginary machine, the Java Virtual Machine. So the JVM interprets the bytecodes just like a "real" machine processes its machine instructions. (The main advantage of this is that a program compiled into bytecodes will run on any JVM, whether on an x86 system, an IBM RISC box, or the ARM processor in an Android phone -- so long as there's a JVM, the code will run.)
(There have historically been a number of "virtual machines" similar to Java, the UCSD Pascal "P-code" system being one of the more successful ones.)
But it gets more complicated --
Interpreting "bytecodes" is fairly slow and inefficient, so most Java implementations have some sort of scheme to translate the bytecodes into "real" machine instructions. In some cases this is done statically, in a separate compile step, but most often it's done with a "just-in-time compiler" (JITC) which converts small portions of the bytecodes to machine instructions while the application is running. These get to be quite elaborate, with complex schemes to decide which segments of code will benefit most from translating into hardware machine instructions. But they all, for the most part, do their magic without you needing to be aware of what's going on, and without you having to compile your Java code to target a specific type of processor.
Think of bytecode as the machine language of the JVM. (Compilers don't HAVE to produce assembly code which has to be assembled, but they're a lot easier to write that way.)
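You can actually look at this "machine language of the JVM" with the javap disassembler that ships with the JDK. A tiny example (the class name is mine):

// Compile:      javac Add.java
// Disassemble:  javap -c Add
// The output shows JVM instructions such as iload_0, iload_1, iadd and ireturn -
// the bytecode the text above describes.
public class Add {
    static int add(int a, int b) {
        return a + b;
    }
}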
Just a clarifying note:
What Java calls "bytecode" is the "machine language (binary) which can be run" from your original description.
So the answer to how to run java bytecode is:
You build a processor which can handle Java bytecode, in the same way that if you want to execute normal x86 code you build a CPU to handle that (which would be a normal Intel/AMD CPU).
Java's binary machine language is not really different from the binary instruction format of other CPUs such as x86 or PowerPC, and there do exist CPUs which can execute Java bytecode directly.
Another example: how would you run PowerPC code on a normal Intel CPU? You would build a piece of software which, at runtime, translates the PowerPC binary code to x86 code. The case for Java is not really that different. To run Java code on an x86 CPU, you need a program which translates the Java binary code (aka the bytecode) to x86 binary code. This is what the JVM* does. It does this either by interpreting the Java instructions one at a time, or by translating a big chunk of instructions at a time (called JIT). Exactly how the JVM handles the translation depends on which JVM implementation you use and its settings. (There are multiple independent JVM implementations which implement their translation in different ways.)
But there is one thing which makes Java a bit different. Unlike other binary instruction formats such as x86, Java was never really designed to run directly on a CPU. Its binary format is designed in a way which makes it easy to translate to binary code for "normal" CPUs such as x86 or PowerPC.
*The JVM does in fact handle more than just translating the Java binary code to processor-dependent code. It also handles memory allocation for Java programs, and it handles communication between a Java program and the user's operating system. This is done to make the Java program relatively independent of the user's operating system and platform details.
In short: the JVM translates the Java bytecode into machine-specific code. The generated machine-specific code is then executed by the machine.
The Java compiler translates Java source into bytecode. The JVM translates bytecode into machine-specific code at runtime. The machine executes that code.
What are the differences between the classical compilation model (C, C++, etc.) and the Java compilation model?
A proper answer to your question could take several hundred pages to answer, but I'll try to sum it up in a few paragraphs.
Basically, the "classic compilation model" you refer to takes as input human-written source code and emits machine code, which can be loaded and run without further translation of the machine code. One ramification of this is that the resulting machine code can only be run on compatible hardware and can only be run within a compatible operating system.
The Java compilation model takes human-written source code as input and emits not machine code, but so-called "byte code". Byte code cannot be directly executed on a machine. Instead, it needs to be translated once again by another compiler to machine code, or interpreted on-the-fly by a device that executes instructions on the machine that correspond to the instructions in the byte code. The latter device is often referred to as a Virtual Machine. One ramification of this model is that the byte code can be "run" on any platform that has either a byte code compiler or virtual machine written for it. This gives Java the appearance and effect of complete portability, where there is no such portability implied by the machine code emitted by a C++ compiler stack.
Two aspects play into the C (and C++) compilation model. One is its longer history than Java, meaning that it caters to very low-powered compilers and machines. The second is the compilation target, which is usually low-level machine code.
To target low-memory compiler environments, C code must be readable from top to bottom, with no backtracking. This means that you have to follow a strict discipline for the order of declarations. (C++ relaxes this a little bit for class definitions.) Furthermore, each source file must be compilable as an independent translation unit which need not know anything about other source files.
Second, because C targets low-level machine code, each translation unit contains essentially no metadata, in stark contrast to Java class files. This necessitates a stronger coding discipline in which each translation unit must be provided with the necessary declarations. The compiler cannot just scan all the other files in order to get the required information; it is up to the user to supply it. (C++ enforces this more rigidly; in C you can get away with nasty errors by forgetting a declaration.)
Bear in mind that a C program has to be fully compiled and linked at compile time, so a lot of information has to be available already at that point. Java programs can load classes at runtime, and Java execution generally performs more "fitting" operations (casting, essentially, as opposed to static linking in C) at runtime. The more sophisticated runtime environment of Java allows for a more flexible and modular compilation model.
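A small sketch of that runtime "fitting": the class below isn't resolved until the program runs, whereas a C program has to be fully linked before it can start. The choice of java.util.ArrayList here is just for illustration.

public class LateBinding {
    public static void main(String[] args) throws Exception {
        // The class name could just as well come from a config file read at runtime.
        Class<?> clazz = Class.forName("java.util.ArrayList");
        Object instance = clazz.getDeclaredConstructor().newInstance();
        System.out.println("Loaded and instantiated: " + instance.getClass().getName());
    }
}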
I am going to be brave and compare performance. ;)
The Java compiler javac does little optimisation, preferring to check the code for correctness. It does all the reasonable checks required to ensure the code will run on a JVM, plus some constant evaluation, and that's about it.
Most of the smart compilation is done by the JIT, which can perform dynamic compilation based on how the program is used. This allows it to inline "virtual" methods, for example, even if the caller and callee are in different libraries.
The C/C++ compiler performs significant static analysis up front. This means a program will run at almost full speed right from the start. The CPU performs some dynamic optimisation with instruction re-ordering and branch prediction. While C/C++ lacks dynamic optimisation, it gains by making low-level access to the system much easier. (It's usually not impossible in Java, but low-level operations which are trivial in C/C++ can be complex and obscure in Java.) It also provides more ways to do the same thing, allowing you to choose the optimal solution to your problem.
When Java is likely to be faster.
If your style of programming suits Java and you only use the sort of features Java supports, Java is likely to be marginally faster (due to dynamic compilation) i.e. you wouldn't use C/C++ to their full potential anyway.
If your code contains lots of dead code (possibly only known to be dead at run time) Java does a good job at eliminating this. (IMHO A high percentage of micro-benchmarks which suggest Java is faster than C++ are of this type)
You have a very limited time and/or resources to implement your application. (In which case an even higher level language might be better) i.e. You don't have time to optimise your code much and you need to write safe abstracted code.
When C/C++ is likely to be faster.
If you use most of the functionality C/C++ provides. Something more advanced programmers tend to do.
If startup time matters.
If you need to be creative about algorithms or data structures.
If you can exploit a low level hardware feature, like direct access to devices.
For short, "classical" compilation (which is a temp term provided by the material because they don't have a real word for it), is basically compiling against a real device (in our case a machine with a physical processor). Instead, Java compiles to code against a virtual device, which is software installed on a machine. The virtual device is what changes and targets the real machine.
In this way your hardware is abstracted. This is why Java can work on "any" machine.
Basically, there are two kinds of magic. Machine magic is only understood by certain wizards. JVM Bytecode magic is understood by a special kind of wizard that you have to hire in order to make the machine wizard able to cast spells that make your computer do things. C and C++ compilers generally emit the machine kind, whereas Java compilers emit JVM Bytecode.
C/C++ gets compiled before execution.
Java gets compiled while executing.
Of course, neither language mandates a certain way of being compiled.
There is no difference. Both convert source code that a human understands, to a machine code that some machine understands. In Java's case it targets a virtual machine, i.e. a program instead of a piece of silicon.
Of course there's nothing to prevent a piece of silicon from understanding JVM byte code (in which case you could rename it from 'byte code' to 'machine code'). And conversely, there's nothing to prevent a compiler from converting C/C++ code to JVM byte code.
Both have a runtime and both require you to tell it which parts of the runtime you intend to use.
I really think you intended to ask a different question.
I'm reading Herbert Schildt's book "Java: The Complete Reference" and there he writes that Java is portable AND architecture-neutral. What is the difference between these two concepts? I couldn't understand it from the text.
Take a look at this white paper on Java.
Basically they're saying that in addition to running on multiple environments (because of being interpreted within the JVM), it also runs the same regardless of environment. The former is what makes it portable, the latter is what makes it architecture-neutral. For example, the size of an int does not vary based on platform; it's established by the JVM.
A portable C program:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    printf("Hello, World!");
    return (EXIT_SUCCESS);
}
You can take that C program and compile it on any machine with a C compiler and have it work (assuming it supports printf... I am guessing some things out there may not).
If you compile it on Windows and try to run that binary on a Mac it won't work.
The same sort of program written in Java will also compile on any machine with a Java compiler installed, but the resulting .class file will also run on any machine with a Java VM. That is the architectural neutral part.
So, portable is a source code idea, while architectural neutral is an executable idea.
Looking around I found another book that describes the difference between the two.
For architecture neutral, the compiler generates an architecture-neutral object file, meaning that compiled Java code (bytecode) can run on many processors, given the presence of a Java runtime.
For portable, it means there are no implementation-dependent aspects of the specification. For instance, in C++ an int can be 16-bit or 32-bit depending on who is implementing the specification, whereas in Java an int is always 32-bit.
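A quick way to see this pinned-down sizing (a small sketch; the constants used are part of the standard library):

// These widths are fixed by the Java specification; the program prints the same
// numbers on a 32-bit ARM board and a 64-bit x86 server alike.
public class Widths {
    public static void main(String[] args) {
        System.out.println(Integer.SIZE);    // always 32
        System.out.println(Long.SIZE);       // always 64
        System.out.println(Character.SIZE);  // always 16
    }
}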
I got my information from a different book (Core Java 2: Fundamentals) so it may differ from his meaning. Here is a link: Core Java 2: Fundamentals
With architecture-neutral, the book means that the byte code is independent of the underlying platform that the program is running on. For example, it doesn't matter if your operating system is 32-bit or 64-bit, the Java byte code is exactly the same. You don't have to recompile your Java source code for 32-bit or 64-bit. (So, "architecture" refers to the CPU architecture).
"Portable" means that a program written to run on one operating system works on another operating system without changing anything. With Java, you don't even have to recompile the source code; a *.class file compiled on Windows, for example, works on Linux, Mac OS X or any other OS for which you have a Java virtual machine available.
Note that you have to take care with some things to make your Java code truly portable. If you for example hard-code Windows-style file paths (C:\Users\Myself...) in your Java application, it is not going to work on other operating systems.
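For example, here is a sketch of the portable way to build such a path (the file name is made up for illustration):

import java.nio.file.Path;
import java.nio.file.Paths;

public class PortablePaths {
    public static void main(String[] args) {
        Path bad = Paths.get("C:\\Users\\Myself\\data.txt");                 // only meaningful on Windows
        Path good = Paths.get(System.getProperty("user.home"), "data.txt"); // works everywhere
        System.out.println(bad);
        System.out.println(good);   // uses the platform's own separator automatically
    }
}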
I suspect that he means that code can run on many platforms without recompilation.
It is also possible to write code that deals with the underlying system without rewrites or conditions.
E.g. serialized objects from a 32-bit Windows system can be read on a 64-bit Linux system.
There are 3 related features in Java.
Platform independent -> this means that a Java program can be run on any OS, regardless of its vendor. It is implemented by using the "magic code" called bytecode. The JVM then either interprets this at runtime or uses JIT (just-in-time) compilation to compile it to machine code for the architecture it is running on (e.g. i386).
Architecture neutral -> it means the Java program can be run on any processor, irrespective of its vendor and architecture, so it avoids the rebuilding problem.
Portable -> a programming language/technology is said to be purely portable if it satisfies the above two features.
A .class file is portable because it can run on any OS. The reason is that the .class file generated by the compiler is the same for every OS. The JVM, on the other hand, differs per OS, but it accepts the same .class file everywhere, which is what makes Java architecture-neutral.
What is difference between Architecture Neutral and Portable?
Architecture neutral: Java is an architecture-neutral programming language because Java allows its applications to be compiled on one hardware architecture and executed on another hardware architecture.
Portable: Java is a portable programming language because Java is able to execute its applications on any operating system and any hardware.
In Terms of Java
Java architecture neutral - here we are talking about the operating system architecture: Java generates intermediate bytecode (binary code), handled by the JVM, and allows the Java code to run on any OS for which a Java virtual machine is available, irrespective of that OS's architecture. The JVM takes care of memory allocation, caching, register handling and 32-bit vs. 64-bit processing while interpreting the code, with respect to the hardware and OS configuration.
Portable (in the generic sense: transferable, platform independent, or even that the source code is fixed for all platforms - in short, it supports many platforms).
Java being portable means Java code written on one machine will run on any machine that has a proper JVM for its OS.
Introduction
I heard something about writing device drivers in Java (heard as in "with my ears", not from the internet) and was wondering... I always thought device drivers operated on an operating system level and thus must be written in the same language as the OS (thus mostly C I suppose)
Questions
Am I generally wrong with this assumption? (it seems so)
How can a driver in an "alien" language be used in the OS?
What are the requirements (from a programming language point of view) for a device driver anyway?
Thanks for reading
There are a couple of ways this can be done.
First, code running at "OS level" does not need to be written in the same language as the OS. It merely has to be able to be linked together with OS code. Virtually all languages can interoperate with C, which is really all that's needed.
So language-wise, there is technically no problem. Java functions can call C functions, and C functions can call Java functions. And if the OS isn't written in C (let's say, for the sake of argument that it's written in C++), then the OS C++ code can call into some intermediate C code, which forwards to your Java, and vice versa. C is pretty much a lingua franca of programming.
Once a program has been compiled (to native code), its source language is no longer relevant. Assembler looks much the same regardless of which language the source code was written in before compilation. As long as you use the same calling convention as the OS, it's no problem.
A bigger problem is runtime support. Not a lot of software services are available inside the OS. There usually is no Java virtual machine, for example. (There is no technical reason why there couldn't be, but usually it's safe to assume that it's not present.)
Unfortunately, in its "default" representation, as Java bytecode, a Java program requires a lot of infrastructure. It needs the Java VM to interpret and JIT the bytecode, and it needs the class library and so on.
But there are two ways around this:
Support Java in the kernel. This would be an unusual step, but it could be done.
Or compile your Java source code to a native format. A Java program doesn't have to be compiled to Java bytecode. You could compile it to x86 assembler, and the same goes for whatever class libraries you use; those too could be compiled all the way to assembler. Of course, parts of the Java class library require certain OS features that won't be available, but the use of those classes could be avoided.
So yes, it can be done. But it's not straightforward, and it's unclear what you'd gain.
Of course another problem may be that Java won't let you access arbitrary memory locations, which would make a lot of hardware communication pretty tricky. But that could be worked around too, perhaps by calling into very simple C functions which simply return the relevant memory areas as arrays for Java to work on.
Writing Solaris Device Drivers in Java covers a RAM disk device written in Java.
Another one for Linux. Goes more in depth on why you might want a DD in Java as well (since some people were wondering by the looks of the other posts and comments)
A device driver could be a lot of things
I actually write device drivers in java for a living: drivers for industrial devices, such as scales or weighing devices, packaging machines, barcode scanners, weighing bridges, bag and box printers, ... Java is a really good choice here.
Industrial devices are very different from your home/office devices (e.g. scanners, printers). Especially in manufacturing (e.g. food), companies opt more and more for a centralized server which runs an MES application (e.g. developed in Java). The MES server needs to interface with the devices of the production line, but it also contains business logic. Java is a language that can do both.
Where your home/office devices are often built into your computer or connected with a USB cable, these industrial devices usually use Ethernet or RS232 connectors. So, in essence, pretty much every language could do the job.
There is not much standardisation in this area yet. Most vendors prefer to create their own protocol for their devices. After all they are hardware builders, not software geniuses. The result is that there is a high diversity of protocols. Some vendors prefer simple plain-text protocols, but others prefer complex binary protocols with CRC codes, framing, ... Sometimes they like to stack multiple protocols (e.g. a vendor specific handshaking algorithm on top of an OPC layer). A strong OOP language has a lot of advantages here.
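To give an idea of what the plain-text end of that spectrum looks like, here is a rough sketch of a tiny Ethernet "driver" for such a device. The IP address, port and the "W?" (read weight) command are invented, since every vendor defines its own protocol.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of a driver for an Ethernet-attached industrial scale
// speaking a simple plain-text protocol. Address, port and command are made up.
public class ScaleDriver {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.50", 4001);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII))) {
            out.println("W?");                // ask the scale for the current weight
            String reply = in.readLine();     // e.g. "W 12.350 kg"
            System.out.println("Device replied: " + reply);
        }
    }
}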
E.g. I've seen java print at a continuous speed of 100ms/cycle. This includes generating a unique label, sending it to the printer, receiving a confirmation, printing it on paper and applying it to the product using air pressure.
In summary, the power of java:
It is useful for both business logic and complex interfacing.
It is just as reliable in communication with sockets as C.
Some drivers can benefit from Java's OOP power.
Java is fast enough.
It's not impossible, but it is possibly hard and may not make much sense.
It is possible because Java is a normal programming language: as long as you have some way to access the data, it's no problem. Normally, in a modern OS, the kernel has a layer that allows raw access to hardware in some way. Userspace drivers also already exist, and at least the userspace part should be no problem to implement in Java.
It possibly does not make much sense, because the kernel has to start a JVM to execute the driver. Also, JVM implementations normally eat up a lot of memory.
You could also use Java code compiled to execute natively on the platform (without the help of a JVM). This is usually not as efficient, but it could be suitable for a device driver.
The question is, does it make sense to implement the driver in Java? Or stated in another way: What is the benefit you hope for, if you use Java for implementing the driver instead of another alternative? If you can answer this question, you should find a way to make it possible.
Finally, a pointer to JNode, a project that tries to implement a complete OS purely based on Java.
You have too narrow a view of device drivers.
I have written such device drivers on top of MOST in an automotive application. A more widespread use might be drivers for USB devices if Java ever gets a decent USB library.
In these cases there is a generic low-level protocol which is handled in native code, and the Java driver handles the device specifics (data formats, state machines, ...).
For the motivation, please remember that there are plenty of fast languages which are better than C for programming; they might not be as fast as C, but they are safe languages: if you make a mistake, you don't get undefined behavior. And "undefined behavior" includes executing arbitrary code supplied by some attacker that formats your HD.
Many functional languages are usually compiled to native code.
Device drivers contain the most bugs in an OS kernel - I know that for Linux (Linus Torvalds and others keep saying so) and I heard that for Windows. While for a disk or Ethernet driver you need top-notch performance, and while in Linux drivers today are the bottleneck for 10G Ethernet or SSD disks, most drivers don't need that much speed - all computers wait at the same speed.
That's why there are various projects to allow writing drivers which run outside of the kernel, even if that causes a slowdown; when you can do that, you can use whatever language you want; you will just then need Java bindings for the hardware control library you use - if you were writing the driver in C, you would still have a library with C bindings.
For drivers in kernel mode proper, there are a few problems that I've not yet seen mentioned:
Garbage Collection, and that's a tough requirement. You need to write an in-kernel Garbage Collector; some GC algorithms rely on Virtual Memory, and you cannot use them. Moreover, you probably need to scan the whole OS memory to find roots for the GC. Finally, I would only trust an algorithm guaranteeing (soft) real-time GC, which would make the overhead even bigger.
Reading the paper which was mentioned about Java Device Drivers on top of Linux, they just give up, and require programmers to manually free memory. They try to argue that this will not compromise safety, but I don't think their argument is convincing - it's not even clear whether they understand that Garbage Collection is needed for a safe language.
Reflection and class loading. A full Java implementation, even when running native code, needs to be able to load new code. You can avoid this part of the library, but supporting it would require an interpreter or JIT compiler in the kernel (and there's no real reason making that technically impossible).
Performance. The paper about a JVM on Linux is very bad, and their performance numbers are not convincing - indeed, they test a USB 1.1 network driver, and then show that performance is not so bad! However, given enough effort something better can surely be done.
Two last things:
I'd like to mention Singularity, which is a complete OS written in a C# variant, with just a Hardware Abstraction Layer in a native language.
About picoJava, it's a bad idea to use it unless your system is a really memory constrained one, like a smart card. Cliff Click already explained why: it gives better performance to write a good JIT, and nowadays even smartphones can support that.
Have you perhaps heard a reference to the JDDK?
Writing a device driver 100% in Java is not possible without native code to provide the interaction between (1) the OS-specific driver entry points and conventions, and (2) the JVM instance. The JVM instance could be started "in-process" (and "in-process" may have different meanings depending on the OS and on whether the driver is a kernel-mode or user-mode driver), or as a separate user-land process with which a thin, native driver adaptation layer can communicate and onto which the said driver adaptation layer can offload actual user-land work.
It is possible to compile Java code to hardware-native instructions (i.e. not JVM bytecode). See for instance GCJ. With this in hand, you're a lot closer to being able to compile device drivers than you were before.
I don't know how practical it is, though.
Possible?
Yes, but only in special circumstances: you can write an operating system in Java or C#, and then you should be able to write device drivers for it. The memory hit for such drivers and operating systems would be substantial.
Probable?
Not likely. At least not in the world of Windows, MacOS or even Linux, and not anytime soon, because languages like C# and Java depend on the CLR and JVM. The way these languages work means that they cannot effectively be loaded into ring 0.
Also, the performance hit would be rather large if managed languages were employed in device drivers.
Device drivers have to be written in a language which can execute in the kernel, either compiled into it, or loaded as a module at runtime. This usually precludes writing device drivers in Java, but I suppose you theoretically could implement a JVM inside a device driver and let it execute Java code. Not that any sane person would want to do that.
On Linux there are several user-land (i.e. non-kernel) implementations of filesystems which use a common abstraction layer called FUSE, which allows user-land programs to implement things that are typically done in the kernel.
The Windows Driver Foundation (WDF) is a Microsoft API that allows both user-mode and kernel-mode device drivers to be written. This is being done today, and it is now compatible with w2k and later (it used to not have w2k as a supported target). There is no reason JNI calls can't be made to do some work in the JRE... (assuming that JNI is still the way to call Java from C/C++; my knowledge is dated in that arena). This could be an interesting way to have high-level algorithms directly munch on data from a USB pipe, or something to that effect... cool stuff!
PCIe user space device drivers can be written in Pure Java. See JVerbs for details about memory-based direct hardware access, in the context of OFED. This is a technique that can be used to create very high performance systems.
You can examine the PCI bus to determine the memory regions for a given device, what ports it has, etc. The memory regions can be mapped into the JVM's process.
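As a rough, Linux-specific sketch of what that mapping might look like: a PCI device's BARs show up as sysfs "resource" files that can be memory-mapped. The device path below is made up, you would need root, and a real driver would also have to worry about byte order and the lack of volatile/barrier semantics in a plain MappedByteBuffer.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Rough sketch (Linux-specific, run as root): mapping a PCI device's first BAR
// into the JVM via its sysfs "resource0" file. The device path is hypothetical.
public class PciPeek {
    public static void main(String[] args) throws Exception {
        String bar0 = "/sys/bus/pci/devices/0000:03:00.0/resource0"; // made-up device
        try (RandomAccessFile f = new RandomAccessFile(bar0, "rw");
             FileChannel ch = f.getChannel()) {
            MappedByteBuffer regs = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            int firstRegister = regs.getInt(0);   // read a 32-bit device register
            System.out.printf("register[0] = 0x%08x%n", firstRegister);
        }
    }
}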
Of course, you're responsible for implementing everything yourself.
I didn't say easy. I said possible. ;)
See also Device Drivers in User Space, which discusses using the UIO framework to build a user space driver.
First of all, note that I'm not an expert on device drivers (though I wrote a few myself back in the day), much less an expert on Java.
Let's leave the fact that writing device drivers in a high-level language is not a good idea (for performance and possibly many other reasons) aside for a moment, and answer your question.
You can write device drivers in almost any language, at least in theory.
However, most device drivers need to do plenty of low-level stuff like handling interrupts and communicating with the OS using the OS APIs and system calls, which I believe you can't do in Java.
But, if your device communicates using, say, a serial port or USB, and if the OS doesn't necessarily need to be aware of the device (only your application will access the device*), then you can write the driver in any language that provides the necessary means to access the device.
So for example you probably can't write a SCSI card driver in Java, but you can write a driver for a proprietary control device, USB lava lamp, license dongle, etc.
* The obvious question here is, of course, does that count as a driver?