I want to migrate my entire C# 4.0 (.NET, Visual Studio 2010) desktop application to Java. I don't know of any tool available for that; please suggest a good one.
Also, I would like to know: what are the limitations and advantages of a cross-compiler from C# to Java?
Please guide me out of this problem...
Saravanan.P
Cross-compilers will usually produce rather messy code, and sometimes code that doesn't even compile.
Some (maybe most) will force your new code into having bindings with custom libraries from the cross-compiler, and thus be forever tied to that product.
Your new code will be very hard to maintain and expand as a result, and it might well perform worse than the old code once compiled.
In general, you would most likely be better off rewriting the application yourself (or hiring people to do so) if it is going to have to be used and maintained actively for more than a short, transitional period.
That said, for some things a cross-compiler can be helpful. For example, you could start with a cross-compiled version and over time replace that codebase with newly written code. This would get you working more quickly, and you'd not have to maintain two separate code bases, in two different languages, using two toolsets, at the same time.
At a recent interview, I was asked:
Open source web app (say built on Struts/Spring) is more prone to hacking since anyone can access the source code and change it. How do you prevent it?
My response was:
The Java source code is not directly accessible. It is compiled into class files, which are then bundled in a WAR file and deployed within a secure container such as the WebLogic app server.
The app server sits behind a corporate firewall and is not directly accessible.
At that time I did not mention anything about XSS and SQL injection, which can affect a COTS-based web app just as they can an open source one.
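(For reference, this is the kind of thing I had in mind: injection flaws are mitigated the same way whether the stack is open or closed source, e.g. with a parameterized JDBC query. The users table and its columns below are made up for illustration.)

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {

    // The user-supplied value is bound as a parameter rather than concatenated
    // into the SQL string, so it cannot alter the structure of the query.
    public boolean loginExists(Connection con, String login) throws SQLException {
        String sql = "SELECT id FROM users WHERE login = ?";
        PreparedStatement ps = con.prepareStatement(sql);
        try {
            ps.setString(1, login);
            ResultSet rs = ps.executeQuery();
            try {
                return rs.next();
            } finally {
                rs.close();
            }
        } finally {
            ps.close();
        }
    }
}
```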
My questions:
a) Is my response to the question correct?
b) What additional points can I add to the answer?
thanks in advance.
EDIT:
While I digest your replies, let me also point out that the question was also aimed at frameworks such as Liferay and Apache OFBiz.
The question is a veiled argument for security through obscurity. I suggest you read up on the usual arguments for and against and see how they fit:
Security through obscurity ( Wikipedia )
Hardening Wordpress
SSH server security (Putty)
My personal opinion is that obscurity is at best the weakest layer of defence against attack. It might help filter out automated attacks by uninformed attackers, but it does not help much against a determined assault.
a) Is my response to the question correct?
The part about the source not being accessible (to change it) because it is compiled and deployed where it cannot be touched is not a good answer. The same applies to non-open-source software. The point that was being made against an open source stack is that the source is accessible to read, which would make it easier to find vulnerabilities that can be exploited against the installed app (compiled or not).
The point about the firewall is good (even though it does not concern the open- or closedness of the software, either).
b) What additional points can I add to the answer?
The main counterargument against security through obscurity (which was the argument being made here) is that with open source software, many more people will be looking at the source in order to find and fix these problems.
since anyone can access the source code and change it.
Are you sure that is what they said? Change it? Not "study it"?
I don't see how anyone can just change the source code for Struts...
A popular open-source web framework/CMS/library is less likely to have horrible bugs in it for long, since there are lots of people looking at the code, finding the bugs, and fixing them. (Note, in order for this to matter, you'll need to keep your stuff up to date.)
Now, your friend does have a tiny point -- anyone who can fix the bugs could also introduce them, if the project is run by a bunch of idiots. If they take patches from any random schmuck without looking the patches over, or don't know what they're doing in the first place, it's possible to introduce bugs into the framework. (This doesn't matter unless you update regularly.) So it's important to use one that's decently maintained by people who have a clue.
Note, all of the problems with open-source frameworks/apps apply to COTS ones as well. You just won't know about bugs in the latter until Bugtraq and other such lists publish them, as big companies like to pretend there aren't any bugs in their software until forced to react.
a) Yes. Open source doesn't mean open binaries :) The sentence "anyone can change the source code" is simply incorrect (you can change your copy of the code, but can't edit Apache Struts code)
b) Maybe the fact that the source code is visible makes it easier for somebody to spot possible flaws and exploit them. But the same argument works the other way: since a lot of people review the code, flaws are found faster, so the code ends up more robust.
I'm trying to code an application which runs on different Java platforms like J2SE, J2ME, Android, etc. I already know that I'll have to rewrite most of the UI for each platform, but I want to reuse the core logic.
Keeping this core portable involves three drawbacks that I know of:
Keeping to the old Java 1.4 syntax, not using any of the nice language features of Java 5.0
only using external libraries that are known to work on those platforms (that is: libraries that don't use JNI and don't have dependencies on other libs which violate these rules)
only using the classes which are present on all those platforms
I know of ways to overcome (1): code in 5.0 style and automatically convert it to 1.4 (retroweaver - haven't tried it yet, but seems ok).
I think (2) is a problem that I just have to accept.
Now I'd like to know the best workaround for (3), especially for the collection classes, which I miss the most. I can think of these:
Most programmers I know just don't use Set, Map, List, etc. and fall back to Vector and plain arrays. I think this makes code ugly in the first place. But I also know that the right choice between TreeSet/HashSet or LinkedList/ArrayList is crucial for performance, and always using Vector and arrays can't be right.
I could code my own implementations of those classes. This seems to be reinventing the wheel, and I don't think I could do it as well as others have done.
Since Java is open source, I could grab the source code of the J2SE collections framework and include it in my application when building for J2ME. I don't know if this is a good idea, though. Perhaps there are good reasons not to do this.
Maybe there already are libraries out there which rebuild the most important features of the collections framework, but are optimized for low-end systems, perhaps by not implementing functionality that is used infrequently. Do you know of any?
Thanks for your answers and opinions!
Edit: I finally found a (complex, but nice) solution, and I thought by providing my own answer and accepting it, the solution would become visible at the top. But to the contrary, my answer is still at the very bottom.
J2ME is brutal, and you're just going to have to resign yourself to doing without some of the niceties of other platforms. Get used to Hashtable and Vector, and writing your own wrappers on top of those. Also, don't make the mistake of assuming that J2ME is standard either, as each manufacturer's JVM can do things in profoundly different ways. I wouldn't worry much about performance initially, as just getting correctness on J2ME is enough of a challenge. It is possible to write an app that runs across J2ME, J2SE and Android, as I've done it, but it takes a lot of work.

One suggestion that I'd have is that you write the core of your application logic and keep it strictly to java.lang, java.util and java.io. Anywhere where you're going to be doing something that might interact with the platform, such as the file system or network, you can create an interface that your core application code interacts with, and have different implementations of it for the different environments.

For example, you can have an interface that wraps up HTTP stuff, and uses javax.microedition.io.HttpConnection with J2ME and java.net.HttpURLConnection on Android. It's a pain, but if you want to maintain an app running on all three of those environments, it can get you there. Good luck.
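To make the interface idea concrete, here is a minimal sketch. The names are invented for illustration, and in practice the two implementations live in separate per-platform source trees, since neither platform compiles the other's API; they are shown together here only for brevity.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical names, purely for illustration: the shared core only ever sees
// HttpFetcher; each platform supplies its own implementation at startup.
public interface HttpFetcher {
    byte[] fetch(String url) throws IOException;
}

// J2ME (MIDP/CLDC) implementation, using the Generic Connection Framework.
class MidpHttpFetcher implements HttpFetcher {
    public byte[] fetch(String url) throws IOException {
        javax.microedition.io.HttpConnection conn =
            (javax.microedition.io.HttpConnection) javax.microedition.io.Connector.open(url);
        try {
            return Streams.readAll(conn.openInputStream());
        } finally {
            conn.close();
        }
    }
}

// J2SE / Android implementation, using java.net.
class UrlHttpFetcher implements HttpFetcher {
    public byte[] fetch(String url) throws IOException {
        java.net.HttpURLConnection conn =
            (java.net.HttpURLConnection) new java.net.URL(url).openConnection();
        try {
            return Streams.readAll(conn.getInputStream());
        } finally {
            conn.disconnect();
        }
    }
}

// Small shared helper; sticks to java.io so it stays portable.
class Streams {
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        in.close();
        return out.toByteArray();
    }
}
```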
It's been a while since I asked this question, and a while since I found a nice, working solution for the problem, but I had forgotten to tell you about it.
My main focus was the Java Collections Framework, which is part of the java.util package.
I finally took the source code of Sun's Java 6.0 and copied all the classes that belong to the collections framework into a project of my own. This was a Java 6.0 project, but I used the jars from J2ME as the classpath. Most of the classes I copied depend on other J2SE classes, so there are broken dependencies. Anyway, it was quite easy to cut those dependencies by leaving out everything that deals with serialization (which is not a priority for me) and making some minor adjustments.
I compiled the whole thing with a Java 6 compiler, and Retrotranslator was used to port the resulting bytecode back to Java 1.2.
The next problem is the package name, because you can't deliver classes from java.util with a J2ME application and load them: the bootstrap class loader won't look into the application's jar file, the other class loaders aren't allowed to load anything with that package name, and on J2ME you can't define custom class loaders. Retrotranslator not only converts bytecode, it also helps to change name references in existing bytecode. I had to move and rename all classes in my project, e.g. java.util.TreeMap became my.company.backport.java.util.TreeMap_.
I was then able to write the actual J2ME application in a second Java 6.0 project which referenced the usual java.util.TreeMap, using the generics syntax to create type-safe collections, compile that app to Java 6.0 bytecode, and run it through Retrotranslator to create Java 1.2 bytecode that now references my.company.backport.java.util.TreeMap_. Note that TreeMap is just an example; it actually works for the whole collections framework and even for third-party J2SE jars that reference that framework.
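To illustrate, the application code in that second project looks like ordinary J2SE code. The class below is only an example I made up; the backport package name is the real one from my setup, the rest is invented.

```java
import java.util.Map;
import java.util.TreeMap;

public class HighScores {
    // Written against the ordinary J2SE API, with generics, and compiled as Java 6.
    // Retrotranslator then produces 1.2-level bytecode and rewrites the references
    // to my.company.backport.java.util.TreeMap_ (and friends) for the J2ME build.
    private final Map<String, Integer> scores = new TreeMap<String, Integer>();

    public void record(String player, int score) {
        Integer best = scores.get(player);
        if (best == null || score > best.intValue()) {
            scores.put(player, Integer.valueOf(score));
        }
    }
}
```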
The resulting app can be packaged as a jar and jad file, and runs fine on both J2ME emulators and actual devices (tested on a Sony Ericsson W880i).
The whole process seems rather complex, but since I used Ant for build automation, and I needed Retrotranslator anyway, there was only a one-time overhead to set up the collections framework backport.
As stated above, I did this nearly a year ago, and I'm writing this mostly from the top of my head, so I hope there are no errors in it. If you are interested in more details, leave me a comment. I've got a few pages of German documentation about the process, which I could provide if there is any demand.
We faced exactly this situation in developing zxing. If J2ME is in your list of targets, this is your limiting factor by far. We targeted MIDP 2.0 / CLDC 1.1. If you have a similar requirement, you need to stick to Java 1.2. Java 1.4 language features are definitely not present (like assert) and in general you won't find anything after 1.2 in J2ME.
We did not use external libraries, but, you could package them into your deployed .jar file with little trouble. It would make the resulting .jar bigger, and that could be an issue. (Then you can try optimizers/shrinkers like ProGuard to mitigate that.)
I did end up reimplementing something like Collections.sort() and Comparator since we needed them and they are not in J2ME. So yes, you might consider doing this in some cases, though only where necessary.
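For illustration, a minimal CLDC-friendly stand-in might look like the sketch below (this isn't the actual zxing code, just the general shape of it):

```java
import java.util.Vector;

// Minimal stand-ins for java.util.Comparator and Collections.sort() on CLDC.
interface Comparator {
    int compare(Object a, Object b);
}

final class SortUtils {
    private SortUtils() {}

    // Simple insertion sort; fine for the small vectors typical on J2ME.
    static void sort(Vector v, Comparator c) {
        for (int i = 1; i < v.size(); i++) {
            Object key = v.elementAt(i);
            int j = i - 1;
            while (j >= 0 && c.compare(v.elementAt(j), key) > 0) {
                v.setElementAt(v.elementAt(j), j + 1);
                j--;
            }
            v.setElementAt(key, j + 1);
        }
    }
}
```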
We used Vector and Hashtable and arrays since there is no other choice, really, in J2ME. I would just use them unless you have a reason not to, and that would be performance I guess. In theory JVM makers are already optimizing their implementation but that doesn't mean you couldn't make a better one... I guess I would be surprised if it is worth it in the vast majority of cases. Just make sure you really need to do this before putting in the effort.
To answer part of your question: another collections library would be Javolution, which can be built for J2ME.
I'm preparing to write a COLLADA importer in Java. There aren't any pre-written importers, and even if there were I would probably be picky, so I believe this is best. I am going with COLLADA 1.4 because I do not need the 1.5 features, and many programs don't yet support it. I found the spec for it and plan on following it, but it is much easier to follow by example and reference the spec for additional information.
So with all that... Can some of you who are experienced in COLLADA point me to some simple models that I can use to learn and test my importer as I write it? I will export a few with just geometry, but I need some with textures or materials, with skeletal and keyframe animation, etc. Any suggestions?
Alternatively, I know that Blender can export COLLADA 1.4 and it can import most formats. If you have a really good suggestion for a site that has simple 3D models in another format that I will just import and export as COLLADA, that would be fine too!
Thanks!
I know two decent places at least:
Thingiverse tag: 3D
Google 3D Warehouse
The Google site has lots of Collada files; Thingiverse has more in the way of STL (stereolithography) files.
Download Assimp the "Open Asset Import Library" from http://assimp.sourceforge.net/main_downloads.html. It has a whole bunch of Collada files (in test/models/Collada) that it uses for its test suites.
The problem here is the fact that, as far as I know, there is no tool (at least none available to the public) that supports the full Collada specification (especially 1.5). The part that is in most cases well tested and developed is the geometry-library element, materials, etc., which in 1.5 are usually taken from some 1.4 implementation; that's why tools that claim to support 1.5 often actually don't (physics, kinematics, etc. are in most cases missing or in bad condition). Still, you can easily create decent-enough Collada files (as in: the geometry part is OK, the rest maybe is, maybe not) using the export features of various, primarily 3D modeling, programs (Blender, Maya, 3ds Max, CATIA, etc.). OpenRAVE (used for robotics path planning) actually has some of the best export/import capabilities when it comes to COLLADA and even supports (partially) 1.5 features such as kinematics.
If you decide to use Blender, for example (free and open source, so you can actually look at how the import/export addon works), you can create something simple or complex and export it as COLLADA 1.4 (not 1.5!). OpenRAVE, for instance, uses a custom XML format that is converted internally to Collada (in order to hide the complexity of this standard) and even allows you to embed other formats (mostly for the geometry part) such as OBJ, for which it is much, much easier to find a decent import/export tool. The Khronos Group actually provides OpenCollada (OpenRAVE and many others use it internally, which of course means that when something is badly implemented in OpenCollada, the bug shows up in all of them :P). The Assimp library also offers quite a lot, but the major problem is the misinformation about what it actually supports from the Collada standard. In fact it is really, really hard to find a reference on the implemented features when it comes to Collada, and sadly I recently started using 1.5 (kinematics) only to discover that Assimp supports 1.4 alone and is bound to it to such an extent that it throws errors at you the moment it encounters a typical 1.5 element (even if it is empty!), which IMHO is a really bad implementation choice on the part of the developers. In its list of supported formats, Assimp's site states only Collada, and no version is given.
I know this question is old and answered, but I hope this info helps. I myself am writing a parser in C# for internal use where I'm currently working, and it's a real pain to discover how badly supported this standard (already an ISO standard) is. The complexity of Collada is huge, but that's why it is considered a pipeline format and not something you are supposed to use in a final product that relies on good performance (both speed and storage).
Blendswap.com is a really great site with tons of models for Blender. Once you sign up for an account, you can download them for free, and you can even use the majority of the models on the site commercially. Before you download, it lets you know whether you have to give the author credit; some models can be used without giving credit, although doing so is still recommended. Keep in mind that there is a 200 MB download limit per month. There are plenty of models that are around 1 MB, so check the size of the file before downloading. Then you can use the Collada exporter in Blender; make sure to check the settings on the Collada exporter.
The WebGL framework three.js has some examples here on their GitHub page.
The monster file can be seen in action here.
I've used MyGeneration, and I love it for generating code that uses the Data Access Application Blocks from Microsoft for my data access layer, and for keeping my database concepts in sync with the domain I am modeling. It took a steeper-than-expected learning curve one weekend to make it productive, though.
I'm wondering what others are doing related to code generation.
http://www.mygenerationsoftware.com
http://www.codesmithtools.com/
Others?
Back in 2000, or so, the company I worked for used a product from Veritas Software (I believe it was) to model components and generate code that integrated components (dlls). I didn't get a lot of experience with it, but it seems that code generation has been the "holy grail" for a long time. Is it practical? How are others using it?
Thanks!
T4 is the CodeSmith killer from Microsoft!
Go check it out. Microsoft doesn't want to destroy their partners, so they don't advertise it, but it is a thing to be reckoned with, and it's free and comes installed with Visual Studio 2008.
www.olegsych.com
codeplex.com/t4toolbox
www.t4editor.net
I have used LLBLGen and NHibernate successfully to generate entity and DAL layers.
We use Codesmith and have had great success with it. I am now constantly trying to find where we can implement templates to speed up mundane processes.
I've done work with CSLA and used CodeSmith to generate my code using the CSLA templates.
codesmithtools.com
If your database is your model, SubSonic has an excellent code generator that, as of v2.1, no longer requires ActiveRecord (you can use the Repository pattern instead). It's less flexible than others, but there are customizations that can be made in the stock templates.
I have used CodeSmith and MyGeneration but wasn't overly keen on either; they felt somewhat tedious to use, what with learning template languages, etc.
SubSonic is what we sometimes use here to generate a Data Access Layer. Used in the right size projects, it is a fantastic time saving tool. clicky
I see code generation as harmful as well, but only if you use third-party tools like CodeSmith and MyGeneration. I have two stored procedures that generate my domain objects and domain interfaces.
Example
GenerateDomainInterface 'TableName'
Then I just copy and paste the output into Visual Studio. It works great for those tasks I hate to do.
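For illustration only: my procedures are T-SQL, but the same idea (read the table metadata, emit an interface) looks roughly like the sketch below in Java with JDBC. All the names, the connection URL, and the deliberately tiny type mapping are made up.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Illustrative sketch of metadata-driven code generation, not the original procedures.
public class GenerateDomainInterface {
    public static void main(String[] args) throws Exception {
        String table = args[0];
        Connection con = DriverManager.getConnection("jdbc:yourdb://localhost/yourschema");
        DatabaseMetaData meta = con.getMetaData();

        StringBuilder src = new StringBuilder("public interface I" + table + " {\n");
        ResultSet cols = meta.getColumns(null, null, table, "%");
        while (cols.next()) {
            String column = cols.getString("COLUMN_NAME");
            String type = javaTypeFor(cols.getInt("DATA_TYPE"));
            // No name mangling here; a real generator would camel-case the column names.
            src.append("    ").append(type).append(" get").append(column).append("();\n");
        }
        src.append("}\n");

        // Paste the output into the project, just like the stored-procedure version.
        System.out.println(src);
        con.close();
    }

    private static String javaTypeFor(int sqlType) {
        switch (sqlType) {
            case java.sql.Types.INTEGER: return "int";
            case java.sql.Types.VARCHAR: return "String";
            default:                     return "Object";
        }
    }
}
```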
Two frameworks I use often.
Ragel
Something worth checking out is Ragel. It's used to generate code for state machines.
You just add some simple markup to your source code, then run a generator on it.
Ragel generates code for C, C++, Objective-C, D, Java and Ruby, and it's easy to mix it with your regular source.
Ragel even allows you to execute code on state transitions and such. It makes it easy to create file format and protocol parsers.
Some notable projects that use Ragel are Mongrel, a great Ruby web server, and Hpricot, a Ruby-based HTML parser sort of inspired by jQuery.
Another great feature of Ragel is how it can generate graphviz-based charts that visualize your state machines. Below is an example taken from Zed Shaw's article on ragel state charts.
(source: zedshaw.com)
XMLBeans
XMLBeans is a Java-based XML binding library. It's got a great workflow and I use it often.
XMLBeans processes an XML schema that describes your model into a set of Java classes that represent that model. You can programmatically create models and then serialize them to and from XML.
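The workflow looks roughly like this; the PurchaseOrder types below are hypothetical stand-ins for whatever XMLBeans' scomp generates from your particular schema:

```java
import java.io.File;

// Hypothetical generated types: running scomp against a schema with a
// <purchase-order> root element yields classes along these lines.
import com.example.po.PurchaseOrderDocument;
import com.example.po.PurchaseOrderDocument.PurchaseOrder;

public class XmlBeansExample {
    public static void main(String[] args) throws Exception {
        // Build a model in code and serialize it to XML...
        PurchaseOrderDocument doc = PurchaseOrderDocument.Factory.newInstance();
        PurchaseOrder po = doc.addNewPurchaseOrder();
        po.setCustomer("ACME");          // setter depends on the schema
        doc.save(new File("order.xml"));

        // ...and read one back, getting typed access instead of raw DOM nodes.
        PurchaseOrderDocument parsed = PurchaseOrderDocument.Factory.parse(new File("order.xml"));
        System.out.println(parsed.getPurchaseOrder().getCustomer());
    }
}
```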
I have used CodeSmith. Was pretty helpful.
I love to use
SubSonic. Open source is the way to go with code generation, I think, because it is very easy to modify the templates and the core, as they always tend to have bugs or one or two things you want to do that aren't built in.
I've used code generation for swizzle functions in a vector math library. I used a custom Perl script for it. None of the FLOSS generators I looked at seemed well-suited to creating swizzle functions.
I generally use C++ templates, rather than code generation.
I've primarily used LLBLGen Pro to generate code. It offers a variety of patterns to use for generation, and you can supply your own patterns, just like with CodeSmith. The customer support has been excellent.
Essentially, I generate my business objects and DAL using LLBLGen and keep them up to date. The code templates have sections where you can add your own logic that won't be wiped out during regeneration. It's definitely worth taking a look.
We custom-build our code generation using LINQ and XML literals (VB).
We haven't found a way to break the solutions into templates yet; however, those two technologies make this task so trivial, I don't think we will.
I'd consider code generation harmful, as it bloats the codebase without adding new logic or insight. Ideally one should raise the level of abstraction and use data files, templates, macros, etc. to avoid generating large amounts of boilerplate code. It helps you get things done quickly but can hurt maintainability in the long run.
If your chosen programming language becomes much less painful by generating it from some template language, that seems to indicate you'd save even more time by doing the higher-level work in another, perhaps more dynamic, language. YMMV.
LLBLGen Pro is an excellent tool which allows you to write a database agnostic solution. It's really quick to pick up the basic features. Advanced features aren't much more challenging. I highly recommend you check it out.
I worked for four years as the main developer in a web agency. As I wrote my first two or three websites from the ground up, I soon realized that it was going to be a very boring task to do it that way every time. So I started writing my own website generator engine.
My starting point was this site: http://www.codegeneration.net/. I took one of their examples for simple CRUD generation and extended it to the point that I was generating entire sites with it.
I used xml for the definition of various parts of the website (pages, datalists, joins, tables, form management). The generated web sites were completely detached from the generator, so the generated website could also be modified by hand.
Here is their article http://www.codegeneration.net/tiki-read_article.php?articleId=19.
I've done several one-offs of code generation using Castor to create Java source code based on XSDs. The latest use was to create Java classes for an OpenTravel Alliance implementation. The OTA schema is pretty hairy and would have been a bear to do by hand. Castor did a pretty good job given the complexity of the schema.
Python.
I have used MyGeneration, which uses C# to write your code templates. However, I started using Python and found that I can write code that generates other code faster in that language than I could in C#. Subsequently, I have used Python to generate C#, T-SQL, and VB.
Generally, code that generates other code tends to be harder to follow by its very nature. Python's cleaner syntax helps tremendously by making it more readable and more maintainable than the equivalent in C#.
CodeSmith for .NET.
I wrote a utility where you specify a table and it generates an Oracle trigger which records all changes to that table. Makes logging really simple.
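Roughly, the idea is something like the sketch below (a reconstruction for illustration, not the actual utility; the audit_log table and the one-letter operation codes are invented):

```java
// Sketch of generating an Oracle change-logging trigger for a given table.
public class TriggerGenerator {

    public static String changeLogTrigger(String table) {
        return "CREATE OR REPLACE TRIGGER trg_" + table + "_audit\n"
             + "AFTER INSERT OR UPDATE OR DELETE ON " + table + "\n"
             + "FOR EACH ROW\n"
             + "DECLARE\n"
             + "  op VARCHAR2(1);\n"
             + "BEGIN\n"
             + "  IF INSERTING THEN op := 'I';\n"
             + "  ELSIF UPDATING THEN op := 'U';\n"
             + "  ELSE op := 'D';\n"
             + "  END IF;\n"
             + "  INSERT INTO audit_log (table_name, changed_at, operation)\n"
             + "  VALUES ('" + table + "', SYSDATE, op);\n"
             + "END;";
    }

    public static void main(String[] args) {
        // Print the DDL for the table named on the command line.
        System.out.println(changeLogTrigger(args[0]));
    }
}
```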
There's another one I wrote that generates a Delphi class that models any database table you give it, but I consider it a code smell to do that, so I rarely use it.
At the company we've written our own to generate most of our entity/dalc/business classes and the related stored procedures as it took only a little time and we had some special requirements. Although I'm sure we could've achieved the same thing using an existing generator, it was a fun little project to work on.
CodeSmith's been recommended by many people, and it does seem to be a good one. Personally, all I need from a code generator is for it to make it easy to amend templates.
I use the Hibernate Tools in MyEclipse to generate domain models and DAO code from my data model. It seems to work pretty well (there are some issues if you write custom methods in your DAOs; these seem to get lost on overwrites), especially in conjunction with Spring.
SubSonic is great!! The query capability is easy to grasp, and the stored procedure implementation is truly awesome. I could go on and on. It makes you productive instantly.
I mainly code in C#, and when I need code generation I do it in XSLT when the source can easily be converted to XML, or a Ruby script when it's more complex.
If the code generation part needs frequent modifications by more than a few developers, CodeSmith works pretty well (and is easier for new developers to learn than XSLT or Ruby).
Outsystems' Agile Platform can be used to generate open source, well-documented C# and Java applications. Because it also has several features related to deploying, managing and changing applications, most people end up using it not just to generate the code but to manage the full life-cycle of their web applications.
For some time, I've used a home-grown script/template language for code generation. (I've used that language mostly for no other reason than to find a use for my little pet project.)
Recently, I've created some SQL*PLUS scripts to create database access code (no Hibernate for us...)
MyGeneration all the way!
MyGeneration is an extremely flexible template-based code generator written in Microsoft .NET. MyGeneration is great at generating code for ORM architectures. The metadata from your database is made available to the templates through the MyMeta API.