I'm trying to use the MPXJ library to get fields from an MS Project .mpp file. I managed to get the tasks and resources. My file contains additional fields like start date, end date, comments etc. Can anyone help me extract these fields?
Thanks in advance :)
You may find it useful to take a look at the notes in the "getting started" section of the MPXJ web site. To summarise briefly, data from Microsoft Project, and other project planning tools, typically consists of a top level project, tasks, resources, and assignments (which link tasks and resources together).
This is pretty much how MPXJ represents the data read from a project plan. The attributes of each of these objects can be set or retrieved using the relevant set and get methods on each object. So for example, the Task object in MPXJ exposes setStart() and getStart() methods to allow you to work with the task start date. The method names follow the names used for the attributes in Microsoft Project, so hopefully you will find it straightforward to locate the attributes you need. You may also find the API documentation helpful in this respect.
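As a minimal sketch of reading those attributes with MPXJ's UniversalProjectReader (the file name is a placeholder, and the exact return types of the date getters vary between MPXJ versions):

```java
import net.sf.mpxj.ProjectFile;
import net.sf.mpxj.Task;
import net.sf.mpxj.reader.UniversalProjectReader;

public class MppFields {
    // Small helper for formatting one task's fields.
    static String describe(String name, Object start, Object finish) {
        return name + " start=" + start + " finish=" + finish;
    }

    public static void main(String[] args) throws Exception {
        // "project.mpp" is a placeholder path; replace with your file.
        ProjectFile project = new UniversalProjectReader().read("project.mpp");
        for (Task task : project.getTasks()) {
            // getStart()/getFinish() map to the Start/Finish columns in
            // Microsoft Project; getNotes() holds the task comments.
            System.out.println(describe(task.getName(), task.getStart(), task.getFinish()));
            System.out.println("  notes=" + task.getNotes());
        }
    }
}
```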
I have a list of words (1K+) in a file, and I would like to get their definitions and save them. I was thinking about getting their definitions from Google, as it's the first thing that it shows. The way I thought about doing that is quite rudimentary: create a URL instance pointing to the Google search for the given word, and read the content using streams. Then "filter" the definition, which is always in between "data-dobid="dfn"><.span>" and "<./span>"
For example:
[...]data-dobid="dfn"><.span>. unwilling or refusing to change one's views or
to agree about something<./span>.[...]
Which is the definition of intransigent
However, I would like to know if there is a more "efficient" way of doing so, for example without retrieving all the other results of the search. Also, is it possible to load multiple results in a background thread, so that when I want to "decode" a definition and save it I don't always have to wait for the search to complete?
The more efficient approach is to download a dictionary which you can then load locally. This gives you a local file or database that is readily searchable.
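As a rough sketch, assuming the downloaded dictionary is a plain text file with one tab-separated word/definition pair per line (the actual format will vary by dictionary), loading it into a map could look like:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.HashMap;
import java.util.Map;

public class LocalDictionary {
    // Parses "word<TAB>definition" lines into a map; the format is an
    // assumption -- adapt it to whichever dictionary file you download.
    static Map<String, String> load(Reader source) {
        Map<String, String> dict = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                int tab = line.indexOf('\t');
                if (tab > 0) {
                    dict.put(line.substring(0, tab), line.substring(tab + 1));
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return dict;
    }

    public static void main(String[] args) {
        // In-memory sample standing in for a downloaded dictionary file.
        String sample = "intransigent\tunwilling or refusing to change one's views\n";
        Map<String, String> dict = load(new StringReader(sample));
        System.out.println(dict.get("intransigent"));
    }
}
```

Once loaded, every lookup is a constant-time map access, so there is no per-word network latency at all and nothing to run in a background thread.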
This approach is not only computationally efficient, but it also ensures you are using the information correctly under its license. What you are proposing is commonly called "scraping" and may go against various licenses and terms of service.
This blog post lists several freely available and freely licensed dictionaries.
This AskUbuntu.SE question describes some more of the technical work required to acquire a free dictionary and reference it from the command line. You would want to replicate these reading patterns to load the data in Java.
Yet another approach would be to use a freely available and appropriately licensed API such as https://dictionaryapi.com/ . This would still use HTTP calls, but it is an explicit API for looking up human-language word definitions, so you won't have to parse HTML, and it is clearly licensed for you to use.
Finally, there are some similar, if not duplicate, questions on Stack Overflow and Stack Exchange, such as this one: How to implement an English dictionary in Java?
I have been given the task of "generating sequence diagrams automatically on execution of a JUnit test case" in Eclipse. I am learning UML. I found tools that can generate a sequence diagram, and I am aware of JUnit, but how do I combine the two?
The tools that I found good were UMLet, ModelGoon UML, and ObjectAid. But I zeroed in on ModelGoon, which I found simple and easy to use. How do I automate this task? Please guide me.
If there are any other tools available, please point me to them.
First: this is a very good idea, and there are several ways to go about it. I will assume that you are working in a JVM language (e.g. Kotlin or Java), so the suggestions I make are biased by that.
Direct approach
Set up your logging to log using JSON; it makes the rest much simpler: https://www.baeldung.com/java-log-json-output
Make a library where you log the name of the component/method you are in and the session you are processing. There are many ways of doing this, but a simple one is to use a thread-local variable: set the variable to contain the name of the thing you are tracing ("usecase foobar") and some unique ID (UUIDs are a decent choice). Another would be to generate some tracing ID (or get one from an external interaction) and pass it as a parameter to all involved methods. Both of these will work, and which one is simplest in practice depends on the architecture of your application.
In the methods you want to trace, write a log entry that contains that tracing information (name of usecase, trace ID, or any combination thereof), the location where the log entry was written, and any other information you want to add to your sequence diagram.
Run your test normally. A log will be produced. You need to be able to retrieve that log. There are many ways this can be done, use one :-)
Filter the log entries so you get only the ones you are interested in. Using the "jq" utility is a decent choice.
Process the filtered output to generate "plant uml" input files (http://plantuml.com/) for sequence diagrams.
Process the plant UML files to get sequence diagrams.
Done.
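The core of the direct approach can be sketched roughly as follows; the class and method names are illustrative, and a real version would write JSON log entries to your logging framework rather than an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

public class TraceLog {
    // Thread-local trace ID, as described above; set once per use case.
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();
    // Stand-in for the filtered log: {traceId, from, to, message} tuples.
    static final List<String[]> ENTRIES = new ArrayList<>();

    // Call this in the methods you want to trace.
    static void trace(String from, String to, String message) {
        ENTRIES.add(new String[] {TRACE_ID.get(), from, to, message});
    }

    // Turns the collected entries for one trace ID into PlantUML
    // sequence-diagram input.
    static String toPlantUml(String traceId) {
        StringBuilder sb = new StringBuilder("@startuml\n");
        for (String[] e : ENTRIES) {
            if (traceId.equals(e[0])) {
                sb.append(e[1]).append(" -> ").append(e[2])
                  .append(" : ").append(e[3]).append('\n');
            }
        }
        return sb.append("@enduml\n").toString();
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString();
        TRACE_ID.set(id);
        trace("Test", "OrderService", "placeOrder");
        trace("OrderService", "Repository", "save");
        System.out.println(toPlantUml(id));
    }
}
```

Feeding the resulting text to PlantUML then renders the sequence diagram for that one test run.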
Industrial approach
Use some standard tooling for tracing like "https://opentracing.io/", instrument your application using this tooling, and extract your diagrams using that standard tooling.
This will also work in production and will probably scale much better than the direct approach, but if scaling isn't your concern, then the direct approach may be what you want.
I'm looking for an open-source web crawler written in Java which, in addition to the usual web crawler features such as depth limits and multi-threading, supports custom handling of each file type.
To be more precise, when a file is downloaded (or is about to be downloaded), I want to handle the saving operation myself. HTML files should be saved in one repository, images in another location, and other files somewhere else. Also, the repository need not be just a simple file system.
I've heard a lot about Apache Nutch. Does it have the ability to do this? I'm looking to achieve this as simple and fast as possible.
Based on the assumption that you want a lot of control over how the crawler works, I would recommend crawler4j. There are many examples, so you can get a quick glimpse of how things work.
You can easily handle resources based on their content type (take a look at the Page class - it is the class of the object that contains information about a fetched resource).
There are no limitations regarding the repository. You can use anything you wish.
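The routing itself is simple; here is a sketch of the content-type dispatch, independent of any particular crawler (crawler4j reports the content type on the fetched Page object, so you would call something like this from your crawler's visit callback):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RepositoryRouter {
    // Chooses a target repository root from the content type reported by
    // the crawler. The directory names here are illustrative; the targets
    // could just as well be database tables or object-store buckets.
    static Path targetFor(String contentType, String fileName) {
        String base;
        if (contentType == null) {
            base = "other";
        } else if (contentType.startsWith("text/html")) {
            base = "html";
        } else if (contentType.startsWith("image/")) {
            base = "images";
        } else {
            base = "other";
        }
        return Paths.get(base, fileName);
    }

    public static void main(String[] args) {
        System.out.println(targetFor("text/html; charset=utf-8", "index.html"));
        System.out.println(targetFor("image/png", "logo.png"));
    }
}
```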
We have used liquibase at our company for a while, and we've had a continuous integration environment set up for the database migrations that would break a job when a patch had an error.
An interesting "feature" of that CI environment is that the breakage had a "likely culprit", because all patches need to have an "author", and the error message shows the author name.
If you don't know what Liquibase is, that's OK; it's not the point.
The point is: having a person's name attached to an error is really good for the software development process: problems get addressed way faster.
So I was thinking: Is that possible for Java stacktraces?
Could we possibly have a stack trace with people's names along with line numbers, like the one below?
java.lang.NullPointerException
at org.hibernate.tuple.AbstractEntityTuplizer.createProxy(AbstractEntityTuplizer.java:372:john)
at org.hibernate.persister.entity.AbstractEntityPersister.createProxy(AbstractEntityPersister.java:3121:mike)
at org.hibernate.event.def.DefaultLoadEventListener.createProxyIfNecessary(DefaultLoadEventListener.java:232:bob)
at org.hibernate.event.def.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:173:bob)
at org.hibernate.event.def.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:87:bob)
at org.hibernate.impl.SessionImpl.fireLoad(SessionImpl.java:862:john)
That kind of information would have to be pulled out from a SCM system (like performing "svn blame" for each source file).
Now, forget about trashing the compilation time for a minute: would that even be possible?
Could metadata be added to class files like that?
In principle you can add custom information to .class files (there's an attribute section where you can add stuff). You would have to write your own compiler/compiler extension to do so. There is no way to add something to your source code that will then show up in the class file.
You will also have major problems in practice:
The way stack traces are built/printed is not aware of anything you add to the class file. So if you want this stuff printed as you show above, you would have to hack some core JDK classes.
How much detail do you want? The last person who committed any change to a given file? That's not precise enough in practice, unless files are owned by a single developer.
Adding "last-committed-by" information at a finer granularity, say per method, or even worse, per line, will quickly bloat your class file (and several class-file structures, such as the constant pool, are limited to 64K entries)
As a side note, whether or not blaming people for bugs helps getting bugs fixed faster strongly depends on the culture of the development organization. Make sure you work in one where this helps before you spend a lot of time developing something like this.
Normally such a feature can be implemented on top of the version control system. You need to know the revision of your file in your version control system; then you can call the blame/annotate command to get information on who changed each individual line. You don't need to store this info in the class file, as long as you can identify the revision of each class you deploy (e.g. you only deploy a certain tag or label).
If you don't want to go into the version control system when investigating a stack trace, you could store the line annotation info in the class file, e.g. using a class post-processor during your build that adds a custom annotation at the class level (this is relatively trivial to implement using ASM). Then the logger that prints the stack trace could read this annotation at runtime, similarly to showing jar versions.
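As a rough illustration of the annotation idea (the @Blame annotation and the author value are invented here; a real build step would fill them in from the version control system, ideally at method or line granularity), a logger-side printer could look like:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class BlameDemo {
    // Custom runtime-visible annotation holding "blame" metadata; a class
    // post-processor (e.g. using ASM) could inject it during the build.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Blame {
        String value();
    }

    @Blame("john")
    static class Repository {
        static void load() {
            throw new NullPointerException();
        }
    }

    // Prints a stack trace with the author appended to frames whose class
    // carries the @Blame annotation.
    static String format(Throwable t) {
        StringBuilder sb = new StringBuilder(t.toString()).append('\n');
        for (StackTraceElement frame : t.getStackTrace()) {
            sb.append("\tat ").append(frame);
            try {
                Blame blame = Class.forName(frame.getClassName())
                        .getAnnotation(Blame.class);
                if (blame != null) {
                    sb.append(':').append(blame.value());
                }
            } catch (ClassNotFoundException ignored) {
                // Class not loadable here; leave the frame unannotated.
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        try {
            Repository.load();
        } catch (NullPointerException e) {
            System.out.print(format(e));
        }
    }
}
```

This only gets you class-level blame, but it needs no JDK hacking, since the custom printer lives in your logging code rather than in Throwable itself.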
One way to add custom information to your class files is to use annotations in the source code. I don't know how you would put that information reliably into the stack trace, but you could create a tool to retrieve it.
As @theglauber correctly pointed out, you can use annotations to add custom metadata. Although I am not really sure whether you can retrieve that information from your database by implementing beans and decorating your custom exception manager.
Background: I've been tasked with implementing search engine sitemaps for a website running on Sling. The site has multiple country-specific sites, and every country-specific site can have multiple localizations - for instance, http://ca.example.com/fr would be the French-localized version of the Canadian site, and would map to /content/ca/fr . I can't change this content structure, and unfortunately both the country and localization nodes have the same sling:resourceType. Also, the administrative types want a sitemap.xml for each country/localization pair, not one per country site.
Generating the sitemaps is an easy task; my problem is needing a 'sitemap' node for each country/localization pair - because of the way country and localization nodes are added (and them having the same resource type), I can't currently think of a good automated way to add the sitemap node.
It would be nice if I could somehow define a "virtual resource" that maps requests for /{country}/{localization}/sitemap.xml to a handling script; I've been browsing around and have bumped into ResourceProvider and OptingServlet, but they seem to be pretty focused on absolute paths - or adding selectors to an existing resource, which doesn't seem like an option to me.
Any ideas if there's some more or less clean way to handle this? Adding new countries/localizations doesn't happen every day, but having to add the 'sitemap' node manually still isn't an optimal solution.
I've been considering whether it's perhaps a better idea to have a running service that updates the sitemaps X times per day, and generate the sitemap.xml nodes as simple file resources in the JCR, instead of involving the Sling resolver... but before going that route, I'd like some feedback :)
EDIT:
Turns out requirements changed, and they now want the sitemaps to be configurable per localization - makes my job easier, and I won't have to work against Sling :)
Sling is a resource-based framework, so you have to have a resource (node) in the JCR that your request targets.
You have two options:
1) Create a Sitemap template which includes the logic to display the Sitemap, or has your Sitemap component included on it. The Sitemap logic can be extracted into a class or service as you see fit. The site map for each site would live at:
- /content/us/en/sitemap.xml
- /content/ca/fr/sitemap.xml
2) Create a single sitemap resource (node) that you reference using two Sling selectors, the country and language codes. This method allows for caching; however, you may run into cache-clearing issues as it's a single resource.
/content/sitemap.us.en.xml
/content/sitemap.ca.fr.xml
You can look at PathInfo for extracting the Sling selector information to determine which sitemap to render:
http://dev.day.com/docs/en/cq/current/javadoc/com/day/cq/commons/PathInfo.html
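For illustration, the selector extraction that PathInfo performs can be sketched like this (a simplified, hand-rolled version; in a real Sling servlet you would use PathInfo or the request's RequestPathInfo instead):

```java
public class SelectorInfo {
    // Extracts the Sling selectors: the dot-separated segments between the
    // resource name and the extension in the last path segment.
    static String[] selectors(String path) {
        String lastSegment = path.substring(path.lastIndexOf('/') + 1);
        String[] parts = lastSegment.split("\\.");
        if (parts.length <= 2) {
            return new String[0]; // just name + extension, no selectors
        }
        String[] selectors = new String[parts.length - 2];
        System.arraycopy(parts, 1, selectors, 0, selectors.length);
        return selectors;
    }

    public static void main(String[] args) {
        // With option 2 above, the two selectors carry country and language.
        String[] s = selectors("/content/sitemap.ca.fr.xml");
        System.out.println("country=" + s[0] + " language=" + s[1]);
    }
}
```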
If I were doing this I would require the manual addition of the sitemap to each site, and keep the resource under /content//
You could even look into creating a Blueprint site using MSM (if you're using the platform I think you are) and roll out new sites using that, which lets you create a site template.
If you want a GET to /{country}/{localization}/sitemap.xml to be processed by custom code, simply create a node at that location and set its sling:resourceType as needed to call a custom servlet or script.
To create those sitemap.xml nodes automatically, you could use a JCR observer to be notified when new /{country}/{localization} trees are created, and create the sitemap.xml node then.
For configurable sitemaps you can add properties to the sitemap.xml node, and have your custom servlet or script use their values to shape its output.
You could do that without having a sitemap.xml node in the repository, using a servlet filter or a custom ResourceProvider, but having those nodes makes things much easier to implement and understand.
Note: I am working on a Sling resource merger, which is a custom resource provider with the ability to merge multiple resources based on your search paths.
For instance, if your search paths are
/apps
/libs
Hitting /virtual/my/resource/is/here will check
/apps/my/resource/is/here
/libs/my/resource/is/here
There are some options like:
add/override property
delete a property of the resource under /libs
reorder nodes if available
I intend to submit this patch as soon as possible.
Code is currently located at https://github.com/gknob/sling-resourcemerger and tracked by https://issues.apache.org/jira/browse/SLING-2986