I have to assign a task to a team member to programmatically write data to Microsoft Navision and also read from it. Specifically, we will be writing data from one of our systems into the customer's Navision financials module, on a periodic batch basis, say weekly.
I have programmed against Sage before, where I provided an XLS output in the format Sage expects and the system administrator imported it into Sage.
Is there a similar process for Navision? Specifically for financials?
I would prefer to write my data to a file and have someone else input that data into Navision. I would also prefer a data dump from Navision (Excel or XML) so that I can read it back into our system. I don't want the risks associated with pumping data directly into a financial system.
Our system is Java-based, and we would prefer not to use .NET if possible.
Options?
The first thing you need to find out is what version of NAV the customer is using.
Earlier versions (I think pre-4) only allowed data to be imported or exported via a Dataport object, which supports delimited file structures (CSV, tab-separated, etc.).
Later versions of the software also have XMLports, which, as the name suggests, allow XML files to be imported or exported.
Both of these solutions would require development work inside NAV, as there are very few standard import/export objects of either type.
These are normally written by the NAV solution center that provides support for the system, although occasionally companies I have worked with have had internal staff with this knowledge.
Expanding further, there is also the capability to read and write Excel spreadsheets directly, but this approach can be painfully slow because it uses the Excel COM Interop objects.
It really depends on the version of Navision you are using and whether your customer has programming rights in the system at all(!). It is very likely that they don't.
It also depends on whether the weekly import is done by hand each week or should be 100% automated. Manually started imports/exports work well with Dataports.
Often in accounting systems, imported data first goes into special staging tables ("ledgers"), from which it is booked into the real system only after someone has reviewed it. Since you can discard the data from those ledgers, I would not worry too much about pumping data into a live system this way.
If your licence allows programming, you could write/read text files and build your own import/export, which will be more flexible. I have personally created all kinds of imports and exports. We have one example where Navision polls a certain email address and looks for special email attachments, which it then imports. We have called web services and provided our own. The older "Classic Client" versions also offer read and write access via a C/FRONT interface. This way you can automate data import/export completely. Most of these rely on .NET modules, however.
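On the Java side, the text-file route is trivial to produce. A minimal sketch follows; the field layout and separator here are made up, and the real layout must match whatever Dataport/XMLport is written on the NAV side:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class NavExport {
        public static void main(String[] args) throws IOException {
            // Field order and separator are illustrative only; match the customer's Dataport.
            try (PrintWriter out = new PrintWriter(new FileWriter("nav-import.csv"))) {
                out.println(String.join(";", "CUST0001", "2011-01-31", "1250.00", "INV-0042"));
            }
        }
    }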
Hope this helps. If not, post the version of Navision and describe the planned export/import in detail.
I am in the process of designing a tool to visualize Java logs. Specifically, these logs are generated by printing to the console whenever a method is invoked and whenever it returns. These log statements are injected into the entire Android OS source code using bytecode manipulation. So far we have been able to instrument the Android OS and generate these log statements. The fields contained in each log statement are: the process that invoked the method, the method signature, the argument types, the return type, and a timestamp.
The tool that uses these logs will have a detailed view (on zooming in) and a high-level overview (on zooming out). I am looking for efficient ways of visualizing and navigating this huge log file at the high-level overview to extract valuable information. Each log statement has a hierarchical relationship to other log statements: for example, a log statement for a method invocation is the parent of all the log statements of the methods invoked from within that method.
My questions are,
What would be an effective way to visualize and navigate these hierarchical log statements in a huge log file to get a good high-level overview? Sequence diagrams are useful for a detailed view but aren't suitable for a huge call trace.
Are there any existing tools in the market that have similar functionality? I have looked at log visualization tools but none have the high-level overview visualization.
As an app developer equipped with an instrumented VM that generates a log statement for each called method, and who can run an app on said Android VM, what information would be useful to you?
Any other suggestions?
Thanks in advance.
Edit: I added a few more details about the hierarchical nature of the log statements.
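For concreteness, a minimal sketch of folding such entry/exit statements into per-process call trees; the line format ("ENTER pid signature" / "EXIT pid signature") is hypothetical and stands in for whatever the instrumentation actually emits:

    import java.nio.file.*;
    import java.util.*;

    public class CallTreeBuilder {
        static class Node {
            final String signature;
            final List<Node> children = new ArrayList<>();
            Node(String signature) { this.signature = signature; }
        }

        public static void main(String[] args) throws Exception {
            Map<String, Deque<Node>> stacks = new HashMap<>();   // one call stack per process
            Map<String, List<Node>> roots = new HashMap<>();     // top-level calls per process

            for (String line : Files.readAllLines(Paths.get("trace.log"))) {
                String[] f = line.split(" ", 3);                 // kind, pid, signature
                Deque<Node> stack = stacks.computeIfAbsent(f[1], p -> new ArrayDeque<>());
                if (f[0].equals("ENTER")) {
                    Node n = new Node(f[2]);
                    if (stack.isEmpty()) {
                        roots.computeIfAbsent(f[1], p -> new ArrayList<>()).add(n);
                    } else {
                        stack.peek().children.add(n);            // child of the enclosing call
                    }
                    stack.push(n);
                } else {                                         // "EXIT": unwind one frame
                    stack.pop();
                }
            }
        }
    }

Once the trees exist, zoom levels fall out naturally: the overview renders only the top one or two levels of each tree, and zooming in expands subtrees.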
I believe the diagram that is closest to what you describe is a sequence diagram. There used to be a plugin for Eclipse that would monitor your source code and build up a sequence diagram of all the calls/returns/timing/etc.
Here's a description of how the diagrams work and how to create them by hand:
http://agilemodeling.com/artifacts/sequenceDiagram.htm
They are a great way to model how the code interacts at runtime at a very high level.
Just from a quick query, this is in the Eclipse marketplace:
https://marketplace.eclipse.org/content/objectaid-uml-explorer#group-metrics-tab
I haven't used this one; it's just what came up in the query. However, I do remember creating an awesome diagram with code I was analyzing; it covered an entire wall of my cube and was really helpful.
From your question, the data you're using are server logs. Making sense of log file structure is a complex task, and processing different log files for visualization is possible only within certain limitations; you may need to develop your own tool to classify and aggregate the log files for processing and then visualization. However, some tools and platforms already exist, such as:
One effective approach is to process the data in big-data pipelines. Since these log statements are server logs, using Apache Spark, Splunk, or Cisco MARS, integrated with SIEM solutions to process logs in real time, is one of the most effective ways of processing server log files.
The HPE Security ArcSight Data Platform can also deliver a high-performance, cost-effective solution that unifies big-data collection, reporting, and analysis across enterprise machine data.
ClockView and PeekKernelFlows use NetFlow logs to monitor large IP spaces over long periods. In addition, ELVIS, an extensible log visualization tool, can process server logs and provide a summary view of the important data.
I highly suggest you use a simple log structure (e.g. FTP or IIS server log files with selected fields) and visualize the output using D3.js modules for static and interactive visualization, specifically parallel coordinates.
I recently asked a question about Neo4j, which I got working and which seems nice. It's embeddable and it's written in Java and there aren't (too) many dependencies.
However, it's a graph DB and I don't know if it's a good idea or not to use it as a simple key/value store.
Basically I've got a big map, which in Java would look like this:
Map<Integer,Map<String,String>>
I've got a few tens of millions of entries in the main map and each entry contains itself a map of property/values. The "inner" map is relatively small: about 20 entries.
I need a way to persist that map from one run of the webapp to the next.
Using Neo4j, what I did is create one node for every ID (integer) and then put one property for each entry inside the inner map. From my early testing it seems to work but I'm not sure it's a good way to proceed.
Which embeddable DB, written in Java, would you use?
The requirements are:
written in Java
embeddable (so nothing too big)
not SQL (*)
open source
easy to backup (I need to be able to make "live" backups, while the server is running)
My terminology may be a bit wrong too, so feel free to correct me. For my "map of maps", the best fit would be a key/value pair DB, right?
I'm a bit lost as to the differences between key/value pair DBs, document DBs, big tables, graph DBs, etc.
I'd also like to know if it's a good idea to use a graph DB like Neo4j for my needs (I think performance really isn't going to be an issue, given the relatively small number of entries I'll have).
Of course, I could simply persist my map of maps myself, but I really don't want to reinvent the wheel here. I want to reuse a tried and tested DB...
(*) The reason I do not want SQL is that I'll always have this "map of maps", and the inner map is going to constantly evolve, so I don't want something too structured.
There seem to be a couple of ports of Google's LevelDB into Java:
Dain LevelDB Port (pure Java)
Dain LevelDB (JNI)
Then there is a whole list of embedded Java databases here:
Embedded java databases
Java Embedded Databases Comparison
For your use case I would recommend MapDB (http://www.mapdb.org)
It matches your requirements:
written in Java
embeddable - single jar with no dependencies
not SQL - gives you maps that are persisted to disk
open source (Apache 2 licence)
easy to backup (few files)
and has other nice features like transactions, concurrency and performance.
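To illustrate, a sketch against the MapDB 3.x API (method names differ in older releases, so check the docs for your version):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentMap;
    import org.mapdb.*;

    public class MapDbExample {
        public static void main(String[] args) {
            DB db = DBMaker.fileDB("data.db").transactionEnable().make();

            // Serializer.JAVA uses plain Java serialization, so the inner
            // HashMap values come back as Object and need a cast on read.
            ConcurrentMap<Integer, Object> map = db
                    .hashMap("main", Serializer.INTEGER, Serializer.JAVA)
                    .createOrOpen();

            Map<String, String> props = new HashMap<>();
            props.put("name", "example");
            map.put(42, props);

            db.commit();   // with transactionEnable(), changes are durable after commit
            db.close();
        }
    }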
Chronicle-Map is a nice new player in this field.
It is an off-heap Map implementation, with the ability to be persisted to disk by means of memory-mapped files.
Super-fast: it sustains millions of queries/updates per second, i.e. each query has sub-microsecond latency on average.
Supports concurrent updates (it is intended as a drop-in replacement for ConcurrentHashMap).
Special support for the property maps you mentioned: if the set of properties is fixed within the collection, it allows you to update specific properties of the value without any serialization/deserialization of the whole value (all 20 fields). This feature is called data value generation in the Chronicle/Lang project.
And many more...
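A hedged sketch of what the builder API looks like; the sizing hints are required for variable-length values, and details vary by version, so consult the Chronicle Map docs:

    import java.io.File;
    import java.io.IOException;
    import net.openhft.chronicle.map.ChronicleMap;

    public class ChronicleExample {
        public static void main(String[] args) throws IOException {
            try (ChronicleMap<Integer, String> map = ChronicleMap
                    .of(Integer.class, String.class)
                    .name("properties")
                    .entries(10_000_000)                        // expected number of entries
                    .averageValue("name=example;city=Paris")    // sizing hint for values
                    .createPersistedTo(new File("chronicle.dat"))) {
                map.put(42, "name=example;city=Paris");
            }
        }
    }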
You could look into berkeley DB
http://docs.oracle.com/cd/E17277_02/html/GettingStartedGuide/index.html
It is quite efficient at dealing with large amounts of data, and it's key/value.
I cannot really tell you more about it, since I'm discovering it myself, but it's worth a look if you have the time...
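For what it's worth, a minimal sketch of the Berkeley DB Java Edition (com.sleepycat.je) API:

    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import com.sleepycat.je.*;

    public class BdbExample {
        public static void main(String[] args) {
            EnvironmentConfig envCfg = new EnvironmentConfig();
            envCfg.setAllowCreate(true);
            Environment env = new Environment(new File("bdb-env"), envCfg); // directory must exist

            DatabaseConfig dbCfg = new DatabaseConfig();
            dbCfg.setAllowCreate(true);
            Database db = env.openDatabase(null, "mainMap", dbCfg);

            DatabaseEntry key = new DatabaseEntry("42".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry val = new DatabaseEntry("name=example".getBytes(StandardCharsets.UTF_8));
            db.put(null, key, val);   // null = non-transactional operation

            db.close();
            env.close();
        }
    }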
Check out www.jsondb.io
This is a pure-Java, embeddable, lightweight database that stores its data as files, which makes it easy to back up.
Late to the party, but you can use TayzGrid. It's open source, and its in-proc cache can be embedded in your application. It's basically an in-memory data grid or in-memory key/value store, but it also has the capability you want, i.e. to be a simple in-process embedded key/value store.
You could just stick with an XML or JSON file. Neither requires a schema, and both are fairly easy to move back and forth between disk and memory, especially if performance really doesn't matter too much (e.g. you only load the data every now and then).
The advantage is that XML and JSON are both very simple and deal with Maps pretty well.
You also have a much lighter dependency load on your application. An entire embedded DB-type system is pretty heavy if you are just persisting/un-persisting a big data structure when you need to, and not using any of the query or similar capabilities most embedded solutions add.
To pick off your requirements: it's built into Java for the most part; it's easy to back up, since it's just a file; it's highly embeddable, very much open source, and not SQL. XML can be a bit verbose and unwieldy at times, but it's a well-known domain with very rich tooling around it, so you can work with the data outside your app if needed.
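For the asker's map of maps, a JSON round trip really is a few lines. This sketch uses the Jackson library (the file name is arbitrary):

    import java.io.File;
    import java.util.Map;
    import com.fasterxml.jackson.core.type.TypeReference;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class JsonPersistence {
        public static void save(Map<Integer, Map<String, String>> data, File file) throws Exception {
            new ObjectMapper().writerWithDefaultPrettyPrinter().writeValue(file, data);
        }

        public static Map<Integer, Map<String, String>> load(File file) throws Exception {
            // TypeReference preserves the nested generic types across deserialization
            return new ObjectMapper().readValue(file,
                    new TypeReference<Map<Integer, Map<String, String>>>() {});
        }
    }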
I am writing a Java messenger for people to chat, and I am looking for a way to record the message archives on the user's computer.
I have two possibilities in mind:
Save the conversations in XML files stored in the user's documents folder.
Use SQLite, but the problem is that I don't know how to integrate it into my setup package, and I don't know if it is very useful.
What would be the best solution for you?
Thank you
Another option is JavaDB, which comes free with Java 6 (and later versions).
Before you make a choice, you should think about questions such as:
presumably you want this to be transparent to the user (i.e. no admin involved)
is performance an issue?
what happens if the storage schema needs migration?
do you need transactionality (unlikely, I suspect)
etc. It's quite possible that even a simple text file would suffice. Perhaps your best bet is to choose a simple solution (e.g. a text file), implement it, and see how far it takes you. However, provide a suitable persistence-level abstraction so that you can slot in a different solution in the future with minimal disruption.
I would go for the XML files, as they are more generic and can be opened outside your messenger in a more or less human-readable format. I use Pidgin for instant messaging, and it saves chat history in XML. Also, to read the history from your application, you can easily transform it into HTML to display it nicely.
If you use JAXB, converting Java objects to/from XML is very easy: you just put a few annotations on your classes and run them through a JAXB marshaller/unmarshaller. See http://docs.oracle.com/javaee/5/tutorial/doc/bnbay.html
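A minimal sketch of that; the class and field names are made up for illustration:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.bind.*;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement
    public class Conversation {
        public List<String> messages = new ArrayList<>();

        public static void main(String[] args) throws JAXBException {
            Conversation c = new Conversation();
            c.messages.add("hello");

            JAXBContext ctx = JAXBContext.newInstance(Conversation.class);
            Marshaller m = ctx.createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);  // human-readable output
            m.marshal(c, new File("archive.xml"));

            Conversation back = (Conversation) ctx.createUnmarshaller()
                    .unmarshal(new File("archive.xml"));
        }
    }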
Use Google's Protocol Buffers or 10gen's BSON. They are much smaller and faster.
http://code.google.com/apis/protocolbuffers/docs/javatutorial.html
http://bsonspec.org/
One issue is that these are binary representations, and you might want to make the archive transparent/readable to users.
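To give a feel for the Protocol Buffers route, a hedged sketch; ChatMessage stands for a class the protobuf compiler would generate from a hypothetical chat.proto, not a real library class:

    // chat.proto (hypothetical):
    //   message ChatMessage { string sender = 1; string text = 2; int64 timestamp = 3; }
    import java.io.*;

    public class ProtoArchive {
        public static void main(String[] args) throws IOException {
            ChatMessage msg = ChatMessage.newBuilder()
                    .setSender("alice")
                    .setText("hi")
                    .setTimestamp(System.currentTimeMillis())
                    .build();

            // writeDelimitedTo length-prefixes each message, so the archive can be appended to
            try (OutputStream out = new FileOutputStream("archive.bin", true)) {
                msg.writeDelimitedTo(out);
            }

            try (InputStream in = new FileInputStream("archive.bin")) {
                ChatMessage m;
                while ((m = ChatMessage.parseDelimitedFrom(in)) != null) {
                    System.out.println(m.getSender() + ": " + m.getText());
                }
            }
        }
    }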
I'm working on (essentially) a calendar application written in Java, and I need a way to store calendar events. This is the first "real" application I've written, as opposed to simple projects (usually for classes) that either don't store information between program sessions or store it as text or .dat files in the same directory as the program, so I have a few very basic questions about data storage.
How should the event objects and other data be stored? (.dat files, database of some type, etc)
Where should they be stored?
I'm guessing it's not good to load all the objects into memory when the program starts and not update them on the hard drive until the program closes. So what do I do instead?
If there's some sort of tutorial (or multiple tutorials) that covers the answers to my questions, links to those would be perfectly acceptable answers.
(I know there are somewhat similar questions already asked, but none of them I could find address a complete beginner perspective.)
EDIT: Like I said in one of the comments, in general with this, I'm interested in using it as an opportunity to learn how to do things the "right" (reasonably scalable, reasonably standard) way, even if there are simpler solutions that would work in this basic case.
For a quick solution, if your data structures (and of course the way you access them) are sufficiently simple, reading and writing the data to files, using your own format (e.g. binary, XML, ...), or perhaps standard formats such as iCalendar might be more suited to your problem. Libraries such as iCal4J might help you with that.
Taking into account the more general aspects of your question, this is a broader topic, but you may want to read about databases (relational or not). Whether you want to use them or not will depend on the overall complexity of your application.
A number of relational databases can be used from Java via JDBC. This should allow you to connect to the relational database (SQL) of your choice. Some of them run within their own server application (e.g. MS SQL, Oracle, MySQL, PostgreSQL), but some of them can be embedded within your Java application, for example: JavaDB (a variant of Apache Derby DB), Apache Derby DB, HSQLDB, H2 or SQLite.
These embeddable SQL databases will essentially store the data on files on the same machine the application is running on (in a format specific to them), but allow you to use the data using SQL queries.
The benefits include a certain structure to your data (which you build when designing your tables and possible constraints) and (when supported by the engine) the ability to handle concurrent access via transactions. Even in a desktop application, this may be useful.
This may imply a learning curve if you have to learn SQL, but it should save you the trouble of handling the details of defining your own file format. Giving structure to your data via SQL (often known by other developers) can be better than defining your own data structures that you would have to save into and read from your own files anyway.
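As a sketch of what the embedded route looks like (H2 is chosen arbitrarily here, and the table layout is invented for illustration):

    import java.sql.*;

    public class CalendarDb {
        public static void main(String[] args) throws SQLException {
            // "jdbc:h2:./data/calendar" stores the database in files under ./data
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/calendar")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE IF NOT EXISTS events ("
                            + "id IDENTITY PRIMARY KEY, "
                            + "title VARCHAR(255), "
                            + "starts_at TIMESTAMP)");
                }
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO events (title, starts_at) VALUES (?, ?)")) {
                    ps.setString(1, "Dentist");
                    ps.setTimestamp(2, Timestamp.valueOf("2014-05-01 09:30:00"));
                    ps.executeUpdate();
                }
            }
        }
    }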
In addition, if you want to deal with objects directly, without knowing much about SQL, you may be interested in Object-Relational Mapping frameworks such as Hibernate. Their aim is to hide the SQL details from you by being able to store/load objects directly. Not everyone likes them and they also come with their own learning curve (which may entail learning some details of how SQL works too). Their pros and cons could be discussed at length (there are certainly questions about this on StackOverflow or even DBA.StackExchange).
There are also other forms of databases, for example XML databases or Semantic-Web/RDF databases, which may or may not suit your needs.
How should the event objects and other data be stored? (.dat files, database of some type, etc)
It depends on the size of the data to be stored (and loaded), and if you want to be able to perform queries on your data or not.
Where should they be stored?
A file in the user directory (or in a subdirectory of the user directory) is a good choice. Use System.getProperty("user.home") to get it.
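For example (the ".mycalendar" directory name is just an illustrative convention; hidden "dot" directories are common on Linux, and other platforms have their own conventions):

    import java.io.IOException;
    import java.nio.file.*;

    public class DataDir {
        public static Path appDataDir() throws IOException {
            Path dir = Paths.get(System.getProperty("user.home"), ".mycalendar");
            Files.createDirectories(dir);   // no-op if the directory already exists
            return dir;
        }
    }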
I'm guessing it's not good to load all the objects into memory when the program starts and not update them on the hard drive until the program closes. So what do I do instead?
It might be a perfectly valid thing to do, unless the amount of data is so great that it would eat far too much memory. I don't think it would be a problem for a simple calendar application. If you don't want to do that, then store the events in a database and perform queries to only load the events that must be displayed.
A simple sequential file should suffice. Basically, each line in your file represents a record, or in your case an event. Separate each field in your records with a field delimiter; something like the pipe (|) symbol works nicely. Remember to store each record in the same format, for example:
date|description|etc
This way you can read back each line in the file as a record, extract the fields by splitting the string on your delimiter (|) symbol, and use the data.
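A sketch of both directions; note that the pipe has to be escaped in the regex passed to split:

    import java.io.*;

    public class FlatFileStore {
        public static void main(String[] args) throws IOException {
            // append one record per line
            try (PrintWriter out = new PrintWriter(new FileWriter("events.txt", true))) {
                out.println(String.join("|", "2014-05-01", "Dentist appointment", "09:30"));
            }

            // read the records back and split each line into fields
            try (BufferedReader in = new BufferedReader(new FileReader("events.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\\|");  // a bare "|" would mean regex alternation
                    String date = fields[0], description = fields[1];
                }
            }
        }
    }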
Storing the data in the same folder as your application should be fine.
The best way to decide how to handle the objects (for the most part) is to determine whether the amount of data you are storing is going to be large enough to have consequences for the user's memory. Based on your description, it should be fine in this program.
The right answer depends on details, but probably you want to write your events to a database. There are several good free databases out there, like MySQL and Postgres, so you can (relatively) easily grab one and play with it.
Learning to use a database well is a big subject, bigger than I'm going to answer in a forum post. (I could recommend that you read my book, "A Sane Approach to Database Design", but making such a shameless plug on a forum would be tacky!)
Basically, though, you want to read the data from the database when you need it, and update it when it changes. Don't read everything at start up and write it all back at shut-down.
If the amount of data is small and rarely changes, keeping it all in memory and writing it to a flat file is simpler and faster. But most applications don't fit that description.
I am working on my Master's project and I am looking for a substantial amount of financial data about a particular company.
Example: let's say "Apple". I want the historic prices, current market price / ratios, quarterly results and the analyst calls.
I saw a couple of posts on StackOverflow about YQL. I think I can get the current price and various ratios from Yahoo Finance for free. However, for other data there are companies like Thomson Reuters, Bloomberg, etc., but they seem to have closed systems.
Where can I get an API to fetch various data? Is there anything that will help me get that data? I am fine with raw data as well, in any format; whatever I can get. Could you please suggest an API?
A Java library under development is IdylFin, which has convenience methods to download historical data.
Disclaimer: I am the author of this library.
Stephen is right on the money: if you really want a real wealth of data, you're probably going to have to pay for it.
However, I've had success in my own private projects using the "API" spelled out here:
http://www.gummy-stuff.org/Yahoo-data.htm
I've pulled down all the stocks in the S&P 500 quite often, but if you ever publish that data, talk with Yahoo; you'll probably have to license it.
By the way, all this data is in CSV format, so get a CSV reader/converter etc.; they're easy to find.
This is Yahoo Finance historical data for "Apple":
http://in.finance.yahoo.com/q/hp?s=AAPL
There is a link at the bottom to download the data. Maybe this could help.
I will suggest a couple of APIs that have financial data that is sometimes hard to find (e.g. quarterly results, analyst calls):
1) http://www.zacksdata.com/zacks-data-api
2) http://www.mergent.com/servius
Both have free trials available.
(Disclosure: My company manages both of these APIs)
A Java example that fetches data from Yahoo Finance is given here: Obba Tutorial: Using a Java class which fetches stock quotes from finance.yahoo.com
I have tackled this problem in the past.
For price history data, I used Yahoo's API. When I say API, I mean I was making an HTTP GET request for a CSV file of price history data. Unfortunately, that only gets you data for one company at a time, for a time span you specify. So I first made a list of all the ticker symbols and iterated over that, calling Yahoo's API for each. You might be able to find a website that lists ticker symbols too, and just periodically download that list.
Do this too often and too fast, and their website just might block you. I added some code to limit how frequently I made HTTP requests. I also persisted my data so I would not have to fetch it again. I would always persist the raw/unprocessed form of the data, because your code could change in ways that make it tough to use anything else. Avro/Thrift might be an exception, since those support schema evolution.
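The pattern looked roughly like this; the ichart.finance.yahoo.com endpoint shown has since been retired, so treat the URL as historical, but the raw persistence and throttling are the parts worth copying:

    import java.io.*;
    import java.net.URL;
    import java.util.Random;

    public class PriceHistoryFetcher {
        public static void main(String[] args) throws Exception {
            Random random = new Random();
            for (String symbol : new String[] {"AAPL", "MSFT"}) {
                URL url = new URL("http://ichart.finance.yahoo.com/table.csv?s=" + symbol);
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream()));
                     PrintWriter raw = new PrintWriter(new FileWriter(symbol + ".csv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        raw.println(line);   // persist the raw response before any processing
                    }
                }
                Thread.sleep(1000 + random.nextInt(2000));  // randomized delay between requests
            }
        }
    }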
For other kinds of data, you may not have any API that gives you nice CSV files. I had to cope with that problem many times. Here is my advice.
Sometimes a website calls a RESTful web service behind the scenes; you can discover that by using Firebug. Sometimes it will also require certain headers, which you can also discover using Firebug.
If you are forced to work with HTML, there are several Java libraries that can help you. Apache's HttpComponents (HttpClient) is a library you can use to easily make HTTP requests and handle their responses. Google has an HTTP client jar too, which is probably worth investigating.
The JSoup API is excellent at parsing HTML data, even when it is poorly formatted and not XHTML. It works with XML too. Instead of traversing or visiting nodes in the JSoup hierarchy, learn its CSS-style selector syntax and use that to select what you want. The website may periodically change the format of its pages; that should be easy to cope with and fix if you're using JSoup, and tough to cope with otherwise.
If you have to work with JSON, use the Jackson library to parse it.
If you have to work with CSV, use the OpenCSV library to parse and handle it.
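A minimal JSoup sketch; the URL and selector are placeholders for whatever page and markup you are actually scraping:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    public class ScrapeExample {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://example.com/quotes").get();
            for (Element row : doc.select("table#quotes tr")) {  // CSS-style selector
                Elements cells = row.select("td");
                if (!cells.isEmpty()) {
                    System.out.println(cells.get(0).text() + " -> " + cells.get(1).text());
                }
            }
        }
    }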
Also, always store the data raw, and avoid making unnecessary HTTP requests so you don't get blocked. I have been blocked by Google Finance a couple of times; they can do it. Fortunately, the block does expire. You might even want to add a random wait between requests.
Have you tried the Google Finance API? (Please google it.) I am using it to track my portfolio. Could you try http://code.google.com/apis/finance/docs/finance-gadgets.html? There is an example of a custom widget, and it might tell you whether you are barking up the right tree.
You are really asking about a free financial data service ... rather than an API.
The problem is that the data is a valuable commodity. It probably has cost the providers a lot of money to set up their systems, and it costs them even more money to keep those systems running. Naturally, they want a return on their investment, and they do this (in part) by selling their data / services.
(In the case of Yahoo, Google, etc, the data is bought from someone else, and Yahoo/Google will be subject to restrictions on how they can use it. Those restrictions will be reflected in respective ToS; e.g. you are only allowed to access the services "for personal use".)
I think your best bet would be to approach a number of the financial data providers, and ask if they can provide you with free access (subject to whatever restrictions they might want to impose) to their data services. You could get lucky ...
Good data is not free. It's as simple as that. The reason is that all data is ultimately licensed from an exchange like the NYSE or NASDAQ.
If you can spend some money, high-resolution historical data is available from Automated Trader.
You should also talk to the business school at your university. If they have finance master's/PhD students or a master's program in financial engineering, they should have large repositories of high-resolution data for their students.
If you make your question more detailed I can provide a more detailed answer.
This is something I kick myself about at least once a week. Way back when the internet consisted of Gopher and all that, you could log into FTP servers at the NASDAQ and NYSE and download all kinds of stock history files for free. I had done it, even imported it into a database and did some stuff with it... but that was probably 10 computers ago; it's LONG gone now.