I am currently developing a Java program that collects IP traffic information and draws a graph of it.
So I need to use rrd4j with Java to save the traffic data (currently shown in a JTable) into an RRD first, and then use RRDTool to draw the graph.
But my problem is how to get the information stored in the RRD, and also how to create the RRD4J database in the first place.
thank you
First of all I would check the rrd4j project home page and read the documentation. The main page has a usage example showing how to create the database; it's quite clear and doesn't need much explanation, since it would just be copy and paste.
Now, about storing the information: first you need to define how much data you want to store and of what type. For example, in the project I'm working on we aggregate data daily, weekly, monthly and yearly. You also need to specify the frequency of data collection, because it really makes a difference whether it's every 5 seconds or every 5 minutes.
You should also have a look at the original RRDTool project homepage and at Ganglia, particularly the parts describing RRD file creation; it will really help you understand how RRDTool and data storage work.
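To make that concrete, here is a minimal sketch of creating an RRD and pushing a single sample into it with rrd4j. The file name, datasource name, step and archive settings are just example values you would replace with your own:

    import org.rrd4j.ConsolFun;
    import org.rrd4j.DsType;
    import org.rrd4j.core.RrdDb;
    import org.rrd4j.core.RrdDef;
    import org.rrd4j.core.Sample;
    import org.rrd4j.core.Util;

    public class TrafficRrd {
        public static void main(String[] args) throws Exception {
            // Define the RRD: one expected sample every 300 seconds (5 minutes)
            RrdDef rrdDef = new RrdDef("traffic.rrd", 300);
            // One GAUGE datasource called "bytes"; heartbeat 600 s, no min/max limits
            rrdDef.addDatasource("bytes", DsType.GAUGE, 600, Double.NaN, Double.NaN);
            // Keep one day of 5-minute averages and one year of daily averages
            rrdDef.addArchive(ConsolFun.AVERAGE, 0.5, 1, 288);
            rrdDef.addArchive(ConsolFun.AVERAGE, 0.5, 288, 365);

            // Creating an RrdDb from the definition writes the .rrd file to disk
            RrdDb rrdDb = new RrdDb(rrdDef);

            // Store one value, e.g. a number taken from your JTable
            Sample sample = rrdDb.createSample();
            sample.setTime(Util.getTime());
            sample.setValue("bytes", 123456);
            sample.update();

            rrdDb.close();
        }
    }

From there you can either feed the file to RRDTool for graphing, or use rrd4j's own RrdGraphDef/RrdGraph classes to render the graph directly from Java.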
I'm building an application that downloads a set of images from a website, extracts some features from them and then allows a user to compare an image she submits to the downloaded set, to see which one is the closest. At the moment the application downloads the images and extracts the features from them. Then the image and the feature get wrapped in an object and stored in a map, with the key as the name of the image, and the value as the aforementioned wrapped object.
Because this is stored in memory, each time I start the application it has to go through the quite expensive process of downloading and feature extraction. It would be much quicker if it could just load this info from disk, but I'm not sure on the best way to go about it - I've thought about these options:
RDBMS: something like Postgres or SQLite
NoSQL: something like Voldemort or Redis
Serialisation: use built-in Java methods to write objects to a file (could also be used in conjunction with a DB though...)
I want it to be really lightweight; I want to keep the application as small as possible and keep configuration down to a minimum. For this reason serialisation seems like the way to go, but I'd like a second (or more) opinion on that, because something about doing it that way just feels wrong. I can't quite put my finger on why I feel like that...
I should also say that users can add images to the set while the application is running, and I'd like to save those images too.
I wouldn't recommend serialization - just too many pitfalls.
If what you have is really just a map, then I think any of the key-value stores (like Redis) would be appropriate.
If you have more complex data, then you might want to consider a database (whether SQL or NoSQL).
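For example, if each entry is just a feature vector keyed by image name, a rough sketch of the key-value approach with the Jedis Redis client could look like this. The "features:" key prefix and the double[] encoding are invented for illustration; adapt them to your wrapped object:

    import redis.clients.jedis.Jedis;
    import java.util.Arrays;
    import java.util.stream.Collectors;

    public class FeatureStore {
        private final Jedis jedis = new Jedis("localhost", 6379);

        // Key: image name, value: feature vector encoded as a plain string
        public void put(String imageName, double[] features) {
            String encoded = Arrays.stream(features)
                    .mapToObj(Double::toString)
                    .collect(Collectors.joining(","));
            jedis.set("features:" + imageName, encoded);
        }

        // Load a feature vector back at startup or on demand
        public double[] get(String imageName) {
            String encoded = jedis.get("features:" + imageName);
            if (encoded == null) return null;
            return Arrays.stream(encoded.split(","))
                    .mapToDouble(Double::parseDouble)
                    .toArray();
        }
    }

That keeps the setup light (a single Redis instance, no schema), while still letting you add new images while the application is running.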
I have an issue with my webapp: it's an intranet search webapp that queries Sphinx http://sphinxsearch.com/ (the actual search engine) with a query typed by the user. The problem is that the result set could be very big (even for an intranet), so I want to save the results on the server to handle a sort of lazy loading of the data.
I was planning to use Hibernate, but... if the result set is big and I save, for example, 40,000 items... will that be too much for Hibernate? And what about retrieving them?
Any suggestions?
Thanks in advance
You can use a limit and offset in Sphinx itself: http://sphinxsearch.com/docs/2.1.3/api-func-setlimits.html. From that doc, a word about limiting the results on the Sphinx server (the limit is 1000 by default):
One thousand records is enough to present to the end user. And if you're thinking about pulling the results to application for further sorting or filtering, that would be much more efficient if performed on Sphinx side.
Maybe I'm missing something, but why not just get it piecemeal directly out of Sphinx? You can just get a small page's worth of results at a time with setLimits.
That way you only download the data as you need it.
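Roughly, with the Java client (sphinxapi) that ships with Sphinx, paging could look like the sketch below; the index name, host/port and page size are placeholders for your setup:

    import org.sphx.api.SphinxClient;
    import org.sphx.api.SphinxException;
    import org.sphx.api.SphinxMatch;
    import org.sphx.api.SphinxResult;

    public class PagedSearch {
        public static void main(String[] args) throws SphinxException {
            SphinxClient client = new SphinxClient();
            client.SetServer("localhost", 9312);

            int pageSize = 50;   // placeholder page size
            int offset = 0;      // advance this as the user pages through results

            client.SetLimits(offset, pageSize);  // fetch only one page per request
            SphinxResult result = client.Query("user query here", "myindex");

            System.out.println("Total matches: " + result.totalFound);
            for (SphinxMatch match : result.matches) {
                // resolve match.docId against your own data store
                System.out.println("doc id: " + match.docId);
            }
        }
    }

Each request only pulls one page from searchd, so there is nothing to cache in Hibernate at all.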
In my project, we have 2 REST calls which take too much time, so we are planning to optimize that. Here is how it works currently: we make the first call to system A and then pass the response to system B for further processing. Once we get the response from system B, we have to manipulate it further before passing it to the UI layer, and this entire process takes a lot of time. We planned on using Solr/Lucene, but since we are not the data owners, we can't implement that. Can someone please shed some light on how best this can be handled? We are using Spring MVC and Spring Web Flow. Thanks in advance!!
[EDIT:] This is not the actual scenario and I am writing this as an example for better understanding. Think of this as making a store locator call for a particular zip to get a list of 100 stores and then sending those 100 stores to another call to get a list of inventory etc. So, this list of stores would change for every zip code and also the inventory there.
If your query parameters to System A / System B are frequently the same, you can add a caching framework to your code. If you use Spring 3, you can enable caching easily with an @Cacheable annotation on the code calling System A. See:
http://static.springsource.org/spring/docs/3.1.0.M1/spring-framework-reference/html/cache.html
The cache abstraction will cache the method's result, so on a cache hit the processing code inside is skipped as well.
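A minimal sketch, assuming a CacheManager is already configured and using your store-locator example; the class, method, and cache names here are invented for illustration:

    import java.util.List;

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class StoreLocatorService {

        // Results are cached per zip code: repeated requests for the same zip
        // return the cached list and skip the slow call to System A entirely.
        @Cacheable("stores")
        public List<String> findStores(String zipCode) {
            return callSystemA(zipCode);   // the expensive REST call
        }

        private List<String> callSystemA(String zipCode) {
            // placeholder for the real remote call to System A
            throw new UnsupportedOperationException("call System A here");
        }
    }

Whether this helps depends on how often different users ask for the same zip code; if every request is unique, a cache won't save you anything.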
I have been wondering about this, which is why I have put off learning app development for so long. Let's say I was making a school timetable app, where all the user had to do was enter the name of their course, and the app would then show the timetable for that course.
The question is: can I get the information from the college, or do I have to hard-code it into the database myself?
How does one get hold of the information they need to use?
Thanks
It depends. Does the college provide you an interface you can use? Probably not one that was meant to be used by a third party app.
If not, then you have to somehow get the information into your database yourself, either by parsing their online HTML schedules or by entering it by hand (obviously one of the last options to consider).
If the college had a website that you could view, you could scan the page for class listings and pull that data in - but more than likely that sort of data will need to be entered manually by you when you ship the app.
If the college has a website and it provides an RSS feed for the timetable, you can parse that XML and show the parsed data. Alternatively, you can save the timetable information for each course in a database and display it using cursors.
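If you do end up scraping an HTML page, a rough sketch with the jsoup library could look like this; the URL and the CSS selectors are obviously made up and depend entirely on how the college's page is structured:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class TimetableScraper {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and markup; adjust the selectors to the real page
            Document doc = Jsoup.connect("http://example-college.edu/timetable?course=CS101").get();

            for (Element row : doc.select("table.timetable tr")) {
                String day  = row.select("td.day").text();
                String time = row.select("td.time").text();
                String room = row.select("td.room").text();
                System.out.println(day + " " + time + " in " + room);
                // ...or insert the row into your app's local database here
            }
        }
    }

Keep in mind that scraping breaks whenever the college changes its page layout, which is one more reason manual entry or an official feed is often the safer choice.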
What's the best way to do spreadsheet-like calculations in a programming language? Example: a multi-user application needs to be available over the web that crunches columns and cells of numbers like a spreadsheet, based on user submissions. What are the best data structures/database models/patterns to handle this type of work, so that the different columns are handled efficiently and easily in PHP, Java, or even .NET? Is it better to use data structures within the language, or is it better to use a database? If using a database is the way, how does one go about doing this?
To do the actual calculation, look at graph theory. Basically you want to represent each cell as a node in a graph and each dependency as a directed edge. Next, do a topological sort to calculate the value of each cell in the right order.
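As an illustration of that idea, here is a small sketch in Java: each cell is a node, an edge from A to B means B depends on A, and a topological sort (Kahn's algorithm) gives an order in which every cell can be evaluated after its inputs. The cell names and dependencies in main are just example data:

    import java.util.*;

    public class CellOrder {
        // Returns cell names in an order where every cell comes after the
        // cells it depends on; throws if there is a circular reference.
        static List<String> topologicalOrder(Map<String, List<String>> dependsOn) {
            Map<String, Integer> inDegree = new HashMap<>();
            Map<String, List<String>> dependents = new HashMap<>();

            for (String cell : dependsOn.keySet()) {
                inDegree.putIfAbsent(cell, 0);
                for (String dep : dependsOn.get(cell)) {
                    inDegree.putIfAbsent(dep, 0);
                    inDegree.merge(cell, 1, Integer::sum);
                    dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(cell);
                }
            }

            Deque<String> ready = new ArrayDeque<>();
            inDegree.forEach((cell, deg) -> { if (deg == 0) ready.add(cell); });

            List<String> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                String cell = ready.remove();
                order.add(cell);
                for (String dependent : dependents.getOrDefault(cell, List.of())) {
                    if (inDegree.merge(dependent, -1, Integer::sum) == 0) ready.add(dependent);
                }
            }
            if (order.size() != inDegree.size())
                throw new IllegalStateException("circular reference between cells");
            return order;
        }

        public static void main(String[] args) {
            // C1 = A1 + B1, B1 = A1 * 2, A1 is a plain value
            Map<String, List<String>> deps = new HashMap<>();
            deps.put("A1", List.of());
            deps.put("B1", List.of("A1"));
            deps.put("C1", List.of("A1", "B1"));
            System.out.println(topologicalOrder(deps)); // e.g. [A1, B1, C1]
        }
    }

Whether the cell values themselves live in memory or in a database, the dependency graph and evaluation order work the same way.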
Aspose.Cells (formerly Aspose.Excel.Web) is a good way to get the functionality you are looking for.
Unless you are asking more "How is it done?" than "I need to do it", in which case I would look at the other answers given.
Along the lines of "I need to do it"
Microsoft has Excel Services which does just what you want.
Spreadsheet operations on the server. It is available via a web services interface, so you can connect and drive calculations from Java, PHP, .NET, whatever.
Excel Services is part of Sharepoint 2007.
Resolver One is a spreadsheet app made in IronPython.
There is an explanation of the overall mechanic it uses for calculating user-generated equations [pythonology.org].
The relevant image shows Resolver One's overall algorithm.
I should note that users can write Python code to be interpreted both in the cells and in a special 'outside of sheet' place.
Have a look at another question here on SO, from which I reused my answer.
I can't tell you exactly how to do it, but I would recommend looking at the code of PHPExcel. PHPExcel is a library that allows you to create Excel files from PHP.
Simplified, the workflow of PHPExcel looks like this:
1. Create an empty Excel file object
2. Add cells (with either data or formulas) to the "Excel file"
3. Call the create function, which generates the file itself
In your case you would have to replace 3. with something like "Create web interface".
Therefore I would recommend looking at the code of this open source project to see how the general structure works. This should help you solve your problem.
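Since the rest of this thread leans towards Java, here is the same create-file / add-cells-and-formulas / write-out workflow sketched with Apache POI rather than PHPExcel (so not PHPExcel's API itself, just the equivalent shape of the code):

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;
    import java.io.FileOutputStream;

    public class WorkbookSketch {
        public static void main(String[] args) throws Exception {
            // 1. Create an empty workbook object
            Workbook wb = new XSSFWorkbook();
            Sheet sheet = wb.createSheet("calc");

            // 2. Add cells with data and formulas
            Row row = sheet.createRow(0);
            row.createCell(0).setCellValue(2);          // A1 = 2
            row.createCell(1).setCellValue(3);          // B1 = 3
            row.createCell(2).setCellFormula("A1*B1");  // C1 = A1*B1

            // 3. Generate the file itself
            try (FileOutputStream out = new FileOutputStream("calc.xlsx")) {
                wb.write(out);
            }
            wb.close();
        }
    }

In a web app, step 3 would be replaced by whatever interface presents the calculated values to the user, exactly as suggested above.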
I once used a binary tree to store the output of parsing a string using BODMAS. Each node was an operation between two other nodes, which could be a number, a variable or another operation.
So y = x * x + 2
became:
          +
         / \
        *   2
       / \
      x   x
Sadly this was at school in Pascal and is stored on a 5 1/4" disk, so you don't want it :)
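The same idea in Java (rather than Pascal) might look roughly like this: a node is either a constant, a variable, or an operator over two child nodes. The class names are just for illustration:

    import java.util.Map;

    // Minimal expression-tree sketch
    abstract class Expr {
        abstract double eval(Map<String, Double> vars);
    }

    class Num extends Expr {
        final double value;
        Num(double value) { this.value = value; }
        double eval(Map<String, Double> vars) { return value; }
    }

    class Var extends Expr {
        final String name;
        Var(String name) { this.name = name; }
        double eval(Map<String, Double> vars) { return vars.get(name); }
    }

    class Op extends Expr {
        final char op;
        final Expr left, right;
        Op(char op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
        double eval(Map<String, Double> vars) {
            double l = left.eval(vars), r = right.eval(vars);
            switch (op) {
                case '+': return l + r;
                case '-': return l - r;
                case '*': return l * r;
                case '/': return l / r;
                default:  throw new IllegalArgumentException("unknown operator " + op);
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            // y = x * x + 2, as in the tree above
            Expr y = new Op('+', new Op('*', new Var("x"), new Var("x")), new Num(2));
            System.out.println(y.eval(Map.of("x", 3.0)));  // prints 11.0
        }
    }

Evaluating a cell is then just a recursive walk of its tree once the cells it references have been evaluated.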
SpreadsheetGear for .NET will let you load Excel workbooks, plug in values, calculate and then get the results.
You can see a few simple ASP.NET calculation samples here, other ASP.NET samples here and download a free trial here.
Disclaimer: I own SpreadsheetGear LLC
I must point out that Google Spreadsheets already does this kind of thing.