Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I'm trying to count how many times each of several IDs appears in a database table, and then use those counts as the data for a JFreeChart.
I'm currently not sure how to do this and haven't been able to find example code online.
Well, I'm certainly no expert, but I recently did something similar in an application. You can hard-code the required SQL query strings into your application code and use them to retrieve the required data from your database. You will need a database connector for this; which connector depends on the language you are writing in and the database you are using.
You will receive result sets where the data from your database is returned as Strings. This includes data that was stored in your database in numeric format, so you may need to convert those Strings to another type if that is what you require. You then feed the result set into a collection such as an ArrayList (if you are using Java, for example).
Since you are trying to count IDs, you can use the search methods of whatever collection you choose to tell you whether the strings in it are duplicates; you may need a Set (a collection that cannot contain duplicates) for comparison purposes. There is not much detail in your question, but this should help: just count the duplicates in your collection and keep track of the numbers.
At the end of this you will have a collection of numeric values, so you simply feed it to a class that imports the required JFreeChart modules and uses the data to create a chart.
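The counting step described above can be sketched with a plain HashMap over the IDs pulled from the result set. This is only a sketch; the list in `main` stands in for values read from a JDBC ResultSet:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IdCounter {

    // Count how many times each id occurs in the list built from the result set.
    public static Map<String, Integer> countIds(List<String> ids) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String id : ids) {
            counts.merge(id, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // In a real application this list would come from the JDBC ResultSet.
        List<String> ids = new ArrayList<>(List.of("u1", "u2", "u1", "u3", "u1"));
        Map<String, Integer> counts = countIds(ids);
        System.out.println(counts);
        // Each entry could then be fed to a JFreeChart dataset, e.g.
        // dataset.addValue(count, "ids", id);
    }
}
```

Note that the database can also do the counting itself with `SELECT id, COUNT(*) FROM your_table GROUP BY id`, which avoids pulling every row into the application.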
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I would like to know which is the right collection to use in Java to store and search strings among millions of entries.
Assume you want it for String.
Assume you want it for objects, and search for multiple values.
Please note that I am looking for the best performance to get search results quickly, and want to know which Java collection can do this. The input is an arbitrary string and is not necessarily sorted.
Actually, if you want to search such a large data set, none of the collections available in Java would be sufficient, because you would need to keep all of the data in memory, which would require a really powerful machine.
However, there are existing solutions to your problem; this is called full-text search.
Take a look at Apache Lucene or Elasticsearch (which uses Apache Lucene under the hood).
For a simpler solution you could also use any relational database, which should also do the trick.
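For completeness: if the data does happen to fit in memory and you only need exact-match lookup (rather than the substring/relevance queries Lucene handles), a hash-based collection is the fastest stdlib option. A minimal sketch:

```java
import java.util.HashSet;
import java.util.Set;

public class MembershipLookup {
    public static void main(String[] args) {
        Set<String> index = new HashSet<>();
        // Imagine millions of strings loaded here; each contains() call
        // stays O(1) on average regardless of the set's size.
        index.add("alpha");
        index.add("beta");
        System.out.println(index.contains("alpha")); // true
        System.out.println(index.contains("gamma")); // false
    }
}
```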
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have to build a search feature: the GUI will provide a search field for finding objects in an Oracle database. There are currently 30K objects to search, and they will grow over time, roughly 300-400 per month.
As part of the requirement, when the user types any text into the search field, for example "ABC", all objects in the DB that contain "ABC" should appear in a datatable, much like the system predicting results based on what the user has typed.
Question is how to architect such feature?
A simple way to do this is to load everything into a JavaScript object in the GUI and run the search on it there. Since JS is ridiculously fast, performance won't be an issue.
Another way is to query the database every time the user types text into the search field. This does not seem convenient, as it will put unnecessary load on the database.
Is there a better way to architect this feature? Please share thoughts.
Premature optimization is seldom useful.
300-400 new objects per month on a 30K base is nothing at all for any DB to handle.
Loading all 30K objects at once in the browser is awful and may hurt performance, while querying the DB will not run into trouble until you have lots and lots of users hitting it.
You should build the service on top of the database, and then, if/when you reach a bottleneck, you can think about optimization tricks such as caching frequent queries.
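The "caching frequent queries" trick mentioned above can be sketched with a small LRU cache built on `LinkedHashMap`'s access-order mode; the key is the typed search string and the value is whatever the application stores results as. This is an illustration, not a production cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A small LRU cache for search results: once maxEntries is exceeded,
// the least recently accessed entry is evicted automatically.
public class QueryCache<V> extends LinkedHashMap<String, V> {
    private final int maxEntries;

    public QueryCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, i.e. LRU behaviour
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > maxEntries;
    }
}
```

Usage would be something like `cache.computeIfAbsent(term, t -> runDatabaseQuery(t))`, where `runDatabaseQuery` is a hypothetical method that hits Oracle only on a cache miss.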
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
While working on a Java web application I was wondering whether my model layer was written as it should be. For instance, let's say we have a table USER in our SQL database consisting of 15 columns. Now, when we SELECT all of the columns with SQL, we map them to a Java class, serialize via JSON, and send the result over the network to some view to show it on screen.
In a second scenario, we want to show only 2 columns on screen, so we do SELECT c1, c2 FROM USER. That's where my question comes in: am I supposed to map those columns to the same Java model class, or should I create a new mapper and class to fit them? Both approaches seem to have drawbacks. A separate class for each query is more work, but it ensures you always know what data it contains, rather than checking for nulls or working with Optionals, and it keeps you from mapping columns you don't actually need.
What is your opinion? Thanks a lot!
Technically you could reuse the same User class for the full 15-attribute entity as well as the partial 2-attribute one. But that comes with a price: every time you see an instance of the User class in the code, you will have to think about whether it is the full entity or the partial one, and which fields may or may not be null. This makes it much harder to reason about the code.
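The per-query-class approach can be sketched with Java records; the type names and columns here are hypothetical placeholders, not anything from the question's schema:

```java
public class UserModels {

    // Full entity, mapped from SELECT * : every column is guaranteed populated.
    record User(long id, String name, String email, String address) {}

    // Narrow projection, mapped from SELECT name, email : only the two
    // selected columns exist at all, so nothing can be accidentally null.
    record UserSummary(String name, String email) {}

    public static void main(String[] args) {
        UserSummary s = new UserSummary("Ada", "ada@example.com");
        System.out.println(s.name() + " <" + s.email() + ">");
    }
}
```

The type system then documents exactly what each query returns: a method receiving a `UserSummary` cannot even attempt to read an address.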
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
The app fetches data from a server. The data is returned in an array. I need to check whether one of the array items starts with a value specified in the app, and if possible without iterating, because iterating slows down the app significantly.
If you have just an array data structure, there is no way to check what it contains without actually looking into it.
However, if you use a different data structure such as a HashMap (which is built on top of an array), you can typically look up a key like "101" in O(1) time. You can also check map.isEmpty() in O(1) time.
In short, if a simple operation is taking too long, chances are you need a different data structure (or possibly more than one).
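For the "starts with" case specifically, a sorted set answers the question in O(log n) instead of a full scan (the HashMap lookup above covers exact keys). A sketch using the stdlib `TreeSet`:

```java
import java.util.TreeSet;

public class PrefixCheck {

    // Returns true if any element of the sorted set starts with the prefix.
    public static boolean anyStartsWith(TreeSet<String> items, String prefix) {
        // ceiling() finds the smallest element >= prefix in O(log n);
        // if any element starts with the prefix, that element is it.
        String candidate = items.ceiling(prefix);
        return candidate != null && candidate.startsWith(prefix);
    }

    public static void main(String[] args) {
        TreeSet<String> items = new TreeSet<>();
        items.add("101-foo");
        items.add("205-bar");
        System.out.println(anyStartsWith(items, "101")); // true
        System.out.println(anyStartsWith(items, "300")); // false
    }
}
```

Building the set is O(n log n) once, after which every prefix check avoids iterating the whole collection.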
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
Simple question: any ideas how this should properly be done? I have 3 txt files with lots of information. I created a class that reads the data from the txt files and, depending on the file, returns it as a list of DTOs (yes, the information can be bundled into such logical units). After that, the client uses a DAO to insert that list into a local database (SQLite). My concern is that such a list could be memory-demanding. Should I avoid building the list and instead insert the data via the DAO directly, without bundling everything into DTOs and a final list?
You are asking a good question and partially answering it yourself.
Yes, sure: if you really have a lot of information, you should not read all of it from the file and only then store it in the DB. You should read the information chunk by chunk, or even, if possible (from the application's point of view), line by line, and store each line in the DB.
In this case you will need memory for only one line at any time.
You can design the application as follows:
A file parser that returns Iterable<Row>.
A DB writer that accepts Iterable<Row> and stores the rows in the DB.
A manager that calls both.
In this case the logic responsible for reading and writing the file is encapsulated in separate modules, and no extra memory consumption is required.
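The three-part design above can be sketched as follows. `Row` is a hypothetical DTO, the reader is fed from an in-memory string for the sake of a self-contained example, and the DB insert is stubbed with a print; a real writer would call the DAO:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.Iterator;
import java.util.stream.Stream;

public class StreamingImport {

    // Hypothetical DTO for one parsed line.
    record Row(String field) {}

    // "File parser": exposes the source line by line as a lazy stream,
    // so only the current line needs to be in memory.
    static Stream<Row> parse(BufferedReader reader) {
        return reader.lines().map(Row::new);
    }

    // "DB writer": consumes rows one at a time; the DAO insert is stubbed
    // out with a print here.
    static void write(Iterator<Row> rows) {
        while (rows.hasNext()) {
            Row row = rows.next();
            System.out.println("INSERT " + row.field());
        }
    }

    // "Manager": wires the two together.
    public static void main(String[] args) throws IOException {
        try (BufferedReader reader =
                 new BufferedReader(new StringReader("a\nb\nc"))) {
            write(parse(reader).iterator());
        }
    }
}
```

Because the stream is lazy, swapping the `StringReader` for `Files.newBufferedReader(path)` keeps the same one-line-at-a-time memory profile on real files.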
Do not return a list, but an iterator, as in this example: Iterating over the content of a text file line by line - is there a best practice? (vs. PMD's AssignmentInOperand)
You have to modify that iterator to return your DTO instead of String:
for (MyDTO line : new BufferedReaderIterator(br)) {
    // do some work
}
Now you will iterate over the file line by line, but return DTOs instead of raw lines. Such a solution has a small memory footprint.