I have some questions regarding an article management system.
I am thinking of making a website where people become members and write articles; they can publish them, rank them, etc.
I have been googling for the past two weeks to figure out which technology is best, and how to store the articles so that search engines (like Google, Yahoo, etc.) can find them.
If the articles are stored as HTML somewhere on my server, then Google's spider programs will be able to pick them up for search results. But if I store the content of my articles in MySQL (the database I want to use), how will search engines rank my website's articles?
I am really confused; please guide me.
I also need to know whether there is any open-source PHP article management script that I can update or change to suit my needs and that has not been hacked. Or a Java content management script, or anything else that can save me the time of developing this whole thing.
I would really appreciate it.
Generally, if you store the content in a database, you have scripts which serve up that content, and thus search engine spiders index the served versions of the articles.
There are many content management systems out there; which one you choose is really a subjective decision. Whether or not something "has been hacked" is a poor indicator of whether it can currently be, or might in the future be, compromised: the developers of CMS software tend to patch known holes, and it's impossible to predict future holes from past ones. So really, your best bet is to find something with solid support and active development, and to patch frequently as security updates are released.
As others have said, storing article data in a database is no problem. The articles will get rendered into HTML by some script, and displayed on your site, where search engines will find them. There are a bunch of techniques to improve how well your articles will show up in search.
In this day and age, I wouldn't recommend rolling your own system. There are a great number of off-the-shelf software packages that can handle your requirements. WordPress is a very popular blogging system, written in PHP (with MySQL), that will probably meet all of your requirements. It supports multiple authors (and various roles for authors, such as author/editor/administrator), commenting/discussion, and has a huge array of plugins that provide additional or altered functionality. It's well documented (both user and developer documentation), actively maintained, and pretty good overall.
If WordPress doesn't float your boat, I'd look around at some of the other PHP-driven blogging tools. There are a ton of them, it's very likely that one will fit your needs, and you can avoid reinventing the wheel for the 900th time.
I am sorry, I still don't understand. Here is an example: let's say user1 submits article1 and the content gets stored in the database. Now, on the home page there is a link, "How to train your pet". A user clicks this link and it goes to a servlet, which pulls the article content and information from the database and generates output. But what does it generate: an HTML file? Will it save the output as an HTML file on the server, so that the next time another user clicks "How to train your pet" on the homepage, he will be directed to this generated HTML file?
Or is it the other case, where the servlet generates the output in the browser, where the user reads, votes, ranks, etc.? In that case there is no HTML file, so how would search engines rank this article when the file doesn't exist? It's so confusing.
If the articles are stored as HTML somewhere on my server, then Google's spider programs will be able to get them for search results. But if I store the content of my articles in MySQL (the database I want to use), how will search engines rank my website's articles?
It doesn't matter how they are stored, only that they are addressable via HTTP URIs. Browsers don't access data in databases; they make requests to web servers (which might run programs that fetch data from a database, or might fetch data from files on the file system).
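For illustration, here is a minimal sketch of what such a server-side program can look like as a Java servlet. The MySQL table articles(id, title, body) and the DataSource setup are hypothetical, not taken from any particular CMS; to a search engine spider, a URL like /article?id=42 served by this servlet is indistinguishable from a static HTML page.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class ArticleServlet extends HttpServlet {
    private DataSource dataSource; // in practice injected or looked up via JNDI

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        int id = Integer.parseInt(req.getParameter("id")); // e.g. /article?id=42
        resp.setContentType("text/html;charset=UTF-8");
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT title, body FROM articles WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery();
                 PrintWriter out = resp.getWriter()) {
                if (rs.next()) {
                    // This generated HTML is exactly what a crawler indexes.
                    // Real code would HTML-escape any user-supplied values.
                    out.printf("<html><head><title>%s</title></head><body>",
                               rs.getString("title"));
                    out.printf("<h1>%s</h1><div>%s</div></body></html>",
                               rs.getString("title"), rs.getString("body"));
                } else {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                }
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```

Nothing is written back to disk: the HTML exists only in the HTTP response, and the same URL produces it fresh for every visitor, including the crawler.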
I need to know whether there is any open-source PHP article management script that I can update or change to suit my needs and that has not been hacked. Or a Java content management script, or anything that can save me the time of developing this whole thing.
Hundreds, in both Java and PHP.
I am writing a service which stores millions of files (20-30 MB each) on disk, and I need to write a search function to find a file by name (there is no need to search file content), or to view the files in an explorer (for example, navigating a folder structure in the browser). I want to make it fast, reliable, and simple, in Java. Say I plan to run two services, both of which can be used to upload a file or search files by a name pattern. What would be the best technology/approach to implement this? Store the file on disk, record its path and name in a database, search against the database, and fetch matches from disk by the stored path? Any other good ideas? I thought about Elasticsearch, but it looks like a heavy solution.
This question is too broad and not really in the format of SO (concrete programming questions, mostly with code snippets, that address a concrete technical difficulty with a given set of technologies).
There are many ways to fulfill your requirements. Yet, based solely on the information presented in your question, it's impossible to recommend something, because we don't really know your requirements. I'll explain:
I plan to run two services, both of which can be used to upload a file or search files by a name pattern.
Does this mean that the file system has to be distributed?
If so, consider cloud solutions like AWS's S3.
If you can't run in the cloud, you can find comprehensive lists of distributed filesystems online.
Elasticsearch can also work as a search engine, of course, but it's a full-fledged search engine, so it looks like overkill to me in this case.
You might want to work directly with Lucene, so that you won't need to run an additional process that might also fail (ES is built on top of Lucene). Lucene will store its index directly on the filesystem, again, if that meets the requirements.
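A minimal sketch of that Lucene route, assuming lucene-core on the classpath and a Lucene 7/8-era API; index each file's name and path once, then answer wildcard name queries from the index. The field names and paths are illustrative.

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.store.FSDirectory;

public class FileNameIndex {
    public static void main(String[] args) throws Exception {
        // The index itself lives on the filesystem, no extra server process.
        FSDirectory dir = FSDirectory.open(Paths.get("/tmp/file-index"));

        // Index one document per stored file: exact (untokenized) name, stored path.
        try (IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("name", "report-2016.pdf", Field.Store.YES));
            doc.add(new StringField("path", "/data/a/b/report-2016.pdf", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search by name pattern, e.g. every file whose name starts with "report".
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            TopDocs hits = searcher.search(
                    new WildcardQuery(new Term("name", "report*")), 10);
            for (ScoreDoc sd : hits.scoreDocs) {
                System.out.println(searcher.doc(sd.doc).get("path"));
            }
        }
    }
}
```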
You also mention a database: again, a possible direction, especially if you already have one in your project. Relational database management systems generally have some support for searching, but there are more advanced options: in PostgreSQL, for example, you have GIN indexes (inverted indexes), the same concept full-text search engines use, which go way beyond standard SQL's LIKE operator.
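Since the goal here is name-pattern matching rather than full-text search, a hedged sketch of the PostgreSQL option would use the pg_trgm extension (a trigram GIN index, which accelerates LIKE/ILIKE) through JDBC. The files table, column names, and connection details below are illustrative, and the PostgreSQL JDBC driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class FileNameSearch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                "jdbc:postgresql://localhost/files", "user", "pass")) {
            try (Statement s = c.createStatement()) {
                // One-time setup: pg_trgm extension plus a trigram GIN index,
                // so ILIKE queries use the index instead of scanning every row.
                s.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm");
                s.execute("CREATE INDEX IF NOT EXISTS files_name_trgm "
                        + "ON files USING GIN (name gin_trgm_ops)");
            }
            // Find every file whose name contains the given pattern.
            try (PreparedStatement ps = c.prepareStatement(
                    "SELECT path FROM files WHERE name ILIKE ?")) {
                ps.setString(1, "%" + args[0] + "%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("path"));
                    }
                }
            }
        }
    }
}
```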
Yet another idea: go with the local disk. If you're on Linux, there is an indexing utility called "locate" that you can delegate the index creation to.
So the choice is yours.
I have to submit a project at my college, for which I have only two months! I already have an idea of what to make, but I don't know which technology to go for. I want to use something recent, so as to make my project more efficient and flexible.
I want to make something like an "Attendance Management System", in which we can take the attendance of students and save the records in an underlying database, perform some kind of data mining on the data (to find interesting patterns like the most-attended lecture, or to apply some probabilistic model to estimate the next likely bunk, or analysis of an individual student's record to compute anything interesting...), and then develop an Android app for the UI that can handle requests and responses to the database.
I'm really confused as to what to go for. Currently I have no knowledge of the following, but my friend suggested I choose among them: Node.js (with the Express framework) REST APIs, PHP, JSP, JSON, and MongoDB.
I would really appreciate your help, guys. Thanks!
Let's try to decide the technology stack according to your requirements.
1. Latest technology - You didn't give any justification for this requirement, but as you ask: the current fads for web servers are Node, Go, and nginx (if you end up choosing PHP), with Mongo and Elasticsearch for the data store.
2. Limited time - You have only 2 months to learn the technology, build the prototype, design the DB schema, implement everything, and test. Hence I suggest you go with Node.js or PHP (I am assuming you are familiar with JS and PHP).
3. High database IO - I don't know what scale you will be working at, but the only major thing your server will be doing is DB IO, so you should choose a non-blocking technology, and the most famous among those is Node.js.
Node.js is something that fulfills every requirement.
If I were you, I would choose Express.js (express init and you are ready to go) and MySQL (if you are not familiar with any NoSQL database; MySQL seems to fulfill every requirement). And the Android app could be anything, e.g. Cordova, since the app does nothing but HTTP requests and some presentation of data.
I have an app out in the market and I am planning to maintain basic user data somewhere on a backend.
My app is free, so I am NOT getting any money from users.
My question is: what is the best way to store this data (the data includes name, email, phone number, etc.)?
One option is to use the Google Mobile Backend Starter kit, but that seems too complex for such a small requirement.
Appreciate your help.
Thanks.
OK, there are many options by which you can achieve your goal, and these options depend on your proficiency in areas apart from Java, and on your preferences. Below is just a small list:
Server-side language: PHP, JSP, etc.
Database: MySQL, NoSQL, SQLite, etc.
Web host: any free provider (just google "free web hosting service")
Client side: as you mentioned, you already have an app on the Play Store, so you will have to update your application accordingly and release a new version.
I prefer to use a combination of PHP and MySQL for all my backend work, as I find it very easy to create and maintain. I also use 000webhost.com as a free web hosting service; it is completely free and supports PHP and MySQL without restriction.
The first step I would suggest is to choose your web hosting provider, sign up, and set up your database through phpMyAdmin (very easy to do if you know basic database fundamentals).
The second step would be to create an API in your choice of server-side language (I am assuming you will use PHP, but you are free to use any other language). If you don't know any server-side language, you might want to follow some online tutorials and get yourself familiar with PHP (which is, again, very easy if you know some other programming language). You can start by coding basic functions, such as retrieving all the data and echoing it in the browser, or inserting some fields into the database. I would advise you to completely code and test all your functionality in a normal web browser before you go on to update your Android application; once you know what response to expect, and have tested it on a computer screen, it becomes easy to code for Android.
The final step would be to update your Android application. The well-known approach is to use JSON strings for sending and/or retrieving data to/from the database. If you are only looking to insert a few fields into the database, you can also use GET or POST parameters to send and receive data. And the good news is that there are many great tutorials available online for making HTTP requests from Android; you can google them yourself.
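As an illustration of that final step, here is a minimal hedged sketch of the Android side, posting the user fields as JSON with HttpURLConnection (no extra libraries needed; org.json ships with Android). The endpoint URL and field names are hypothetical and would have to match whatever your PHP script expects.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import org.json.JSONObject;

public class UserUploader {
    // Call this off the main thread (e.g. from a background thread or AsyncTask).
    public static String postUser(String name, String email, String phone)
            throws Exception {
        JSONObject body = new JSONObject();
        body.put("name", name);
        body.put("email", email);
        body.put("phone", phone);

        // Hypothetical endpoint: a PHP script that inserts the row into MySQL.
        URL url = new URL("https://example.com/api/save_user.php");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.toString().getBytes("UTF-8"));
            }
            // Read back whatever the PHP script echoes, e.g. {"status":"ok"}.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                StringBuilder resp = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) resp.append(line);
                return resp.toString();
            }
        } finally {
            conn.disconnect();
        }
    }
}
```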
Disclaimer: I am not promoting any free/paid service provider in my answer; the only reason I mentioned a name is because the OP asked twice for one. If you are thinking of downvoting or flagging the answer for that reason, please leave a comment and I will delete it ASAP.
I would use a simple PHP page that captures user data via JSON from Android and saves it in a MySQL DB (the technology is simple and very cheap; there are many free or very low-cost PHP + MySQL hosts).
But if you prefer a Java-oriented approach, although it is slightly more complex, I advise you to look at Google App Engine, which is free (within well-defined limits):
http://www.vogella.com/tutorials/GoogleAppEngineJava/article.html
Using the latter, you will need to use servlets (I do not recommend Endpoints) and JPA to access the database that GAE provides (a NoSQL database).
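A rough sketch of that servlet + JPA combination, assuming a persistence unit named "transactions-optional" as in the GAE documentation; the entity and its fields are illustrative, not taken from the tutorial above.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
public class AppUser {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    private String email;

    public void setName(String name) { this.name = name; }
    public void setEmail(String email) { this.email = email; }
}

// Called from the servlet's doPost with the fields the Android app sent.
class UserDao {
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("transactions-optional");

    static void save(String name, String email) {
        EntityManager em = EMF.createEntityManager();
        try {
            AppUser u = new AppUser();
            u.setName(name);
            u.setEmail(email);
            em.getTransaction().begin();
            em.persist(u); // stored in GAE's datastore via JPA
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}
```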
This may be too broad a question, but how does one access a Bible document in an Android application? Is it a text file with an index for finding specific verses, or is it something much more complicated? I hope that is enough to answer from.
The first step would be to actually find a structured Bible dataset somewhere.
You can search and see if there's an XML version of your favourite translation somewhere, and maybe download that.
Once you've got it (as XML, JSON, or whatever format), you can write some Java code to parse the file and load it into an appropriate data structure, one that allows you to do what you want with it efficiently (e.g. search it by verse).
You could even put it into a database (e.g. MySQL or MongoDB), which would allow you to search it efficiently.
But really, how you want to structure the data depends on how you're going to use it, and on what formats it's already available in (it could be a pain to clean up the XML).
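To make the parsing step concrete, here is a hedged sketch that loads such a file into a verse-lookup map. The XML shape (book/chapter/verse elements with name and n attributes) is entirely hypothetical; adjust the element and attribute names to whatever dataset you actually download.

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class BibleLoader {
    public static Map<String, String> load(File xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xml);
        Map<String, String> verses = new HashMap<>();
        NodeList books = doc.getElementsByTagName("book");
        for (int b = 0; b < books.getLength(); b++) {
            Element book = (Element) books.item(b);
            NodeList chapters = book.getElementsByTagName("chapter");
            for (int c = 0; c < chapters.getLength(); c++) {
                Element chapter = (Element) chapters.item(c);
                NodeList vs = chapter.getElementsByTagName("verse");
                for (int v = 0; v < vs.getLength(); v++) {
                    Element verse = (Element) vs.item(v);
                    // Key like "John 3:16" -> the verse text.
                    String key = book.getAttribute("name") + " "
                            + chapter.getAttribute("n") + ":"
                            + verse.getAttribute("n");
                    verses.put(key, verse.getTextContent());
                }
            }
        }
        return verses;
    }
}
```

Usage would then be as simple as load(new File("kjv.xml")).get("John 3:16"). On Android you would likely stream a document this size with XmlPullParser rather than loading the whole DOM, but the idea is the same.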
You might find the following resources useful:
Web service APIs to get verses directly: http://www.4-14.org.uk/xml-bible-web-service-api
These let you avoid a lot of the headaches of dealing with file formats, indexing, and all kinds of other things.
Web service APIs generally work by having your program submit a query to a website (e.g. including the biblical reference); you get back some structured data (e.g. XML/JSON) containing the verse(s) you requested.
Download a structured offline copy: http://www.bibletechnologies.net./osistext/
This means you have to find, download, parse, and index your own data structure for dealing with the text, but it would be much faster (if done right) than using a web service.
The link posted here has only some example books from the Bible, but if you look around you'll find more on the web.
It completely depends on the format of the file.
Any book or text document can be stored and distributed in multiple ways. It could simply be a .pdf file, or it could be stored as XML or .epub.
The question is beyond broad, because there are so many ways to do it that it's impossible to guess without more information.
This link has some information about the e-book formats:
http://en.wikipedia.org/wiki/Comparison_of_e-book_formats
And that's just one small subset of the ways text can be stored.
I recently wrote a custom web crawler/spider using Java and the JSoup (http://jsoup.org/) HTML parser. The web crawler is very rudimentary: it uses the JSoup connect and get methods to fetch the source of pages, and then other JSoup methods to parse the content. It randomly follows almost any links it finds, but at no point does it attempt to download files or execute scripts.
The crawler picks seed pages from a long list of essentially random webpages, some of which probably contain adult content and/or malicious code. Recently, while I was running the crawler, my antivirus (Avast) flagged one of the requests as a "threat detected". The offending URL looked malicious.
My question is: can my computer get a virus or any sort of malware through my web crawler? Are there any precautions or checks I should put in place?
In theory, it can.
However, as you don't execute Flash and similar plugins, but only process the text data, chances are pretty high that your HTML parser does not have a known vulnerability.
Furthermore, viruses and malicious web sites target the big user groups. There are only a few users browsing with JSoup; most people use Internet Explorer, for example, so that is what the viruses target. These days Mac OS X is becoming more and more attractive, too: I just read about a new piece of malware that infects Mac OS X users only, via an old Java security issue, when they visit a web site. It was found on Dalai Lama-related web sites, so it may be of Chinese origin.
If you really are paranoid, set up a "nobody" user on your system which you heavily restrict. This works best with Linux: in particular, with SELinux you can narrow down the permissions of the web crawler to the point where it can do nothing except load an external web site and send the result to a database. An attacker could then only crash your crawler, or maybe abuse it for a DDoS attack, but not corrupt or take over your system.
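On the practical side, a few of JSoup's own connection settings make a decent first line of defence: cap the response size, set a timeout, and parse only responses that declare themselves as HTML. A minimal sketch (the limits shown are illustrative):

```java
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class SafeFetcher {
    public static Document fetch(String url) throws Exception {
        Connection.Response res = Jsoup.connect(url)
                .userAgent("MyCrawler/1.0")   // identify the crawler honestly
                .timeout(10_000)              // fail fast on slow or hostile hosts
                .maxBodySize(1_048_576)       // never buffer more than ~1 MB
                .followRedirects(true)
                .execute();
        String type = res.contentType();
        if (type == null || !type.startsWith("text/html")) {
            return null; // skip binaries, PDFs, executables, and the like
        }
        return res.parse(); // parse only; JSoup never executes scripts or plugins
    }
}
```

This doesn't replace the sandboxing described above; it just ensures the crawler only ever hands text to the parser, which is the property the rest of this answer relies on.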