I'm trying to connect to Cassandra from Java with the code below, and I get the error "localhost/127.0.0.1:9042] Cannot connect":
public static void main(String[] args)
{
    Cluster cluster;
    Session session;
    // The cluster connects to the address of the node provided. One contact point is required; it is good to have multiple.
    cluster = Cluster.builder().addContactPoint("localhost").build();
    session = cluster.connect("ecommerce");
    session.execute("INSERT INTO products (pdt_id, cat_id, pdt_name, pdt_desc, price, shipping) VALUES (002,105, 'Candy 0.9 cu. ft. Washing Machine', 'Capacity of 1 cu. ft.10 different power levels', 64.00, 'Expedited')");
    session.execute("INSERT INTO products (pdt_id, cat_id, pdt_name, pdt_desc, price, shipping) VALUES (003,106, 'Prestige 0.9 cu.cm. Pressure Cooker', 'Capacity: 18 qt.', 70.00, 'Dispatched from warehouse')");
    String pdtid = null, pdtname = null, pdtdesc = null;
    float price = 0;
    ResultSet resultSet = session.execute("select * from products");
    for (Row row : resultSet)
    {
        pdtid = Integer.toString(row.getInt("pdt_id"));
        pdtname = row.getString("pdt_name");
    }
    cluster.close();
}
The Java code syntax looks correct to me.
Please make sure Cassandra is running on your machine and that port 9042 is open (check your firewall). You can also run cqlsh to see whether Cassandra is responding.
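If you want to rule out firewall or networking issues from Java itself, a quick socket probe against the native protocol port can help. This is a throwaway sketch, nothing driver-specific:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Throwaway reachability check for Cassandra's native protocol port.
try (Socket socket = new Socket()) {
    socket.connect(new InetSocketAddress("localhost", 9042), 3000); // 3 s timeout
    System.out.println("Port 9042 is reachable");
} catch (IOException e) {
    System.out.println("Cannot reach port 9042: " + e.getMessage());
}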
I see you are using an outdated version of the driver. You should consider upgrading to 4.x. Here is the complete documentation: https://docs.datastax.com/en/developer/java-driver/4.4/manual/core/
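In 4.x the entry point is CqlSession instead of Cluster/Session. A minimal connection sketch, assuming a local node and the default datacenter1 data center name (both of which you would adjust for your setup):
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// Minimal 4.x connection sketch; the contact point and data center name are assumptions.
try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
        .withLocalDatacenter("datacenter1") // must match your node's data center
        .withKeyspace("ecommerce")
        .build()) {
    session.execute("SELECT * FROM products");
}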
I'm having a hard time getting HBase's FuzzyRowFilter to work.
I have the following test table:
hbase(main):014:0> scan 'test'
ROW COLUMN+CELL
row-01 column=colfam1:col1, timestamp=1481193793338, value=value1
row-02 column=colfam1:col1, timestamp=1481193799186, value=value2
row-03 column=colfam1:col1, timestamp=1481193803941, value=value3
row-04 column=colfam1:col1, timestamp=1481193808209, value=value4
row-05 column=colfam1:col1, timestamp=1481193812737, value=value5
5 row(s) in 0.0200 seconds
Here is my Java code (I started with Scala, but the results are the same - none):
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "localhost:2182");
conf.set("hbase.master", "localhost:60000");
conf.set("hbase.rootdir", "/hbase");
try {
    Scan scan = new Scan();
    scan.setCaching(5);
    // Row key template: the last three bytes ("-01") are fixed, the first three are fuzzy.
    byte[] rowKeys = Bytes.toBytesBinary("???-01");
    // Per-byte mask: 0x01 = position may vary, 0x00 = position must match the template.
    byte[] fuzzyInfo = {0x01, 0x01, 0x01, 0x00, 0x00, 0x00};
    FuzzyRowFilter fuzzyFilter = new FuzzyRowFilter(
            Arrays.asList(
                    new Pair<byte[], byte[]>(rowKeys, fuzzyInfo)));
    System.out.println("### fuzzyFilter: " + fuzzyFilter.toString());
    scan.addFamily(Bytes.toBytesBinary("colfam1"));
    scan.setStartRow(Bytes.toBytesBinary("row-01"));
    scan.setStopRow(Bytes.toBytesBinary("row-05"));
    scan.setFilter(fuzzyFilter);
    Connection conn = ConnectionFactory.createConnection(conf);
    Table table = conn.getTable(TableName.valueOf("test"));
    ResultScanner results = table.getScanner(scan);
    int count = 0;
    int limit = 100;
    for (Result r : results) {
        System.out.println(r.toString());
        if (count++ >= limit) break;
    }
} catch (Exception e) {
    e.printStackTrace();
}
I simply do not get any results back from the server. If I comment out the line scan.setFilter(fuzzyFilter);, I get the expected results:
keyvalues={row-01/colfam1:col1/1481193793338/Put/vlen=6/seqid=0}
keyvalues={row-02/colfam1:col1/1481193799186/Put/vlen=6/seqid=0}
keyvalues={row-03/colfam1:col1/1481193803941/Put/vlen=6/seqid=0}
keyvalues={row-04/colfam1:col1/1481193808209/Put/vlen=6/seqid=0}
Am I doing something wrong? Is there a bug in HBase (version 1.2.2)? I am using the version installed through Homebrew on macOS Sierra.
Update
On a Cloudera Hadoop cluster running CDH 5.7 with HBase 1.2.0-cdh5.7.0, I get the desired output for rowkey row-01. The error must somehow be related to my local setup.
Solution
Indeed, the problem was that HBase server installation and client JAR versions did not match. In my case, I was using the artifacts
hbase-common
hbase-client
hbase-server
with version 1.2.0-cdh5.7.0 instead of 1.2.2.
My mistake was assuming that minor version differences would not have a large impact, but apparently Cloudera has applied some major changes in their versions with respect to the official code base. Changing to the official version 1.2.2 made the FuzzyRowFilter work as expected.
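One way to confirm such a mismatch is to print both versions at runtime. A rough sketch, assuming the conn Connection from the scan snippet (the ClusterStatus accessor is from the 1.x client API):
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.VersionInfo;

// Compare the client JAR's version with the version the cluster reports.
System.out.println("client: " + VersionInfo.getVersion());
try (Admin admin = conn.getAdmin()) {
    System.out.println("server: " + admin.getClusterStatus().getHBaseVersion());
}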
It should print only the row key of row-01, as follows from the filter condition.
There is no such bug; it works as expected, as I have been using this same filter for some time now.
Check your configuration, dependencies, etc.
Due to versioning, libraries and their clients often become incompatible.
Let's take a simple example:
class ServerVersionA {
    public static DataObject getData() {
        // The server returns data wrapped in a version-A header.
        return new DataObject(data, HEADER_VERSION_A);
    }
}

class ClientVersionB {
    public void showData() {
        DataObject dataObject = makeRequest(params);
        // Check whether the received data carries a version-B header.
        boolean status = validate(dataObject);
        if (status) {
            doIO(dataObject);
        }
    }
}
In this case, if the header does not match, the client simply sits idle.
These kinds of issues are mostly taken care of, but sometimes they creep in.
If we compare the sources of the installed server version and the client version, we can find out why data is not being returned and no exception is propagated.
I'm trying to use a similar example based on the sample code found here.
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this query in the web console I get many results (as there are multiple nodes whose Name attribute contains the pattern 79). Yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean that no results are found in the iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query in the same function that creates my data, I get the correct results. If I run the query by itself it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and how it shares the same DB handle:
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {
    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        // apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        // write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
// The Neo4j JDBC driver exposes Cypher parameters as numbered JDBC parameters ({1}).
PreparedStatement stmt = conn.prepareStatement("START n=node({1}) RETURN id(n) AS id");
stmt.setLong(1, id);
ResultSet rs = stmt.executeQuery();
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program, you are not using the same directory (unless your Java program is using the Neo4j REST bindings). Two Neo4j databases cannot run against the same database directory simultaneously.
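On durability specifically: with embedded Neo4j, a write is only persisted if the transaction is marked successful before it closes. A minimal sketch of the commit pattern, assuming the same embedded API as in your query method:
// Writes are rolled back unless tx.success() is called before the transaction closes.
try ( Transaction tx = graphDb.beginTx() )
{
    Node node = graphDb.createNode();
    node.setProperty( "Name", "test79" );
    tx.success(); // without this line the insert silently disappears on close
}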
I'm trying to develop a client-server chat application using Java servlets, MySQL (InnoDB engine), and the Jetty server. I tested the connection code with 100 simulated users hitting the server at once using JMeter, but the average time for all of them to get connected was 40 seconds, with a minimum per-thread time of 2 seconds and a maximum of 80 seconds. My connection database table has two columns, connect(user, stranger), and my servlet code is shown below. I'm using the InnoDB engine for row-level locking, and I use an explicit write lock (SELECT ... FOR UPDATE) inside the transaction. If the transaction rolls back due to deadlock, I loop until it executes at least once. Once two users get connected, they update each other's stranger column with their randomly generated unique numbers.
I'm using c3p0 connection pooling with a minimum of 100 connections open, and Jetty with a minimum of 100 threads.
Please help me identify the bottlenecks, or the tools needed to find them.
import java.io.*;
import java.util.*;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.naming.*;
import javax.sql.*;

public class connect extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws java.io.IOException {
        String unumber = null;
        String snumber = null;
        String status = null;
        String uage = null, usex = null, ulocation = null, uaslmode = null;
        int counttransaction = 0;
        InitialContext contxt1 = null;
        DataSource ds1 = null;
        Connection conxn1 = null;
        PreparedStatement stmt1 = null;
        ResultSet rs1 = null;
        PreparedStatement stmt2 = null;
        InitialContext contxt3 = null;
        DataSource ds3 = null;
        Connection conxn3 = null;
        PreparedStatement stmt3 = null;
        ResultSet rs3 = null;
        PreparedStatement stmt4 = null;
        ResultSet rs4 = null;
        PreparedStatement stmt5 = null;
        boolean checktransaction = true;
        unumber = req.getParameter("number"); // GET THE USER's UNIQUE NUMBER
        try {
            contxt1 = new InitialContext();
            ds1 = (DataSource) contxt1.lookup("java:comp/env/jdbc/user");
            conxn1 = ds1.getConnection();
            stmt1 = conxn1.prepareStatement("SELECT * FROM profiles WHERE number=?"); // GETTING USER DATA FROM PROFILE
            stmt1.setString(1, unumber);
            rs1 = stmt1.executeQuery();
            if (rs1.next()) {
                res.getWriter().println("user found in PROFILE table.........");
                uage = rs1.getString("age");
                usex = rs1.getString("sex");
                ulocation = rs1.getString("location");
                uaslmode = rs1.getString("aslmode");
                stmt1.close();
                stmt1 = null;
                conxn1.close();
                conxn1 = null;
                contxt3 = new InitialContext();
                ds3 = (DataSource) contxt3.lookup("java:comp/env/jdbc/chat");
                conxn3 = ds3.getConnection();
                conxn3.setAutoCommit(false);
                while (checktransaction) {
                    // TRANSACTION STARTS HERE
                    try {
                        stmt2 = conxn3.prepareStatement("INSERT INTO " + ulocation + " (user,stranger) VALUES (?,'')"); // INSERTING RECORD INTO LOCAL CHAT TABLE
                        stmt2.setString(1, unumber);
                        stmt2.executeUpdate();
                        stmt2.close();
                        stmt2 = null;
                        res.getWriter().println("inserting row into LOCAL CHAT TABLE.........");
                        System.out.println("transaction starting........." + unumber);
                        stmt3 = conxn3.prepareStatement("SELECT user FROM " + ulocation + " WHERE (stranger='' && user!=?) LIMIT 1 FOR UPDATE");
                        stmt3.setString(1, unumber); // SEARCHING FOR STRANGER
                        rs3 = stmt3.executeQuery();
                        if (rs3.next()) { // stranger found
                            stmt4 = conxn3.prepareStatement("SELECT stranger FROM " + ulocation + " WHERE user=?");
                            stmt4.setString(1, unumber); // CHECKING FOR USER STATUS BEFORE CONNECTING TO STRANGER
                            rs4 = stmt4.executeQuery();
                            if (rs4.next()) {
                                status = rs4.getString("stranger");
                            }
                            stmt4.close();
                            stmt4 = null;
                            if (status.equals("")) { // user status is also null
                                snumber = rs3.getString("user");
                                stmt5 = conxn3.prepareStatement("UPDATE " + ulocation + " SET stranger=? WHERE user=?"); // CONNECTING USER AND STRANGER
                                stmt5.setString(1, snumber);
                                stmt5.setString(2, unumber);
                                stmt5.executeUpdate();
                                stmt5.setString(2, snumber);
                                stmt5.setString(1, unumber);
                                stmt5.executeUpdate();
                                stmt5.close();
                                stmt5 = null;
                            }
                        } // end of stranger found
                        stmt3.close();
                        stmt3 = null;
                        conxn3.commit(); // TRANSACTION ENDING
                        checktransaction = false;
                    } // END OF TRY INSIDE WHILE
                    catch (java.sql.SQLTransactionRollbackException e) {
                        System.out.println("transaction restarted......." + unumber);
                        counttransaction = counttransaction + 1;
                    }
                } // END OF WHILE LOOP
                conxn3.close();
                conxn3 = null;
            } // END OF USER FOUND IN PROFILE TABLE
        } // end of try
        catch (java.sql.SQLException sqlexe) {
            try { conxn3.rollback(); }
            catch (java.sql.SQLException exe) { conxn3 = null; }
            sqlexe.printStackTrace();
            res.getWriter().println("UNABLE TO GET CONNECTION FROM POOL!");
        }
        catch (javax.naming.NamingException namexe) {
            namexe.printStackTrace();
            res.getWriter().println("DATA SOURCE LOOK UP FAILED!");
        }
    }
}
How many users do you have? Can you load them all into memory first and do a memory lookup?
If you separate your DB layer from your presentation layer, this is something you can change without changing the servlet (as it shouldn't care where the data comes from).
If you use Java memory it shouldn't take more than 20 ms per user.
Here is a test which creates one million profiles in memory, looks them up, and creates chat entries, which are removed later. The average time per operation was 640 ns (nanoseconds, or billionths of a second).
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    public static void main(String... args) {
        UserDB userDB = new UserDB();
        // add 1,000,000 users
        for (int i = 0; i < 1000000; i++)
            userDB.addUser(
                    new Profile(i,
                            "user" + i,
                            (short) (18 + i % 90),
                            i % 2 == 0 ? Profile.Sex.Male : Profile.Sex.Female,
                            "here", "mode"));
        // look up users and add a chat session.
        long start = System.nanoTime();
        int operations = 0;
        for (int i = 0; i < userDB.profileCount(); i += 2) {
            Profile p0 = userDB.getProfileByNumber(i);
            operations++;
            Profile p1 = userDB.getProfileByNumber(i + 1);
            operations++;
            userDB.chatsTo(i, i + 1);
            operations++;
        }
        for (int i = 0; i < userDB.profileCount(); i += 2) {
            userDB.endChat(i);
            operations++;
        }
        long time = System.nanoTime() - start;
        System.out.printf("Average lookup and update time per operation was %d ns%n", time / operations);
    }
}

class UserDB {
    private final Map<Long, Profile> profileMap = new LinkedHashMap<Long, Profile>();
    private final Map<Long, Long> chatsWith = new LinkedHashMap<Long, Long>();

    public void addUser(Profile profile) {
        profileMap.put(profile.number, profile);
    }

    public Profile getProfileByNumber(long number) {
        return profileMap.get(number);
    }

    public void chatsTo(long number1, long number2) {
        chatsWith.put(number1, number2);
        chatsWith.put(number2, number1);
    }

    public void endChat(long number) {
        Long other = chatsWith.get(number);
        if (other == null) return;
        Long number2 = chatsWith.get(other);
        if (number2 != null && number2 == number) {
            // remove both directions of the chat mapping
            chatsWith.remove(other);
            chatsWith.remove(number);
        }
    }

    public int profileCount() {
        return profileMap.size();
    }
}

class Profile {
    final long number;
    final String name;
    final short age;
    final Sex sex;
    final String location;
    final String aslmode;

    Profile(long number, String name, short age, Sex sex, String location, String aslmode) {
        this.number = number;
        this.name = name;
        this.age = age;
        this.sex = sex;
        this.location = location;
        this.aslmode = aslmode;
    }

    enum Sex {Male, Female}
}
prints
Average lookup and update time per operation was 636 ns
If you need this to be faster you could look at using Trove4j which could be twice as fast in this case. Given this is likely to be fast enough, I would try to keep things simple.
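For illustration, a sketch of the same profile map using Trove's primitive-keyed map (class name from Trove 3.x; verify against the version you pull in):
import gnu.trove.map.hash.TLongObjectHashMap;

// Primitive long keys avoid boxing a Long on every get/put.
TLongObjectHashMap<Profile> profileMap = new TLongObjectHashMap<Profile>();
profileMap.put(42L, new Profile(42, "user42", (short) 30, Profile.Sex.Male, "here", "mode"));
Profile p = profileMap.get(42L);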
Have you considered caching reads and batching writes?
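As a rough illustration of batching writes with plain JDBC (a sketch only; the chat table name and the pendingPairs queue are hypothetical stand-ins):
// Queue the pairing updates and flush them in one round trip instead of row by row.
PreparedStatement ps = conxn3.prepareStatement("UPDATE chat SET stranger=? WHERE user=?");
for (String[] pair : pendingPairs) { // hypothetical queue of (stranger, user) pairs
    ps.setString(1, pair[0]);
    ps.setString(2, pair[1]);
    ps.addBatch();
}
ps.executeBatch(); // one round trip for all queued updates
conxn3.commit();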
I'm not sure how you can realistically expect anyone to determine where the bottle-necks are by merely looking at the source code.
To find the bottlenecks, you should run your app and the load test with a profiler attached, such as JVisualVM or YourKit or JProfiler. This will tell you exactly how much time is spent in each area of the code.
The only thing that anyone can really critique from looking at your code is the basic architecture:
Why are you looking up the DataSource on each doGet()?
Why are you using transactions for what appears to be unrelated database insertions and queries?
Is using a RDBMS to back a chat system really the best idea in the first place?
If your response times are that high, you need to properly index your DB tables. Based on the times you provided, I will assume this was not done. You need to speed up your reads and writes.
Look up execution plans and how to read them. An execution plan will show you if and when indexes are being used by your queries, and whether you are performing seeks or scans on the tables. Using these, you can tweak your queries, indexes, and tables to be more optimal.
As others have stated, an RDBMS won't be your best option in large-scale applications, but since you are just starting out it should be OK until you learn more.
Learn to properly set up those tables and you should see your deadlock counts and response times go down.
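For instance, a sketch of adding an index for the stranger search and inspecting the plan (chat_table is a stand-in for your per-location tables):
// Index the column used in the WHERE clause of the stranger search.
Statement stmt = conxn3.createStatement();
stmt.executeUpdate("CREATE INDEX idx_stranger ON chat_table (stranger)");
// EXPLAIN shows whether MySQL now uses the index instead of scanning the whole table.
ResultSet plan = stmt.executeQuery("EXPLAIN SELECT user FROM chat_table WHERE stranger='' LIMIT 1");
while (plan.next()) {
    System.out.println(plan.getString("type") + " / " + plan.getString("key"));
}
stmt.close();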
I am trying to run a Java program using the Java/Mongo driver on a separate computer from the one running mongod. I only modified the Java/Mongo tutorial code to include an IP address.
package mongotest;

import com.mongodb.*;

public class Main {
    static DBCursor cur;
    static DBCollection coll;

    public static void main(String[] args) {
        Mongo m;
        try {
            m = new Mongo("192.168.0.102"); // <---- This does not connect. It will eventually time out
            DB db = m.getDB("playerdb");
            coll = db.getCollection("players");
            cur = coll.find();
            //while (cur.hasNext())
            //    coll.remove(cur.next());
            coll.ensureIndex(new BasicDBObject("playerID", 1).append("unique", true));
            boolean unique = true;
            cur = coll.find();
            printResults(cur, "Find All Records");
            boolean canCreate;
            canCreate = createAccount("Josh", "1", cur, coll);
            canCreate = createAccount("Jason", "1", cur, coll);
            canCreate = createAccount("Ryan", "1", cur, coll);
            canCreate = createAccount("Michael", "1", cur, coll);
            canCreate = createAccount("John", "1", cur, coll);
            canCreate = createAccount("Susan", "1", cur, coll);
            cur = coll.find();
            printResults(cur, "Find All Records After Insert");
        } //try
        catch (Exception e) {
            System.out.println(e);
        } //catch
    }
    // createAccount and printResults helpers omitted from the question
}
(Note: This will eventually time out and quit)
But when I run the same code on the computer running the database, it works fine.
How can I get two computers on different networks to communicate?
First you need to ensure a network route:
can you ping computer b from computer a?
can you telnet to the mongo port from the second computer to the first?
If not, you have a networking problem, not a programming one. In that case it might behoove you to ask this question on Server Fault or Super User.
Check if computer A can ping computer B. If it can, then check MongoDB configuration parameters like auth and noauth and set them according to your needs.
Two computers on different networks? Because 192.168.0.102 sure looks like an internal address, not an external one.
You need to figure out what's the public IP address of the computer running mongodb, and use that.
What you're doing is almost like (but not quite as bad) as trying to connect to 127.0.0.1 and wondering why this only works when executed on the computer that hosts the service.
This is pretty much unrelated to MongoDB. Either your network connection is not working properly (a firewall or routing issue) or your remote mongod daemon is not listening on the relevant external IP address (ensure that it is bound to the proper IP address using the --bind_ip command-line option).
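A sketch of connecting to an explicit public address with a short connect timeout, so failures surface quickly (the address is a placeholder, and the MongoOptions API is assumed from the 2.x-era driver used in the question):
import java.net.UnknownHostException;
import com.mongodb.Mongo;
import com.mongodb.MongoOptions;
import com.mongodb.ServerAddress;

try {
    MongoOptions options = new MongoOptions();
    options.connectTimeout = 5000; // fail after 5 s instead of hanging
    // Placeholder address; replace with the mongod host's real external IP.
    Mongo m = new Mongo(new ServerAddress("203.0.113.5", 27017), options);
} catch (UnknownHostException e) {
    e.printStackTrace();
}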
I have a problem: I write entries from Java code to a Cassandra database. It works for a while, and then stops writing (nodetool cfstats keyspace.users -H on all nodes shows no changes in Number of keys (estimate)).
Configuration: 4 nodes (4 GB, 4 GB, 4 GB, and 6 GB RAM).
I am using the DataStax driver, and a connection like:
private Cluster cluster = Cluster.builder()
.addContactPoints(<points>)
.build();
private Session session = cluster.connect("keyspace");
private MappingManager mappingManager = new MappingManager(session);
...
I do the insert into the database like:
public void writeUser(User user) {
Mapper<User> mapper = mappingManager.mapper(User.class);
mapper.saveAsync(user, Mapper.Option.timestamp(TimeUnit.NANOSECONDS.toMicros(System.nanoTime())));
}
I also tried
public void writeUser(User user) {
Mapper<User> mapper = mappingManager.mapper(User.class);
mapper.save(user);
}
And two variants in between.
In debug.log on the server I see:
DEBUG [GossipStage:1] 2016-05-11 12:21:14,565 FailureDetector.java:456 - Ignoring interval time of 2000380153 for /node
Maybe the problem is that the server is in another country? But why does it write entries at the beginning? How can I fix my problem?
Another update: session.execute on mapper.save returns ResultSet[ exhausted: true, Columns[]]
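One thing worth checking, since the future returned by saveAsync is dropped in the snippet above: any write error would be lost silently. A sketch of attaching a callback with Guava's Futures (the 3.x mapper returns a ListenableFuture; the two-argument addCallback is from the Guava version bundled with that driver era):
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;

// Surface errors from the fire-and-forget save; otherwise failures vanish silently.
ListenableFuture<Void> future = mapper.saveAsync(user);
Futures.addCallback(future, new FutureCallback<Void>() {
    public void onSuccess(Void result) { /* write acknowledged */ }
    public void onFailure(Throwable t) { t.printStackTrace(); }
});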