Using a local MySQL database in a JPA-driven Java application [duplicate] - java

This question already has answers here:
What is a NullPointerException, and how do I fix it?
(12 answers)
Closed 4 years ago.
I'm trying to create a Java application that will run on a hypothetical client machine, where members of staff can both view and add customer details from a local MySQL database.
I'm trying to use JPA to do so, with query methods being in this form:
public class DataManagerImpl implements DataManager {

    @PersistenceContext
    private EntityManager em;

    public List<Customer> AllCustomers() {
        TypedQuery<Customer> query = em.createNamedQuery("Customer.findAll", Customer.class);
        return query.getResultList();
    }
}
I've got a DBConnection class:
public class MyDBConn implements DBConnectivity {

    @Resource(mappedName="jdbc:mysql://localhost:3306/solsoft_DB")
    DataSource dataSource;

    Connection myConn = null;

    public Connection open_Connection() {
        String user = "root";
        String pass = "password";
        try {
            Class.forName("com.mysql.jdbc.Driver");
            myConn = dataSource.getConnection(user, pass);
            return myConn;
        } catch (Exception exc) {
            exc.printStackTrace();
            return myConn;
        }
    }
}
And then in my main method:
DataManagerImpl dm = new DataManagerImpl();
List<Customer> allCustomers = dm.AllCustomers();
for(Customer c : allCustomers){
String cust = "" + c.getForename() + " " + c.getSurname();
System.out.println(cust);
}
I'd really appreciate it if anyone could point me in the right direction on how to actually go about getting some information from the DB using JPA in this way.

Will the application be running on a server? What server?
Or is it a standalone application?
My guess (the best I can do, as there are many things not specified in the question) is that you are trying to supply the connection that the PersistenceContext should use.
If that is the case and you are using JPA, you should register an EntityManagerFactory with the required connection properties and get your PersistenceContext from that factory. (See an example here)
Another way to go would be to edit your persistence.xml file, defining these properties inside the file like this, and just let your context handle the logic for the database connection.
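To illustrate the first approach, here is a minimal sketch using the standard JPA 2.x connection property keys; the persistence unit name "solsoftPU" and the credentials are placeholders, not names taken from your project:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.TypedQuery;

public class Main {
    public static void main(String[] args) {
        // Standard JPA 2.x property keys; they override what is in persistence.xml.
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.driver", "com.mysql.jdbc.Driver");
        props.put("javax.persistence.jdbc.url", "jdbc:mysql://localhost:3306/solsoft_DB");
        props.put("javax.persistence.jdbc.user", "root");
        props.put("javax.persistence.jdbc.password", "password");

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("solsoftPU", props);
        EntityManager em = emf.createEntityManager();
        try {
            TypedQuery<Customer> query = em.createNamedQuery("Customer.findAll", Customer.class);
            List<Customer> customers = query.getResultList();
            for (Customer c : customers) {
                System.out.println(c.getForename() + " " + c.getSurname());
            }
        } finally {
            em.close();
            emf.close();
        }
    }
}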

Related

How do you use the PostgreSQL (set role user) command in SSM projects?

The project is currently using springmvc + spring + mybatis + druid + postgresql.
The users in the project correspond to users in the database, so each time SQL is run, we switch users with the (set role user) command and then perform the CRUD operations on the database.
My question:
Because there are many connections in the connection pool, the first step is to get a connection from the pool, then switch users, and then run the business SQL against the database. But I don't know which part of the project should handle this logic, because obtaining connections from the pool and executing SQL are implemented by the underlying code. Do you have any good ideas?
Can you provide me with a complete demo, such as the following operations:
Step 1, get the user's name from spring security (or shiro).
Step 2, get the database connection currently in use from the connection pool.
Step 3, execute SQL (set role user) to switch roles.
Step 4, perform crud operation.
Step 5, reset the database connection (reset role).
Here is a simple way to do what you need with the help of mybatis-spring.
Unless you already use mybatis-spring, the first step would be to change the configuration of your project so that you obtain SqlSessionFactory using org.mybatis.spring.SqlSessionFactoryBean provided by mybatis-spring.
The next step is the implementation of setting/resetting the user role for the connection. In mybatis the connection lifecycle is controlled by the class implementing org.apache.ibatis.transaction.Transaction interface. The instance of this class is used by the query executor to get the connection.
In a nutshell, you need to create your own implementation of this class and configure mybatis to use it.
Your implementation can be based on the SpringManagedTransaction from mybatis-spring and would look something like:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.mybatis.spring.transaction.SpringManagedTransaction;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

class UserRoleAwareSpringManagedTransaction extends SpringManagedTransaction {

    public UserRoleAwareSpringManagedTransaction(DataSource dataSource) {
        super(dataSource);
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection connection = getCurrentConnection();
        setUserRole(connection);
        return connection;
    }

    private Connection getCurrentConnection() throws SQLException {
        return super.getConnection();
    }

    @Override
    public void close() throws SQLException {
        resetUserRole(getCurrentConnection());
        super.close();
    }

    private void setUserRole(Connection connection) throws SQLException {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        String username = authentication.getName();
        Statement statement = connection.createStatement();
        try {
            // note that this direct usage of username is subject to SQL injection,
            // so you need to use the suggestion from
            // https://stackoverflow.com/questions/2998597/switch-role-after-connecting-to-database
            // about encoding the username
            statement.execute("set role '" + username + "'");
        } finally {
            statement.close();
        }
    }

    private void resetUserRole(Connection connection) throws SQLException {
        Statement statement = connection.createStatement();
        try {
            statement.execute("reset role");
        } finally {
            statement.close();
        }
    }
}
Now you need to configure mybatis to use your Transaction implementation. For this you need to implement a TransactionFactory similar to org.mybatis.spring.transaction.SpringManagedTransactionFactory provided by mybatis-spring:
import java.sql.Connection;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.ibatis.session.TransactionIsolationLevel;
import org.apache.ibatis.transaction.Transaction;
import org.apache.ibatis.transaction.TransactionFactory;

public class UserRoleAwareSpringManagedTransactionFactory implements TransactionFactory {

    @Override
    public Transaction newTransaction(DataSource dataSource, TransactionIsolationLevel level, boolean autoCommit) {
        return new UserRoleAwareSpringManagedTransaction(dataSource);
    }

    @Override
    public Transaction newTransaction(Connection conn) {
        throw new UnsupportedOperationException("New Spring transactions require a DataSource");
    }

    @Override
    public void setProperties(Properties props) {
        // no configurable properties
    }
}
And then define a bean of type UserRoleAwareSpringManagedTransactionFactory in your spring context and inject it into the transactionFactory property of the SqlSessionFactoryBean in your spring context.
Now every time mybatis obtains a Connection, the Transaction implementation will set the role based on the current spring security user.
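A minimal Java-based Spring configuration sketch of that wiring (the configuration class name and the injected DataSource are placeholders; your project may use XML configuration instead, and other factory settings such as mapper locations are omitted):

import javax.sql.DataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyBatisConfig {

    @Bean
    public UserRoleAwareSpringManagedTransactionFactory userRoleAwareTransactionFactory() {
        return new UserRoleAwareSpringManagedTransactionFactory();
    }

    @Bean
    public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        // plug in the custom factory so mybatis creates our Transaction instances
        factoryBean.setTransactionFactory(userRoleAwareTransactionFactory());
        return factoryBean.getObject();
    }
}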
Best practice is for the database user to be the application itself. Application users' access to particular data/resources should be controlled in the application; applications should not rely on the database to restrict data/resource access. Therefore, application users should not have different roles in the database, and an application should use only a single database user account.
Spring is a manifestation of best practices. Therefore, Spring does not implement this functionality. If you want such functionality, you need to hack.
Referring to this, your best bet is to:
@Autowired JdbcTemplate jdbcTemplate;
// ...
public void runPerUserSql() {
    jdbcTemplate.execute("set role user 'user_1';");
    jdbcTemplate.execute("SELECT 1;");
}
I still do not have much confidence in this. Unless you are writing a pgAdmin-style webapp for multiple users, you should reconsider your approach and design.

EJB and PreparedStatement?

I am developing a web application where, among other things, I need to upload a file to a BLOB column in a MySQL table. From what I can see this can be done with JDBC calls (prepareStatement() etc.), but I would like to be able to do this in an EJB class - what I have cobbled together looks like this:
@Stateless
public class ItemsSession {

    @PersistenceContext(unitName = "officePU")
    private EntityManager em;
    private List<Items> itl;
    private static final Logger logger =
            Logger.getLogger(ItemsSession.class.getName());
    ...
    public String updateDocument(Integer id, InputStream is) throws SQLException {
        String msg = "";
        try {
            java.sql.Connection conn = em.unwrap(java.sql.Connection.class);
            PreparedStatement pstmt = conn.prepareStatement("UPDATE Documents SET doc = ? WHERE id = ?");
            pstmt.setBinaryStream(1, is);
            pstmt.setLong(2, id);
            pstmt.executeUpdate();
            pstmt.close();
        } catch (PersistenceException e) {
            msg = e.getMessage();
        }
        return msg;
    }
    ...
}
I have two questions, though:
I would like not to use JDBC directly - is there a way to do this that is 'pure JPA' (edit: not EJB)?
If I have to do it this way, is the PreparedStatement included in the container managed transaction?
Another edit: the code above does the job - I have now tested it. But it isn't pretty, I think.
The first thing you have to do to persist BLOB values the JPA way is to define an entity. The following is an example in pseudocode:
@Entity
public class Documents {
    @Id
    private Long id;
    @Lob
    private byte[] doc;
    // .... getters + setters
}
Then you modify your EJB as follows:
#Stateless
public class ItemsSession {
#PersistenceContext(unitName ="officePU")
private EntityManager em;
// ... the rest of your code
public String updateDocument(Integer id,InputStream is) throws SQLException{
String msg = "";
Documents docs = em.find(Documents.class, id); // fetch record from DB
// Assuming your InputStream is a ByteArrayInputStream
byte[] doc = new byte[is.available()]; // create target array
is.read(doc, 0, doc.length); // read bytes into byte array
docs.setDoc(doc); //
return msg; // returning exception message from method call?????
}
...
}
If you don't change the defaults, EJB methods are invoked in a transaction by default. So when your method exits, the update should be synchronized with the database.
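For illustration only, the container-managed default relied on here is equivalent to annotating the bean explicitly (a sketch; you do not need to add this to your code):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRED) // already the default for CMT beans
public class ItemsSession {
    // business methods run in a container-managed transaction;
    // changes to managed entities are flushed when the transaction commits on method exit
}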
This answer can only help you if you read and understand the basics of JPA. And here is an official tutorial on JPA persistence, among lots of other tutorials on the web.
Update
I would like not to use JDBC directly - is there a way to do this that is 'pure JPA'
No.
If I have to do it this way, is the PreparedStatement included in the container managed transaction?
No. But you can use a bean-managed transaction (BMT). If you want to use BMT, the following pseudocode might help you:
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class ItemsSession {

    @Resource UserTransaction ut;
    @Resource DataSource datasource; // you should define the datasource on your application server
    ...
    public String updateDocument(Integer id, InputStream is) throws SQLException {
        // ...
        try (java.sql.Connection conn = datasource.getConnection();
             PreparedStatement pstmt = conn.prepareStatement("UPDATE Documents SET doc = ? WHERE id = ?")) {
            pstmt.setBinaryStream(1, is);
            pstmt.setLong(2, id);
            ut.begin();
            pstmt.executeUpdate();
            ut.commit();
        } catch (Exception e) {
            // ... error handling
        }
        return ...;
    }
    ...
}
I think you are using EJB integrated with JPA, because you are using this:
@PersistenceContext(unitName ="officePU")
Reference: http://www.adam-bien.com/roller/abien/entry/ejb_3_persistence_jpa_for

java - java.lang.ClassCastException on calling getSingleResult() method?

I have the following Java class to connect to and query a MySQL database using JPA:
public class UserEntityManager {

    private EntityManagerFactory emf;
    private EntityManager em;
    private EntityTransaction tx;

    public UserEntityManager() {
        emf = Persistence.createEntityManagerFactory("OmegaThingsPU");
        em = emf.createEntityManager();
        tx = em.getTransaction();
    }

    public User getUser(String username, String password) {
        Query query = em.createQuery("SELECT u FROM User u "
                + "WHERE u.userUsername = :userUsername "
                + "AND u.userPassword = :userPassword");
        query.setParameter("userUsername", username);
        query.setParameter("userPassword", password);
        User user;
        try {
            user = (User) query.getSingleResult();
            em.close();
            emf.close();
            return user;
        } catch (Exception ex) {
            System.out.println("Exception ****************** ");
            System.out.println(ex.toString());
            em.close();
            emf.close();
            return null;
        }
    }
}
I'm always getting this exception:
java.lang.ClassCastException: com.omegathings.persistant.User cannot be cast to com.omegathings.persistant.User
I tried getResultList().get(0), but that didn't work either. What am I missing here?
UPDATE:
Restarting the glassfish server (version 4.1) solves the problem temporarily, but on modifying the code and redeploying the application, I get the exception again.
UPDATE:
It seems that on redeploying the application I'm getting 2 different class loaders, as follows:
WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)
WebappClassLoader (delegate=true)
UPDATE:
Printing out the parent of the 2 above classloaders results in the following:
org.glassfish.internal.api.DelegatingClassLoader#5b11b82d
org.glassfish.internal.api.DelegatingClassLoader#5b11b82d
As you can see, at first deployment, the Ids are identical.
Now on the second deployment I got:
org.glassfish.internal.api.DelegatingClassLoader#57c83b1d
org.glassfish.internal.api.DelegatingClassLoader#5b11b82d
where the two parents have different Ids. I'm not sure if this will help in solving my problem.
Finally I figured out the problem after many, many tests...
The problem is in the following websocket code:
@ServerEndpoint("/endpoint/{authentication}")
public class WSManager {

    UserEntityManager userEntityManager = new UserEntityManager();

    @OnOpen
    public void onOpen(Session session, @PathParam("authentication") String authString) {
        User user = new User();
        user = (User) userEntityManager.getUser(parameter[0]);
    }
}
The problem is instantiating the UserEntityManager class outside any of the websocket class methods (onOpen, onMessage, onError, onClose).
Just moving the instantiation inside the onOpen method solves the problem, as in the sketch below.
I can't elaborate on the reason for this behavior, so maybe some experts can.
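For reference, a minimal sketch of the fix described above (the parsing of authString into username and password is a placeholder; adapt it to however your endpoint encodes credentials):

import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/endpoint/{authentication}")
public class WSManager {

    @OnOpen
    public void onOpen(Session session, @PathParam("authentication") String authString) {
        // create the entity manager wrapper inside the lifecycle method,
        // so it is built by the classloader of the current deployment
        UserEntityManager userEntityManager = new UserEntityManager();
        // placeholder parsing of the path parameter into username and password
        String[] parameter = authString.split(":");
        User user = userEntityManager.getUser(parameter[0], parameter[1]);
        // ...
    }
}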

Java connecting to multiple databases

I am creating a java application that connects to multiple databases. A user will be able to select the database they want to connect to from a drop down box.
The program then connects to the database by passing the name to a method that creates an initial context so it can talk with an oracle web logic data source.
public class dbMainConnection {

    private static dbMainConnection conn = null;
    private static java.sql.Connection dbConn = null;
    private static javax.sql.DataSource ds = null;
    private static Logger log = LoggerUtil.getLogger();

    private dbMainConnection(String database) {
        try {
            Context ctx = new InitialContext();
            if (ctx == null) {
                log.info("JNDI problem, cannot get InitialContext");
            }
            database = "jdbc/" + database;
            log.info("This is the database string in dbMainConnection: " + database);
            ds = (javax.sql.DataSource) ctx.lookup(database);
        } catch (Exception ex) {
            log.error("eMTSLogin: Error in dbMainConnection while connecting to the database : " + database, ex);
        }
    }

    public Connection getConnection() {
        try {
            return ds.getConnection();
        } catch (Exception ex) {
            log.error("Error in main getConnection while connecting to the database : ", ex);
            return null;
        }
    }

    public static dbMainConnection getInstance(String database) {
        if (dbConn == null) {
            conn = new dbMainConnection(database);
        }
        return conn;
    }

    public void freeConnection(Connection c) {
        try {
            c.close();
            log.info(c + " is now closed");
        } catch (SQLException sqle) {
            log.error("Error in main freeConnection : ", sqle);
        }
    }
}
My problem is: what happens if someone forgets to create the data source for a database but still adds it to the drop-down box? Right now, if I try to connect to a database that doesn't have a data source, it errors saying it cannot get a connection. That is what I want. But if I first connect to a database that does have a data source, which works, and then try to connect to the database that doesn't have a data source, it again errors with
javax.naming.NameNotFoundException: Unable to resolve 'jdbc.peterson'. Resolved 'jdbc'; remaining name 'peterson'.
Which again I would expect, but what confuses me is that it then grabs the last good connection, which is for a different database, and processes everything as if nothing happened.
Does anyone know why that is? Is weblogic caching the connection or something as a fail-safe? Is it a bad idea to create connections this way?
You're storing a unique datasource (and connection, and dbMainConnection) in a static variable of your class. Each time someone asks for a datasource, you replace the previous one with the new one. If an exception occurs while getting a datasource from JNDI, the static datasource stays as it is. You should not store anything in a static variable. Since your dbMainConnection class is constructed with the name of a database, and there are several database names, it makes no sense to make it a singleton.
Just use the following code to access the datasource:
public final class DataSourceUtil {

    /**
     * Private constructor to prevent unnecessary instantiations
     */
    private DataSourceUtil() {
    }

    public static DataSource getDataSource(String name) {
        try {
            Context ctx = new InitialContext();
            String database = "jdbc/" + name;
            return (javax.sql.DataSource) ctx.lookup(database);
        } catch (NamingException e) {
            throw new IllegalStateException("Error accessing JNDI and getting the database named " + name, e);
        }
    }
}
And let the callers get a connection from the datasource and close it when they have finished using it.
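For example, a caller might use it like this (a sketch; the class name, the JNDI name passed in, and the query are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    public int countCustomers(String databaseName) throws SQLException {
        DataSource ds = DataSourceUtil.getDataSource(databaseName);
        // try-with-resources closes the connection, statement and result set
        // even if the query throws
        try (Connection conn = ds.getConnection();
             PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM customers");
             ResultSet rs = stmt.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}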
You're catching the JNDI exception upon lookup of the nonexistent datasource, but your singleton still keeps the reference to the previously looked-up datasource. As A.B. Cade says, null the reference to ds upon exception, or even before that.
On a more general note, perhaps using Singleton is not the best idea.

Store database connection as separate Class - Java

Is it possible to store a database connection as a separate class, and then call the database objects from the main code? i.e.:
public class main {
    public static void main(String[] args) {
        try {
            Class.forName("com.jdbc.driver");
            Database to = new Database(1, "SERVER1", "DATABASE");
            Database from = new Database(2, "SERVER2", "DATABASE");
            String QueryStr = String.format("SELECT * FROM TABLE WHERE Id = %i", to.id);
            to.results = sql.executeQuery(QueryStr);
            while (to.results.next()) {
                String QueryStr = String.format("INSERT INTO Table (A,B) VALUES (%s,%s)", to.results.getString(1), to.results.getString(2));
                from.sql.executeQuery("QueryStr");
            }
            to.connection.close();
            from.connection.close();
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (to.connection != null)
                try {
                    to.connection.close();
                } catch (SQLException x) {
                }
            if (from.connection != null)
                try {
                    from.connection.close();
                } catch (SQLException x) {
                }
        }
    }

    public static class Database {
        public int id;
        public String server;
        public String database;
        public Connection connection;
        public ResultSet results;
        public Statement sql;

        public Database(int _id, String _server, String _database) {
            id = _id;
            server = _server;
            database = _database;
            String connectStr = String.format("jdbc:driver://SERVER=%s;port=6322;DATABASE=%s", server, database);
            connection = DriverManager.getConnection(connectStr);
            sql = connection.createStatement();
        }
    }
}
I keep getting a "Connection object is closed" error when I call to.results = sql.executeQuery("SELECT * FROM TABLE"); as if the connection closes as soon as the Database is done initializing.
The reason I ask is that I have multiple databases, all roughly the same, that I am dumping into a master database. I thought it would be nice to set up a loop to go through each from database and insert into each to database using the same class. Is this not possible? Database will also contain more methods than shown. I am pretty new to Java, so hopefully this makes sense...
Also, my code is probably riddled with syntax errors as is, so try not to focus on that.
Connection object is closed doesn't mean that the connection is closed, but that the object relative to the connection is closed (it could be a Statement or a ResultSet).
It's difficult to see from your example, since it has been trimmed/re-arranged, but it looks like you may be trying to use a ResultSet after having re-used its corresponding Statement. See the documentation:
By default, only one ResultSet object per Statement object can be open
at the same time. Therefore, if the reading of one ResultSet object is
interleaved with the reading of another, each must have been generated
by different Statement objects. All execution methods in the Statement
interface implicitly close a statement's current ResultSet object if an
open one exists.
In your example, it may be because autoCommit is set to true by default. You can override this on the java.sql.Connection class. Better yet is to use a transaction framework if you're updating multiple tables.
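As an illustration of the Statement/ResultSet point above, here is a sketch of the copy loop that keeps the read side on its own statement and uses a PreparedStatement for the inserts (the table and column names are placeholders taken from the question, and the connections are assumed to be the two Database objects' connections):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CopyExample {
    static void copy(Connection readConn, Connection writeConn, int id) throws SQLException {
        try (PreparedStatement readStmt = readConn.prepareStatement("SELECT A, B FROM TABLE WHERE Id = ?");
             PreparedStatement insert = writeConn.prepareStatement("INSERT INTO Table (A, B) VALUES (?, ?)")) {
            readStmt.setInt(1, id);
            try (ResultSet rs = readStmt.executeQuery()) {
                while (rs.next()) {
                    // the insert runs on its own statement against the other connection,
                    // so it does not close the ResultSet being read
                    insert.setString(1, rs.getString(1));
                    insert.setString(2, rs.getString(2));
                    insert.executeUpdate();
                }
            }
        }
    }
}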
