Having a very weird issue here. I have a set of environments built for dev/test/qa/prod. Each connects to a different database, one corresponding to each environment. When I run the code in dev, everything is great; when I run it in any other environment, I get:
Factory method 'sessionFactory' threw exception; nested exception is org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: missing table [dbo.Cause_Code]
I should point out that for test/qa/prod, the code is deployed to servers and WORKS with no errors, but running it locally gives me that missing-table error. That table definitely exists, and the dev/test/qa/prod databases are in the exact same schema state.
The table is present in the QA database, and the IDE's database view can see it.
Adding schema="dbo" to table annotation does nothing, changing hibernate.hbm2ddl.auto to none allows it to build, but all calls fail for the same reason.
I should note that I have verified it is not the environments I built: dev works, but when I pointed that environment's database connection at QA, it failed. I also tried adding the data source directly in the Hibernate persistence window, and that did nothing.
Can someone tell me why the IDE thinks the table doesn't exist, when the deployed code works great and the table definitely exists? I have to be missing a simple setting somewhere.
Well, I got tired of trying different things over the last 6 hours and re-installed the IDE. I imported the project, which had the shared environment configs already set up, made no changes, and just ran the QA build; it worked on the first try. I guess something was messed up in the install, though I have no idea why it affected every environment but dev. I tried everything from restarting the entire computer to disabling everything to creating an empty project, and only the re-install fixed it.
I'm hoping someone can help me figure this one out:
I have two projects on my Eclipse (Spring Tool Suite 3 to be exact) setup:
Our own project with our source code.
Another project from a provider, which our project references as a dependency.
We execute the application locally using a Tomcat v8.5 server.
It was all working fine until recently, when I performed a git pull to update my local code and messed up everything (I'm not sure if I changed something else).
Now, when I try to execute the application, I get the following error (everything compiles and builds correctly):
java.lang.IllegalArgumentException: The servlets named [A] and [A] are both mapped to the url-pattern [/XXXX] which is not permitted.
I didn't make a typo. The error message mentions the same servlet twice, treating it as if it was two different servlets that use the same url-pattern.
Searching for the servlet, I can only find it in a JAR that's downloaded into the local .m2 repository.
I mean, this isn't our servlet, it comes from the provider libraries.
I've seen the other answers to this problem, but those don't work here because:
1) I don't have two servlets stepping on each other. There's only one.
2) I can't check whether the servlet is defined both in web.xml and in an annotation, because it isn't ours; it works for my colleagues, so it should be correct.
3) As mentioned before, this servlet is loaded from a dependency, so I can't change anything to try and understand what's happening.
Do you have any idea of what I may have wrong on my setup?
It works correctly for my colleagues, so it isn't a problem with the code.
I've deleted and set up everything from scratch (except deleting the Tomcat server), and cleaned and updated the project several times, but I can't get rid of this.
My last attempt was purging and updating the local .m2 repository, but that didn't work either.
Any tips or ideas are much welcomed.
I think you are declaring the servlet mapping both in web.xml and in an annotation; if I remember correctly, some Tomcat versions allow that.
You said it works for your colleagues, so check whether you all have the same Tomcat version.
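For illustration, this is the kind of double declaration I mean; the class and package names below are made up, only the url-pattern comes from your error:

    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;

    // Hypothetical provider servlet carrying its own annotation-based mapping.
    @WebServlet(name = "A", urlPatterns = "/XXXX")
    public class A extends HttpServlet {
    }

    // If the WAR's web.xml declares the same servlet and mapping again, e.g.
    //
    //   <servlet>
    //     <servlet-name>A</servlet-name>
    //     <servlet-class>com.provider.A</servlet-class>
    //   </servlet>
    //   <servlet-mapping>
    //     <servlet-name>A</servlet-name>
    //     <url-pattern>/XXXX</url-pattern>
    //   </servlet-mapping>
    //
    // then, depending on the Tomcat version (and on whether the provider JAR
    // ends up on the deployment more than once and gets scanned twice), startup
    // can fail with exactly the error "The servlets named [A] and [A] are both
    // mapped to the url-pattern [/XXXX]".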
I recently had to do a major refactor of an older code base at work. It involved changing a lot of objects, variable names, and where things were stored and retrieved. We're building two EJB3 projects with Maven and deploying them to a GlassFish 4 instance.
I'd like to know if it's possible to test for named query validation at build time instead of deploy time. I've spent the last few hours deploying, it failing due to a bad named query, fixing it, redeploying, rinse and repeat. It's getting on my nerves.
Named query errors can be caught before deploying using the NetBeans IDE.
For every named query inside the entity beans, if it is malformed, NetBeans shows a warning message on the line containing the named query.
One does have to visit each entity bean individually, but this practice still saves a lot of time over deploying and then finding an error.
I personally find it very convenient.
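As a complement to the IDE warnings, the same check can be pushed into the build with a plain unit test. This is only a sketch, and the persistence-unit name "myTestPU" is an assumption; it would have to exist in a test persistence.xml with a reachable (or in-memory) database:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.junit.Test;

    // Bootstrapping the JPA provider forces it to parse the named queries, so a
    // malformed @NamedQuery typically fails this test at build time instead of
    // at deploy time (exact behavior depends on the provider and its settings).
    public class NamedQueryValidationTest {

        @Test
        public void namedQueriesCompile() {
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("myTestPU");
            emf.close();
        }
    }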
I'm coming from a .Net background cutting my teeth on a Java project that is using Maven, Spring and Liquibase. Needless to say, this is a new bag of concepts and frameworks to deal with.
Tests won't complete:
My tests won't complete successfully because they fail when attempting to access a table within my database; that table doesn't exist. I see that I have many migration files in Liquibase XML format within my project, but I'm trying to work out how to run them.
liquibase-maven-plugin not an option:
I see that others might use the liquibase-maven-plugin, but in my case the project does not reference that plugin in any of the pom.xml files, only liquibase-core. A handful of other developers who knew what they were doing worked on this project in the past; given that they never referenced this plugin in the pom.xml, I assume it was for a good reason, and I won't be stirring that pot.
SpringLiquibase?
They have a reference to a bean that looks like this: <bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">, which after further research appears to perform automatic data migrations,
GREAT!
....but how do I go about invoking it? Must my project already pass its tests and actually be "run" before this logic gets hit? If that is the case and my project must successfully build and test first, then I apparently have to run my migrations outside of this SpringLiquibase bean.
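For reference, my understanding is that the bean above roughly corresponds to a Java-config declaration like the following (my own sketch; the changelog path is a guess on my part):

    import javax.sql.DataSource;

    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LiquibaseConfig {

        // Runs the Liquibase update against the injected DataSource whenever the
        // Spring context starts; the changelog location is a hypothetical path.
        @Bean
        public SpringLiquibase liquibase(DataSource dataSource) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDataSource(dataSource);
            liquibase.setChangeLog("classpath:db/changelog/db.changelog-master.xml");
            return liquibase;
        }
    }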
Should I be using the liquibase command line and if so, can I safely assume this is what the previous developers were doing to initially establish their database?
You are right that the SpringLiquibase setup should do the database update automatically, but it will only do it when the Spring framework is started.
My guess is that your normal application startup fires Liquibase through Spring but the test framework does not. Perhaps the previous developers never noticed because they would make the database change in the Liquibase changelog files, start the normal application for initial testing (which updated the database), and then build and run the tests. Now that you are running the tests first, the database is not there yet.
Are you able to tell if your tests are trying to start Spring?
Even in cases where an application is using SpringLiquibase, I usually recommend configuring your project to also allow manual updates using the liquibase-maven-plugin, the Ant plugin, or the command line, because it tends to make for a more efficient process. With that setup, you can add changesets and then run liquibase update without going through an entire application startup or even running your tests. You could set it to run automatically on test execution, but the update process is usually infrequent enough that it is better to avoid the Liquibase update overhead on every test run. It is still very helpful to keep SpringLiquibase in your application's Spring setup so that in QA and production you don't have to remember to manually update the database; it is just kept up to date automatically.
We have been using Tomcat 7.0.19 successfully in embedded mode. Recently, due to some fixes in our area of concern, we decided to move to Tomcat 7.0.32. Most things work as expected with the same code and the newer version, but for some reason the WAR deployment hasn't worked well. I have a couple of servlets registered with my Tomcat. I'm facing the two issues below.
1. Has something changed in embedded Tomcat behavior from 7.0.19 to 7.0.32? To explain the difference: with 7.0.19, I could deploy my application, and when I hit "host:port/contextpath" it loaded the application's start page (i.e. the welcome page; this page is UI-centric and does not need server intervention, so none of my servlets get called). With 7.0.32, the same URL results in my servlet being called.
2. To debug the problem, I commented out most of my code so that I have a vanilla embedded Tomcat implementation, just the very basic stuff: setting the engine name and default host, setting host properties, adding a connector (NIO, with default properties), and deploying a WAR. No servlets and nothing else, just to check whether the very basic stuff works. To my surprise, when I ran this code it still failed with the same problem inside my servlet. How did that happen? Now that my code is commented out, it does not register any servlets, so where does the servlet come from? Does embedded Tomcat store some old references that are not getting cleaned up on subsequent runs? I tried changing the port, but that didn't help either.
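To give an idea of what that vanilla setup looks like, here is a stripped-down sketch (names, ports, and paths are placeholders, not my actual values):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedMain {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setBaseDir("build/embedded-tomcat");      // work directory (placeholder)
            tomcat.getEngine().setName("MyEngine");
            tomcat.getEngine().setDefaultHost("localhost");
            tomcat.getHost().setName("localhost");

            // NIO connector with otherwise default properties.
            Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
            connector.setPort(8080);
            tomcat.getService().addConnector(connector);

            // Deploy the WAR; no servlets are registered programmatically here.
            tomcat.addWebapp("/contextpath", "/path/to/myapp.war");

            tomcat.start();
            tomcat.getServer().await();
        }
    }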
I am hitting a wall here and cannot understand this weird behavior; if I figure out #2, only then can I make some progress on #1.
Thanks in advance,
Vikram
Figured out what the problems were.
In reverse order,
2 - This actually was weird behavior: the vanilla embedded Tomcat code was also invoking servlets that were never registered in the first place. The problem here was with Eclipse; for some reason it picked up an old reference to my class. The moment I ran the same code outside Eclipse, i.e. via the command prompt, things were back to normal.
1 - This problem was related to web deployment. In my code I was additionally setting my class loader into the WebappLoader and then adding my application JARs to it. For whatever reason this worked fine with 7.0.19 but did not with 7.0.32; the moment I externalized all my JARs to be loaded at application startup via the classpath, this problem too was resolved.
Thanks,
Vicky
First, let me explain briefly how my application works:
The application handles deals, which are stored as XML documents in our database (Oracle 11g). The table that contains this information is defined like this:
create table T_MYDEALS (
    DEAL_ID  number(9, 0) not null,
    DEAL_XML xmltype
)
When we update or insert items in this table, a trigger reads this XML (using XPath) and populates some other metadata tables.
Everything works fine, except on my machine.
Now the problem
When I run the application on my machine (i.e. Tomcat running within my Eclipse, but connected to the Homologation DB), the trigger fails with the following error:
WARN [org.hibernate.util.JDBCExceptionReporter] SQL Error: 1722, SQLState: 42000
ERROR [org.hibernate.util.JDBCExceptionReporter] ORA-01722: invalid number
ORA-06512: at "MY_SCHEMA.AFTER_R_INSERT_MYDEAL", line 628
ORA-04088: error during execution of trigger 'MY_SCHEMA.AFTER_R_INSERT_MYDEAL'
I'm sorry, I can't post my trigger here, for security reasons. Just note that line 628 is at the end of the trigger code.
My tests
So I tried to understand why this error happens on my machine (it only happens on mine; none of my colleagues encounters this issue). I can't say when it stopped working, maybe when I changed my computer recently?
First, I checked the source code, then my DB connection, but everything seemed correct.
I also ran in debug mode to have a look at the XML sent to the DB, or at least at our HibernateXMLType (an extension of org.hibernate.usertype.UserType, used to transform our XML into data readable by Oracle). But I found nothing wrong there either.
I'll spare you the many tests done on my side, but one of my latest attempts was to set up a fresh Tomcat server and deploy a WAR that is already deployed in an environment (Homologation, for example).
Then I executed the same tests, but the trigger still failed.
So far, I have eliminated the following suspects:
The source code, as I also tested the application from a WAR that is deployed on an environment and works correctly;
The DB itself, as I am connected to the same DB as the working environment; I also tried with another DB, with the same result;
The data used for my tests, as saving the same deal works when done through the Homologation environment;
The JDK, as I also swapped it for a new one;
Eclipse, as my latest tests were done outside Eclipse;
The Tomcat server, as I also tried on a new Tomcat.
What I am wondering is whether my Windows XP environment has some specific encoding configuration that "transforms" some data within the XML and makes it invalid when the trigger processes it.
My questions
What are the possible elements that I may have forgotten in my tests?
Is there a way to see exactly which XML is processed by the Oracle trigger (ideally without installing anything on the Oracle instance, as I don't have any control over it)?
I know that I don't provide a lot of information, but if you can give me some hints, or ideas, I would be really grateful!
Regards.
Technical information: Java 1.6, Oracle 11g, Tomcat 5.5.23, JSF 1.2, Hibernate 3.3
Could it be localization? On your system you may be populating the XML with "1,5" instead of "1.5", for example; the error reported by Hibernate clearly points in that direction. You could disable the trigger and see what the resulting xmltype in that table looks like. And if you or one of your colleagues can access the database through an SQL Developer-like client, you could try running the code from the trigger "manually".
See here for info and possible actions when you encounter ORA-01722.
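A quick way to check that hypothesis on your workstation is a tiny program like this (a sketch only; it just shows how the default locale changes number formatting):

    import java.text.NumberFormat;
    import java.util.Locale;

    public class LocaleCheck {
        public static void main(String[] args) {
            double value = 1.5;
            System.out.println("Default locale : " + Locale.getDefault());
            // Prints "1,5" on e.g. French/German locales, "1.5" on English ones.
            System.out.println("Default format : " + NumberFormat.getInstance().format(value));
            // Locale-independent reference output, always "1.5".
            System.out.println("ROOT format    : " + NumberFormat.getInstance(Locale.ROOT).format(value));
            // String.format is locale-sensitive too.
            System.out.println("String.format  : " + String.format("%.1f", value));
        }
    }

If the output differs between your machine and your colleagues' machines, forcing the locale (or formatting numbers explicitly before building the XML) would be the next thing to try.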