In my current application I use Hibernate Search to index and search data. It works fine. But when building a cluster of server instances I do not want to set up master/slave clustering with JMS or JGroups.
So I am trying to integrate Hibernate Search with Apache Solr. I followed this example,
and made some minor changes to stay compatible with the newer Lucene version.
public class HibernateSearchSolrWorkerBackend implements BackendQueueProcessor {
private static final String ID_FIELD_NAME = "id";
private static final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
private static final ReentrantReadWriteLock.WriteLock writeLock = readWriteLock.writeLock();
private ConcurrentUpdateSolrClient solrServer;
@Override
public void initialize(Properties properties, WorkerBuildContext workerBuildContext, DirectoryBasedIndexManager directoryBasedIndexManager) {
solrServer = new ConcurrentUpdateSolrClient("http://localhost:8983/solr/test", 20, 4);
}
@Override
public void close() {
}
@Override
public void applyWork(List<LuceneWork> luceneWorks, IndexingMonitor indexingMonitor) {
List<SolrInputDocument> solrWorks = new ArrayList<>(luceneWorks.size());
List<String> documentsForDeletion = new ArrayList<>();
for (LuceneWork work : luceneWorks) {
if (work instanceof AddLuceneWork) {
SolrInputDocument solrWork = new SolrInputDocument();
handleAddLuceneWork((AddLuceneWork) work, solrWork);
solrWorks.add(solrWork);
} else if (work instanceof UpdateLuceneWork) {
SolrInputDocument solrWork = new SolrInputDocument();
handleUpdateLuceneWork((UpdateLuceneWork) work, solrWork);
solrWorks.add(solrWork);
} else if (work instanceof DeleteLuceneWork) {
// deletions are collected separately; adding an empty SolrInputDocument for them would index junk
documentsForDeletion.add(((DeleteLuceneWork) work).getIdInString());
} else {
throw new RuntimeException("Encountered unsupported lucene work " + work);
}
}
try {
deleteDocs(documentsForDeletion);
if (!solrWorks.isEmpty()) {
solrServer.add(solrWorks);
}
softCommit();
} catch (SolrServerException | IOException e) {
throw new RuntimeException("Failed to update solr", e);
}
}
@Override
public void applyStreamWork(LuceneWork luceneWork, IndexingMonitor indexingMonitor) {
throw new RuntimeException("HibernateSearchSolrWorkerBackend.applyStreamWork isn't implemented");
}
@Override
public Lock getExclusiveWriteLock() {
return writeLock;
}
@Override
public void indexMappingChanged() {
}
private void deleteDocs(Collection<String> collection) throws IOException, SolrServerException {
if (!collection.isEmpty()) {
StringBuilder stringBuilder = new StringBuilder(collection.size() * 10);
stringBuilder.append(ID_FIELD_NAME).append(":(");
boolean first = true;
for (String id : collection) {
if (!first) {
// separate ids with a space: terms inside the parentheses are OR'ed by default, while a comma would end up inside the term
stringBuilder.append(' ');
} else {
first = false;
}
stringBuilder.append(id);
}
stringBuilder.append(')');
solrServer.deleteByQuery(stringBuilder.toString());
}
}
private void copyFields(Document document, SolrInputDocument solrInputDocument) {
boolean addedId = false;
for (IndexableField fieldable : document.getFields()) {
if (fieldable.name().equals(ID_FIELD_NAME)) {
if (addedId)
continue;
else
addedId = true;
}
solrInputDocument.addField(fieldable.name(), fieldable.stringValue());
}
}
private void handleAddLuceneWork(AddLuceneWork luceneWork, SolrInputDocument solrWork) {
copyFields(luceneWork.getDocument(), solrWork);
}
private void handleUpdateLuceneWork(UpdateLuceneWork luceneWork, SolrInputDocument solrWork) {
copyFields(luceneWork.getDocument(), solrWork);
}
private void softCommit() throws IOException, SolrServerException {
UpdateRequest updateRequest = new UpdateRequest();
// "soft-commit" is not a parameter Solr understands; request a soft commit via the action flags instead
updateRequest.setAction(UpdateRequest.ACTION.COMMIT, false, false, true);
updateRequest.process(solrServer);
}
}
And set the Hibernate properties as follows:
<persistence-unit name="JPAUnit">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<class>search.domain.Book</class>
<properties>
<property name="hibernate.search.default.directory_provider" value="filesystem"/>
<property name="hibernate.search.default.worker.backend" value="search.adapter.HibernateSearchSolrWorkerBackend"/>
</properties>
</persistence-unit>
And tried to index documents using the following test method:
@Test
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Rollback(false)
public void saveBooks() {
Book bk1 = new Book(1L, "book1", "book1 description", 100.0);
Book bk2 = new Book(2L, "book2", "book2 description", 100.0);
bookRepository.save(bk1);
bookRepository.save(bk2);
}
This saves the records to the DB. If I remove
<property name="hibernate.search.default.worker.backend" value="search.adapter.HibernateSearchSolrWorkerBackend"/>
and instead give Hibernate Search an index location in the configuration file, it creates the index properly and searches succeed. But when I plug in the custom Apache Solr worker backend, no index is created within the Apache Solr core's data folder.
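To narrow down where it fails, one first step is temporary logging inside the backend: if initialize() never prints, the worker.backend property is not being picked up at all; if applyWork() never prints, no indexing work ever reaches the backend. A throwaway sketch of the two methods above:
@Override
public void initialize(Properties properties, WorkerBuildContext workerBuildContext, DirectoryBasedIndexManager directoryBasedIndexManager) {
System.out.println("Solr worker backend initialize() called"); // never printed => the property is not applied
solrServer = new ConcurrentUpdateSolrClient("http://localhost:8983/solr/test", 20, 4);
}
@Override
public void applyWork(List<LuceneWork> luceneWorks, IndexingMonitor indexingMonitor) {
System.out.println("applyWork() called with " + luceneWorks.size() + " work items"); // never printed => no work reaches the backend
// ... rest of the method unchanged
}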
I created a Spring Boot project.
I use Spring Data with Elasticsearch.
The whole pipeline controller -> service -> repository is ready.
I now have a file that represents country objects (name and isoCode), and I want to create a job to insert them all into Elasticsearch.
I read the Spring documentation and found that there's too much configuration for such a simple job.
So I'm trying to write a simple main "job" that reads a CSV, creates the objects, and inserts them into Elasticsearch.
But I have a bit of trouble understanding how injection would work in this case:
@Component
public class InsertCountriesJob {
private static final String file = "D:path\\to\\countries.dat";
private static final Logger LOG = LoggerFactory.getLogger(InsertCountriesJob.class);
@Autowired
public CountryService service;
public static void main(String[] args) {
LOG.info("Starting insert countries job");
try {
saveCountries();
} catch (Exception e) {
e.printStackTrace();
}
}
public static void saveCountries() throws Exception {
try (CSVReader csvReader = new CSVReader(new FileReader(file))) {
String[] values = null;
while ((values = csvReader.readNext()) != null) {
String name = values[0];
String iso = values[1].equals("N") ? values[2] : values[1];
Country country = new Country(iso, name);
LOG.info("info: country: {}", country);
//write in db;
//service.save(country); <= can't do this because of the injection
}
}
}
}
Based on Simon's comment, here's how I resolved my problem. It might help people who are getting into Spring and trying not to get lost.
Basically, to inject anything in Spring you need a running Spring context, so the simplest fix is to turn the job into a Spring Boot application that implements CommandLineRunner:
@SpringBootApplication
public class InsertCountriesJob implements CommandLineRunner {
private static final String file = "D:path\\to\\countries.dat";
private static final Logger LOG = LoggerFactory.getLogger(InsertCountriesJob.class);
@Autowired
public CountryService service;
public static void main(String[] args) {
LOG.info("STARTING THE APPLICATION");
SpringApplication.run(InsertCountriesJob.class, args);
LOG.info("APPLICATION FINISHED");
}
@Override
public void run(String... args) throws Exception {
LOG.info("Starting insert countries job");
try {
saveCountry();
} catch (Exception e) {
e.printStackTrace();
}
LOG.info("job over");
}
public void saveCountry() throws Exception {
try (CSVReader csvReader = new CSVReader(new FileReader(file))) {
String[] values = null;
while ((values = csvReader.readNext()) != null) {
String name = values[0];
String iso = values[1].equals("N") ? values[2] : values[1];
Country country = new Country(iso, name);
LOG.info("info: country: {}", country);
//write in db;
service.save(country);
}
}
}
}
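For reference, a minimal sketch of what the injected service side might look like (CountryService/CountryRepository are assumed names based on the question, not actual code):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class CountryService {
@Autowired
private CountryRepository repository; // a Spring Data Elasticsearch repository, e.g. extends ElasticsearchRepository<Country, String>
public Country save(Country country) {
return repository.save(country); // indexes the document in Elasticsearch
}
}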
I have some old Play Framework 2.2 Java webservices that interact with Akka, and now I need to port them to Play Framework 2.3.
However, async has been deprecated, and even after reading the docs on porting async code (http://www.playframework.com/documentation/2.3.x/JavaAsync) I wasn't able to understand how to apply it to my case (code below):
I must wait for the Akka server reply (or a timeout) before starting to build my reply (ok()), but without blocking the thread.
I should make the actor selection async too.
I should make the parsing of the Akka server reply and the construction of the response async too.
I looked around and wasn't able to find an example of such an interaction, even in the Typesafe templates.
How could I do that?
/* playframework 2.2 code */
public class Resolve extends Controller {
private final static String RESOLVER_ACTOR = play.Play.application().configuration().getString("actor.resolve");
@CorsRest
@VerboseRest
@RequireAuthentication
@BodyParser.Of(BodyParser.Json.class)
public static Result getJsonTree() {
JsonNode json = request().body().asJson();
ProtoBufMessages.ResolveRequest msg;
ResolveRequestInput input;
try {
input = new ResolveRequestInput(json);
} catch (rest.exceptions.MalformedInputException mie) {
return badRequest(mie.getMessage());
}
msg = ((ProtoBufMessages.ResolveRequest)input.getMessage());
ActorSelection resolver = Akka.system().actorSelection(RESOLVER_ACTOR);
Timeout tim = new Timeout(Duration.create(4, "seconds"));
Future<Object> fut = Patterns.ask(resolver, input.getMessage(), tim);
return async (
F.Promise.wrap(fut).map(
new F.Function<Object, Result>() {
public Result apply(Object response) {
ProtoBufMessages.ResolveReply rsp = ((ProtoBufMessages.ResolveReply)response);
ResolveOutput output = new ResolveOutput(rsp);
return ok(output.getJsonReply());
}
}
)
);
}
}
I came up with the code below:
public class Resolve extends Controller {
private final static String RESOLVER_ACTOR = play.Play.application().configuration().getString("actor.resolve");
private final static BrainProtoMessages.ResolveReply request_error = BrainProtoMessages.ResolveReply.newBuilder()
.setReturnCode(BResults.REQUEST_FAILED)
.build();
@CorsRest
@VerboseRest
@RequireAuthentication
@BodyParser.Of(BodyParser.Json.class)
public static Result resolve_map() {
final ResolveRequestInput input;
final F.Promise<ActorSelection> selected_target;
final F.Promise<Future<Object>> backend_request;
final F.Promise<BrainProtoMessages.ResolveReply> backend_reply;
final F.Promise<ObjectNode> decode_json;
final F.Promise<Result> ok_result;
final JsonNode json = request().body().asJson();
try {
input = new ResolveRequestInput(json);
} catch (rest.exceptions.MalformedInputException mie) {
return badRequest(mie.getMessage());
}
selected_target = F.Promise.promise(
new F.Function0<ActorSelection>() {
@Override
public ActorSelection apply() throws Throwable {
return Akka.system().actorSelection(RESOLVER_ACTOR);
}
}
);
backend_request =
selected_target.map(
new F.Function<ActorSelection, Future<Object>>() {
@Override
public Future<Object> apply(ActorSelection actorSelection) throws Throwable {
return Patterns.ask(actorSelection, input.getMessage(),new Timeout(Duration.create(4, "seconds")));
}
}
);
backend_reply = backend_request.map(
new F.Function<Future<Object>, BrainProtoMessages.ResolveReply>() {
@Override
public BrainProtoMessages.ResolveReply apply(Future<Object> akka_reply) throws Throwable {
try {
return (BrainProtoMessages.ResolveReply) Await.result(akka_reply, Duration.create(4, "seconds"));
}catch(Exception error)
{
return request_error;
}
}
}
);
decode_json = backend_reply.map(
new F.Function<BrainProtoMessages.ResolveReply, ObjectNode>() {
@Override
public ObjectNode apply(BrainProtoMessages.ResolveReply response) throws Throwable {
return new ResolveOutput(response).getJsonReply();
}
}
);
ok_result = decode_json.map(
new F.Function<ObjectNode, Result>() {
@Override
public Result apply(ObjectNode reply) {
return ok(reply);
}
}
);
try {
return ok_result.get(8000);
}catch(Exception error)
{
return internalServerError();
}
}
}
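This compiles, but I suspect it still blocks: Await.result inside the map ties up a thread, and so does ok_result.get(8000). From the migration guide, I believe the intended 2.3 shape is to change the action's return type to F.Promise<Result> and return the mapped promise directly, roughly like this (untested sketch of the original action):
public static F.Promise<Result> getJsonTree() {
JsonNode json = request().body().asJson();
final ResolveRequestInput input;
try {
input = new ResolveRequestInput(json);
} catch (rest.exceptions.MalformedInputException mie) {
return F.Promise.<Result>pure(badRequest(mie.getMessage()));
}
ActorSelection resolver = Akka.system().actorSelection(RESOLVER_ACTOR);
Timeout tim = new Timeout(Duration.create(4, "seconds"));
Future<Object> fut = Patterns.ask(resolver, input.getMessage(), tim);
// Play subscribes to the returned promise itself; no async() wrapper and no blocking get()
return F.Promise.wrap(fut).map(
new F.Function<Object, Result>() {
public Result apply(Object response) {
ProtoBufMessages.ResolveReply rsp = (ProtoBufMessages.ResolveReply) response;
ResolveOutput output = new ResolveOutput(rsp);
return ok(output.getJsonReply());
}
}
);
}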
I have created a very basic producer example using OData4J (JPAProducer).
I can see the schema info in the browser as expected.
However, the Excel 2013 data connection wizard shows an error.
I tried Excel with http://services.odata.org/V3/OData/OData.svc/ and it worked fine.
What am I doing wrong?
Any help would be greatly appreciated.
public static JPAProducer createProducer() {
Map<String, Object> properties = new HashMap<>();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, DatabaseUtil.getDataSource());
javax.persistence.EntityManagerFactory factory = Persistence.createEntityManagerFactory("persistentunit",
properties);
JPAProducer producer = new JPAProducer(factory, "", 50);
return producer;
}
public static void startService() {
DefaultODataProducerProvider.setInstance(createProducer());
hostODataServer("http://localhost:8887/JPAProducerExample.svc/");
}
public static void main(String[] args) {
startService();
}
private static void hostODataServer(String baseUri) {
ODataServer server = null;
try {
server = startODataServer(baseUri);
} catch (Exception e) {
System.out.print(e.getLocalizedMessage());
}
}
private static ODataServer startODataServer(String baseUri) {
return createODataServer(baseUri).start();
}
private static ODataServer createODataServer(String baseUri) {
return new ODataJerseyServer(baseUri, ODataApplication.class, RootApplication.class);
}
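To compare against the working service, one thing I can do is fetch the documents Excel requests first (the service document and $metadata) from both services and diff them. A throwaway sketch (URL taken from startService() above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
public class MetadataCheck {
public static void main(String[] args) throws Exception {
URL url = new URL("http://localhost:8887/JPAProducerExample.svc/$metadata");
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestProperty("Accept", "application/xml");
// print the raw EDMX so it can be diffed against the odata.org service's metadata
try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
String line;
while ((line = reader.readLine()) != null) {
System.out.println(line);
}
}
}
}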
I want to have my persistence.xml in conf folder of my app. How can I tell Persistence.createEntityManagerFactory that it should read it from there?
If you are using EclipseLink you can set the persistence.xml location with the persistence unit property, "eclipselink.persistencexml".
properties.put("eclipselink.persistencexml", "/org/acme/acme-persistence.xml");
EntityManagerFactory factory = Persistence.createEntityManagerFactory("acme", properties);
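Note that, as far as I can tell, the value is resolved as a classpath resource through the classloader, not as a filesystem path; that is why the variants further down combine this property with a context classloader override when the file lives outside the classpath.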
This solution worked for me:
Thread.currentThread().setContextClassLoader(new ClassLoader() {
@Override
public Enumeration<URL> getResources(String name) throws IOException {
if (name.equals("META-INF/persistence.xml")) {
return Collections.enumeration(Arrays.asList(new File("conf/persistence.xml")
.toURI().toURL()));
}
return super.getResources(name);
}
});
Persistence.createEntityManagerFactory("test");
The createEntityManagerFactory methods search for persistence.xml files within the META-INF directory of any CLASSPATH element.
If your CLASSPATH contains the conf directory, you can place an EntityManagerFactory definition in conf/META-INF/persistence.xml.
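For example (names hypothetical), with the layout below no code changes are needed; the unit is picked up as long as conf itself is on the classpath:
conf/
  META-INF/
    persistence.xml
java -cp conf:myapp.jar com.example.Main
(On Windows, use ; instead of : as the classpath separator.)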
The ClassLoader may be a URLClassLoader, so try it this way:
final URL alternativePersistenceXmlUrl = new File("conf/persistence.xml").toURI().toURL();
ClassLoader output;
ClassLoader current = Thread.currentThread().getContextClassLoader();
try{
URLClassLoader parent = (URLClassLoader)current;
output = new URLClassLoader(parent.getURLs(), parent){
@Override
public Enumeration<URL> getResources(String name) throws IOException {
if (name.equals("META-INF/persistence.xml")) {
return Collections.enumeration(Arrays.asList(alternativePersistenceXmlUrl));
}
return super.getResources(name);
}
};
}catch(ClassCastException ignored) {
output = new ClassLoader() {
@Override
public Enumeration<URL> getResources(String name) throws IOException {
if (name.equals("META-INF/persistence.xml")) {
return Collections.enumeration(Arrays.asList(alternativePersistenceXmlUrl));
}
return super.getResources(name);
}
};
}
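Then install it before bootstrapping and restore the original afterwards (mirroring the earlier snippet; "test" is whatever unit name your persistence.xml declares):
Thread.currentThread().setContextClassLoader(output);
try {
EntityManagerFactory factory = Persistence.createEntityManagerFactory("test");
} finally {
Thread.currentThread().setContextClassLoader(current); // put the original loader back
}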
It should work; it works for me under certain test conditions.
Please note this is a hack and should not be used in production.
My solution is for EclipseLink 2.7.0 and Java 9, and it is a modified and more detailed version of @Evgeniy Dorofeev's answer.
In org.eclipse.persistence.internal.jpa.deployment.PersistenceUnitProcessor, at line 236, we see the following code:
URL puRootUrl = computePURootURL(descUrl, descriptorPath);
This code is used by EclipseLink to compute the root URL of the persistence.xml path. That's very important, because the final path is made by appending descriptorPath to puRootUrl.
So let's suppose we have the file at /home/Smith/program/some-folder/persistence.xml. Then we have:
Thread currentThread = Thread.currentThread();
ClassLoader previousClassLoader = currentThread.getContextClassLoader();
Thread.currentThread().setContextClassLoader(new ClassLoader(previousClassLoader) {
@Override
public Enumeration<URL> getResources(String name) throws IOException {
if (name.equals("some-folder/persistence.xml")) {
URL url = new File("/home/Smith/program/some-folder/persistence.xml").toURI().toURL();
return Collections.enumeration(Arrays.asList(url));
}
return super.getResources(name);
}
});
Map<String, String> properties = new HashMap<>();
properties.put("eclipselink.persistencexml", "some-folder/persistence.xml");
try {
entityManagerFactory = Persistence.createEntityManagerFactory("unit-name", properties);
} catch (Exception ex) {
logger.error("Error occured creating EMF", ex);
} finally {
currentThread.setContextClassLoader(previousClassLoader);
}
Details:
Pay attention that when creating the new class loader I pass it the previous class loader; otherwise it doesn't work.
We set the property eclipselink.persistencexml. If we don't, the default descriptorPath will be META-INF/persistence.xml, and we would need to keep our persistence.xml at /home/Smith/program/META-INF/persistence.xml for it to be found.
I tried these approaches when the program is starting (at the first line of the main function):
Write your persistence.xml to resources/META-INF/persistence.xml of the jar.
I had a problem with this approach: Java write .txt file in resource folder
Create a META-INF folder next to the jar and put your persistence.xml into it, then execute this command:
jar uf $jarName META-INF/persistence.xml
This command replaces META-INF/persistence.xml in the jar with your file:
private fun persistence() {
val fileName = "META-INF/persistence.xml"
val jarName: String?
val done = try {
jarName = javaClass.protectionDomain.codeSource.location.path
if (File(fileName).exists() && !jarName.isNullOrBlank()
&& jarName.endsWith(".jar") && File(jarName).exists()) {
Command().exec("jar uf $jarName META-INF/persistence.xml", timeoutSec = 30)
true
} else false
} catch (e: Exception) {
false
}
if (done) {
logger.info { "$fileName exists and will be loaded :)" }
} else {
logger.info {
"$fileName does not exist in the current folder, so it will be read from the .jar :(" +
" you can run: jar uf jarName.jar META-INF/persistence.xml"
}
}
}
See also: Running Command Line in Java (for the Command().exec helper used above).
A solution that creates a tweaked PersistenceUnitDescriptor:
import org.hibernate.jpa.boot.internal.ParsedPersistenceXmlDescriptor;
import org.hibernate.jpa.boot.internal.PersistenceXmlParser;
import org.hibernate.jpa.boot.spi.Bootstrap;
import org.hibernate.jpa.boot.spi.EntityManagerFactoryBuilder;
public class HibernateEntityManagerFactoryBuilder {
public static final EntityManagerFactory build(URL xmlUrl) {
final ParsedPersistenceXmlDescriptor xmlDescriptor = PersistenceXmlParser.locateIndividualPersistenceUnit(xmlUrl);
final HibernatePersistenceUnitDescriptor hibernateDescriptor = new HibernatePersistenceUnitDescriptor(xmlDescriptor);
final EntityManagerFactoryBuilder builder = Bootstrap.getEntityManagerFactoryBuilder(hibernateDescriptor, Collections.emptyMap(), (ClassLoader) null);
final EntityManagerFactory factory = builder.build();
return factory;
}
public static final EntityManagerFactory build(URL xmlUrl, final String name) {
final ParsedPersistenceXmlDescriptor xmlDescriptor = PersistenceXmlParser.locateNamedPersistenceUnit(xmlUrl, name);
if(xmlDescriptor == null) throw new RuntimeException("Persistence unit with name '"+name+ "' not found.");
final HibernatePersistenceUnitDescriptor hibernateDescriptor = new HibernatePersistenceUnitDescriptor(xmlDescriptor);
final EntityManagerFactoryBuilder builder = Bootstrap.getEntityManagerFactoryBuilder(hibernateDescriptor, Collections.emptyMap(), (ClassLoader) null);
final EntityManagerFactory factory = builder.build();
return factory;
}
public static void main(String[] args) {
try {
final EntityManagerFactory factory = build(new File("D:/ini/persistence.xml").toURI().toURL());
} catch (Exception e) {e.printStackTrace();}
}
}
public class HibernatePersistenceUnitDescriptor implements PersistenceUnitDescriptor {
private final PersistenceUnitDescriptor descriptor;
public HibernatePersistenceUnitDescriptor(PersistenceUnitDescriptor descriptor) {
this.descriptor = descriptor;
}
@Override
public URL getPersistenceUnitRootUrl() {
// the one deliberate change: returning no root URL keeps the provider from resolving paths relative to the descriptor's location
return null;
}
@Override
public String getName() {
return descriptor.getName();
}
@Override
public String getProviderClassName() {
return descriptor.getProviderClassName();
}
@Override
public boolean isUseQuotedIdentifiers() {
return descriptor.isUseQuotedIdentifiers();
}
@Override
public boolean isExcludeUnlistedClasses() {
return descriptor.isExcludeUnlistedClasses();
}
@Override
public PersistenceUnitTransactionType getTransactionType() {
return descriptor.getTransactionType();
}
@Override
public ValidationMode getValidationMode() {
return descriptor.getValidationMode();
}
@Override
public SharedCacheMode getSharedCacheMode() {
return descriptor.getSharedCacheMode();
}
@Override
public List<String> getManagedClassNames() {
return descriptor.getManagedClassNames();
}
@Override
public List<String> getMappingFileNames() {
return descriptor.getMappingFileNames();
}
@Override
public List<URL> getJarFileUrls() {
return descriptor.getJarFileUrls();
}
@Override
public Object getNonJtaDataSource() {
return descriptor.getNonJtaDataSource();
}
@Override
public Object getJtaDataSource() {
return descriptor.getJtaDataSource();
}
@Override
public Properties getProperties() {
return descriptor.getProperties();
}
@Override
public ClassLoader getClassLoader() {
return descriptor.getClassLoader();
}
@Override
public ClassLoader getTempClassLoader() {
return descriptor.getTempClassLoader();
}
@Override
public void pushClassTransformer(EnhancementContext enhancementContext) {
descriptor.pushClassTransformer(enhancementContext);
}
}
My story file:
Narrative:
In order to document all the business logic requests
As a user
I want to work with documents
Scenario: Basic new document creation
Given a user name Micky Mouse
When new document created
Then the document should named new document
And the document status should be NEW
My code:
public class DocStories extends JUnitStory {
@Override
public Configuration configuration() {
return new MostUsefulConfiguration().useStoryLoader(
new LoadFromClasspath(getClass().getClassLoader()))
.useStoryReporterBuilder(
new StoryReporterBuilder().withFormats(Format.STATS,
Format.HTML, Format.CONSOLE, Format.TXT));
}
@Override
public List<CandidateSteps> candidateSteps() {
return new InstanceStepsFactory(configuration(), new DocSteps())
.createCandidateSteps();
}
@Override
@Test
public void run() throws Throwable {
try {
super.run();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
In the class with my steps:
public class DocSteps {
private final Map<String, User> users = new HashMap<String, User>();
private final DocManager manager = new DocManager();
private User activeUser;
private Document activeDocument;
private boolean approvedResult;
// *****************BEFORE***************//
@BeforeStories
private void initUsers() {
users.put("Micky Mouse", new User("Micky Mouse", UserRole.ANALYST));
users.put("Donald Duck", new User("Donald Duck", UserRole.BCR_LEADER));
System.out.println("Check this out" + users.toString());
}
// **********steps*************//
@Given("a user name $userName")
public void connectUser(String userName) {
// in the real world - it will get the user from the db
System.out.println(userName);
activeUser = new User(userName, UserRole.ANALYST);
// System.out.println(activeDocument.getName());
}
@Given("a new document")
@When("new document created")
public void createDocument() {
activeDocument = new Document();
}
@Given("a document with content")
public void createDocWithContect() {
createDocument();
activeDocument.setContent("this is a document");
}
@Then("the document should named $docName")
@Alias("the document name should be $docName")
public void documentNameShouldBe(String docName) {
Assert.assertEquals(docName, activeDocument.getName());
}
@Then("the document status should be $status")
public void documentStatusShouldBe(String status) {
DocStatus docStatus = DocStatus.valueOf(status);
Assert.assertThat(activeDocument.getStatus(),
Matchers.equalTo(docStatus));
}
// *****************AFTER***************//
@AfterScenario
public void clean() {
activeUser = null;
activeDocument = null;
approvedResult = false;
}
}
The methods annotated with the "before" and "after" stories annotations are not executed.
The enum converter doesn't work either.
What is wrong with my configuration? (I assume it is my configuration.)
The problem is that your method initUsers is private. Just make it public and it will be visible to the JBehave engine:
@BeforeStories
public void initUsers() {
//...
}
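If I remember correctly, JBehave's default ParameterConverters already include an enum converter, so the same visibility fix should take care of the enum conversion as well: once the step method is public, it can take the enum type directly instead of a String (untested sketch):
@Then("the document status should be $status")
public void documentStatusShouldBe(DocStatus status) {
// JBehave converts the captured text to the DocStatus enum before invoking the step
Assert.assertThat(activeDocument.getStatus(), Matchers.equalTo(status));
}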