Excel 2013 error OData4J JPAProducer odata feed - java

I have created a very basic producer example using OData4J (JPAProducer).
I can see the schema information in a browser as expected.
However, the Excel 2013 Data Connection Wizard shows an error.
I tried Excel with http://services.odata.org/V3/OData/OData.svc/ and that worked fine.
What am I doing wrong?
Any help would be greatly appreciated.
public static JPAProducer createProducer() {
    Map<String, Object> properties = new HashMap<>();
    properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, DatabaseUtil.getDataSource());
    javax.persistence.EntityManagerFactory factory =
            Persistence.createEntityManagerFactory("persistentunit", properties);
    // arguments: entity manager factory, EDM namespace, max results per request
    JPAProducer producer = new JPAProducer(factory, "", 50);
    return producer;
}

public static void startService() {
    DefaultODataProducerProvider.setInstance(createProducer());
    hostODataServer("http://localhost:8887/JPAProducerExample.svc/");
}

public static void main(String[] args) {
    startService();
}

private static void hostODataServer(String baseUri) {
    ODataServer server = null;
    try {
        server = startODataServer(baseUri);
    } catch (Exception e) {
        System.out.print(e.getLocalizedMessage());
    }
}

private static ODataServer startODataServer(String baseUri) {
    return createODataServer(baseUri).start();
}

private static ODataServer createODataServer(String baseUri) {
    return new ODataJerseyServer(baseUri, ODataApplication.class, RootApplication.class);
}

Related

Create a simple job spring boot

I created a Spring Boot project.
I use Spring Data with Elasticsearch.
The whole pipeline (controller -> service -> repository) is ready.
I now have a file that represents country objects (name and isoCode) and I want to create a job to insert them all into Elasticsearch.
I read the Spring documentation and found that there's too much configuration for such a simple job.
So I'm trying to write a simple main "job" that reads a CSV, creates objects, and inserts them into Elasticsearch.
But I'm having a bit of trouble understanding how injection would work in this case:
@Component
public class InsertCountriesJob {
    private static final String file = "D:path\\to\\countries.dat";
    private static final Logger LOG = LoggerFactory.getLogger(InsertCountriesJob.class);

    @Autowired
    public CountryService service;

    public static void main(String[] args) {
        LOG.info("Starting insert countries job");
        try {
            saveCountries();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void saveCountries() throws Exception {
        try (CSVReader csvReader = new CSVReader(new FileReader(file))) {
            String[] values = null;
            while ((values = csvReader.readNext()) != null) {
                String name = values[0];
                String iso = values[1].equals("N") ? values[2] : values[1];
                Country country = new Country(iso, name);
                LOG.info("info: country: {}", country);
                // write in db;
                // service.save(country); <= can't do this because of the injection
            }
        }
    }
}
Based on Simon's comment, here's how I resolved my problem. It might help people who are getting into Spring and are trying not to get lost.
Basically, to inject anything in Spring, you'll need a Spring Boot application class (note the @SpringBootApplication annotation below):
@SpringBootApplication
public class InsertCountriesJob implements CommandLineRunner {
    private static final String file = "D:path\\to\\countries.dat";
    private static final Logger LOG = LoggerFactory.getLogger(InsertCountriesJob.class);

    @Autowired
    public CountryService service;

    public static void main(String[] args) {
        LOG.info("STARTING THE APPLICATION");
        SpringApplication.run(InsertCountriesJob.class, args);
        LOG.info("APPLICATION FINISHED");
    }

    @Override
    public void run(String... args) throws Exception {
        LOG.info("Starting insert countries job");
        try {
            saveCountry();
        } catch (Exception e) {
            e.printStackTrace();
        }
        LOG.info("job over");
    }

    public void saveCountry() throws Exception {
        try (CSVReader csvReader = new CSVReader(new FileReader(file))) {
            String[] values = null;
            while ((values = csvReader.readNext()) != null) {
                String name = values[0];
                String iso = values[1].equals("N") ? values[2] : values[1];
                Country country = new Country(iso, name);
                LOG.info("info: country: {}", country);
                // write in db; injection works now because Spring manages this class
                service.save(country);
            }
        }
    }
}
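
For reference, here is roughly what the Country and CountryService used above could look like (a minimal sketch; the Spring Data Elasticsearch annotations, index name, and repository are assumptions, not code from the original):

// Hypothetical domain type matching new Country(iso, name) above.
@Document(indexName = "countries")
public class Country {
    @Id
    private String isoCode;
    private String name;

    public Country() {}

    public Country(String isoCode, String name) {
        this.isoCode = isoCode;
        this.name = name;
    }
    // getters, setters and toString() omitted
}

// Hypothetical service delegating to a Spring Data repository.
@Service
public class CountryService {
    @Autowired
    private CountryRepository repository; // extends ElasticsearchRepository<Country, String>

    public Country save(Country country) {
        return repository.save(country);
    }
}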

Hibernate Search Integration with Apache Solr unable to index data

In my current application I use Hibernate Search to index and search data, and it works fine. But when building a cluster of server instances, I do not want to use master/slave clustering with JMS or JGroups.
So I am trying to integrate Hibernate Search with Apache Solr. I followed this example
and made some minor changes to be compatible with the new apache.lucene.core version.
public class HibernateSearchSolrWorkerBackend implements BackendQueueProcessor {
    private static final String ID_FIELD_NAME = "id";
    private static final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private static final ReentrantReadWriteLock.WriteLock writeLock = readWriteLock.writeLock();
    private ConcurrentUpdateSolrClient solrServer;

    @Override
    public void initialize(Properties properties, WorkerBuildContext workerBuildContext, DirectoryBasedIndexManager directoryBasedIndexManager) {
        solrServer = new ConcurrentUpdateSolrClient("http://localhost:8983/solr/test", 20, 4);
    }

    @Override
    public void close() {
    }

    @Override
    public void applyWork(List<LuceneWork> luceneWorks, IndexingMonitor indexingMonitor) {
        List<SolrInputDocument> solrWorks = new ArrayList<>(luceneWorks.size());
        List<String> documentsForDeletion = new ArrayList<>();
        for (LuceneWork work : luceneWorks) {
            if (work instanceof AddLuceneWork) {
                SolrInputDocument solrWork = new SolrInputDocument();
                handleAddLuceneWork((AddLuceneWork) work, solrWork);
                solrWorks.add(solrWork);
            } else if (work instanceof UpdateLuceneWork) {
                SolrInputDocument solrWork = new SolrInputDocument();
                handleUpdateLuceneWork((UpdateLuceneWork) work, solrWork);
                solrWorks.add(solrWork);
            } else if (work instanceof DeleteLuceneWork) {
                // deletions must not contribute an (empty) input document to solrWorks
                documentsForDeletion.add(((DeleteLuceneWork) work).getIdInString());
            } else {
                throw new RuntimeException("Encountered unsupported lucene work " + work);
            }
        }
        try {
            deleteDocs(documentsForDeletion);
            if (!solrWorks.isEmpty()) {
                solrServer.add(solrWorks);
            }
            softCommit();
        } catch (SolrServerException | IOException e) {
            throw new RuntimeException("Failed to update solr", e);
        }
    }

    @Override
    public void applyStreamWork(LuceneWork luceneWork, IndexingMonitor indexingMonitor) {
        throw new RuntimeException("HibernateSearchSolrWorkerBackend.applyStreamWork isn't implemented");
    }

    @Override
    public Lock getExclusiveWriteLock() {
        return writeLock;
    }

    @Override
    public void indexMappingChanged() {
    }

    private void deleteDocs(Collection<String> collection) throws IOException, SolrServerException {
        if (!collection.isEmpty()) {
            StringBuilder stringBuilder = new StringBuilder(collection.size() * 10);
            stringBuilder.append(ID_FIELD_NAME).append(":(");
            boolean first = true;
            for (String id : collection) {
                if (!first) {
                    stringBuilder.append(' '); // query-parser term list: id:(1 2 3)
                } else {
                    first = false;
                }
                stringBuilder.append(id);
            }
            stringBuilder.append(')');
            solrServer.deleteByQuery(stringBuilder.toString());
        }
    }

    private void copyFields(Document document, SolrInputDocument solrInputDocument) {
        boolean addedId = false;
        for (IndexableField fieldable : document.getFields()) {
            if (fieldable.name().equals(ID_FIELD_NAME)) {
                if (addedId)
                    continue;
                else
                    addedId = true;
            }
            solrInputDocument.addField(fieldable.name(), fieldable.stringValue());
        }
    }

    private void handleAddLuceneWork(AddLuceneWork luceneWork, SolrInputDocument solrWork) {
        copyFields(luceneWork.getDocument(), solrWork);
    }

    private void handleUpdateLuceneWork(UpdateLuceneWork luceneWork, SolrInputDocument solrWork) {
        copyFields(luceneWork.getDocument(), solrWork);
    }

    private void softCommit() throws IOException, SolrServerException {
        UpdateRequest updateRequest = new UpdateRequest();
        // the standard Solr parameter name is "softCommit" (not "soft-commit")
        updateRequest.setParam("softCommit", "true");
        updateRequest.setAction(UpdateRequest.ACTION.COMMIT, false, false);
        updateRequest.process(solrServer);
    }
}
and set the Hibernate properties as follows:
<persistence-unit name="JPAUnit">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>search.domain.Book</class>
    <properties>
        <property name="hibernate.search.default.directory_provider" value="filesystem"/>
        <property name="hibernate.search.default.worker.backend" value="search.adapter.HibernateSearchSolrWorkerBackend"/>
    </properties>
</persistence-unit>
and tried to index a document by using the following test method:
@Test
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Rollback(false)
public void saveBooks() {
    Book bk1 = new Book(1L, "book1", "book1 description", 100.0);
    Book bk2 = new Book(2L, "book2", "book2 description", 100.0);
    bookRepository.save(bk1);
    bookRepository.save(bk2);
}
This saves the records to the DB. If I remove
<property name="hibernate.search.default.worker.backend" value="search.adapter.HibernateSearchSolrWorkerBackend"/>
and give the index location for Hibernate Search in the configuration file, it creates the index properly and searching works successfully. But when I add the custom Apache Solr worker backend, no indexes are created within the Apache Solr core data folder.
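
As a first diagnostic step, it may help to rule out connectivity or core-name problems by writing to Solr directly with the same client settings the backend uses (a standalone sketch, independent of Hibernate Search; the core URL is the one assumed in the backend above):

// Hypothetical smoke test: if this fails, the problem is on the Solr side,
// not in the worker backend wiring.
ConcurrentUpdateSolrClient client = new ConcurrentUpdateSolrClient("http://localhost:8983/solr/test", 20, 4);
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "connectivity-check-1");
client.add(doc);
client.commit(); // hard commit so the document becomes visible
client.close();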

How to run a long-running processor async and use it from the same thread?

Basically, we have a class called OfficeManager which acts as a driver to connect to the OpenOffice software; it needs to be connected all the time so we can use the converter to convert documents. We start the OpenOffice process during the start of the web application, and it starts fine. But it looks like the executor running in init() is on a different thread, and we aren't able to get the running instance of OfficeManager. How can I run it in its own thread so that this instance can be called from a different class to use the converter method?
OfficeDocumentConverter converter = OpenOfficeProcessor.getInstance().getDocumentConverter();
converter.convert(inputFile, outputFile, pdf);
OpenOfficeProcessor
public class OpenOfficeProcessor {
    // assumed logger declaration (missing from the original snippet)
    private static final Logger LOG = LoggerFactory.getLogger(OpenOfficeProcessor.class);
    private static final OpenOfficeProcessor INSTANCE = new OpenOfficeProcessor();
    static ExecutorService executor = Executors.newSingleThreadExecutor();
    private final OfficeManager officeManager;
    private final OfficeDocumentConverter documentConverter;

    public OpenOfficeProcessor() {
        DefaultOfficeManagerConfiguration configuration = new DefaultOfficeManagerConfiguration();
        String homePath = ConfigurationManager.getApplicationProperty(ConfigurationManager.OPENOFFICE_HOME_PATH);
        if (homePath != null) {
            configuration.setOfficeHome(homePath);
        } else {
            LOG.error("OpenOffice.home.path is not set in the properties file");
            // the original code created a Throwable here without throwing it
            throw new IllegalStateException("Please set OPENOFFICE.HOME.PATH parameter in the properties file");
        }
        String port = ConfigurationManager.getApplicationProperty(ConfigurationManager.OPENOFFICE_LISTENER_PORT);
        if (port != null) {
            configuration.setPortNumber(Integer.parseInt(port));
        } else {
            LOG.error("openoffice.listener.port is not set in the properties file");
        }
        String executionTimeout = ConfigurationManager.getApplicationProperty(ConfigurationManager.OPENOFFICE_EXECUTION_TIMEOUT);
        if (executionTimeout != null) {
            configuration.setTaskExecutionTimeout(Long.parseLong(executionTimeout));
        }
        String pipeNames = ConfigurationManager.getApplicationProperty(ConfigurationManager.OPENOFFICE_PIPES_NAMES);
        if (pipeNames != null) {
            configuration.setPipeNames(pipeNames);
        }
        officeManager = configuration.buildOfficeManager();
        documentConverter = new OfficeDocumentConverter(officeManager);
    }

    public static OpenOfficeProcessor getInstance() {
        return INSTANCE;
    }

    protected static void init() {
        LOG.debug("Starting the open office listener...");
        executor.submit(new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                OpenOfficeProcessor.getInstance().officeManager.start();
                return null;
            }
        });
    }

    protected static void destroy() {
        LOG.debug("Stopping the open office listener...");
        OpenOfficeProcessor.getInstance().officeManager.stop();
    }

    public OfficeManager getOfficeManager() {
        return officeManager;
    }

    public OfficeDocumentConverter getDocumentConverter() {
        return documentConverter;
    }
}
OfficeManager
public interface OfficeManager {
    void execute(OfficeTask task) throws OfficeException;
    void start() throws OfficeException;
    void stop() throws OfficeException;
    boolean isRunning();
}
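
One way to keep the startup asynchronous but still know when the manager is actually running, assuming the jodconverter-style API shown above: hold on to the Future returned by submit() and wait on it before handing out the converter. A sketch (startupFuture and awaitConverter are hypothetical names, not part of the original code):

// Sketch: block until officeManager.start() has completed before first use.
private static volatile Future<Void> startupFuture;

protected static void init() {
    LOG.debug("Starting the open office listener...");
    startupFuture = executor.submit(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            OpenOfficeProcessor.getInstance().officeManager.start();
            return null;
        }
    });
}

// Callers use this instead of getInstance().getDocumentConverter().
public static OfficeDocumentConverter awaitConverter() throws Exception {
    startupFuture.get(); // waits for start() to finish (or rethrows its failure)
    return getInstance().getDocumentConverter();
}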

Performance issue in Restlet

When I try to request a resource from my API via 100 threads at the same time (it's a simple performance test), I get the following server error message:
Unable to run the following server-side task: Handle inbound messages
I have already searched the internet but couldn't find any hint on how to get rid of it.
Does anyone have experience with handling this error?
Greetingz,
Cooks
Edit:
Client:
public class PerformanceTest extends Thread {
    static ClientResource client;
    static int recCounter = 0;

    public PerformanceTest() {
        super();
    }

    public static void main(String[] args) {
        client = new ClientResource("http://localhost:8082/api/module/news/1");
        ChallengeScheme scheme = ChallengeScheme.HTTP_BASIC;
        ChallengeResponse authentication = new ChallengeResponse(scheme, "app+postapp", "");
        client.setChallengeResponse(authentication);
        for (int i = 0; i < 100; i++) {
            System.out.println("Starting test: " + i);
            new PerformanceTest().start();
        }
    }

    @Override
    public void run() {
        JsonRepresentation entity;
        try {
            entity = new JsonRepresentation(client.get());
            recCounter++;
            System.out.println("(" + getRecCounter() + ") result: " + entity.getJsonObject().toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private synchronized int getRecCounter() {
        return recCounter;
    }
}
Server:
This is my Application class that starts the server:
public class MyApplication extends Application {
    private DB db;

    public MyApplication() {
        db = new DB();
    }

    public static void main(String[] args) throws Exception {
        Component component = new Component();
        component.getServers().add(Protocol.HTTP, 8082);
        Application application = new MyApplication();
        // Attach the application to the component with a defined context root
        String contextRoot = "/api";
        component.getDefaultHost().attach(contextRoot, application);
        component.start();
    }

    @Override
    public Restlet createInboundRoot() {
        // the router was not declared in the original snippet
        Router router = new Router(getContext());
        router.attach("/module/news", NewsResource.class);
        router.attach("/module/news/{itemId}", NewsResource.class);
        router.attach("/module/news/item", NewsResource.class);
        router.attach("/module/news/item/{itemId}", NewsResource.class);
        // Create Verifier
        CredentialsVerifier verifier = new CredentialsVerifier(getDB());
        // Create a Guard
        ChallengeAuthenticator guard = new ChallengeAuthenticator(getContext(), ChallengeScheme.HTTP_BASIC, "app");
        guard.setVerifier(verifier);
        guard.setNext(router);
        return guard;
    }

    public synchronized DB getDB() {
        return this.db;
    }
}
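
For what it's worth, one thing that stands out in the test client: all 100 threads share the single static ClientResource, and a ClientResource is not designed for concurrent use by many threads (worth verifying for your Restlet version). A per-thread variant of run() would look roughly like this (same URI and credentials as above):

@Override
public void run() {
    try {
        // Each thread gets its own ClientResource instead of sharing the static one.
        ClientResource threadClient = new ClientResource("http://localhost:8082/api/module/news/1");
        threadClient.setChallengeResponse(
                new ChallengeResponse(ChallengeScheme.HTTP_BASIC, "app+postapp", ""));
        JsonRepresentation entity = new JsonRepresentation(threadClient.get());
        System.out.println("result: " + entity.getJsonObject().toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
}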

JBehave - @BeforeStories doesn't work

My story file:
Narrative:
In order to document all the business logic requests
As a user
I want to work with documents
Scenario: Basic new document creation
Given a user name Micky Mouse
When new document created
Then the document should named new document
And the document status should be NEW
My code:
public class DocStories extends JUnitStory {
    @Override
    public Configuration configuration() {
        return new MostUsefulConfiguration().useStoryLoader(
                new LoadFromClasspath(getClass().getClassLoader()))
                .useStoryReporterBuilder(
                        new StoryReporterBuilder().withFormats(Format.STATS,
                                Format.HTML, Format.CONSOLE, Format.TXT));
    }

    @Override
    public List<CandidateSteps> candidateSteps() {
        return new InstanceStepsFactory(configuration(), new DocSteps())
                .createCandidateSteps();
    }

    @Override
    @Test
    public void run() throws Throwable {
        try {
            super.run();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
In the class with my steps:
public class DocSteps {
    private final Map<String, User> users = new HashMap<String, User>();
    private final DocManager manager = new DocManager();
    private User activeUser;
    private Document activeDocument;
    private boolean approvedResult;

    // ***************BEFORE*************** //
    @BeforeStories
    private void initUsers() {
        users.put("Micky Mouse", new User("Micky Mouse", UserRole.ANALYST));
        users.put("Donald Duck", new User("Donald Duck", UserRole.BCR_LEADER));
        System.out.println("Check this out" + users.toString());
    }

    // **********steps************* //
    @Given("a user name $userName")
    public void connectUser(String userName) {
        // in the real world - it will get the user from the db
        System.out.println(userName);
        activeUser = new User(userName, UserRole.ANALYST);
        // System.out.println(activeDocument.getName());
    }

    @Given("a new document")
    @When("new document created")
    public void createDocument() {
        activeDocument = new Document();
    }

    @Given("a document with content")
    public void createDocWithContect() {
        createDocument();
        activeDocument.setContent("this is a document");
    }

    @Then("the document should named $docName")
    @Alias("the document name should be $docName")
    public void documentNameShouldBe(String docName) {
        Assert.assertEquals(docName, activeDocument.getName());
    }

    @Then("the document status should be $status")
    public void documentStatusShouldBe(String status) {
        DocStatus docStatus = DocStatus.valueOf(status);
        Assert.assertThat(activeDocument.getStatus(),
                Matchers.equalTo(docStatus));
    }

    // ***************AFTER*************** //
    @AfterScenario
    public void clean() {
        activeUser = null;
        activeDocument = null;
        approvedResult = false;
    }
}
The methods with the "before" and "after" stories annotations are not executed.
The enum converter doesn't work either.
What is wrong with my configuration? (I assume it is my configuration.)
The problem is that your method initUsers is private. Just make it public and it will be visible to the JBehave engine:
@BeforeStories
public void initUsers() {
    //...
}
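
For completeness, the corrected hook with its original body, changed only in its visibility modifier:

@BeforeStories
public void initUsers() {
    users.put("Micky Mouse", new User("Micky Mouse", UserRole.ANALYST));
    users.put("Donald Duck", new User("Donald Duck", UserRole.BCR_LEADER));
    System.out.println("Check this out" + users.toString());
}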
