Vert.x Web, GraphQL, Hibernate Reactive - Java

I am writing an API with Vert.x Web and GraphQL, and I have connected to the database with Hibernate Reactive:
public void start(final Promise<Void> startPromise)
{
try
{
vertx.executeBlocking(e ->
{
try
{
hibernateConfig();
e.complete();
}
catch (Exception exception)
{
e.fail(exception.getCause());
}
}).onComplete(event ->
{
try
{
runServer(startPromise);
}
catch (Exception e)
{
throw new RuntimeException(e);
}
}).onFailure(Throwable::printStackTrace);
}
catch (Exception e)
{
e.printStackTrace();
}
}
private void runServer(final Promise<Void> startPromise) throws Exception
{
final HttpServer httpServer = vertx.createHttpServer();
final Router router = Router.router(vertx);
router.route().handler(BodyHandler.create());
router.post("/graphql").handler(super::graphqlHandler);
// register `/graphiql` endpoint for the GraphiQL UI
final GraphiQLHandlerOptions graphiqlOptions = new GraphiQLHandlerOptions().setEnabled(true);
router.route("/graphiql/*").handler(GraphiQLHandler.create(graphiqlOptions));
final URL resource = getClass().getResource("/static");
if (resource != null) router.route("/static/*").handler(StaticHandler.create(resource.getFile()));
else throw new Exception("Cannot set static");
httpServer.requestHandler(router).listen(PORT , "localhost" , event ->
{
if (event.succeeded())
{
System.out.printf("Server run on port %d!\n" , PORT);
startPromise.complete();
}
else
{
System.out.println("Error run server!");
startPromise.fail(event.cause());
}
});
}
private void hibernateConfig()
{
Uni.createFrom().deferred(Unchecked.supplier(() ->
{
final Configuration configuration = new Configuration().setProperties(getHibernateProperties());
final Set<Class<?>> entitiesClasses = getEntitiesClasses();
if (entitiesClasses != null)
{
for (final Class<?> entity : entitiesClasses) configuration.addAnnotatedClass(entity);
}
else logger.error("Cannot found entities");
final StandardServiceRegistryBuilder builder = new ReactiveServiceRegistryBuilder()
.addService(Server.class , this)
.applySettings(configuration.getProperties());
final StandardServiceRegistry registry = builder.build();
sessionFactory = configuration.buildSessionFactory(registry).unwrap(Mutiny.SessionFactory.class);
if (!sessionFactory.isOpen()) throw new RuntimeException("Session is close!");
logger.info("✅ Hibernate Reactive is ready");
return Uni.createFrom().voidItem();
})).convert().toCompletableFuture().join();
}
private Properties getHibernateProperties()
{
final Properties properties = new Properties();
properties.setProperty(Environment.DRIVER , "com.mysql.cj.jdbc.Driver");
properties.setProperty(Environment.URL , "jdbc:mysql://localhost:3306/DBNAME");
properties.setProperty(Environment.USER , "USERNAME");
properties.setProperty(Environment.PASS , "PASSWORD");
properties.setProperty(Environment.DIALECT , "org.hibernate.dialect.MySQL55Dialect");
properties.setProperty(Environment.HBM2DDL_DATABASE_ACTION , "create");
properties.setProperty(Environment.SHOW_SQL , "false");
properties.setProperty(Environment.POOL_SIZE , "10");
return properties;
}
So far this doesn't give any errors and it creates the entities.
The error occurs when I try to do an insert:
public Future<UsersDto> addUserTest(final DataFetchingEnvironment environment)
{
return Future.future(event ->
{
final AddUsersDto addUsersDto = Dto.mapped(environment.getArguments() , "user" , AddUsersDto.class);
final Users user = UsersMapper.toUsers(addUsersDto);
try
{
vertx.executeBlocking(e ->
{
try
{
sessionFactory.withTransaction(
(session , transaction) ->
session.persist(user)
.chain(session::flush)
)
.invoke(() -> e.complete(user))
.onFailure()
.invoke(e::fail)
.await()
.indefinitely();
}
catch (Exception exception)
{
exception.printStackTrace();
e.fail(exception.getCause());
}
})
.onComplete(e -> event.complete(UsersMapper.toUsersDto((Users) e.result())))
.onFailure(e -> event.fail(e.getCause()));
}
catch (Exception e)
{
e.printStackTrace();
event.complete(UsersDto.builder().build());
}
});
}
Error:
sessionFactory.withTransaction(
(session , transaction) ->
session.persist(user)
.chain(session::flush)
)
.invoke(() -> e.complete(user))
.onFailure()
.invoke(e::fail)
.await()
.indefinitely(); // This line gives an error
Error text:
java.lang.IllegalStateException: HR000068: This method should exclusively be invoked from a Vert.x EventLoop thread; currently running on thread 'vert.x-worker-thread-1'
Please help if anyone knows what the problem is.

The problem is that you are running everything in the worker thread (using executeBlocking).
If you are writing reactive code, you don't need to wait for the result to be available.
The code should look something like this:
public Future<UsersDto> addUserTest(final DataFetchingEnvironment environment) {
return Future.future(event -> {
final AddUsersDto addUsersDto = Dto.mapped(environment.getArguments() , "user" , AddUsersDto.class);
final Users user = UsersMapper.toUsers(addUsersDto);
sessionFactory
.withTransaction( (session , tx) -> session.persist(user) )
.subscribe().with(
v -> event.complete(UsersMapper.toUsersDto(user)),
event::fail
);
});
}
You also don't need to flush, because there's a transaction.
There is a working example in the Hibernate Reactive repository.
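Another way to write the same method, without Future.future/Promise at all, is to subscribe to the Uni as a CompletionStage and let Vert.x resume on the verticle's context. This is a minimal sketch, not taken from the answer above; it assumes Vert.x 4 and reuses the mappers and fields from the question:
// java.util.concurrent.CompletionStage import assumed
public Future<UsersDto> addUserTest(final DataFetchingEnvironment environment) {
    final AddUsersDto addUsersDto = Dto.mapped(environment.getArguments(), "user", AddUsersDto.class);
    final Users user = UsersMapper.toUsers(addUsersDto);

    // subscribe (no await()/executeBlocking) and map the persisted entity to its DTO
    CompletionStage<UsersDto> stage = sessionFactory
            .withTransaction((session, tx) -> session.persist(user))
            .map(ignored -> UsersMapper.toUsersDto(user))
            .subscribeAsCompletionStage();

    // complete the Vert.x Future back on this verticle's context (Vert.x 4 API)
    return Future.fromCompletionStage(stage, vertx.getOrCreateContext());
}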

Related

BigQuery JsonStreamWriter sometimes throw PERMISSION_DENIED error

I have a simple table with several required and nullable columns. My Java application writes data into it via JsonStreamWriter. Most of the time everything is OK, but sometimes it fails with this error:
java.util.concurrent.ExecutionException:
com.google.api.gax.rpc.PermissionDeniedException:
io.grpc.StatusRuntimeException: PERMISSION_DENIED: Permission
'TABLES_UPDATE_DATA' denied on resource
'projects/project-name/datasets/dataset-name/tables/table-name' (or it
may not exist).
The data is similar; I am just appending it, not updating it, and I have no idea what goes wrong.
private Queue<Map<String, Object>> queue = new ConcurrentLinkedQueue<>();
private JsonStreamWriter streamWriter;
@Autowired
private BigQueryManager manager;
@PostConstruct
private void initialize() {
WriteStream stream = WriteStream.newBuilder().setType(WriteStream.Type.COMMITTED).build();
TableName parentTable = TableName.of(project, dataset, table);
CreateWriteStreamRequest writeStreamRequest = CreateWriteStreamRequest.newBuilder().setParent(parentTable.toString()).setWriteStream(stream).build();
WriteStream writeStream = manager.getClient().createWriteStream(writeStreamRequest);
try {
streamWriter = JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema(), manager.getClient()).build();
} catch (Exception ex) {
log.error("Unable to initialize stream writer.", ex);
}
}
@Override
public void flush() {
try {
List<Pair<JSONArray, Future>> tasks = new ArrayList<>();
while (!queue.isEmpty()) {
JSONArray batch = new JSONArray();
JSONObject record = new JSONObject();
queue.poll().forEach(record::put);
batch.put(record);
tasks.add(new Pair<>(batch, streamWriter.append(batch)));
}
List<AppendRowsResponse> responses = new ArrayList<>();
tasks.forEach(task -> {
try {
responses.add((AppendRowsResponse) task.getValue().get());
} catch (Exception ex) {
log.debug("Exception while task {} running: {}", task.getKey(), ex.getMessage(), ex);
}
});
responses.forEach(response -> {
if (!"".equals(response.getError().getMessage())) {
log.error(response.getError().getMessage());
}
});
} finally {
streamWriter.close();
}
}
@Override
public void addRow(Map<String, Object> row) {
queue.add(row);
}
This issue was fixed in v1.20.0. If you're using a lower version, consider updating the library. If you're using a higher version, you could try constructing the JsonStreamWriter without passing the BigQuery client explicitly, so the StreamWriter initializes its own client by default:
streamWriter = JsonStreamWriter.newBuilder(writeStream.getName(), writeStream.getTableSchema()).build();
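If you are not sure which version is on the classpath, a quick check (a sketch; the manifest may not carry a version, in which case this prints null) is to read the jar's implementation version at runtime:
// Prints the BigQuery Storage client version from the jar manifest, if present.
String version = JsonStreamWriter.class.getPackage().getImplementationVersion();
log.info("bigquerystorage client version: {}", version);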

How to synchronize time with Firebase server in Java client?

I need to synchronize the time in a Java client application with the Firebase server, to prevent failures when the client system time differs from the server's. Is that possible?
Here's my Firebase class code that starts the application:
private static FirebaseOptions getOptions() throws Exception {
/* ... */
for (Map<String, FirebaseOptions> map : credentials) {
if (map.containsKey(uri)) {
return map.get(uri);
}
}
try {
FirebaseOptions options = FirebaseOptions.builder()
.setCredentials(getCredentials(uri))
.build();
/* ... */
return options;
} catch (Exception e) {
/* ... */
}
}
public Firebase(String projectId) {
try {
try {
app = FirebaseApp.getInstance(projectId);
} catch (IllegalStateException e) {
FirebaseOptions options = getOptions();
for (FirebaseApp fapp : apps) {
if (fapp.getOptions().equals(options)) {
app = fapp;
}
}
if (app == null) {
app = FirebaseApp.initializeApp(options, projectId);
apps.add(app);
}
}
} catch (Exception e) {
error = e;
}
}
I can request the server timestamp with an HTTP request, but how do I change the time in the Java client? I tried using FirestoreOptions' setClock, but it still uses the system time and only works if the system time is correct:
ZonedDateTime server = getServerTime();
ZonedDateTime now = ZonedDateTime.now();
long offsetNano = server.getNano() - now.getNano();
long offsetMilli = server.toInstant().toEpochMilli() - now.toInstant().toEpochMilli();
try {
FirebaseOptions options = FirebaseOptions.builder()
.setCredentials(getCredentials(uri))
.setFirestoreOptions(
FirestoreOptions
.newBuilder()
.setClock(new ApiClock() {
@Override
public long nanoTime() {
return ZonedDateTime.now().getNano() + offsetNano;
}
@Override
public long millisTime() {
return ZonedDateTime.now().toInstant().toEpochMilli() + offsetMilli;
}
})
.build()
)
.build();
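One thing to note in the snippet above: ZonedDateTime.getNano() only returns the nano-of-second field, so offsetNano is not a full offset. Below is a minimal sketch of a clock that measures a single millisecond offset against the server and applies it to both methods; the class name is hypothetical, and whether Firestore actually consults this clock for your scenario still depends on the library:
import com.google.api.core.ApiClock;
import java.time.Instant;

final class ServerOffsetClock implements ApiClock {
    private final long offsetMillis; // serverEpochMillis - localEpochMillis, measured once

    ServerOffsetClock(Instant serverNow) {
        this.offsetMillis = serverNow.toEpochMilli() - System.currentTimeMillis();
    }

    @Override
    public long millisTime() {
        return System.currentTimeMillis() + offsetMillis;
    }

    @Override
    public long nanoTime() {
        // wall-clock based, same offset expressed in nanoseconds
        return millisTime() * 1_000_000L;
    }
}
It would be plugged in with .setClock(new ServerOffsetClock(getServerTime().toInstant())) in place of the anonymous class.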

OPC Client for milo fails to connect to local OPC discovery service

I am new to OPC UA and I am using the Milo OPC subscriber client to connect to a local discovery service. I have a Prosys simulation server which is registered with my local discovery service.
Note: if I connect directly to the Prosys endpoint, it works fine. It fails only through the discovery service.
I get the following exception when I run my code:
12:38:35.916 [main] INFO org.eclipse.milo.opcua.stack.core.Stack -
Successfully removed cryptography restrictions. 12:38:36.167 [main]
INFO com.company.OpcuaClientRunner - security temp dir:
C:\Users\Z003Z2YP\AppData\Local\Temp\security 12:38:36.173 [main] INFO
com.company.KeyStoreLoader - Loading KeyStore at
C:\Users\Z003Z2YP\AppData\Local\Temp\security\example-client.pfx
12:38:37.594 [main] INFO com.company.OpcuaClientRunner - Using
endpoint: opc.tcp://<hostname>:4840 [None] 12:38:37.600 [main] INFO
org.eclipse.milo.opcua.sdk.client.OpcUaClient - Eclipse Milo OPC UA
Stack version: 0.2.3 12:38:37.600 [main] INFO
org.eclipse.milo.opcua.sdk.client.OpcUaClient - Eclipse Milo OPC UA
Client SDK version: 0.2.3 12:38:37.809 [NonceUtilSecureRandom] INFO
org.eclipse.milo.opcua.stack.core.util.NonceUtil - SecureRandom seeded
in 0ms. 12:38:37.815 [ua-netty-event-loop-1] ERROR
org.eclipse.milo.opcua.stack.client.handlers.UaTcpClientMessageHandler
- [remote=<hostname>/<IP>:4840] errorMessage=ErrorMessage{error=StatusCode{name=Bad_ServiceUnsupported,
value=0x800B0000, quality=bad}, reason=null} 12:38:53.828 [main] ERROR
com.company.OpcuaClientRunner - Error running client example:
UaException: status=Bad_Timeout, message=request timed out after
16000ms java.util.concurrent.ExecutionException: UaException:
status=Bad_Timeout, message=request timed out after 16000ms
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at com.company.OpcuaSubscriber.run(OpcuaSubscriber.java:49)
at com.company.OpcuaClientRunner.run(OpcuaClientRunner.java:122)
at com.company.OpcuaSubscriber.main(OpcuaSubscriber.java:120) Caused by:
org.eclipse.milo.opcua.stack.core.UaException: request timed out after
16000ms
at org.eclipse.milo.opcua.stack.client.UaTcpStackClient.lambda$scheduleRequestTimeout$13(UaTcpStackClient.java:326)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
at java.lang.Thread.run(Thread.java:748) 12:38:53.828 [main] ERROR
com.company.OpcuaClientRunner - Error running example: UaException:
status=Bad_Timeout, message=request timed out after 16000ms
java.util.concurrent.ExecutionException: UaException:
status=Bad_Timeout, message=request timed out after 16000ms
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at com.company.OpcuaSubscriber.run(OpcuaSubscriber.java:49)
at com.company.OpcuaClientRunner.run(OpcuaClientRunner.java:122)
at com.company.OpcuaSubscriber.main(OpcuaSubscriber.java:120) Caused by:
org.eclipse.milo.opcua.stack.core.UaException: request timed out after
16000ms
at org.eclipse.milo.opcua.stack.client.UaTcpStackClient.lambda$scheduleRequestTimeout$13(UaTcpStackClient.java:326)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
at java.lang.Thread.run(Thread.java:748)
Code to create the client in the ClientRunner:
private OpcUaClient createClient() throws Exception {
File securityTempDir = new File(System.getProperty("java.io.tmpdir"), "security");
if (!securityTempDir.exists() && !securityTempDir.mkdirs()) {
throw new Exception("unable to create security dir: " + securityTempDir);
}
LoggerFactory.getLogger(getClass())
.info("security temp dir: {}", securityTempDir.getAbsolutePath());
KeyStoreLoader loader = new KeyStoreLoader().load(securityTempDir);
loader.load();
SecurityPolicy securityPolicy = client.getSecurityPolicy();
EndpointDescription[] endpoints;
try {
endpoints = UaTcpStackClient
.getEndpoints(client.getEndpointUrl())
.get();
} catch (Throwable ex) {
ex.printStackTrace();
// try the explicit discovery endpoint as well
String discoveryUrl = client.getEndpointUrl();
logger.info("Trying explicit discovery URL: {}", discoveryUrl);
endpoints = UaTcpStackClient
.getEndpoints(discoveryUrl)
.get();
}
EndpointDescription endpoint = Arrays.stream(endpoints)
.filter(e -> e.getSecurityPolicyUri().equals(securityPolicy.getSecurityPolicyUri()))
.findFirst().orElseThrow(() -> new Exception("no desired endpoints returned"));
logger.info("Using endpoint: {} [{}]", endpoint.getEndpointUrl(), securityPolicy);
OpcUaClientConfig config = OpcUaClientConfig.builder()
.setApplicationName(LocalizedText.english("eclipse milo opc-ua client"))
.setApplicationUri("urn:eclipse:milo:examples:client")
.setCertificate(loader.getClientCertificate())
.setKeyPair(loader.getClientKeyPair())
.setEndpoint(endpoint)
.setIdentityProvider(client.getIdentityProvider())
.setRequestTimeout(uint(5000))
.build();
return new OpcUaClient(config);
}
Client interface class
public interface OpcuaClientInterface {
public static final String USERNAME = "demo";
public static final String PASSWORD = "demo";
default String getEndpointUrl() {
return "opc.tcp://localhost:4840/UADiscovery";
}
default SecurityPolicy getSecurityPolicy() {
return SecurityPolicy.None;
}
default IdentityProvider getIdentityProvider() {
// return new UsernameProvider(USERNAME,PASSWORD);
return new AnonymousProvider();
}
void run(OpcUaClient client, CompletableFuture<OpcUaClient> future) throws Exception;
}
subscriber run implementation
@Override
public void run(OpcUaClient client, CompletableFuture<OpcUaClient> future) throws Exception {
// synchronous connect
client.connect().get();
// create a subscription @ 1000ms
UaSubscription subscription = client.getSubscriptionManager().createSubscription(1000.0).get();
List<String> nodeIds = Arrays.asList("SuctionPressure", "DischargePressure", "Flow", "BearingTemperature", "Vibration", "Power");
// List nodeIds = Arrays.asList("DS", "PV");
List<MonitoredItemCreateRequest> MICRs = nodeIds.stream().map(id -> {
ReadValueId readValueId = new ReadValueId(new NodeId(3, id), AttributeId.Value.uid(), null,
QualifiedName.NULL_VALUE);
// important: client handle must be unique per item
UInteger clientHandle = uint(clientHandles.getAndIncrement());
MonitoringParameters parameters = new MonitoringParameters(clientHandle, 1000.0, // sampling interval
null, // filter, null means use default
uint(10), // queue size
true // discard oldest
);
MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(readValueId, MonitoringMode.Reporting,
parameters);
return request;
}).collect(Collectors.toList());
// when creating items in MonitoringMode.Reporting this callback is where each
// item needs to have its
// value/event consumer hooked up. The alternative is to create the item in
// sampling mode, hook up the
// consumer after the creation call completes, and then change the mode for all
// items to reporting.
BiConsumer<UaMonitoredItem, Integer> onItemCreated = (item, id) -> item
.setValueConsumer(this::onSubscriptionValue);
List<UaMonitoredItem> items = subscription.createMonitoredItems(TimestampsToReturn.Both, MICRs, onItemCreated)
.get();
for (UaMonitoredItem item : items) {
if (item.getStatusCode().isGood()) {
logger.info("item created for nodeId={}", item.getReadValueId().getNodeId());
} else {
logger.warn("failed to create item for nodeId={} (status={})", item.getReadValueId().getNodeId(),
item.getStatusCode());
}
}
// let the example run for 5 seconds then terminate
// Thread.sleep(1000 * 60 * 1);
// future.complete(client);
}
I got it working with the Milo OPC subscriber client.
Following are the changes I made to the classes.
Client Interface
public interface OpcuaClientInterface {
public static final String USERNAME = "demo";
public static final String PASSWORD = "demo";
public static final String UaServerName = "SimulationServer";
default String getEndpointUrl() {
return "opc.tcp://localhost:4840";
}
default SecurityPolicy getSecurityPolicy() {
return SecurityPolicy.None;
}
default String getUaServerName () {
return UaServerName;
}
default IdentityProvider getIdentityProvider() {
return new UsernameProvider(USERNAME,PASSWORD);
}
void run (OpcUaClient client, CompletableFuture<OpcUaClient> future) throws Exception;
}
client runner class
public class OpcuaClientRunner {
static {
CryptoRestrictions.remove();
Security.addProvider(new BouncyCastleProvider());
}
private final AtomicLong requestHandle = new AtomicLong(1L);
private final Logger logger = LoggerFactory.getLogger(getClass());
private final CompletableFuture<OpcUaClient> future = new CompletableFuture<>();
private final OpcuaClientInterface client;
public OpcuaClientRunner(OpcuaClientInterface client) throws Exception {
this.client = client;
}
private KeyStoreLoader createKeyStore() {
KeyStoreLoader loader = null;
try {
File securityTempDir = new File(System.getProperty("java.io.tmpdir"), "security");
if (!securityTempDir.exists() && !securityTempDir.mkdirs()) {
throw new Exception("unable to create security dir: " + securityTempDir);
}
LoggerFactory.getLogger(getClass()).info("security temp dir: {}", securityTempDir.getAbsolutePath());
loader = new KeyStoreLoader().load(securityTempDir);
loader.load();
} catch (Exception e) {
logger.error("Could not load keys {}", e);
return null;
}
return loader;
}
private CompletableFuture<ServerOnNetwork[]> findServersOnNetwork(String discoveryEndpointUrl)
throws InterruptedException, ExecutionException {
UaStackClient c = createDiscoveryClient(client.getEndpointUrl()).connect().get();
RequestHeader header = new RequestHeader(NodeId.NULL_VALUE, DateTime.now(),
uint(requestHandle.getAndIncrement()), uint(0), null, uint(60), null);
FindServersOnNetworkRequest request = new FindServersOnNetworkRequest(header, null, null, null);
return c.<FindServersOnNetworkResponse>sendRequest(request).thenCompose(result -> {
StatusCode statusCode = result.getResponseHeader().getServiceResult();
if (statusCode.isGood()) {
return CompletableFuture.completedFuture(result.getServers());
} else {
CompletableFuture<ServerOnNetwork[]> f = new CompletableFuture<>();
f.completeExceptionally(new UaException(statusCode));
return f;
}
});
}
private UaTcpStackClient createDiscoveryClient(String endpointUrl) {
KeyStoreLoader loader = createKeyStore();
if (loader == null) {
return null;
}
UaTcpStackClientConfig config = UaTcpStackClientConfig.builder()
.setApplicationName(LocalizedText.english("Stack Example Client"))
.setApplicationUri(String.format("urn:example-client:%s", UUID.randomUUID()))
.setCertificate(loader.getClientCertificate()).setKeyPair(loader.getClientKeyPair())
.setEndpointUrl(endpointUrl).build();
return new UaTcpStackClient(config);
}
private OpcUaClient createUaClient() throws Exception {
KeyStoreLoader loader = createKeyStore();
if (loader == null) {
return null;
}
SecurityPolicy securityPolicy = client.getSecurityPolicy();
EndpointDescription[] endpoints = null;
try {
ServerOnNetwork[] servers = findServersOnNetwork(client.getEndpointUrl()).get();
ServerOnNetwork server = Arrays.stream(servers)
.filter(e -> e.getServerName().equals(client.getUaServerName())).findFirst()
.orElseThrow(() -> new Exception("no desired UA Server returned."));
endpoints = UaTcpStackClient.getEndpoints(server.getDiscoveryUrl()).get();
} catch (Throwable ex) {
ex.printStackTrace();
}
EndpointDescription endpoint = Arrays.stream(endpoints)
.filter(e -> e.getSecurityPolicyUri().equals(securityPolicy.getSecurityPolicyUri())).findFirst()
.orElseThrow(() -> new Exception("no desired endpoints returned"));
logger.info("Using endpoint: {} [{}]", endpoint.getEndpointUrl(), securityPolicy);
OpcUaClientConfig config = OpcUaClientConfig.builder()
.setApplicationName(LocalizedText.english("eclipse milo opc-ua client"))
.setApplicationUri("urn:eclipse:milo:examples:client").setCertificate(loader.getClientCertificate())
.setKeyPair(loader.getClientKeyPair()).setEndpoint(endpoint)
.setIdentityProvider(client.getIdentityProvider()).setRequestTimeout(uint(5000)).build();
return new OpcUaClient(config);
}
public void run() {
try {
OpcUaClient uaClient = createUaClient();
future.whenComplete((c, ex) -> {
if (ex != null) {
logger.error("Error running example: {}", ex.getMessage(), ex);
}
try {
uaClient.disconnect().get();
Stack.releaseSharedResources();
} catch (InterruptedException | ExecutionException e) {
logger.error("Error disconnecting:", e.getMessage(), e);
}
try {
Thread.sleep(1000);
System.exit(0);
} catch (InterruptedException e) {
e.printStackTrace();
}
});
try {
client.run(uaClient, future);
future.get(15, TimeUnit.SECONDS);
} catch (Throwable t) {
logger.error("Error running client example: {}", t.getMessage(), t);
future.completeExceptionally(t);
}
} catch (Throwable t) {
logger.error("Error getting client: {}", t.getMessage(), t);
future.completeExceptionally(t);
try {
Thread.sleep(1000);
System.exit(0);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
/*
* try { Thread.sleep(999999999); } catch (InterruptedException e) {
* e.printStackTrace(); }
*/
}
}
I think you may be confused about what a Discovery Server does.
You don't connect "through" a Discovery Server to another server. It's more like a registry where other servers can register themselves and then clients can go and see what servers are available.
The client queries the discovery server for a list of servers, selects one, and then connects directly to that server using the information obtained from the discovery server.
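To make that concrete, the flow in the working code above boils down to the following sketch (it reuses the findServersOnNetwork(...) helper and the server name from that answer):
// 1. Ask the discovery endpoint which servers have registered themselves.
ServerOnNetwork[] servers = findServersOnNetwork("opc.tcp://localhost:4840").get();

// 2. Pick the server you want by name.
ServerOnNetwork simulation = Arrays.stream(servers)
        .filter(s -> s.getServerName().equals("SimulationServer"))
        .findFirst()
        .orElseThrow(() -> new RuntimeException("SimulationServer not registered"));

// 3. Fetch that server's own endpoints and connect to one of them,
//    not to the /UADiscovery URL.
EndpointDescription[] endpoints =
        UaTcpStackClient.getEndpoints(simulation.getDiscoveryUrl()).get();
// build the OpcUaClient from one of these endpoints, as createUaClient() does above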

How to handle authentication to MongoDB from Hibernate application

I need to catch errors during authentication (like wrong parameters), but I can find nothing about it. I have isolated the procedure with threads, but with this workaround the user can't understand what went wrong.
Below is my code:
public static boolean access(String db, String ip, String usr, String pwd){
Map<String, String> persistenceMap = new HashMap<>();
persistenceMap.put("hibernate.ogm.datastore.database", db);
persistenceMap.put("hibernate.ogm.datastore.host", ip);
persistenceMap.put("hibernate.ogm.datastore.username", usr);
persistenceMap.put("hibernate.ogm.datastore.password", pwd);
Thread mainThread = Thread.currentThread();
Thread logThread = new Thread(() -> {
Connection.EMF = Persistence.createEntityManagerFactory("ogm-jpa-mongo", persistenceMap);
Connection.EM = Connection.EMF.createEntityManager();
Connection.isOpen = true;
});
Thread timeOut = new Thread( () -> {
try{ Thread.sleep( 5000 ); }
catch(InterruptedException ex){ }
mainThread.interrupt();
});
logThread.start();
timeOut.start();
try{ logThread.join(); }
catch(InterruptedException ex){ return false; }
Connection.TM = com.arjuna.ats.jta.TransactionManager.transactionManager();
return Connection.isOpen;
}
The problem is that when I enter wrong parameters, a MongoSecurityException is thrown, but I can't catch it; I can only read it on the monitor thread. Any ideas?
I believe this is a result of the way your version of Hibernate catches the MongoSecurityException; it is probably caught inside a nested try/catch block.
The correct answer here is to update your Hibernate version to the latest release. However, if you would like to see that exception, I think you can do the following:
String message = "";
try {
logThread.join();
} catch(Exception e) {
message = e.getMessage();
} catch(Throwable e) {
throw e;
}
If that doesn't work, you might be able to chain getCause() as follows:
String message = "";
try {
logThread.join();
} catch(Throwable e) {
e.getCause();
e.getCause().getCause();
e.getCause().getCause().getCause();
}
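The underlying difficulty is that the MongoSecurityException is thrown on logThread, so a try/catch around logThread.join() never sees it. A different approach, shown here only as a sketch (it reuses persistenceMap and Connection from the question, assumes java.util.concurrent imports, and is not Hibernate-specific), is to run the initialization through an ExecutorService; Future.get() then rethrows whatever the task threw, wrapped in an ExecutionException:
ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Void> init = executor.submit(() -> {
    Connection.EMF = Persistence.createEntityManagerFactory("ogm-jpa-mongo", persistenceMap);
    Connection.EM = Connection.EMF.createEntityManager();
    Connection.isOpen = true;
    return null;
});
try {
    init.get(5, TimeUnit.SECONDS); // replaces join() plus the separate timeout thread
} catch (ExecutionException e) {
    // e.getCause() is whatever failed inside the factory, e.g. MongoSecurityException
    e.getCause().printStackTrace();
    return false;
} catch (TimeoutException | InterruptedException e) {
    return false;
} finally {
    executor.shutdownNow();
}
return Connection.isOpen;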

Handle reconnection for twitter hbc

I'm having trouble figuring out how to deal with disconnections with the hbc Twitter API. The docs say I need to slow down reconnect attempts according to the type of error experienced. Where do I get the type of error experienced? Is it in the msgQueue, the eventQueue, or somewhere else?
@Asynchronous
@Override
public void makeLatestsTweets() {
msgList = new LinkedList<Tweet>();
BlockingQueue<String> msgQueue = new LinkedBlockingQueue<String>(100);
BlockingQueue<Event> eventQueue = new LinkedBlockingQueue<Event>(100);
Hosts hosebirdHosts = new HttpHosts(Constants.SITESTREAM_HOST);
StatusesFilterEndpoint hosebirdEndpoint = new StatusesFilterEndpoint();
userIds = addFollowings();
hosebirdEndpoint.followings(userIds);
Authentication hosebirdAuth = new OAuth1(CONSUMER_KEY, CONSUMER_SECRET,
TOKEN, SECRET);
ClientBuilder builder = new ClientBuilder().hosts(hosebirdHosts)
.authentication(hosebirdAuth).endpoint(hosebirdEndpoint)
.processor(new StringDelimitedProcessor(msgQueue))
.eventMessageQueue(eventQueue);
Client hosebirdClient = builder.build();
hosebirdClient.connect();
while (!hosebirdClient.isDone()) {
try {
String msg = msgQueue.take();
Tweet tweet = format(msg);
if (tweet != null) {
System.out.println(tweet.getTweetsContent());
msgList.addFirst(tweet);
if (msgList.size() > tweetListSize) {
msgList.removeLast();
}
caller.setMsgList(msgList);
}
} catch (InterruptedException e) {
hosebirdClient.stop();
e.printStackTrace();
} catch (JSONException e) {
e.printStackTrace();
}
}
}
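To answer the "where" part: the connection and disconnect information arrives on the eventQueue you registered with eventMessageQueue(...), not on the msgQueue. A small sketch of draining it inside the existing loop (the accessor names come from the hbc Event class; double-check them against your hbc version):
// inside the while (!hosebirdClient.isDone()) loop, next to msgQueue.take()
Event event = eventQueue.poll(100, TimeUnit.MILLISECONDS);
if (event != null) {
    // the event type and message tell you what kind of failure occurred,
    // which is what the reconnect/backoff guidance refers to
    System.err.println("hbc event: " + event.getEventType() + " - " + event.getMessage());
}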
