I have a 3-node Infinispan cluster with numOwners=2, and I'm running into issues with cluster views when one of the nodes gets disconnected from the network and then rejoins. Here are the logs:
(Incoming-1,BrokerPE-0-28575) ISPN000094: Received new cluster view for channel ISPN: [BrokerPE-0-28575|2] (3) [BrokerPE-0-28575, SEM03VVM-201-59385, SEM03VVM-202-33714]
ISPN000094: Received new cluster view for channel ISPN: [BrokerPE-0-28575|3] (2) [BrokerPE-0-28575, SEM03VVM-202-33714] --> one node disconnected
ISPN000093: Received new, MERGED cluster view for channel ISPN: MergeView::[BrokerPE-0-28575|4] (2) [BrokerPE-0-28575, SEM03VVM-201-59385], 2 subgroups: [BrokerPE-0-28575|3] (2) [BrokerPE-0-28575, SEM03VVM-202-33714], [BrokerPE-0-28575|2] (3) [BrokerPE-0-28575, SEM03VVM-201-59385, SEM03VVM-202-33714] --> incorrect merge
Here is my JGroups config:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-3.6.xsd">
<TCP
bind_addr="${jgroups.tcp.address:127.0.0.1}"
bind_port="${jgroups.tcp.port:7800}"
loopback="true"
port_range="30"
recv_buf_size="20m"
send_buf_size="640k"
max_bundle_size="31k"
use_send_queues="true"
enable_diagnostics="false"
sock_conn_timeout="300"
bundler_type="old"
thread_naming_pattern="pl"
timer_type="new3"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="1"
internal_thread_pool.max_threads="10"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"
/>
<!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
<TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}"
port_range="2"
num_initial_members="3"
ergonomics="false"
/>
<!-- <MPING bind_addr="${jgroups.bind_addr:127.0.0.1}" break_on_coord_rsp="true"
mcast_addr="${jboss.default.multicast.address:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43366}"
ip_ttl="${jgroups.udp.ip_ttl:2}"
num_initial_members="3"/> -->
<MERGE3 max_interval="30000" min_interval="10000"/>
<FD_SOCK bind_addr="${jgroups.bind_addr}"/>
<FD timeout="3000" max_tries="3"/>
<VERIFY_SUSPECT timeout="3000"/>
<!-- <BARRIER /> -->
<!-- <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/> -->
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100" discard_delivered_msgs="true"/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true" merge_timeout="6000"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<UFC max_credits="2m" min_threshold="0.40"/>
<!-- <MFC max_credits="2m" min_threshold="0.40"/> -->
<FRAG2 frag_size="30k"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
<!-- <pbcast.STATE_TRANSFER/> -->
</config>
I'm using Infinispan 7.0.2 and JGroups 3.6.1. I've tried a lot of configs, but nothing has worked. Your help would be much appreciated.
[UPDATE] Things worked fine after setting "internal_thread_pool.min_threads" to a value greater than 1.
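Concretely, the working change sits inside the `<TCP>` element of the config above; the exact value shown here is an assumption (anything greater than 1 worked):

```xml
<!-- keep more than one internal thread alive so heartbeat and
     view/merge handling messages are never starved of a thread -->
internal_thread_pool.min_threads="4"
internal_thread_pool.max_threads="10"
```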
So to simplify this, we have
View broker|2={broker,201,202}
201 leaves, the view is now broker|3={broker,202}
Then there is a merge between views broker|3 and broker|2, which leads to the incorrect view broker|4={broker,201}.
I created [1] to investigate what's going on here. First off, the subviews of the merge view should have included 202 as a subgroup coordinator, but that wasn't the case.
Can you describe exactly what happened here? Can it be reproduced? It would be nice to have TRACE-level logs for FD, FD_ALL, MERGE3 and GMS...
[1] https://issues.jboss.org/browse/JGRP-2128
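To capture those TRACE logs, a log4j 1.x fragment along these lines could be used (the logger names follow the standard JGroups package layout; verify them against your logging setup):

```xml
<!-- TRACE for the failure-detection and merge protocols -->
<logger name="org.jgroups.protocols.FD"><level value="TRACE"/></logger>
<logger name="org.jgroups.protocols.FD_ALL"><level value="TRACE"/></logger>
<logger name="org.jgroups.protocols.MERGE3"><level value="TRACE"/></logger>
<logger name="org.jgroups.protocols.pbcast.GMS"><level value="TRACE"/></logger>
```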
I am using MyBatis 3.4.6 along with org.xerial:sqlite-jdbc 3.28.0. Below is my configuration for using an in-memory database with shared-cache mode enabled:
db.driver=org.sqlite.JDBC
db.url=jdbc:sqlite:file::memory:?cache=shared
The db.url is correct according to this test class.
I also managed to set up the correct transaction isolation level with the MyBatis configuration below, though there is a typo in the read_uncommitted property, according to this issue, which was reported by me as well:
<environment id="${db.env}">
<transactionManager type="jdbc"/>
<dataSource type="POOLED">
<property name="driver" value="${db.driver}" />
<property name="url" value="${db.url}"/>
<property name="username" value="${db.username}" />
<property name="password" value="${db.password}" />
<property name="defaultTransactionIsolationLevel" value="1" />
<property name="driver.synchronous" value="OFF" />
<property name="driver.transaction_mode" value="IMMEDIATE"/>
<property name="driver.foreign_keys" value="ON"/>
</dataSource>
</environment>
This line of the configuration
<property name="defaultTransactionIsolationLevel" value="1" />
does the trick of setting the correct value of PRAGMA read_uncommitted.
I am pretty sure of this, since I debugged the underlying code that initializes the connection and checked that the value is set correctly.
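As a side note, the value 1 here works because it is the `java.sql.Connection` constant for READ_UNCOMMITTED; MyBatis passes it to `Connection.setTransactionIsolation()`, which sqlite-jdbc maps to PRAGMA read_uncommitted. A minimal check:

```java
import java.sql.Connection;

// Shows why value="1" is the right setting: 1 is the JDBC constant
// for the READ_UNCOMMITTED isolation level.
public class IsolationLevelCheck {
    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // prints 1
    }
}
```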
However, with the above settings my program still intermittently encounters SQLITE_LOCKED_SHAREDCACHE while reading, which I think shouldn't happen according to the description highlighted in the red rectangle of the screenshot below. I want to know the reason and how to resolve it, even though this error occurs with low probability.
Any ideas would be appreciated!!
The debug configuration is below:
===CONFINGURATION==============================================
jdbcDriver org.sqlite.JDBC
jdbcUrl jdbc:sqlite:file::memory:?cache=shared
jdbcUsername
jdbcPassword ************
poolMaxActiveConnections 10
poolMaxIdleConnections 5
poolMaxCheckoutTime 20000
poolTimeToWait 20000
poolPingEnabled false
poolPingQuery NO PING QUERY SET
poolPingConnectionsNotUsedFor 0
---STATUS-----------------------------------------------------
activeConnections 5
idleConnections 5
requestCount 27
averageRequestTime 7941
averageCheckoutTime 4437
claimedOverdue 0
averageOverdueCheckoutTime 0
hadToWait 0
averageWaitTime 0
badConnectionCount 0
===============================================================
The exception is below
org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
### The error may exist in mapper/MsgRecordDO-sqlmap-mappering.xml
### The error may involve com.super.mock.platform.agent.dal.daointerface.MsgRecordDAO.getRecord
### The error occurred while executing a query
### Cause: org.apache.ibatis.transaction.TransactionException: Error configuring AutoCommit. Your driver may not support getAutoCommit() or setAutoCommit(). Requested setting: false. Cause: org.sqlite.SQLiteException: [SQLITE_LOCKED_SHAREDCACHE] Contention with a different database connection that shares the cache (database table is locked)
I finally resolved this issue myself and am sharing the workaround below in case someone else encounters a similar issue in the future.
First of all, we were able to get the complete call stack of the exception, shown below.
Going through the source code indicated by the call stack, we have the following findings:
SQLite has auto-commit enabled by default, which conflicts with MyBatis, which disables auto-commit by default since we're using SqlSessionManager.
MyBatis overrides the auto-commit property during connection initialization via the method setDesiredAutoCommit, which ultimately invokes SQLiteConnection#setAutoCommit.
SQLiteConnection#setAutoCommit issues a "begin immediate" operation against the database, which is actually exclusive, since we configured our transaction mode as IMMEDIATE; see the source code screenshots below for a detailed explanation.
<property name="driver.transaction_mode" value="IMMEDIATE"/>
So an apparent solution is to change the transaction mode to DEFERRED. Making the auto-commit setting match between MyBatis and SQLite was considered as well, but it was not adopted: there is no way to set the auto-commit of SQLiteConnection during the initialization stage, so there would always be a switch (from true to false or vice versa), and that switch can trigger the above error if the transaction mode is not set properly.
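Relative to the dataSource block posted above, the fix is a one-line change (DEFERRED is standard SQLite behavior: BEGIN takes no lock until the first read or write):

```xml
<!-- DEFERRED: BEGIN acquires no lock until first access, so the
     autocommit switch no longer grabs an exclusive lock up front -->
<property name="driver.transaction_mode" value="DEFERRED"/>
```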
We have a problem where a penetration checker run for something like 12 hours causes JGroups to disconnect: the slave doesn't rejoin the cluster, we see split brain and other symptoms of missing replication, and it doesn't recover.
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.6.xsd">
<TCP bind_addr="NON_LOOPBACK"
bind_port="${infinispan.jgroups.bindPort}"
enable_diagnostics="false"
thread_naming_pattern="pl"
send_buf_size="640k"
sock_conn_timeout="300"
thread_pool.min_threads="${jgroups.thread_pool.min_threads:2}"
thread_pool.max_threads="${jgroups.thread_pool.max_threads:30}"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="false"
internal_thread_pool.min_threads="${jgroups.internal_thread_pool.min_threads:5}"
internal_thread_pool.max_threads="${jgroups.internal_thread_pool.max_threads:20}"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="500"
oob_thread_pool.min_threads="${jgroups.oob_thread_pool.min_threads:20}"
oob_thread_pool.max_threads="${jgroups.oob_thread_pool.max_threads:200}"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
/>
<TCPPING async_discovery="true"
initial_hosts="${infinispan.jgroups.tcpping.initialhosts}"
port_range="1"/>
<MERGE3 min_interval="10000"
max_interval="30000"
/>
<FD_SOCK />
<FD />
<VERIFY_SUSPECT />
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
resend_last_seqno="true"
/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="100"
conn_expiry_timeout="0"
/>
<pbcast.STABLE stability_delay="500"
desired_avg_gossip="5000"
max_bytes="1M"
/>
<pbcast.GMS print_local_addr="true" join_timeout="15000"/>
<pbcast.FLUSH />
<FRAG2 />
</config>
Versions:
jgroups 3.6.13
infinispan 8.1.0
hibernate search 5.3
I'm wondering if we can change our JGroups configuration so that the cluster node will eventually be able to rejoin, even after 12 hours of "attack", so that we don't have to restart the servers.
Define disconnect for me first, please!
Regarding your stack, I have a few suggestions / questions:
In general I suggest starting from the tcp.xml shipped with the version you use and then modifying it to your needs
TCPPING: does initial_hosts contain all cluster members?
Replace FD with FD_ALL
STABLE: a desired_avg_gossip of 5s is a bit small; this generates more traffic than needed
GMS.join_timeout of 15s is quite high; this is the startup time of the first member, and it also influences discovery time
What do you need FLUSH for?
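For the FD → FD_ALL suggestion, a hedged sketch of the failure-detection section (the timeout and interval values are illustrative assumptions, not recommendations):

```xml
<FD_SOCK/>
<!-- FD_ALL: every member sends periodic heartbeats; a member is
     suspected after timeout ms without one, checked every interval ms -->
<FD_ALL timeout="12000" interval="3000"/>
<VERIFY_SUSPECT timeout="1500"/>
```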
I have installed JProfiler on my Linux machine and am saving the data into a .jps file. I then load this file into the JProfiler UI on my local machine.
Here is my config file:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<nextId id="104" />
<generalSettings setupHasRun="false">
<recordingProfiles>
<recordingProfile id="10" name="CPU recording">
<actionKey id="cpu"/>
</recordingProfile>
</recordingProfiles>
</generalSettings>
<templates>
<template id="50" name="Instrumentation, all features supported" startFrozen="false" recordCPUOnStartup="false" vmCannotExit="false" instrumentationType="1" samplingNoFilters="false" lineNumbers="false" samplingFrequency="5" timeType="1" disableCPUProfiling="false" recordAllocOnStartup="true" recordArrayAlloc="true" enableTriggersOnStartup="true" allocTreeRecordingType="1" disableMonitorContentions="false" componentDetection="true" chronoHeap="false" autoUpdatePeriodLong="5" autoUpdatePeriodShort="2" allUrls="false" payloadCap="50" eventCap="20000" showSystemThreads="false" utilConcurrentHandling="true" libraryDebugParameters="" exceptionalCap="5" exceptionalTimeType="4" autoTuneInstrumentation="true" autoTuneMaxAverage="100" autoTuneMinPerMille="10" samplingPayloadCallStacks="true" description="This is JProfiler's fully featured mode. In this setting, call stack information is accurate, but CPU overhead and distortion of measured call times may be high, depending on your filter settings. You should define inclusive filters for your own packages." system="true" />
<template id="51" name="Sampling for CPU profiling, some features not supported" startFrozen="false" recordCPUOnStartup="false" vmCannotExit="false" instrumentationType="3" samplingNoFilters="false" lineNumbers="false" samplingFrequency="5" timeType="1" disableCPUProfiling="false" recordAllocOnStartup="true" recordArrayAlloc="true" enableTriggersOnStartup="true" allocTreeRecordingType="1" disableMonitorContentions="false" componentDetection="true" chronoHeap="false" autoUpdatePeriodLong="5" autoUpdatePeriodShort="2" allUrls="false" payloadCap="50" eventCap="20000" showSystemThreads="false" utilConcurrentHandling="true" libraryDebugParameters="" exceptionalCap="5" exceptionalTimeType="4" autoTuneInstrumentation="true" autoTuneMaxAverage="100" autoTuneMinPerMille="10" samplingPayloadCallStacks="true" description="This template is particularly suitable for CPU profiling and for memory profiling when accurate allocation information is not important. Sampling has a very low overhead and does not distort measured call times. Some views, like the method statistics, are not available. JEE payloads cannot be annotated in the call tree, but payload hotspots without backtraces are available." system="true" />
</templates>
<sessions>
<session id="80" templateId="50" name="Animated Bezier Curve Demo" type="1" isStartupWorkingDirectory="true" mainClass="bezier.BezierAnim">
<filters>
<filter type="inclusive" name="com." />
</filters>
<exceptionalMethods/>
<classPath>
<classPathEntry path="demo/bezier/classes" />
</classPath>
<sourcePath>
<sourcePathEntry path="demo/bezier/src" />
</sourcePath>
<probes>
<probe name="com.jprofiler.agent.probe.interceptor.TrackingInterceptor" enabled="true" startProbeRecording="false" events="false" annotatePayloads="false">
<id value="3" />
</probe>
</probes>
</session>
The problem I am facing is that I am not able to get any details regarding method statistics under the CPU views tab in the JProfiler UI.
But I am able to get the other fields in Telemetries.
The version in use is JProfiler 9.1, and I used the sample config.xml to start my test. Do I need to make any changes in my config file to get method-level statistics in my .jps file?
Method statistics are recorded separately, because the overhead is too high to always record them together with CPU data.
When the session is live, go to the method statistics view and enable recording. For offline profiling, there is a trigger action that starts method statistics recording.
I am very new to Mule.
I am facing a problem: I need to read data from a CSV file on the D:\ drive and insert the data into a PostgreSQL database using Mule every 2 minutes.
So I have chosen Quartz.
Here is my code:
<configuration>
<expression-language autoResolveVariables="true">
<import class="org.mule.util.StringUtils" />
<import class="org.mule.util.ArrayUtils" />
</expression-language>
</configuration>
<spring:beans>
<spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>
<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
<jdbc:query key="insertRow"
value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>
<quartz:connector name="myQuartzConnector" validateConnections="true" doc:name="Quartz">
<receiver-threading-profile maxThreadsActive="1"/>
</quartz:connector>
<flow name="QuartzFlow" processingStrategy="synchronous">
<quartz:inbound-endpoint doc:name="Quartz"
jobName="CronJobSchedule" cronExpression="0 0/2 * * * ?"
connector-ref="myQuartzConnector" repeatCount="1">
<quartz:event-generator-job>
<quartz:payload>quartzSchedular started</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<flow-ref name="csvFileToDatabase" doc:name="Flow Reference" />
</flow>
<flow name="csvFileToDatabase">
<file:inbound-endpoint path="/tmp/mule/inbox"
pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
<file:filename-wildcard-filter pattern="*.csv" />
</file:inbound-endpoint>
<!-- Load all file in RAM - won't work for big files! -->
<file:file-to-string-transformer />
<!-- Split each row, dropping the first one (header) -->
<splitter
expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
<!-- Transform CSV row in array -->
<expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
<jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
This works, but if I put a CSV file on D:\, Mule reads it and writes to the database immediately, without waiting for the 2-minute Quartz schedule.
Setting the file connector's polling frequency to pollingFrequency="120000" (2 min = 2*60*1000 ms) doesn't help either: if I place a CSV file on D:\ within the 2-minute window, Mule still reads it and writes to the database right away.
I want the processing to run only on the 2-minute Quartz schedule, even if a CSV file is already sitting on D:\. What changes do I need to make here?
Could anyone please help me?
That is because the flow csvFileToDatabase has its own inbound-endpoint, which executes regardless of the Quartz flow. You have the file:inbound-endpoint set to poll every 5000 milliseconds.
There's no need to have both an inbound-endpoint and Quartz scheduling the flow.
Either change the file:inbound-endpoint frequency, or, if you really want to use Quartz to trigger the flow, take a look at the Mule requester module, which allows you to use an inbound-endpoint mid-flow: https://github.com/mulesoft/mule-module-requester
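A rough sketch of that second option, assuming the requester module is installed and its mulerequester namespace is declared (the element name and resource URL are taken from that module's docs and should be double-checked):

```xml
<flow name="QuartzFlow" processingStrategy="synchronous">
    <quartz:inbound-endpoint jobName="CronJobSchedule"
        cronExpression="0 0/2 * * * ?" connector-ref="myQuartzConnector">
        <quartz:event-generator-job>
            <quartz:payload>quartzSchedular started</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <!-- pull a file on demand, only when Quartz fires -->
    <mulerequester:request resource="file:///tmp/mule/inbox" />
    <flow-ref name="csvFileToDatabase" />
</flow>
```

The file:inbound-endpoint would then be removed from csvFileToDatabase, so that only the Quartz schedule drives processing.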
I have a stand-alone instance of Cassandra, which I launch with the following command:
./cassandra -f
I also have a Java application with the Titan graph library installed. To obtain a TitanGraph object I use the following code:
BaseConfiguration configuration = new BaseConfiguration();
configuration.setProperty("storage.backend", "cassandra");
configuration.setProperty("storage.hostname", "127.0.0.1");
TitanGraph graph = TitanFactory.open(configuration);
After this I can add vertices/edges and query them as well. I did an additional check on the local Cassandra database and can verify that records are being generated and persisted:
cqlsh> select count(*) from titan.edgestore;
count
--------
185050
(1 rows)
The problem arises when I launch the Rexster server. I am launching it in stand-alone mode with the following command:
./rexster.sh -s -c ../config/rexster.xml
Then I launch the Rexster console and load the graph. The issue is that the graph seems to contain no data. I am really not sure what is going on here; there is only one instance of Cassandra running.
(l_(l
(_______( 0 0
( (-Y-) <woof>
l l-----l l
l l,, l l,,
opening session [127.0.0.1:8184]
?h for help
rexster[groovy]> ?h
-= Console Specific =-
?<language-name>: jump to engine
?l: list of available languages on Rexster
?b: print available bindings in the session
?r: reset the rexster session
?e <file-name>: execute a script file
?q: quit
?h: displays this message
-= Rexster Context =-
rexster.getGraph(graphName) - gets a Graph instance
:graphName - [String] - the name of a graph configured within Rexster
rexster.getGraphNames() - gets the set of graph names configured within Rexster
rexster.getVersion() - gets the version of Rexster server
rexster[groovy]> rexster.getGraphNames()
==>kpdlp
rexster[groovy]> rexster.getGraph('graph')
==>titangraph[cassandrathrift:[127.0.0.1]]
rexster[groovy]> g = rexster.getGraph('graph')
==>titangraph[cassandrathrift:[127.0.0.1]]
rexster[groovy]> g.V.count()
==>0
rexster[groovy]>
Below is the rexster.xml I am using:
<?xml version="1.0" encoding="UTF-8"?>
<rexster>
<http>
<server-port>8182</server-port>
<server-host>0.0.0.0</server-host>
<base-uri>http://localhost</base-uri>
<web-root>public</web-root>
<character-set>UTF-8</character-set>
<enable-jmx>false</enable-jmx>
<enable-doghouse>true</enable-doghouse>
<max-post-size>2097152</max-post-size>
<max-header-size>8192</max-header-size>
<upload-timeout-millis>30000</upload-timeout-millis>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</http>
<rexpro>
<server-port>8184</server-port>
<server-host>0.0.0.0</server-host>
<session-max-idle>1790000</session-max-idle>
<session-check-interval>3000000</session-check-interval>
<read-buffer>65536</read-buffer>
<enable-jmx>false</enable-jmx>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</rexpro>
<shutdown-port>8183</shutdown-port>
<shutdown-host>127.0.0.1</shutdown-host>
<config-check-interval>10000</config-check-interval>
<script-engines>
<script-engine>
<name>gremlin-groovy</name>
<reset-threshold>-1</reset-threshold>
<init-scripts>config/init.groovy</init-scripts>
<imports>com.tinkerpop.rexster.client.*</imports>
<static-imports>java.lang.Math.PI</static-imports>
</script-engine>
</script-engines>
<security>
<authentication>
<type>none</type>
<configuration>
<users>
<user>
<username>rexster</username>
<password>rexster</password>
</user>
</users>
</configuration>
</authentication>
</security>
<metrics>
<reporter>
<type>jmx</type>
</reporter>
<reporter>
<type>http</type>
</reporter>
<reporter>
<type>console</type>
<properties>
<rates-time-unit>SECONDS</rates-time-unit>
<duration-time-unit>SECONDS</duration-time-unit>
<report-period>10</report-period>
<report-time-unit>MINUTES</report-time-unit>
<includes>http.rest.*</includes>
<excludes>http.rest.*.delete</excludes>
</properties>
</reporter>
</metrics>
<graphs>
<graph>
<graph-name>graph</graph-name>
<graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
<graph-location></graph-location>
<graph-read-only>false</graph-read-only>
<properties>
<storage.backend>cassandrathrift</storage.backend>
<storage.hostname>127.0.0.1</storage.hostname>
</properties>
<extensions>
<allows>
<allow>tp:gremlin</allow>
</allows>
</extensions>
</graph>
</graphs>
</rexster>
Perhaps there is just some confusion about Rexster's role. Your question was:
My issue is that when I instantiate an TitanGraph using the
TitanFactory as seen below there does not seem to be the option to
specify the graph name?
Note that using TitanFactory will open a TitanGraph instance that connects directly to Cassandra. That has nothing to do with Rexster. If you want to connect to Rexster (which remotely holds a TitanGraph instance, given your configuration), then you must do so through REST or RexPro. REST being the simpler approach for verifying operations, try a curl:
curl http://localhost:8182/graphs
That should return some JSON containing the name of the TitanGraph instance you configured in the <graph-name> field in rexster.xml. The <graph-name> simply identifies the graph instance in Rexster so that you can uniquely identify it in requests when multiple instances are hosted there.
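Given that the rexster.xml above allows the tp:gremlin extension for this graph, a quick follow-up check over REST might look like this (assuming the default HTTP port 8182 and the graph name "graph" from your config):

```shell
# list the graphs hosted by Rexster
curl http://localhost:8182/graphs

# run a Gremlin script against "graph" via the tp:gremlin extension
curl "http://localhost:8182/graphs/graph/tp/gremlin?script=g.V.count()"
```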