I have a stand-alone version of Cassandra, which I launch using the following command:
./cassandra -f
I also have a Java application that has the Titan Graph Library installed. To obtain a TitanGraph object I use the following code:
BaseConfiguration configuration = new BaseConfiguration();
configuration.setProperty("storage.backend", "cassandra");
configuration.setProperty("storage.hostname", "127.0.0.1");
TitanGraph graph = TitanFactory.open(configuration);
After this I can add vertices/edges and query them as well. I did an additional check on the local Cassandra database and can verify that records are being generated and persisted:
cqlsh> select count(*) from titan.edgestore;
count
--------
185050
(1 rows)
The problem arises when I launch the Rexster server. I am launching it in stand-alone mode using the following command:
./rexster.sh -s -c ../config/rexster.xml
Then I launch the Rexster console and load the graph. The issue is that the graph seems to contain no data. I am really not sure what is going on here; there is only one instance of Cassandra running.
(l_(l
(_______( 0 0
( (-Y-) <woof>
l l-----l l
l l,, l l,,
opening session [127.0.0.1:8184]
?h for help
rexster[groovy]> ?h
-= Console Specific =-
?<language-name>: jump to engine
?l: list of available languages on Rexster
?b: print available bindings in the session
?r: reset the rexster session
?e <file-name>: execute a script file
?q: quit
?h: displays this message
-= Rexster Context =-
rexster.getGraph(graphName) - gets a Graph instance
:graphName - [String] - the name of a graph configured within Rexster
rexster.getGraphNames() - gets the set of graph names configured within Rexster
rexster.getVersion() - gets the version of Rexster server
rexster[groovy]> rexster.getGraphNames()
==>kpdlp
rexster[groovy]> rexster.getGraph('graph')
==>titangraph[cassandrathrift:[127.0.0.1]]
rexster[groovy]> g = rexster.getGraph('graph')
==>titangraph[cassandrathrift:[127.0.0.1]]
rexster[groovy]> g.V.count()
==>0
rexster[groovy]>
Below is the rexster.xml I am using:
<?xml version="1.0" encoding="UTF-8"?>
<rexster>
<http>
<server-port>8182</server-port>
<server-host>0.0.0.0</server-host>
<base-uri>http://localhost</base-uri>
<web-root>public</web-root>
<character-set>UTF-8</character-set>
<enable-jmx>false</enable-jmx>
<enable-doghouse>true</enable-doghouse>
<max-post-size>2097152</max-post-size>
<max-header-size>8192</max-header-size>
<upload-timeout-millis>30000</upload-timeout-millis>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</http>
<rexpro>
<server-port>8184</server-port>
<server-host>0.0.0.0</server-host>
<session-max-idle>1790000</session-max-idle>
<session-check-interval>3000000</session-check-interval>
<read-buffer>65536</read-buffer>
<enable-jmx>false</enable-jmx>
<thread-pool>
<worker>
<core-size>8</core-size>
<max-size>8</max-size>
</worker>
<kernal>
<core-size>4</core-size>
<max-size>4</max-size>
</kernal>
</thread-pool>
<io-strategy>leader-follower</io-strategy>
</rexpro>
<shutdown-port>8183</shutdown-port>
<shutdown-host>127.0.0.1</shutdown-host>
<config-check-interval>10000</config-check-interval>
<script-engines>
<script-engine>
<name>gremlin-groovy</name>
<reset-threshold>-1</reset-threshold>
<init-scripts>config/init.groovy</init-scripts>
<imports>com.tinkerpop.rexster.client.*</imports>
<static-imports>java.lang.Math.PI</static-imports>
</script-engine>
</script-engines>
<security>
<authentication>
<type>none</type>
<configuration>
<users>
<user>
<username>rexster</username>
<password>rexster</password>
</user>
</users>
</configuration>
</authentication>
</security>
<metrics>
<reporter>
<type>jmx</type>
</reporter>
<reporter>
<type>http</type>
</reporter>
<reporter>
<type>console</type>
<properties>
<rates-time-unit>SECONDS</rates-time-unit>
<duration-time-unit>SECONDS</duration-time-unit>
<report-period>10</report-period>
<report-time-unit>MINUTES</report-time-unit>
<includes>http.rest.*</includes>
<excludes>http.rest.*.delete</excludes>
</properties>
</reporter>
</metrics>
<graphs>
<graph>
<graph-name>graph</graph-name>
<graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
<graph-location></graph-location>
<graph-read-only>false</graph-read-only>
<properties>
<storage.backend>cassandrathrift</storage.backend>
<storage.hostname>127.0.0.1</storage.hostname>
</properties>
<extensions>
<allows>
<allow>tp:gremlin</allow>
</allows>
</extensions>
</graph>
</graphs>
</rexster>
Perhaps there is just some confusion about Rexster's role. Your question was:
My issue is that when I instantiate an TitanGraph using the
TitanFactory as seen below there does not seem to be the option to
specify the graph name?
Note that using TitanFactory will open a TitanGraph instance that connects directly to Cassandra. That has nothing to do with Rexster. If you want to connect to Rexster (which remotely holds a TitanGraph instance, given your configuration) then you must do so through REST or RexPro. REST is the simpler approach for verifying operations, so try curl:
curl http://localhost:8182/graphs
That should return some JSON containing the name of the TitanGraph instance you configured in the <graph-name> field in rexster.xml. The <graph-name> simply identifies the graph instance in Rexster so that you can uniquely identify it in requests when there are multiple instances hosted there.
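For RexPro, a minimal sketch using the RexsterClient classes (note your rexster.xml already imports com.tinkerpop.rexster.client.* into the script engine). Treat this as a sketch rather than tested code, as exact factory signatures can vary by Rexster version; the graph name matches your <graph-name>:
import com.tinkerpop.rexster.client.RexsterClient;
import com.tinkerpop.rexster.client.RexsterClientFactory;

import java.util.List;

// Sketch: open a RexPro session against the graph named "graph" from
// rexster.xml and run a Gremlin script remotely, instead of opening a
// second TitanGraph via TitanFactory.
RexsterClient client = RexsterClientFactory.open("localhost", "graph");
List<Object> results = client.execute("g.V.count()");
System.out.println(results);  // should report the vertex count Rexster sees
client.close();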
Related
I'm trying to write a test for my Spring Cloud service while it runs against Kafka and Schema Registry, which run inside Docker containers.
Kafka and Schema Registry communicate with each other via a Docker network and have ports that are exposed on the host. The service I am testing runs on the host; it is able to communicate with both the dockerized Kafka broker and the dockerized Schema Registry. I am starting it up from a JUnit test which is annotated as shown below.
@ExtendWith(SpringExtension.class)
@SpringBootTest
@EnableAutoConfiguration(exclude = TestSupportBinderAutoConfiguration.class)
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyTest {
...
}
My service spins up and is able to write a message to the Kafka broker running inside the Docker container. However, when my service is started using the various Spring / JUnit test annotations, there appears to be something different about the way the message it writes is serialized, compared to when my service runs in 'production mode' (i.e. if I run it using java -jar com.xyz.MyService).
The message needs to be written in Avro format, so I've configured the binder in application.yml as follows:
my-topic:
  destination: my-topic
  contentType: application/*+avro
  producer:
    useNativeEncoding: true
When attempting to consume the message that my service has written, AbstractKafkaAvroDeserializer blows up, complaining that it was unable to marshal it into a completely unrelated Avro type:
{"logger_name":"org.apache.kafka.streams.errors.LogAndFailExceptionHandler","message":"Exception caught during Deserialization, taskId: 0_0, topic: my-topic, partition: 0, offset: 1","stack_trace":"org.apache.kafka.common.errors.SerializationException: Could not find class com.xyz.SomeOtherMessageType specified in writer's schema whilst finding reader's schema for a SpecificRecord.
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getSpecificReaderSchema(AbstractKafkaAvroDeserializer.java:265)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getReaderSchema(AbstractKafkaAvroDeserializer.java:247)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getDatumReader(AbstractKafkaAvroDeserializer.java:194)
...
This does not happen if my service runs in 'production mode'.
I think therefore that some setting is being applied to my service when I spin it up in 'test mode', which changes the way messages are encoded or serialized.
Can anyone suggest some things I can try to resolve this?
Update 1
So, it turns out that the messages look pretty much identical when they are written to the topic and then read back (the UUIDs are random for each test run):
Written to topic by service running in 'test mode':
Address 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
------- -------- -------- -------- -------- ----------------
000000: 00000000 01483335 63616366 62642D30 .....H35cacfbd-0
000010: 3165642D 34653564 2D613936 652D6665 1ed-4e5d-a96e-fe
000020: 30626339 65313033 34664832 35313436 0bc9e1034fH25146
000030: 6237392D 66643334 2D346430 322D6261 b79-fd34-4d02-ba
000040: 37362D36 61396535 62623861 31343448 76-6a9e5bb8a144H
000050: 30653364 30326536 2D383732 372D3466 0e3d02e6-8727-4f
000060: 64312D38 3730662D 33646633 35353166 d1-870f-3df3551f
000070: 37343861 084D7220 54064D72 730A4A69 748a.Mr T.Mrs.Ji
000080: 6D6D790A 57686974 6514536E 6F772068 mmy.White.Snow h
000090: 6F757365 00000012 4C697665 72706F6F ouse....Liverpoo
0000A0: 6C0C4C4C 32335252 0E456E67 6C616E64 l.XXXXXX.England
0000B0: 16303735 31323334 35363738 021E4D72 .XXXXXXXXXXX..Mr
0000C0: 20542773 20427573 696E6573 73483737 T's BusinessH77
0000D0: 32383064 36352D36 3633362D 34376565 280d65-6636-47ee
0000E0: 2D393864 302D6361 36646531 32373838 -98d0-ca6de12788
0000F0: 63610000 ca..
Written to topic by service running in 'production mode':
Address 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
------- -------- -------- -------- -------- ----------------
000000: 00000000 57483433 64343264 61372D30 ....WH43d42da7-0
000010: 6533392D 34646665 2D383966 362D6531 e39-4dfe-89f6-e1
000020: 37363036 34383730 61344833 38663864 76064870a4H38f8d
000030: 3561342D 65386532 2D346134 372D6235 5a4-e8e2-4a47-b5
000040: 30662D37 31623435 36653837 33393348 0f-71b456e87393H
000050: 63666463 33653964 2D303362 612D3464 cfdc3e9d-03ba-4d
000060: 62372D62 3034622D 31393137 37323634 b7-b04b-19177264
000070: 36623665 084D7220 54064D72 730A4A69 6b6e.Mr T.Mrs.Ji
000080: 6D6D790A 57686974 6514536E 6F772068 mmy.White.Snow h
000090: 6F757365 00000012 4C697665 72706F6F ouse....Liverpoo
0000A0: 6C0C4C4C 32335252 0E456E67 6C616E64 l.XXXXXX.England
0000B0: 16303735 31323334 35363738 021E4D72 .XXXXXXXXXXX..Mr
0000C0: 20542773 20427573 696E6573 73486161 T's BusinessHaa
0000D0: 35326636 34662D36 6131642D 34393030 52f64f-6a1d-4900
0000E0: 2D616537 612D3432 33326333 65613938 -ae7a-4232c3ea98
0000F0: 38330000 83..
The Testcontainers Kafka module runs a single-node Kafka installation. It doesn't spin up a Schema Registry, which I suspect might be a problem for Avro serialization.
You can add it manually in the tests. Testcontainers allows you to run any Docker image programmatically with a simple API call:
var schemaRegistry = new GenericContainer(DockerImageName.parse("confluentinc/cp-schema-registry:version"));
I don't know for certain, but you probably need to connect Kafka and the Schema Registry, which you can do with a Network; see the "Advanced networking" chapter in the docs.
Unfortunately, I don't have a good example to refer to.
You can also look at something like this: https://github.com/kreuzwerker/kafka-consumer-testing.
They mock the Schema Registry URL, so there's no separate Schema Registry container.
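If you do add the registry manually, a rough sketch might look like the following. The image tags, network alias, and environment values here are my assumptions, not tested configuration:
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;

// Sketch: put Kafka and the Schema Registry on one Docker network so the
// registry can reach the broker by its network alias.
Network network = Network.newNetwork();

KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))
        .withNetwork(network)
        .withNetworkAliases("kafka");

GenericContainer<?> schemaRegistry =
        new GenericContainer<>(DockerImageName.parse("confluentinc/cp-schema-registry:7.4.0"))
                .withNetwork(network)
                .withExposedPorts(8081)
                .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
                // "kafka:9092" is the broker's listener on the shared network
                .withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://kafka:9092");

kafka.start();
schemaRegistry.start();

// Point the service under test at the registry's mapped port on the host.
String schemaRegistryUrl =
        "http://" + schemaRegistry.getHost() + ":" + schemaRegistry.getMappedPort(8081);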
I have a web app that sends messages to a queue. It is deployed on WebSphere Application Server and works very well.
I am trying to build a lightweight environment for autotests, but when I try to send a message to the queue from a test, it returns MQJE001: Completion Code '2', Reason '2035'.
I thought the problem was in the CHLAUTH rules, but it seems I have all the necessary rights.
C:/> dspmqaut -m M00.EDOGO -n OEP.FROM.GW_SBAST.DLV -t q -p out-bychek-ao
Entity out-bychek-ao has the following authorizations for object OEP.FROM.GW_SBAST.DLV:
get
browse
put
inq
set
crt
dlt
chg
dsp
passid
passall
setid
setall
clr
Error from the logs:
AMQ8075: Authorization failed because the SID for entity 'out-bychek-a' cannot
be obtained.
EXPLANATION:
The Object Authority Manager was unable to obtain a SID for the specified
entity. This could be because the local machine is not in the domain to locate
the entity, or because the entity does not exist.
ACTION:
Ensure that the entity is valid, and that all necessary domain controllers are
available. This might mean creating the entity on the local machine.
----- amqzfubn.c : 2252 -------------------------------------------------------
7/9/2018 15:39:57 - Process(2028.3) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(SBT-ORSEDG-204) Installation(Installation1)
VRMF(7.5.0.4) QMgr(M00.EDOGO)
AMQ9557: Queue Manager User ID initialization failed.
EXPLANATION:
The call to initialize the User ID failed with CompCode 2 and Reason 2035.
ACTION:
Correct the error and try again.
----- cmqxrsrv.c : 1975 -------------------------------------------------------
7/9/2018 15:39:57 - Process(2028.3) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(SBT-ORSEDG-204) Installation(Installation1)
VRMF(7.5.0.4) QMgr(M00.EDOGO)
AMQ9999: Channel 'SC.EDOGO' to host '10.82.38.188' ended abnormally.
EXPLANATION:
The channel program running under process ID 2028(11564) for channel 'SC.EDOGO'
ended abnormally. The host name is '10.82.38.188'; in some cases the host name
cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 909 --------------------------------------------------------
Notice that in AMQ8075: Authorization failed because the SID for entity 'out-bychek-a' cannot be obtained, my account name has lost its last letter. Is that normal?
And this:
DISPLAY CHLAUTH('SYSTEM.DEF.SVRCONN') MATCH(RUNCHECK) ALL ADDRESS('127.0.0.1') CLNTUSER('out-bychek-ao')
7 : DISPLAY CHLAUTH('SYSTEM.DEF.SVRCONN') MATCH(RUNCHECK) ALL ADDRESS('127.0.0.1') CLNTUSER('out-bychek-ao')
AMQ8898: Display channel authentication record details - currently disabled.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
DESCR(Default rule to disable all SYSTEM channels)
CUSTOM( ) ADDRESS(*)
USERSRC(NOACCESS) WARN(NO)
ALTDATE(2016-11-14) ALTTIME(17.33.34)
dmpmqaut -m M00.EDOGO -n OEP.FROM.GW_SBAST.DLV -t q -p out-bychek-ao -e
profile : OEP.FROM.GW_SBAST.DLV
object type: queue
entity : out-bychek-ao#alpha
entity type: principal
authority : allmqi dlt chg dsp clr
- - - - - - - - -
profile : CLASS
object type: queue
entity : out-bychek-ao#alpha
entity type: principal
authority : clt
Currently I have two maps in Hazelcast, and they are configured like so:
<hz:map name="some-map"
max-idle-seconds="0"
time-to-live-seconds="0">
<hz:map-store enabled="true"
initial-mode="EAGER"
write-delay-seconds="0"
class-name="SomeMapStore">
</hz:map-store>
<hz:partition-strategy>com.hazelcast.partition.strategy.DefaultPartitioningStrategy</hz:partition-strategy>
</hz:map>
I would expect the initial-mode="EAGER" setting from the hazelcast-beans.xml configuration to populate the Hazelcast map. Instead, the application process hangs for a moment, and then I see the following error:
my-service 21:14:15.247Z [hz.my-service-name.SlowOperationDetectorThread] WARN com.hazelcast.spi.impl.operationexecutor.slowoperationdetector.SlowOperationDetector - [localhost]:8085 [my-service-name-local] [3.9.4] Slow operation detected: com.hazelcast.map.impl.operation.PutTransientOperation
Has anyone run into this? I'm on Hazelcast 3.9.4.
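For reference, here is roughly the shape of my SomeMapStore (a simplified sketch with an in-memory map standing in for the real backend). With initial-mode="EAGER", Hazelcast calls loadAllKeys() and then loadAll() before the map is usable, which is where a slow backing store would show up as slow operations like the one logged above:
import com.hazelcast.core.MapStore;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SomeMapStore implements MapStore<String, String> {

    // Stand-in for the real database; purely illustrative.
    private final Map<String, String> backend = new ConcurrentHashMap<>();

    @Override
    public Iterable<String> loadAllKeys() {
        // Called once at startup under EAGER loading; a full key scan of a
        // large table here is a common cause of slow-operation warnings.
        return backend.keySet();
    }

    @Override
    public Map<String, String> loadAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<>();
        for (String key : keys) {
            result.put(key, backend.get(key));
        }
        return result;
    }

    @Override
    public String load(String key) {
        return backend.get(key);
    }

    @Override
    public void store(String key, String value) {
        // write-delay-seconds="0" makes this a synchronous write-through call.
        backend.put(key, value);
    }

    @Override
    public void storeAll(Map<String, String> map) {
        backend.putAll(map);
    }

    @Override
    public void delete(String key) {
        backend.remove(key);
    }

    @Override
    public void deleteAll(Collection<String> keys) {
        keys.forEach(backend::remove);
    }
}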
I have 5GB worth of data in DSE 4.8.9. I am trying to load the same data into DSE 5.0.2. The command I use is the following:
root#dse:/mnt/cassandra/data$ sstableloader -d 10.0.2.91 /mnt/cassandra/data/my-keyspace/my-table-0b168ba1637111e6b40131c603254a9b/
This gives me the following exception:
DEBUG 15:27:12,850 Using framed transport.
DEBUG 15:27:12,850 Opening framed transport to: 10.0.2.91:9160
DEBUG 15:27:12,850 Using thriftFramedTransportSize size of 16777216
DEBUG 15:27:12,851 Framed transport opened successfully to: 10.0.2.91:9160
Could not retrieve endpoint ranges:
InvalidRequestException(why:unconfigured table schema_columnfamilies)
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:342)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:109)
Caused by: InvalidRequestException(why:unconfigured table schema_columnfamilies)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:50297)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result$execute_cql3_query_resultStandardScheme.read(Cassandra.java:50274)
at org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:50189)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
at org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1734)
at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1719)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:321)
... 2 more
Thoughts?
For scenarios where you have few nodes and not a lot of data, you can follow these steps for a cluster migration (ensure the clusters are at most one major release apart):
1) create the schema in the new cluster
2) move both nodes' data to each new node (into the new cfid tables)
3) nodetool refresh to pick up the data
4) nodetool cleanup to clear out the extra data
5) If the old cluster was on a previous major version, run sstableupgrade on the new cluster.
We have 10 Cassandra nodes in production running Cassandra-2.1.8. We recently upgraded to the 2.1.8 version; previously we were using only 3 nodes running Cassandra-2.1.2. First we upgraded the initial 3 nodes from 2.1.2 to 2.1.8 (following the procedure described in Upgrading Cassandra). Then we added 7 more nodes running Cassandra-2.1.8 to the cluster, and then we started our client programs. For the first few hours everything worked fine, but after a few hours we saw errors in the client program logs like:
Thread-0 [29/07/15 17:41:23.356] ERROR com.cleartrail.entityprofiling.engine.InterpretationWriter - Error:com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:259)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:175)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.cleartrail.entityprofiling.engine.InterpretationWriter.WriteInterpretation(InterpretationWriter.java:430)
at com.cleartrail.entityprofiling.engine.Profiler.buildProfile(Profiler.java:1042)
at com.cleartrail.messageconsumer.consumer.KafkaConsumer.run(KafkaConsumer.java:336)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/172.50.33.161:9041, /172.50.33.162:9041, /172.50.33.95:9041, /172.50.33.96:9041, /172.50.33.165:9041, /172.50.33.166:9041, /172.50.33.163:9041, /172.50.33.164:9041, /172.50.33.42:9041, /172.50.33.167:9041] - use getErrors() for details)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Now, I have double-checked the firewall (as suggested in a few posts), the ports, and the timeouts in the client as well as on the nodes, and they are all correct.
I am also not closing the connection anywhere in between. I am using batch queries with a batch size of 1000; the queries are updates that increment counters in my table with three columns,
entity, twfwv, cvalue
where entity and twfwv are text columns forming the primary key and cvalue is a counter column.
I even restarted all my nodes (because this trick helped me in my dev environment when I faced the same exception), but it's not helping. Please suggest what the probable problem could be here.
My issue was resolved by checking the errors collection of the NoHostAvailableException, as advised by Olivier Michallat in the comments. For me it was the protocol version in the cluster configuration: mine was null, and setting it to 3 fixed the problem.
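A minimal sketch of what that fix looks like with the Java driver (the contact point is a placeholder taken from the addresses above; ProtocolVersion.V3 corresponds to "setting it to 3"):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ProtocolVersion;

// Sketch: pin the native protocol version explicitly instead of leaving it
// null (auto-negotiated).
Cluster cluster = Cluster.builder()
        .addContactPoint("172.50.33.161")         // any reachable node
        .withProtocolVersion(ProtocolVersion.V3)
        .build();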
My issue was resolved by introducing a property to enable or disable the custom load-balancing TokenAwarePolicy my connection was using, and relying on the default.
Specifically, I was trying to get a local spring boot app talking to a single dockerized Cassandra instance.
// Build the cluster without an explicit load-balancing policy by default.
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(cassandraProperties.getHosts())
        .withPort(cassandraProperties.getPort())
        .withProtocolVersion(ProtocolVersion.V4)
        .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
        .withCredentials(cassandraProperties.getUsername(), cassandraProperties.getPassword())
        .withCodecRegistry(codecRegistry);

// Only apply the custom token-aware, DC-aware policy when explicitly enabled;
// for a single dockerized node, the driver's default policy is sufficient.
if (loadBalanced) {
    builder.withLoadBalancingPolicy(
            new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().withLocalDc(localDc).build()));
}