How to insert TimeUUID and timestamp with the Spark Cassandra connector in Java?

I am using the Spark Cassandra connector in Java to insert data. My data has a timeuuid and a timestamp field. I have the following table:
CREATE TABLE dove.event_log (
    time_uuid timeuuid,
    session_id text,
    event text,
    time timestamp,
    sequence int,
    PRIMARY KEY (time_uuid)
);
I am using this code to insert:
JavaRDD<EventLog> rdd = sc.parallelize(eventLogs);
javaFunctions(rdd)
    .writerBuilder("dove", "event_log", mapToRow(EventLog.class))
    .saveToCassandra();
How do I insert the timeuuid and timestamp fields? With a normal CQL INSERT I would just use the now() function; how do I do that here?

You may use com.datastax.driver.core.utils.UUIDs for this.
The driver's UUIDsTest shows how to use the class to create a TimeUUID:
UUID uuid = UUIDs.timeBased();
Note that UUID here is java.util.UUID. Not sure if you need it for your use case, but afterwards you can retrieve the timestamp of the UUID by calling UUIDs.unixTimestamp(uuid).
As for the timestamp column, you pass an instance of java.util.Date, as proposed in the docs.
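Putting it together, here is a minimal sketch. It assumes your EventLog bean has properties mapping to the timeuuid and timestamp columns; the setter names below are hypothetical:

import com.datastax.driver.core.utils.UUIDs;
import java.util.Date;

EventLog eventLog = new EventLog();
eventLog.setTimeUuid(UUIDs.timeBased()); // time-based UUID for the timeuuid column
eventLog.setTime(new Date());            // current time for the timestamp column
eventLog.setSessionId("session-1");
eventLog.setEvent("login");
// add it to eventLogs, then save with the writerBuilder/saveToCassandra code above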

Related

Can you execute a CREATE TABLE from the DataStax driver with session.execute?

I am using com.datastax.driver.core.Session, for example:
// This works
session.execute("select * from table");
// This throws a NullPointerException
session.execute("create table testtable ( number int, string varchar)");
Do I have to use some sort of schema builder?
NOTE: I'm connected to the Cassandra instance and can query it with no problem. I just want to be able to create tables from the DataStax driver.
What error do you get when trying to CREATE a table?
As for some quick things to try, it's possible that your user might be missing the CREATE permission.
session.execute ("create table testtable ( number int, string varchar)");
Another thing I noticed about this statement, was that you don't seem to be specifying a PRIMARY KEY. All tables in Cassandra must have a primary key.
Try altering your CQL to this, and see if it helps:
create table testtable ( number int, string varchar, PRIMARY KEY (number))
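To run that from Java, a minimal sketch looks like this (the contact point and keyspace name are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
     Session session = cluster.connect("mykeyspace")) {
    session.execute("create table testtable (number int, string varchar, PRIMARY KEY (number))");
}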

Apache Ignite Postgres persistence: JSON and JSONB problem

I want to cache a Postgres table in an Apache Ignite cache, with JSON and JSONB support.
CREATE TABLE public.some_event (
    id BIGINT NOT NULL,
    name TEXT,
    team_squad JSON,
    event_type TEXT,
    start_datime BIGINT,
    end_datime BIGINT,
    is_running BOOLEAN,
    is_finished BOOLEAN,
    recent_matches JSON,
    CONSTRAINT event_info_pkey PRIMARY KEY (id)
);
For example, in the Apache Ignite configuration:
If the recent_matches field is mapped to JDBC type OTHER and Java type Object, I get a PGobject.
If the field is mapped to JDBC type VARCHAR and Java type String, I get escaped JSON like this:
"\"id\"": ..."
If I cast it in SQL with ::text, I get a BufferedStream.
I don't need any special filtering for JSON or any special SQL. I just want to send a string for insert and update, and when reading, get the JSON string without the double quotes escaped.
As I am new to Apache Ignite, I am unable to understand binary marshalling from the documentation, and I cannot find any complete example.
Can you provide one?
Ignite has no specific support for PostgreSQL's JSON type.
You might need to extend CacheJdbcPojoStore and override its fillParameter() method.
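A rough sketch of that idea (untested; the fillParameter signature may differ between Ignite versions, and which columns to treat as JSON is an assumption based on the table above):

import javax.cache.CacheException;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.postgresql.util.PGobject;

public class JsonAwareJdbcPojoStore<K, V> extends CacheJdbcPojoStore<K, V> {
    @Override protected void fillParameter(PreparedStatement stmt, int idx,
        JdbcTypeField field, Object fieldVal) throws CacheException {
        // Assumption: JSON columns are mapped to String in the POJO.
        if (fieldVal instanceof String && isJsonColumn(field.getDatabaseFieldName())) {
            try {
                PGobject json = new PGobject();
                json.setType("json"); // use "jsonb" for JSONB columns
                json.setValue((String)fieldVal);
                stmt.setObject(idx, json);
                return;
            }
            catch (SQLException e) {
                throw new CacheException("Failed to set JSON parameter", e);
            }
        }
        super.fillParameter(stmt, idx, field, fieldVal);
    }

    // Hypothetical helper: decide which columns hold JSON.
    private static boolean isJsonColumn(String col) {
        return "team_squad".equals(col) || "recent_matches".equals(col);
    }
}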

jOOQ fetched timestamp has no nanoseconds

In a Postgres DB, using jOOQ, I fetch a row that has a column defined as
timestamp without time zone
When I do a select and get the value (fetched by jOOQ into a java.sql.Timestamp), I see that the fractional seconds are missing.
E.g., in the database I have:
2016-04-04 15:14:10.970048
but jOOQ returns a Timestamp with value
2016-04-04 15:14:10.0
This is a problem for further comparisons. How can I prevent this?
UPDATE
Upon request, I'll provide more details.
In Postgresql I have a type:
CREATE TYPE schema.my_type AS
(
    mt_page smallint,
    mt_active_from timestamp without time zone
);
I call a function, using the generated Routines:
DSLContext dsl = ....
MyTypeRecord[] records = Routines.myRoutine(dsl.configuration());
Then the returned Timestamps have no nanos.
The function is:
CREATE OR REPLACE FUNCTION shop.myRoutine(
    OUT my_types schema.my_type[]
)
RETURNS schema.my_type[] AS
$BODY$
DECLARE
BEGIN
    BEGIN
        SELECT ARRAY(
            SELECT ROW(a_id, a_timestamp)::schema.my_type
            FROM schema.article
        ) INTO my_types;
    END;
    RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 1000;
This is a bug in jOOQ: https://github.com/jOOQ/jOOQ/issues/5193
jOOQ internally implements a composite type deserialisation algorithm, as PostgreSQL's JDBC driver, unfortunately, doesn't implement SQLData and related API for out-of-the-box composite type support. The current implementation (jOOQ 3.7.3) parses timestamps without their fractional seconds part.
As a workaround, you could implement your own data type binding that parses the fractional seconds itself.
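For illustration, the parsing such a binding has to do is essentially this; java.sql.Timestamp.valueOf does preserve fractional seconds:

import java.sql.Timestamp;

Timestamp ts = Timestamp.valueOf("2016-04-04 15:14:10.970048");
System.out.println(ts);            // 2016-04-04 15:14:10.970048
System.out.println(ts.getNanos()); // 970048000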

How to create an activity log from Java to MySQL?

I want to create an activity log for each user when they log in from my app, and insert it into a MySQL database with a structure like:
id_log (int, primary key, auto increment)
username (varchar)
time (timestamp)
The problem is that every PC has a different clock, and I only know how to get the timestamp on the local machine. Is there any way to log activity using the timestamp of the machine that stores the database? Others suggested log4j, but I still don't get it.
You can use MySQL's NOW() function, but the TIMESTAMP datatype automatically initialises to the current time by default:
CREATE TABLE logs (
    id_log SERIAL,
    username VARCHAR(255),
    time TIMESTAMP
);
INSERT INTO logs (username) VALUES ('eggyal');
See it on sqlfiddle.
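From the Java side, a minimal JDBC sketch (connection details are placeholders) inserts only the username and lets the MySQL server fill the time column with its own clock:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

void logActivity(String username) throws SQLException {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://localhost:3306/mydb", "user", "password");
         PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO logs (username) VALUES (?)")) {
        ps.setString(1, username);
        ps.executeUpdate(); // `time` is set by the database server, not the client PC
    }
}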

INSERT with DEFAULT id doesn't work in PostgreSQL

I tried running this statement in Postgres:
insert into field (id, name) values (DEFAULT, 'Me')
and I got this error:
ERROR: null value in column "id" violates not-null constraint
I ended up having to set the id manually. The problem with that is that when my app inserts a record, I get a duplicate key error. I am building a Java app using the Play Framework and the Ebean ORM, so the entire schema is generated automatically by Ebean. In this case, what is the best practice for inserting a record manually into my DB?
Edit:
Here is how I'm creating my Field class:
@Entity
public class Field {
    @Id
    public Long id;

    public String name;
}
Edit:
I checked the field_seq sequence and it looks like this:
CREATE SEQUENCE public.field_seq INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START 1 CACHE 1;
Edit:
Here is the generated SQL in pgAdmin III:
CREATE TABLE field
(
    id bigint NOT NULL,
    created timestamp without time zone,
    modified timestamp without time zone,
    name character varying(255),
    enabled boolean,
    auto_set boolean,
    section character varying(17),
    input_type character varying(8),
    user_id bigint,
    CONSTRAINT pk_field PRIMARY KEY (id),
    CONSTRAINT fk_field_user_3 FOREIGN KEY (user_id)
        REFERENCES account (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION,
    CONSTRAINT ck_field_input_type CHECK (input_type::text = ANY (ARRAY['TEXT'::character varying, 'TEXTAREA'::character varying]::text[])),
    CONSTRAINT ck_field_section CHECK (section::text = ANY (ARRAY['MAIN_CONTACT_INFO'::character varying, 'PARTICIPANT_INFO'::character varying]::text[]))
);
CREATE INDEX ix_field_user_3
    ON field
    USING btree
    (user_id);
There is no column default defined for field.id. Since the sequence public.field_seq seems to exist already (but is not attached to field.id) you can fix it with:
ALTER SEQUENCE field_seq OWNED BY field.id;
ALTER TABLE field
ALTER COLUMN id SET DEFAULT (nextval('field_seq'::regclass));
Make sure the sequence isn't in use for something else, though.
It would be much simpler to create your table like this to begin with:
CREATE TABLE field
(
id bigserial PRIMARY KEY,
...
);
Details on serial or bigserial in the manual.
Not sure how the Play framework implements this.
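That said, the usual JPA-style mapping that lets Ebean assign ids automatically looks something like this (a sketch; behaviour can vary by Ebean version):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Field {
    @Id
    @GeneratedValue // let the ORM pull values from a sequence such as field_seq
    public Long id;

    public String name;
}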
This works:
insert into field (id, name) values (nextval('field_seq'), 'Me');
