Map with Long key does not work in Serializable class - java

I have a Serializable class with a map property. When the map has a Long key the code does not work, while with a String key it works.
This doesn't work:
public class UserSession implements Serializable {
Map<Long, Date> timeQuestionAsked = new HashMap<>();
}
This does work:
public class UserSession implements Serializable {
Map<String, Date> timeQuestionAsked = new HashMap<>();
}
The weird thing is that I get no exception. This class is loaded in a filter in Jetty (a Google App Engine app), and when I try to use the class with the Long key, I get a strange "Not found" error.

Actually it was caused by the database framework I was using: Objectify. It turns out map keys must be Strings: https://code.google.com/p/objectify-appengine/wiki/Entities#Maps
It has nothing to do with Serializable...
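For anyone hitting the same limitation, here is a minimal sketch (not from the original post) that keeps the Objectify-friendly String keys while still letting callers work with Long question ids; the markAsked/getAskedTime helper names are made up for illustration:
public class UserSession implements Serializable {
    // Objectify only persists maps whose keys are Strings, so the Long id
    // is converted at the boundary.
    Map<String, Date> timeQuestionAsked = new HashMap<>();

    public void markAsked(Long questionId, Date when) {
        timeQuestionAsked.put(String.valueOf(questionId), when);
    }

    public Date getAskedTime(Long questionId) {
        return timeQuestionAsked.get(String.valueOf(questionId));
    }
}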

Related

How can we use custom objects with several properties as a key in a map?

I have a Map<LeaveDto, Duration> myMap = new HashMap<>(); and this is my LeaveDto:
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class LeaveDto implements Serializable {
private String name;
private Boolean payable;
private Float factor;
}
When I call my action I receive a 415 Media Type error. After a little investigation I understood the problem is myMap: if I change the key to String it works fine. I also overrode hashCode() and equals(), but it doesn't work. Could someone help me with that?
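Since the question mentions overriding hashCode() and equals(), here is a minimal hand-written pair for LeaveDto as a reference sketch (Lombok's @Data would normally generate an equivalent pair from the same three fields; requires java.util.Objects):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof LeaveDto)) return false;
    LeaveDto other = (LeaveDto) o;
    // Value-based comparison over the same fields used in hashCode()
    return Objects.equals(name, other.name)
            && Objects.equals(payable, other.payable)
            && Objects.equals(factor, other.factor);
}

@Override
public int hashCode() {
    return Objects.hash(name, payable, factor);
}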

"Optional group value" Exception while writing Map<String, Set<String>> to parquet in Spark-Java

I have a POJO model as follows -
public class Model implements Serializable {
private Map<String,Set<String>> map;
}
which I am trying to write to a Parquet file using Spark. The code for this is as follows:
JavaRDD<Context> dataSet = generate();
JavaPairRDD<Model, Model> outputRDD = dataSet.mapToPair((PairFunction<Context, Model, Model>)
context -> {
return new Tuple2<>(context.getModel1(), context.getModel2());
});
Dataset<Tuple2<Model, Model>> outputDS = sqlContext.createDataset(JavaPairRDD.toRDD(outputRDD),
Encoders.tuple(Encoders.bean(Model.class), Encoders.bean(Model.class)));
outputDS.coalesce(numPartitions).write().mode(SaveMode.Overwrite).parquet(outputPath + "v2/");
It gives me the following exception because of using a Set<> inside the map.
Caused by: org.apache.parquet.schema.InvalidSchemaException: Cannot write a schema with an empty group: optional group value {
}
at org.apache.parquet.schema.TypeUtil$1.visit(TypeUtil.java:27)
at org.apache.parquet.schema.GroupType.accept(GroupType.java:255)
at org.apache.parquet.schema.TypeUtil$1.visit(TypeUtil.java:31)
at org.apache.parquet.schema.GroupType.accept(GroupType.java:255)
at org.apache.parquet.schema.TypeUtil$1.visit(TypeUtil.java:31)
So, I tried making it a Map<String, List<String>> and it worked fine.
However, since this model is used all over the code base, changing it would have many repercussions.
Why is this happening, and how can I resolve it?
Thanks in advance!
P.S. I am using Spark 3.1.2
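A minimal sketch of the workaround described above (switching the value type to List so the bean encoder emits a non-empty Parquet group for the map values). The class name, getters/setters, and the from(...) helper are illustrative, and it assumes Model exposes a getMap() accessor:
public class ModelForParquet implements Serializable {
    private Map<String, List<String>> map;

    public Map<String, List<String>> getMap() { return map; }
    public void setMap(Map<String, List<String>> map) { this.map = map; }

    // Copy each Set into a List at the write boundary so the rest of the
    // code base can keep using the Set-based Model.
    public static ModelForParquet from(Model m) {
        ModelForParquet copy = new ModelForParquet();
        Map<String, List<String>> converted = new HashMap<>();
        m.getMap().forEach((k, v) -> converted.put(k, new ArrayList<>(v)));
        copy.setMap(converted);
        return copy;
    }
}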

Serialization in the Hazelcast IMap

I'm trying to load data into an IMap inside a Jet pipeline stage and I'm getting an error. Here is my code:
public static Pipeline pipeLineStage(JetInstance jet) {
Pipeline pipeLine = Pipeline.create();
BatchStage<DataModel> dbValue = pipeLine.readFrom(Sources.jdbc(
"jdbc:postgresql://localhost/postgres?user=postgres&password=root",
"SELECT id1, id2, id3, id4\r\n"
+ " FROM public.tbl_test where id1='3'",
resultSet -> new DataModel(resultSet.getString(2), resultSet.getString(3), resultSet.getString(4))));
dbValue.filter(model -> model.getId2().equals("person"))
.map(model -> JsonUtil.mapFrom(model.getObject_value())).map(map -> {
IMap<Object, Object> map1 = jet.getMap("map1");
map1.put("employee_id", map.get("id"));
return map;
}).writeTo(Sinks.logger());
return pipeLine;
}
Error:
Exception in thread "main" java.lang.IllegalArgumentException: "mapFn" must be serializable
at com.hazelcast.jet.impl.util.Util.checkSerializable(Util.java:203)
If I store the data in a normal Map I don't get any error; I get the error only when I store it in an IMap. In the code above I'm using the model class DataModel, which is declared as public class DataModel implements Serializable {}... Any suggestions would be helpful. Thanks!
The "mapFn" in the error message refers to the function passed to map(), and everything it references must be serializable. Simply add implements Serializable to the class definition.
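Since DataModel is already Serializable, the remaining non-serializable capture is likely the JetInstance jet used inside the map() lambda. A hedged sketch of one way around that, assuming the goal is simply to put the extracted id into map1, is to write through a Jet map sink instead of calling jet.getMap(...) inside the stage:
dbValue.filter(model -> model.getId2().equals("person"))
        .map(model -> JsonUtil.mapFrom(model.getObject_value()))
        // Sinks.map takes the map name plus key/value extractor functions,
        // so the lambdas capture nothing non-serializable.
        .writeTo(Sinks.map("map1",
                m -> "employee_id",
                m -> m.get("id")));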

JPA: Hibernate: updating a column with a Converter class doesn't work

I'm using a Converter class to store my entity field (of type Map<String, Object>) in a MySQL DB as JSON text:
@Entity
@Table(name = "some_table")
public class SomeEntity {
...
@Column(name = "meta_data", nullable = false)
@Convert(converter = MetaDataConverter.class)
Map<String, Object> metaData = null;
...
}
Here is the MetaDataConverter class:
@Converter
public class MetaDataConverter implements AttributeConverter<Map<String, Object>, String> {
private final ObjectMapper objectMapper = new ObjectMapper();
@Override
public String convertToDatabaseColumn(Map<String, Object> metadata) {
try {
return objectMapper.writeValueAsString(metadata);
} catch (Exception e) {
...
}
}
@Override
public Map<String, Object> convertToEntityAttribute(String dbData) {
try {
return objectMapper.readValue(dbData, Map.class);
} catch (Exception e) {
...
}
}
}
Here is my service class:
@Service
@RequiredArgsConstructor
@Transactional
public class MetaDataService {
private final JpaRepository<MetaDataEntity, String> metaDataRepository;
public MetaDataEntity updateOnlyMetadata(String someParameter, Map<String, Object> newMetaData) {
MetaDataEntity metaDataEntity = metaDataRepository.findBySomeParameter(someParameter);
metaDataEntity.setMetaData(newMetaData);
return metaDataRepository.save(metaDataEntity);
}
}
It works fine on creation, but it doesn't work when updating the converted field. If I try to update only the
metaData field, the corresponding column in the database is not updated. However, metaData is updated when other entity fields are updated along with it.
I've already seen similar questions (JPA not updating column with Converter class and Data lost because of JPA AttributeConverter?), but I have not found the answer. Is there a standard or best practice for such a case?
FYI: for the CRUD operations I'm using the Spring Data JpaRepository class and its methods.
Implement proper equals and hashCode methods on your map value objects. JPA can then use those methods to identify that your Map is dirty.
I had a similar problem. I figured out that Hibernate uses the equals method to determine whether an attribute is dirty and then makes an update.
You have two choices: implement equals correctly for the entity, including the converted attribute,
or
instead of using Map, use HashMap as the attribute type.
HashMap already has an implemented equals method and Hibernate will use it.
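As a concrete illustration of the first suggestion, here is a hedged sketch with a hypothetical MetaValue class standing in for whatever objects are stored as values in the metaData map; the point is only the value-based equals/hashCode that lets Hibernate's dirty check see a change (uses java.util.Objects):
public class MetaValue {
    private String label;
    private int weight;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MetaValue)) return false;
        MetaValue other = (MetaValue) o;
        // Compare by content, not by reference
        return weight == other.weight && Objects.equals(label, other.label);
    }

    @Override
    public int hashCode() {
        return Objects.hash(label, weight);
    }
}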

MongoDB document's @Id to be a property of a HashMap?

I have a class whose only field is a HashMap.
Is it possible to define the @Id of the class to be one of the keys of the HashMap (which always exists there)?
Thanks!
If you have a class that contains only a HashMap, don't define a class, because it doesn't make much sense; instead convert your query result directly into a Map, like Map<String, Object> dbCursor = mongoTemplate.getCollection("articles").find(query.getQueryObject(), Map.class).first();
If you do want a class to define your object, maybe you can use something like
public class Foo {
@Id
private String id;
private Map<String, Object> data;
private Map<String, Object> metadata;
}
in order to maintain flexibility
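A hedged usage sketch for the Foo shape above, assuming Spring Data MongoDB with a MongoTemplate and getters/setters (or Lombok) on Foo: the map key that "always exists" is copied into the id field before saving, so it ends up as the document _id.
Map<String, Object> data = loadData();           // hypothetical data source
Foo foo = new Foo();
foo.setId((String) data.get("someKey"));         // the key that always exists
foo.setData(data);
mongoTemplate.save(foo, "articles");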
