cache data from database on application startup - java

I am trying to build a cache for a web application; below is the code I have so far. The cache has to be loaded on application startup.
@Service
public class CacheBuilder {

    private MultiKeyMap cache = new MultiKeyMap();

    @Autowired
    private ConfigurationDAO configurationDAO;

    public void loadConfigurations() {
        Map<String, Collection<Config>> map = null;
        try {
            // Only load when the cache has not been populated yet.
            if (cache != null && cache.isEmpty()) {
                map = configurationDAO.loadConfigs();
                map.forEach((k, v) -> {
                    v.forEach((c) -> {
                        cache.put(k, c.getAttributeName(), c.getAttributeValue());
                    });
                });
            }
        } catch (DaoException e) {
            throw new RuntimeException(e);
        }
    }

    public Object getValue(String key1, String key2) {
        return cache.get(key1, key2);
    }

    public void clearCache() {
        if (cache != null) {
            cache.clear();
        }
    }

    public String printCache() {
        return cache.toString();
    }
}
I am currently using a Spring Boot application, and I am trying to improve this pseudo-code. What I would like to have is:
1. The loadConfigurations() method should not be exposed to anyone, so that no one accidentally calls it.
2. I would ideally like a single instance of this class, with static methods. Would that be the best thing to do?
3. If I do step 2, how do I achieve step 1?
Can anyone please suggest some pointers?
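For what it's worth, a common way to meet both requirements is to keep the class a regular Spring singleton (beans are singleton-scoped by default, so static methods buy you nothing) and let the container trigger the load via @PostConstruct, which also works on private methods. A minimal sketch of that idea, reusing the types from the snippet above, not a definitive implementation:

@Service
public class CacheBuilder {

    private final MultiKeyMap cache = new MultiKeyMap();

    @Autowired
    private ConfigurationDAO configurationDAO;

    // Spring invokes this exactly once, after construction and injection.
    // Because it is private, no other class can call it accidentally.
    @PostConstruct
    private void loadConfigurations() {
        try {
            configurationDAO.loadConfigs().forEach((key, configs) ->
                    configs.forEach(c -> cache.put(key, c.getAttributeName(), c.getAttributeValue())));
        } catch (DaoException e) {
            // Fail startup loudly rather than silently serving an empty cache.
            throw new IllegalStateException("Cache load failed on startup", e);
        }
    }

    public Object getValue(String key1, String key2) {
        return cache.get(key1, key2);
    }
}

Other beans then simply inject CacheBuilder and call getValue(); there is no static state to manage.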

Resilience4j context propagator not able to propagate thread-local values

I am trying to migrate my circuit breaker code from Hystrix to Resilience4j. The communication is between two applications, one of which is an artifact containing all the Resilience4j config in the Java code itself; the second application, a microservice, uses it directly.
There is a RequestId that is generated in the microservice and propagated to the artifact's context, where it gets printed in the logs. With Hystrix this worked perfectly fine, but ever since I moved to Resilience4j I am getting null for the RequestId.
Below is my config for the bulkhead and context propagator:
ThreadPoolBulkheadConfig bulkheadConfig = ThreadPoolBulkheadConfig.custom()
        .maxThreadPoolSize(maxThreadPoolSize)
        .coreThreadPoolSize(coreThreadPoolSize)
        .queueCapacity(queueCapacity)
        .contextPropagator(new DummyContextPropagator())
        .build();

// Bulkhead registry
ThreadPoolBulkheadRegistry bulkheadRegistry = ThreadPoolBulkheadRegistry.of(bulkheadConfig);

// Create bulkhead
ThreadPoolBulkhead bulkhead = bulkheadRegistry.bulkhead(name, bulkheadConfig);
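For reference, the propagator only fires when a call is actually dispatched through the bulkhead's thread pool: retrieve() runs on the submitting thread, while copy() and clear() run on the worker thread. A minimal sketch of exercising the config above, where doWork() is a hypothetical stand-in for the real remote call:

// retrieve() runs here on the calling thread; copy()/clear() run on the
// bulkhead's worker thread around the decorated call.
Supplier<CompletionStage<String>> decorated =
        ThreadPoolBulkhead.decorateSupplier(bulkhead, () -> doWork());
decorated.get().whenComplete((result, ex) -> System.out.println("result: " + result));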
Dummy Context Propagator:
public class DummyContextPropagator implements ContextPropagator<Object> {

    private static final Logger log = LoggerFactory.getLogger(DummyContextPropagator.class);

    @Override
    public Supplier<Optional<Object>> retrieve() {
        return DummyContextHolder::get;
    }

    @Override
    public Consumer<Optional<Object>> copy() {
        return (t) -> t.ifPresent(e -> {
            DummyContextHolder.clear();
            DummyContextHolder.put(e);
        });
    }

    @Override
    public Consumer<Optional<Object>> clear() {
        return (t) -> DummyContextHolder.clear();
    }

    public static class DummyContextHolder {

        private static final ThreadLocal<Object> threadLocal = new ThreadLocal<>();

        private DummyContextHolder() {
        }

        public static void put(Object context) {
            if (threadLocal.get() != null) {
                clear();
            }
            threadLocal.set(context);
        }

        public static void clear() {
            threadLocal.remove();
        }

        public static Optional<Object> get() {
            return Optional.ofNullable(threadLocal.get());
        }
    }
}
However, nothing seems to work, and I cannot get the RequestId.
Am I doing everything right, or is there another way to do this?
I think you want to get values from the parent thread's ThreadLocal while you are in a sub-thread. In Hystrix, the command model decorates the Callable task. In Resilience4j, I think you can fix it like this:
@Resource
DispatcherServlet dispatcherServlet;

@PostConstruct
public void changeThreadLocalModel() {
    dispatcherServlet.setThreadContextInheritable(true);
}
I find my last answer may lead to some problems: when you use dispatcherServlet.setThreadContextInheritable(true), it may pollute your custom thread pool's ThreadLocal map. So here is my final solution, and it only works with Resilience4j:
@Resource
Resilience4jBulkheadProvider resilience4jBulkheadProvider;

@PostConstruct
public void concurrentThreadContextStrategy() {
    ThreadPoolBulkheadConfig threadPoolBulkheadConfig = ThreadPoolBulkheadConfig.custom()
            .contextPropagator(new CustomInheritContextPropagator())
            .build();
    resilience4jBulkheadProvider.configureDefault(id -> new Resilience4jBulkheadConfigurationBuilder()
            .bulkheadConfig(BulkheadConfig.ofDefaults())
            .threadPoolBulkheadConfig(threadPoolBulkheadConfig)
            .build());
}
private static class CustomInheritContextPropagator implements ContextPropagator<RequestAttributes> {

    @Override
    public Supplier<Optional<RequestAttributes>> retrieve() {
        // Hand out a reference to the request context from the ThreadLocal.
        // Called by the web container thread (Tomcat, Jetty, or Undertow,
        // depending on what you use).
        return () -> Optional.ofNullable(RequestContextHolder.getRequestAttributes());
    }

    @Override
    public Consumer<Optional<RequestAttributes>> copy() {
        // Load the request context into the thread making the real call.
        // Called by the Resilience4j bulkhead thread.
        return requestAttributes -> requestAttributes.ifPresent(context -> {
            RequestContextHolder.resetRequestAttributes();
            RequestContextHolder.setRequestAttributes(context);
        });
    }

    @Override
    public Consumer<Optional<RequestAttributes>> clear() {
        // Clean up the request context when done.
        // Called by the Resilience4j bulkhead thread.
        return requestAttributes -> RequestContextHolder.resetRequestAttributes();
    }
}
I got the same problem with Spring Boot 2.5 and Spring Cloud 2020.0.6, and I solved it with an implementation of ContextPropagator:
public class SleuthPropagator implements ContextPropagator<TraceContext> {

    ThreadLocal<ScopedSpan> scopedSpanThreadLocal = new ThreadLocal<>();

    @Override
    public Supplier<Optional<TraceContext>> retrieve() {
        return this::getCurrentContext;
    }

    @Override
    public Consumer<Optional<TraceContext>> copy() {
        return c -> {
            if (!c.isPresent()) {
                return;
            }
            TraceContext traceContext = c.get();
            ScopedSpan resilience4jSpan = getTracer()
                    .map(t -> t.startScopedSpanWithParent("Resilience4j", traceContext))
                    .orElse(null);
            scopedSpanThreadLocal.set(resilience4jSpan);
        };
    }

    @Override
    public Consumer<Optional<TraceContext>> clear() {
        return t -> {
            try {
                ScopedSpan resilience4jSpan = scopedSpanThreadLocal.get();
                if (resilience4jSpan != null) {
                    resilience4jSpan.finish();
                }
            } finally {
                scopedSpanThreadLocal.remove();
            }
        };
    }

    private static Optional<Tracer> getTracer() {
        return Optional.ofNullable(Tracing.current())
                .map(Tracing::tracer);
    }

    private Optional<TraceContext> getCurrentContext() {
        return getTracer()
                .map(Tracer::currentSpan)
                .map(Span::context);
    }
}
And use the propagator by adding this to your application.properties:
resilience4j.thread-pool-bulkhead.instances.YOUR_BULKHEAD_CONFIG.context-propagators=com.your.package.SleuthPropagator

How to add instrumentation to GraphQL Java with graphql-spring-boot?

Does anybody know how I can add instrumentation to a GraphQL execution when using graphql-spring-boot (https://github.com/graphql-java-kickstart/graphql-spring-boot)? I know how this is possible with plain-vanilla graphql-java: https://www.graphql-java.com/documentation/v13/instrumentation/
However, I don't know how to do this when graphql-spring-boot is used and takes control over the execution. Due to the lack of documentation, I simply tried it this way:
@Service
public class GraphQLInstrumentationProvider implements InstrumentationProvider {

    @Override
    public Instrumentation getInstrumentation() {
        return SimpleInstrumentation.INSTANCE;
    }
}
But the method getInstrumentation on my InstrumentationProvider bean is (as expected) never called. Any help appreciated.
Answering my own question. In the meantime I managed to do it this way:
final class RequestLoggingInstrumentation extends SimpleInstrumentation {

    private static final Logger logger = LoggerFactory.getLogger(RequestLoggingInstrumentation.class);

    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(InstrumentationExecutionParameters parameters) {
        long startMillis = System.currentTimeMillis();
        var executionId = parameters.getExecutionInput().getExecutionId();
        if (logger.isInfoEnabled()) {
            logger.info("GraphQL execution {} started", executionId);
            var query = parameters.getQuery();
            logger.info("[{}] query: {}", executionId, query);
            if (parameters.getVariables() != null && !parameters.getVariables().isEmpty()) {
                logger.info("[{}] variables: {}", executionId, parameters.getVariables());
            }
        }
        return new SimpleInstrumentationContext<>() {
            @Override
            public void onCompleted(ExecutionResult executionResult, Throwable t) {
                if (logger.isInfoEnabled()) {
                    long endMillis = System.currentTimeMillis();
                    if (t != null) {
                        logger.info("GraphQL execution {} failed: {}", executionId, t.getMessage(), t);
                    } else {
                        var resultMap = executionResult.toSpecification();
                        // pojoToJSON is a project-local helper, not the Jackson API
                        var resultJSON = ObjectMapper.pojoToJSON(resultMap).replace("\n", "\\n");
                        logger.info("[{}] completed in {}ms", executionId, endMillis - startMillis);
                        logger.info("[{}] result: {}", executionId, resultJSON);
                    }
                }
            }
        };
    }
}

@Service
class InstrumentationService {

    private final ContextFactory contextFactory;

    InstrumentationService(ContextFactory contextFactory) {
        this.contextFactory = contextFactory;
    }

    /**
     * Return all instrumentations as a bean.
     * The result will be used in class {@link com.oembedler.moon.graphql.boot.GraphQLWebAutoConfiguration}.
     */
    @Bean
    List<Instrumentation> instrumentations() {
        // Note: Due to a bug in GraphQLWebAutoConfiguration, the returned list
        // has to be modifiable (it will be sorted).
        return new ArrayList<>(List.of(new RequestLoggingInstrumentation()));
    }
}
It helped me to have a look into the class GraphQLWebAutoConfiguration. There I found out that the framework expects a bean of type List<Instrumentation>, which contains all the instrumentations that will be added to the GraphQL execution.
There is a simpler way to add instrumentation with Spring Boot:
@Configuration
public class InstrumentationConfiguration {

    @Bean
    public Instrumentation someFieldCheckingInstrumentation() {
        return new FieldValidationInstrumentation(env -> {
            // ...
        });
    }
}
Spring Boot will collect all beans which implement Instrumentation (see GraphQLWebAutoConfiguration).

Passing a second argument to Guava Cache load() method

I've previously asked a question on here about how to implement a Guava Cache in Java, seen here. While it has worked, I've recently noticed a bug in the getAllProfiles method.
private LoadingCache<Integer, List<Profile>> loadingCache = CacheBuilder.newBuilder()
        .refreshAfterWrite(10, TimeUnit.MINUTES)
        .maximumSize(100)
        .build(new CacheLoader<Integer, List<Profile>>() {
            @Override
            public List<Profile> load(Integer integer) throws Exception {
                Profile profile = new Profile();
                if (integer == null) {
                    integer = 10;
                }
                return profileDAO.getAllProfiles(profile, integer);
            }
        });

public List<Profile> getAllProfiles(Profile profile, Integer integer) throws Exception {
    return loadingCache.get(integer);
}
In the method, I'm passing in a Profile object called profile. This is so that on the Service layer the user can set a parameter for the profiles of workers, to see if they are still employed, using @QueryParam:
@GET
public List<Profile> getProfiles(@QueryParam("employed") Boolean employed, @QueryParam("size") Integer integer) {
    // code for the service here. the value for the query param is used in a
    // new Profile object
}
The profile object created here is passed down through the manager tier and into the DAO tier, where the parameters set on it, like the boolean employed, are parsed into arguments for a select statement.
The issue is that since I've started using the cache, the boolean is no longer being parsed. Calling the method with a System.out.println shows that the employed field evaluates to null. This makes sense: I create a new Profile object in the cache manager with no setters called, and the cache get method doesn't take profile at the getAllProfiles call site; it only takes size.
I thought I could get around this by adding in a new Profile parameter in the load method, like so:
private LoadingCache<Integer, List<Profile>> loadingCache = CacheBuilder.newBuilder()
        .refreshAfterWrite(10, TimeUnit.MINUTES)
        .maximumSize(100)
        .build(new CacheLoader<Integer, List<Profile>>() {
            @Override
            public List<Profile> load(Integer integer, Profile profile) throws Exception {
                if (integer == null) {
                    integer = 10;
                }
                return profileDAO.getAllProfiles(profile, integer);
            }
        });
However, load() appears to be designed only to take one argument, so this brings up this error:
Class 'Anonymous class derived from CacheLoader' must either be declared abstract or implement abstract method 'load(K)' in 'CacheLoader'
To reiterate, all I need to do is pass the profile object created in the Service layer to the manager layer and the cache. This seems to be as simple as passing a second argument to load(), but that does not seem to be possible.
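One standard workaround, not from the original post: since a CacheLoader only ever sees its key, fold everything the loader needs into the key itself. A hedged sketch with a hypothetical composite key type (the setEmployed setter on Profile is assumed):

// Hypothetical composite key carrying both lookup parameters.
// equals/hashCode are required, since instances are used as cache keys.
final class ProfileQuery {
    final Boolean employed;
    final int size;

    ProfileQuery(Boolean employed, int size) {
        this.employed = employed;
        this.size = size;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ProfileQuery)) return false;
        ProfileQuery other = (ProfileQuery) o;
        return size == other.size && Objects.equals(employed, other.employed);
    }

    @Override
    public int hashCode() {
        return Objects.hash(employed, size);
    }
}

private final LoadingCache<ProfileQuery, List<Profile>> loadingCache = CacheBuilder.newBuilder()
        .refreshAfterWrite(10, TimeUnit.MINUTES)
        .maximumSize(100)
        .build(new CacheLoader<ProfileQuery, List<Profile>>() {
            @Override
            public List<Profile> load(ProfileQuery query) throws Exception {
                Profile profile = new Profile();
                profile.setEmployed(query.employed); // assumed setter
                return profileDAO.getAllProfiles(profile, query.size);
            }
        });

This would also fix a correctness issue in the current design: two calls differing only in employed no longer collide on the same Integer key.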
EDIT:
I've edited the getAllProfiles method to use a Callable:
public List<Profile> getAllProfiles(Profile profile, Integer integer) throws Exception {
    return loadingCache.get(integer, new Callable<Profile>() {
        @Override
        public Profile call() throws Exception {
            return profile;
        }
    });
}
This produces an error, because I'm passing in Profile instead of List<Profile>. I need to pass in profile, though, so I can parse through its fields in the DAO for the SQL statement.
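For what it's worth, the Callable overload can be made to compile by having the Callable produce the cache's value type, List<Profile>; the Callable is then what computes the value on a miss for that key. A minimal corrected sketch:

public List<Profile> getAllProfiles(Profile profile, Integer integer) throws Exception {
    // The Callable must return the cache's value type, List<Profile>;
    // it only runs when the key is absent from the cache.
    return loadingCache.get(integer, () -> profileDAO.getAllProfiles(profile, integer));
}

Note the caveat: entries are still keyed only by the Integer, so results for different Profile settings will shadow each other under the same key.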
Here is an example:
public class ImageCache2 extends CaffeineCache<URL, Image> {

    ImageCache2() {
        this.cache = Caffeine.newBuilder()
                .maximumSize(300)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .refreshAfterWrite(1, TimeUnit.MINUTES)
                .build((k) -> null);
    }
}
Just give build a loader that returns null, because we don't use it.
public static Image LoadImageFromURL(URL url, double w, double h) {
    URLConnection conn;
    Image returnImage;
    try {
        conn = url.openConnection();
    } catch (IOException e1) {
        e1.printStackTrace();
        return null;
    }
    conn.setRequestProperty("User-Agent", "Wget/1.13.4 (linux-gnu)");
    try (InputStream stream = conn.getInputStream()) {
        returnImage = new Image(stream, w, h, true, true);
    } catch (IOException e2) {
        e2.printStackTrace();
        return null;
    }
    return returnImage;
}
Here is the code I really use to get the item:
public static void useExecutors(Runnable run) {
    executorServices.execute(run);
}

public void LoadImage(URL url, double w, double h, Consumer<Image> callWhenFinish) {
    useExecutors(() -> {
        Image thumbImage = ImageCacheInstance.Cache().get(url, (u) -> LoadImageFromURL(url, w, h));
        Platform.runLater(() -> {
            callWhenFinish.accept(thumbImage);
            System.out.println("ImageLoad >> Finish -- " + this);
        });
    });
}
Here is where I call the cache's get method. PS: useExecutors runs it in a background thread.

Cache in GWT app/widget with HTML5 localStorage

I am trying to incorporate a data cache for one of my GWT widgets.
I have a datasource interface/class which retrieves some data from my backend via RequestBuilder and JSON. Because I display the widget multiple times, I only want to retrieve the data once.
So I tried to come up with an app cache. The naive approach is to use a HashMap in a singleton object to store the data. However, I also want to make use of HTML5's localStorage/sessionStorage if supported.
HTML5 localStorage only supports String values, so I have to convert my object into JSON and store it as a string. However, somehow I can't come up with a nice, clean way of doing this. Here is what I have so far.
I define an interface with two functions: fetchStatsList() fetches the list of stats that can be displayed in the widget, and fetchStatsData() fetches the actual data.
public interface DataSource {
    public void fetchStatsData(List<Stat> stats, FetchStatsDataCallback callback);
    public void fetchStatsList(FetchStatsListCallback callback);
}
The Stat class is a simple JavaScript overlay class (JavaScriptObject) with some getters (getName(), etc.).
I have a normal, non-caching implementation RequestBuilderDataSource of my DataSource, which looks like the following:
public class RequestBuilderDataSource implements DataSource {

    @Override
    public void fetchStatsList(final FetchStatsListCallback callback) {
        // create RequestBuilder request, retrieve response and parse JSON
        callback.onFetchStatsList(stats);
    }

    @Override
    public void fetchStatsData(List<Stat> stats, final FetchStatsDataCallback callback) {
        String url = getStatUrl(stats);
        // create RequestBuilder request, retrieve response and parse JSON
        callback.onFetchStats(dataTable); // dataTable is of type DataTable
    }
}
I left out most of the code for the RequestBuilder as it is quite straightforward.
This works out of the box; however, the list of stats and the data are retrieved every time, even though the data is shared among the widget instances.
To support caching, I add a Cache interface and two Cache implementations (one for HTML5 localStorage and one for HashMap):
public interface Cache {
    void put(Object key, Object value);
    Object get(Object key);
    void remove(Object key);
    void clear();
}
I add a new class RequestBuilderCacheDataSource which extends RequestBuilderDataSource and takes a Cache instance in its constructor.
public class RequestBuilderCacheDataSource extends RequestBuilderDataSource {

    private final Cache cache;

    public RequestBuilderCacheDataSource(final Cache cache) {
        this.cache = cache;
    }

    @Override
    public void fetchStatsList(final FetchStatsListCallback callback) {
        Object value = cache.get("list");
        if (value != null) {
            callback.onFetchStatsList((List<Stat>) value);
        } else {
            super.fetchStatsList(new FetchStatsListCallback() {
                @Override
                public void onFetchStatsList(List<Stat> stats) {
                    cache.put("list", stats);
                    callback.onFetchStatsList(stats);
                }
            });
        }
    }

    @Override
    public void fetchStatsData(final List<Stat> stats, final FetchStatsDataCallback callback) {
        final String url = getStatUrl(stats);
        Object value = cache.get(url);
        if (value != null) {
            callback.onFetchStatsData((DataTable) value);
        } else {
            super.fetchStatsData(stats, new FetchStatsDataCallback() {
                @Override
                public void onFetchStatsData(DataTable dataTable) {
                    cache.put(url, dataTable);
                    callback.onFetchStatsData(dataTable);
                }
            });
        }
    }
}
Basically the new class looks up the value in the Cache, and if it is not found it calls the fetch function in the parent class, intercepting the callback to put the result into the cache before calling the actual callback.
So, to support both HTML5 localStorage and a normal JS HashMap storage, I created two implementations of my Cache interface.
JS HashMap storage:
public class DefaultCacheImpl implements Cache {

    private HashMap<Object, Object> map;

    public DefaultCacheImpl() {
        this.map = new HashMap<Object, Object>();
    }

    @Override
    public void put(Object key, Object value) {
        if (key == null) {
            throw new NullPointerException("key is null");
        }
        if (value == null) {
            throw new NullPointerException("value is null");
        }
        map.put(key, value);
    }

    @Override
    public Object get(Object key) {
        // Check for null as Cache should not store null values / keys
        if (key == null) {
            throw new NullPointerException("key is null");
        }
        return map.get(key);
    }

    @Override
    public void remove(Object key) {
        map.remove(key);
    }

    @Override
    public void clear() {
        map.clear();
    }
}
HTML5 localStorage:
public class LocalStorageImpl implements Cache {

    public static enum TYPE {LOCAL, SESSION}

    private TYPE type;
    private Storage cacheStorage = null;

    public LocalStorageImpl(TYPE type) throws Exception {
        this.type = type;
        if (type == TYPE.LOCAL) {
            cacheStorage = Storage.getLocalStorageIfSupported();
        } else {
            cacheStorage = Storage.getSessionStorageIfSupported();
        }
        if (cacheStorage == null) {
            throw new Exception("LocalStorage not supported");
        }
    }

    @Override
    public void put(Object key, Object value) {
        // Convert Object (could be any arbitrary object) into JSON
        String jsonData = null;
        if (value instanceof List) { // in case it is a list of Stat objects
            JSONArray array = new JSONArray();
            int index = 0;
            for (Object val : (List) value) {
                array.set(index, new JSONObject((JavaScriptObject) val));
                index = index + 1;
            }
            jsonData = array.toString();
        } else { // in case it is a DataTable
            jsonData = new JSONObject((JavaScriptObject) value).toString();
        }
        cacheStorage.setItem(key.toString(), jsonData);
    }

    @Override
    public Object get(Object key) {
        if (key == null) {
            throw new NullPointerException("key is null");
        }
        String jsonDataString = cacheStorage.getItem(key.toString());
        if (jsonDataString == null) {
            return null;
        }
        Object data = null;
        Object jsonData = JsonUtils.safeEval(jsonDataString);
        if (!key.equals("list")) {
            data = DataTable.create((JavaScriptObject) jsonData);
        } else if (jsonData instanceof JsArray) {
            JsArray<GenomeStat> jsonStats = (JsArray<GenomeStat>) jsonData;
            List<GenomeStat> stats = new ArrayList<GenomeStat>();
            for (int i = 0; i < jsonStats.length(); i++) {
                stats.add(jsonStats.get(i));
            }
            data = (Object) stats;
        }
        return data;
    }

    @Override
    public void remove(Object key) {
        cacheStorage.removeItem(key.toString());
    }

    @Override
    public void clear() {
        cacheStorage.clear();
    }

    public TYPE getType() {
        return type;
    }
}
The post got a little long, but hopefully it clarifies what I am trying to achieve. It boils down to two questions:
1. Feedback on the design/architecture of this approach (for example, subclassing RequestBuilderDataSource for the cache function, etc.). Can this be improved? (This is probably more related to general design than specifically to GWT.)
2. With the DefaultCacheImpl it is really easy to store and retrieve arbitrary objects. How can I achieve the same with localStorage, where I have to convert to and parse JSON? I am using a DataTable, which requires calling the DataTable.create(JavaScriptObject jso) function to work. How can I solve this without too many if/else and instanceof checks?
My first thoughts: make it two layers of cache, not two different caches. Start with the in-memory map, so no serialization/deserialization is needed for reading a given object out, and so that changing an object in one place changes it in all. Then rely on the local storage to keep data around for the next page load, avoiding the need for pulling data down from the server.
I'd tend to say skip session storage, since that doesn't last long, but it does have its benefits.
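To make the layering concrete, here is a rough sketch against the Cache interface from the post: reads try memory first and fall back to localStorage (promoting hits), writes go to both layers, and the cache degrades to memory-only when localStorage is unsupported.

public class TwoLayerCache implements Cache {

    private final Cache memory = new DefaultCacheImpl();
    private Cache persistent; // null when localStorage is unsupported

    public TwoLayerCache() {
        try {
            persistent = new LocalStorageImpl(LocalStorageImpl.TYPE.LOCAL);
        } catch (Exception e) {
            persistent = null; // degrade gracefully to memory-only
        }
    }

    @Override
    public void put(Object key, Object value) {
        memory.put(key, value);
        if (persistent != null) {
            persistent.put(key, value);
        }
    }

    @Override
    public Object get(Object key) {
        Object value = memory.get(key);
        if (value == null && persistent != null) {
            value = persistent.get(key);
            if (value != null) {
                memory.put(key, value); // promote so later reads skip JSON parsing
            }
        }
        return value;
    }

    @Override
    public void remove(Object key) {
        memory.remove(key);
        if (persistent != null) {
            persistent.remove(key);
        }
    }

    @Override
    public void clear() {
        memory.clear();
        if (persistent != null) {
            persistent.clear();
        }
    }
}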
For storing/reading data, I'd encourage checking out AutoBeans instead of using JSOs. This way you could support any type of data (that can be stored as an AutoBean), and you could pass a Class param into the fetcher to specify what kind of data you will read from the server/cache, decoding the JSON to a bean the same way in both cases. As an added bonus, AutoBeans are easier to define: no JSNI required. A method could look something like this (note that in DataSource and its impl, the signature is different).
public <T> void fetch(Class<T> type, List<Stat> stats, Callback<T, Throwable> callback);
That said, what is DataTable.create? If it is already a JSO, you can just cast to DataTable as you (probably) normally do when reading from the RequestBuilder data.
I would also encourage not returning a JSON array directly from the server, but wrapping it in an object, as a best practice to protect your users' data from being read by other sites. (Okay, on re-reading the issues, objects aren't great either.) Rather than discussing it here, check out "JSON security best practices?".
So, all of that said, first define the data (not really sure how this data is intended to work, so I'm just making it up as I go):
public interface DataTable {
    String getTableName();
    void setTableName(String tableName);
}

public interface Stat { // not really clear on what this is supposed to offer
    String getKey();
    void setKey(String key);
    String getValue();
    void setValue(String value);
}

public interface TableCollection {
    List<DataTable> getTables();
    void setTables(List<DataTable> tables);
    int getRemaining(); // useful for not sending all if you have too much?
}
For autobeans, we define a factory that can create any of our data when given a Class instance and some data. Each of these methods can be used as a sort of constructor to create a new instance on the client, and the factory can be passed to AutoBeanCodex to decode data.
interface DataABF extends AutoBeanFactory {
    AutoBean<DataTable> dataTable();
    AutoBean<Stat> stat();
    AutoBean<TableCollection> tableCollection();
}
Delegate all of the String<=>Object work to AutoBeanCodex, but you probably want some simple wrapper around it to make it easy to call from both the HTML5 cache and from the RequestBuilder results. Quick example here:
public class AutoBeanSerializer {

    private final AutoBeanFactory factory;

    public AutoBeanSerializer(AutoBeanFactory factory) {
        this.factory = factory;
    }

    public <T> String encodeData(T data) {
        // first, get the autobean mapped to the data
        // probably throw something if we can't find it
        AutoBean<T> autoBean = AutoBeanUtils.getAutoBean(data);
        // then, encode it; no factory or type needed here since the AutoBean
        // has those details
        return AutoBeanCodex.encode(autoBean).getPayload();
    }

    public <T> T decodeData(Class<T> dataType, String json) {
        AutoBean<T> bean = AutoBeanCodex.decode(factory, dataType, json);
        // unwrap the bean, and return the actual data
        return bean.as();
    }
}
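Usage would then look roughly like this (hypothetical wiring, assuming the DataABF factory above):

// Create the factory and serializer once, then round-trip a bean.
DataABF factory = GWT.create(DataABF.class);
AutoBeanSerializer serializer = new AutoBeanSerializer(factory);

DataTable table = factory.dataTable().as();
table.setTableName("stats");

String json = serializer.encodeData(table);  // e.g. {"tableName":"stats"}
DataTable restored = serializer.decodeData(DataTable.class, json);

The same two calls cover both sides of the cache: encodeData before cacheStorage.setItem, and decodeData after getItem.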

RequestFactoryEditorDriver doesn't save full graph even though "with()" is called. Is circular reference an issue?

Could you guys please help me find where I made a mistake?
I switched from SimpleBeanEditorDriver to RequestFactoryEditorDriver, and my code no longer saves the full graph even though the with() method is called. It does correctly load the full graph in the constructor.
Could it be caused by the circular reference between OrganizationProxy and PersonProxy? I don't know what else to think :( It worked with SimpleBeanEditorDriver, though.
Below is my client code. Let me know if you want me to add the sources of the proxies to this question (or you can see them here).
public class NewOrderView extends Composite
{
    interface Binder extends UiBinder<Widget, NewOrderView> {}
    private static Binder uiBinder = GWT.create(Binder.class);

    interface Driver extends RequestFactoryEditorDriver<OrganizationProxy, OrganizationEditor> {}
    Driver driver = GWT.create(Driver.class);

    @UiField
    Button save;
    @UiField
    OrganizationEditor orgEditor;

    AdminRequestFactory requestFactory;
    AdminRequestFactory.OrderRequestContext requestContext;
    OrganizationProxy organization;

    public NewOrderView()
    {
        initWidget(uiBinder.createAndBindUi(this));
        requestFactory = createFactory();
        requestContext = requestFactory.contextOrder();
        driver.initialize(requestFactory, orgEditor);
        String[] paths = driver.getPaths();
        createFactory().contextOrder().findOrganizationById(1).with(paths).fire(new Receiver<OrganizationProxy>()
        {
            @Override
            public void onSuccess(OrganizationProxy response)
            {
                if (response == null)
                {
                    organization = requestContext.create(OrganizationProxy.class);
                    organization.setContactPerson(requestContext.create(PersonProxy.class));
                } else
                    organization = requestContext.edit(response);
                driver.edit(organization, requestContext);
            }

            @Override
            public void onFailure(ServerFailure error)
            {
                createConfirmationDialogBox(error.getMessage()).center();
            }
        });
    }

    private static AdminRequestFactory createFactory()
    {
        AdminRequestFactory factory = GWT.create(AdminRequestFactory.class);
        factory.initialize(new SimpleEventBus());
        return factory;
    }

    @UiHandler("save")
    void buttonClick(ClickEvent e)
    {
        e.stopPropagation();
        save.setEnabled(false);
        try
        {
            AdminRequestFactory.OrderRequestContext ctx = (AdminRequestFactory.OrderRequestContext) driver.flush();
            if (!driver.hasErrors())
            {
                // Link to each other
                PersonProxy contactPerson = organization.getContactPerson();
                contactPerson.setOrganization(organization);
                String[] paths = driver.getPaths();
                ctx.saveOrganization(organization).with(paths).fire(new Receiver<Void>()
                {
                    @Override
                    public void onSuccess(Void arg0)
                    {
                        createConfirmationDialogBox("Saved!").center();
                    }

                    @Override
                    public void onFailure(ServerFailure error)
                    {
                        createConfirmationDialogBox(error.getMessage()).center();
                    }
                });
            }
        } finally
        {
            save.setEnabled(true);
        }
    }
}
with() is only used for retrieval of information, so your use of with() on a void return type is useless (but harmless).
Whether a full graph is persisted is entirely up to your server-side code, which is intimately bound to your persistence API (JPA, JDO, etc.).
First, check that the Organization object you receive in your save() method on the server side is correctly populated. If it's not, check your Locators (and/or static findXxx methods); otherwise, check your save() method's code.
Judging from the code above, I can't see a reason why it wouldn't work.
It took me some time to realize that the problem was the composite id of the Person entity.
Below is the code snippet of the PojoLocator that is used by my proxy entities.
public class PojoLocator extends Locator<DatastoreObject, Long>
{
    @Override
    public DatastoreObject find(Class<? extends DatastoreObject> clazz, Long id)
    {
        // ...
    }

    @Override
    public Long getId(DatastoreObject domainObject)
    {
        // ...
    }
}
In order to fetch a child entity from the Datastore you need the id of its parent. To achieve that, I switched the "ID class" of the Locator<> to String, which represents the textual form of Objectify's Key<> class.
Here is how it looks now:
public class PojoLocator extends Locator<DatastoreObject, String>
{
    @Override
    public DatastoreObject find(Class<? extends DatastoreObject> clazz, String id)
    {
        Key<DatastoreObject> key = Key.create(id);
        return ofy.load(key);
    }

    @Override
    public String getId(DatastoreObject domainObject)
    {
        if (domainObject.getId() != null)
        {
            Key<DatastoreObject> key = ofy.fact().getKey(domainObject);
            return key.getString();
        } else
            return null;
    }
}
Please note that your implementation may differ slightly, because I'm using Objectify 4.
