How to prevent reads from happening while I am doing a write? - java

I am trying to implement a lock so that reads cannot happen while I am doing a write.
Below is my ClientData class, in which I am using a CountDownLatch -
public class ClientData {
private static final AtomicReference<Map<String, Map<Integer, String>>> primaryMapping = new AtomicReference<>();
private static final AtomicReference<Map<String, Map<Integer, String>>> secondaryMapping = new AtomicReference<>();
private static final AtomicReference<Map<String, Map<Integer, String>>> tertiaryMapping = new AtomicReference<>();
// should this be initialized as 1?
private static final CountDownLatch hasBeenInitialized = new CountDownLatch(1);
public static Map<String, Map<Integer, String>> getPrimaryMapping() {
try {
hasBeenInitialized.await();
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return primaryMapping.get();
}
public static void setPrimaryMapping(Map<String, Map<Integer, String>> map) {
primaryMapping.set(map);
hasBeenInitialized.countDown();
}
public static Map<String, Map<Integer, String>> getSecondaryMapping() {
try {
hasBeenInitialized.await();
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return secondaryMapping.get();
}
public static void setSecondaryMapping(Map<String, Map<Integer, String>> map) {
secondaryMapping.set(map);
hasBeenInitialized.countDown();
}
public static Map<String, Map<Integer, String>> getTertiaryMapping() {
try {
hasBeenInitialized.await();
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return tertiaryMapping.get();
}
public static void setTertiaryMapping(Map<String, Map<Integer, String>> map) {
tertiaryMapping.set(map);
hasBeenInitialized.countDown();
}
}
PROBLEM STATEMENT:-
I need the get calls on the three AtomicReferences in the above code to wait. Once all the writes have been done on the three AtomicReferences via the set calls, then I would allow the calls to the three getters I have.
So I decided to use a CountDownLatch, which I have initialized with 1. Do I need to initialize it to 3 instead? And every time before I do the first set of a new update, do I need to reset the latch back to 3? Because I will be setting those three AtomicReferences in three separate statements.
I guess there is something wrong in my above code?
NOTE:-
I will be setting like this from some other class -
ClientData.setPrimaryMapping(primaryTables);
ClientData.setSecondaryMapping(secondaryTables);
ClientData.setTertiaryMapping(tertiaryTables);
Some other threads have to read the data from these AtomicReferences once they have been set.
Update:-
Below is my background thread code, which will get the data from the URL, parse it, and store it in ClientData class variables.
public class TempScheduler {
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public void startScheduler() {
final ScheduledFuture<?> taskHandle = scheduler.scheduleAtFixedRate(new Runnable() {
public void run() {
try {
callServers();
} catch (Exception ex) {
ex.printStackTrace();
}
}
}, 0, 10, TimeUnit.MINUTES);
}
// call the servers and get the data and then parse
// the response.
private void callServers() {
String url = "url";
RestTemplate restTemplate = new RestTemplate();
String response = restTemplate.getForObject(url, String.class);
parseResponse(response);
}
// parse the response and store it in a variable
private void parseResponse(String response) {
//...
ConcurrentHashMap<String, Map<Integer, String>> primaryTables = null;
ConcurrentHashMap<String, Map<Integer, String>> secondaryTables = null;
ConcurrentHashMap<String, Map<Integer, String>> tertiaryTables = null;
//...
// store the data in ClientData class variables which can be
// used by other threads
ClientData.setPrimaryMapping(primaryTables);
ClientData.setSecondaryMapping(secondaryTables);
ClientData.setTertiaryMapping(tertiaryTables);
}
}

If you want to treat all 3 variables independently (i.e. getting tertiary does not need to wait for primary to be set), which is how I read your question, you simply need to create one CountDownLatch for each map. Each setter counts down the latch for the variable being set; each getter awaits the corresponding latch.
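A minimal sketch of that per-map latch idea (class and field names are illustrative; only one of the three maps is shown, and the same AtomicReference + CountDownLatch pair would be repeated for the other two):

```java
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class PerMapLatch {
    private static final AtomicReference<Map<String, String>> primary = new AtomicReference<>();
    private static final CountDownLatch primaryReady = new CountDownLatch(1);

    public static Map<String, String> getPrimary() {
        try {
            primaryReady.await(); // blocks until the first setPrimary call
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return primary.get();
    }

    public static void setPrimary(Map<String, String> map) {
        primary.set(map);
        primaryReady.countDown(); // releases all current and future readers
    }
    // repeat the same pattern for secondary and tertiary
}
```

Because each map has its own latch, a reader of the tertiary map is never held up waiting for the primary map to be populated.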

This setup is a complete overkill IMO.
Here's an alternative which works correctly and is far simpler:
public class MappingContainer {
private final Map<String, Map<Integer, String>> primaryMapping;
private final Map<String, Map<Integer, String>> secondaryMapping;
private final Map<String, Map<Integer, String>> tertiaryMapping;
// + constructor and getters
}
public class ClientData {
private static volatile MappingContainer mappingContainer;
// regular setters and getters
}
public class TempScheduler {
//...
private void parseResponse(String response) {
//...
ConcurrentHashMap<String, Map<Integer, String>> primaryTables = null;
ConcurrentHashMap<String, Map<Integer, String>> secondaryTables = null;
ConcurrentHashMap<String, Map<Integer, String>> tertiaryTables = null;
//...
// store the data in ClientData class variables which can be
// used by other threads
ClientData.setMappingContainer(new MappingContainer(primaryTables, secondaryTables, tertiaryTables));
}
}
Latches and atomic references should be reserved for when the simpler constructs won't cut it. In particular, a latch is good if you have to count N generic events (and not 3 specific ones), and an atomic reference is only useful if you use the compare-and-set or get-and-set idioms.
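For contrast, here is a small sketch of the compare-and-set retry idiom, the kind of situation where AtomicReference actually earns its keep (the counter example is invented for illustration, not taken from the question):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasDemo {
    private static final AtomicReference<Integer> value = new AtomicReference<>(0);

    // classic compare-and-set retry loop: read, compute, attempt to swap,
    // and retry if another thread changed the value in between
    public static int increment() {
        while (true) {
            Integer current = value.get();
            Integer next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}
```

When a field is only ever read and overwritten wholesale, as in the question, a plain volatile reference does the same job with less ceremony.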


Time Optimization For Feed API (Inside A List Of different API calls)

There is a REST API for a dashboard feed page. It contains different activities with pagination. The different APIs get data from different database collections as well as some third-party HTTP APIs.
public List<Map<String, Object>> getData(params...) {
List<Map<String, Object>> uhfList = new ArrayList<>();
Map<String, Object> uhf = null;
for (MasterModel masterModel : pageActivities) { // taking time proportional to n (which I need to reduce)
uhf = new HashMap<String, Object>();
uhf.put("Key", getItemsByMethodName(params..));
uhfList.add(uhf);
}
return uhfList;
}
private List<? extends Object> getItemsByMethodName(params...) {
java.lang.reflect.Method method = null;
List<? extends Object> data = null;
try {
method = uhfRelativeService.getClass().getMethod(params...);
data = (List<? extends Object>) method.invoke(params...);
} catch (Exception e) {
LOG.error("Error occurred in getItemsByMethodName :: {}", e.getMessage());
}
return data;
}
I tried a different approach using CompletableFuture, but it was not much more effective:
private CompletableFuture<List<? extends Object>> getItemsByMethodName(UserDetail userIdentity, UHFMaster uhfMaster) {
java.lang.reflect.Method method = null;
CompletableFuture<List<? extends Object>> data = null;
try {
method = uhfRelativeService.getClass().getMethod(uhfMaster.getMethodName().trim(),params...);
data = (CompletableFuture<List<? extends Object>>) method.invoke(uhfRelativeService, userIdentity);
} catch (Exception e) {
LOG.error("Error :: ", e.getMessage());
}
return data;
}
//MasterModel Class
public class MasterModel {
@Id
private ObjectId id;
private String param;
private String param1;
private String param2;
private Integer param3;
private Integer param4;
private Integer param5;
private Integer param6;
private Integer param7;
private Integer param8;
private Integer param9;
private String param10;
private String param11;
//getter & setter
}
But the time is not much reduced. I need a solution to perform this operation faster, with a lower response time. Please help me with this.
If you want to do multithreading, then just casting to a CompletableFuture won't help. To actually run a process asynchronously in a separate thread, you can do something like:
public List<Map<String, Object>> getData(params...) {
UHFMaster master = null; // assume present
List<UserDetail> userDetails = null; // assume list present
// just an example
// asynchronous tasks, one per user
List<CompletableFuture<List<? extends Object>>> futures =
userDetails.stream()
.map(u -> getItemsByMethodName(u, master))
.collect(Collectors.toList());
// single future which completes when all the futures in the list complete
CompletableFuture<Void> allDone =
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
// join() blocks - meaning it waits for all the tasks to complete
allDone.join();
// every future is complete now, so f.join() returns immediately
List<Map<String, Object>> listToReturn = new ArrayList<>();
for (CompletableFuture<List<? extends Object>> f : futures) {
Map<String, Object> uhf = new HashMap<>();
uhf.put("Key", f.join());
listToReturn.add(uhf);
}
return listToReturn;
}
private CompletableFuture<List<? extends Object>> getItemsByMethodName(UserDetail userIdentity, UHFMaster uhfMaster) {
try {
java.lang.reflect.Method method = uhfRelativeService.getClass().getMethod(uhfMaster.getMethodName().trim(), params...);
// supplyAsync actually runs the reflective call on another thread
return CompletableFuture.supplyAsync(() -> {
try {
return (List<? extends Object>) method.invoke(uhfRelativeService, userIdentity);
} catch (ReflectiveOperationException e) {
throw new CompletionException(e);
}
});
} catch (Exception e) {
LOG.error("Error :: {}", e.getMessage());
return CompletableFuture.completedFuture(null);
}
}
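As a self-contained illustration of the allOf pattern used above (the task bodies and class name are made up for the demo; in the real code each future would be one reflective service call):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AllOfDemo {
    // run n small tasks concurrently, wait for all of them, then gather results in order
    public static List<Integer> runAll(int n) {
        List<CompletableFuture<Integer>> futures = IntStream.range(0, n)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> i * i))
                .collect(Collectors.toList());
        // allOf completes only when every future in the array has completed
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        // each join() below returns immediately because everything is already done
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }
}
```

The key points are that allOf takes an array (not a stream), returns CompletableFuture&lt;Void&gt;, and the individual results are still read from the original futures.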

Editing a List defined in the driver by the worker nodes [duplicate]

This question already has answers here:
Scala spark, listbuffer is empty
(2 answers)
Closed 5 years ago.
I have a DataFrame whose rows I want to loop through, adding their values to a list that can be used by the driver. Broadcast variables are read-only, and as far as I know accumulators are only for sums.
Is there a way to do this? I am using Spark 1.6.1.
Here is the code that runs on the worker nodes. I tried passing the List to the constructor, but that did not work; it seems that once the code is shipped to the worker nodes, nothing is returned to the driver.
public class EnrichmentIdentifiersBuilder implements Serializable{
/**
*
*/
private static final long serialVersionUID = 269187228897275370L;
private List<Map<String, String>> extractedIdentifiers;
public EnrichmentIdentifiersBuilder(List<Map<String, String>> extractedIdentifiers) {
//super();
this.extractedIdentifiers = extractedIdentifiers;
}
public void addIdentifiers(DataFrame identifiers)
{
final List<String> parameters=Arrays.asList(identifiers.schema().fieldNames());
identifiers.foreach(new MyFunction<Row, BoxedUnit>() {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
public BoxedUnit apply(Row line)
{
for (int i = 0; i < parameters.size(); i++)
{
Map<String, String> identifier= new HashMap<>();
identifier.put(parameters.get(i), line.getString(i));
extractedIdentifiers.add(identifier);
}
return BoxedUnit.UNIT;
}
});
}
}
Instead of trying to expose the list to the workers, you can instead convert the rows into maps and then collect the result in the driver:
this.extractedIdentifiers = identifiers.rdd().map(
new MyFunction<Row, Map<String, String>>() {
private static final long serialVersionUID = 1L;
@Override
public Map<String, String> apply(Row line)
{
Map<String, String> identifier= new HashMap<>();
for (int i = 0; i < parameters.size(); i++)
{
identifier.put(parameters.get(i), line.getString(i));
}
return identifier;
}
}).collect(); //This returns the list of maps...
This is the correct way to do it, as concurrent modification (were it possible) would be problematic. This code transforms each row into a map of its values, and then all the maps are collected back to the driver as a list.
Thanks for the idea!
I made slight changes to make it work. Here is the code, in case anyone needs it in the future:
public List<Map<String, String>> addIdentifiers(DataFrame identifiers)
{
final List<String> parameters=Arrays.asList(identifiers.schema().fieldNames());
List<Map<String, String>> extractedIdentifiers = new ArrayList<>();
extractedIdentifiers = identifiers.javaRDD().flatMap( new FlatMapFunction<Row, Map<String, String>>() {
/**
*
*/
private static final long serialVersionUID = -2369617506532322680L;
@Override
public List<Map<String, String>> call(Row line) throws Exception {
List<Map<String, String>> identifier= new ArrayList<>();
for (int i = 0; i < parameters.size(); i++)
{
Map<String, String> keyValue= new HashMap<>();
keyValue.put(parameters.get(i), line.getString(i));
identifier.add(keyValue);
}
return identifier;
}
}).collect();
return extractedIdentifiers;
}
Also, there is a collection accumulator that could be used with the code in the question; it can be created using javaSparkContext.sc().accumulableCollection().

How to reuse code in multiple enum names?

I have the below enum, from which I call the appropriate execute method based on which enum constant (eventType) is passed.
public enum EventType {
EventA {
@Override
public Map<String, Map<String, String>> execute(String eventMapHolder) {
final Map<String, String> holder = parseStringToMap(eventMapHolder);
if (holder.isEmpty() || Strings.isNullOrEmpty(holder.get("m_itemId"))) {
return ImmutableMap.of();
}
String itemId = holder.get("m_itemId");
Map<String, String> clientInfoHolder = getClientInfo(itemId);
holder.putAll(clientInfoHolder);
return ImmutableMap.<String, Map<String, String>>builder().put(EventA.name(), holder)
.build();
}
},
EventB {
@Override
public Map<String, Map<String, String>> execute(String eventMapHolder) {
final Map<String, String> holder = parseStringToMap(eventMapHolder);
if (holder.isEmpty() || Strings.isNullOrEmpty(holder.get("m_itemId"))) {
return ImmutableMap.of();
}
return ImmutableMap.<String, Map<String, String>>builder().put(EventB.name(), holder)
.build();
}
},
EventC {
@Override
public Map<String, Map<String, String>> execute(String eventMapHolder) {
final Map<String, String> holder = parseStringToMap(eventMapHolder);
if (holder.isEmpty() || Strings.isNullOrEmpty(holder.get("m_itemId"))) {
return ImmutableMap.of();
}
String itemId = holder.get("m_itemId");
Map<String, String> clientInfoHolder = getClientInfo(itemId);
holder.putAll(clientInfoHolder);
return ImmutableMap.<String, Map<String, String>>builder().put(EventC.name(), holder)
.build();
}
};
public abstract Map<String, Map<String, String>> execute(String eventMapHolder);
public Map<String, String> parseStringToMap(String eventMapHolder) {
// parse eventMapHolder String to Map
}
public Map<String, String> getClientInfo(final String clientId) {
// code to populate the map and return it
}
}
For example, if I get "EventA", then I call its execute method. Similarly, if I get "EventB", then I call its execute method, and so on.
String eventType = String.valueOf(payload.get("eventType"));
String eventMapHolder = String.valueOf(payload.get("eventMapHolder"));
Map<String, Map<String, String>> processedMap = EventType.valueOf(eventType).execute(eventMapHolder);
In general I will have more event types (around 10-12) in the same enum class, and mostly they will do the same operations as EventA, EventB and EventC.
Question:
Now, as you can see, the code in the execute methods of EventA and EventC is identical; the only difference is what I put as the "key" (the event name) in the returned immutable map. Is there any way to remove that duplicated code while still achieving the same functionality in the enum?
For example, something along these lines: writing multiple enum constants side by side, separated by commas, when their execute method functionality is the same. I know this doesn't work as-is, because I have an abstract method that I need to implement for every constant, but is it possible with some changes, or is there another, better way?
public enum EventType {
EventA,
EventC {
@Override
public Map<String, Map<String, String>> execute(String eventMapHolder) {
// same code which is there in execute method for EventA and EventC
}
},
EventB {
@Override
public Map<String, Map<String, String>> execute(String eventMapHolder) {
// same code which is there in execute method of EventB
}
};
// other methods which are there already
}
I know one way is to extract the common logic into a method and call it, passing the appropriate event name. Is there any other way, using enum features or other changes?
If there is any other better way, or any other design pattern that can help me remove the duplicated code, then I am open to suggestions as well.
The idea is: based on which type of event is passed, I want to call its execute method and avoid duplication where possible.
There are two simple mechanisms (that can of course be combined).
The first one consists in having the execute() in the base class, delegating to specific code defined in each subclass (i.e. the template method pattern):
enum Foo {
A {
@Override
protected void specificCode() {
//...
}
},
B {
@Override
protected void specificCode() {
//...
}
};
public void execute() {
// ... common code
specificCode();
// ... common code
}
protected abstract void specificCode();
}
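A concrete, runnable version of this first mechanism might look like the following (the Greeting enum and its strings are invented purely for illustration):

```java
public enum Greeting {
    FORMAL {
        @Override
        protected String specificPart(String name) {
            return "Dear " + name;
        }
    },
    CASUAL {
        @Override
        protected String specificPart(String name) {
            return "Hey " + name;
        }
    };

    // template method: the common code lives once in the base class
    public String execute(String name) {
        return specificPart(name) + "!";
    }

    protected abstract String specificPart(String name);
}
```

Applied to the question, the shared parseStringToMap / getClientInfo logic would sit in execute(), and each constant would only supply what actually differs.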
The second one consists in having the execute() overridden in each subclass but delegating to a common method defined in the base class:
enum Foo {
A {
@Override
public void execute() {
//...
commonCode();
// ...
}
},
B {
@Override
public void execute() {
//...
commonCode();
// ...
}
};
public abstract void execute();
protected void commonCode() {
// ...
}
}
Something like this?
package enumCodeReuse;
import java.util.Map;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableMap;
public enum EventType2 {
EventA
, EventB
, EventC
;
public Map<String, Map<String, String>> execute(String eventMapHolder) {
final Map<String, String> holder = parseStringToMap(eventMapHolder);
if (holder.isEmpty() || Strings.isNullOrEmpty(holder.get("m_itemId"))) {
return ImmutableMap.of();
}
String itemId = holder.get("m_itemId");
Map<String, String> clientInfoHolder = getClientInfo(itemId);
holder.putAll(clientInfoHolder);
return ImmutableMap.<String, Map<String, String>>builder()
.put(this.name(), holder)
.build();
}
public Map<String, String> parseStringToMap(String eventMapHolder) {
// parse eventMapHolder String to Map
return null; // FIXME
}
public Map<String, String> getClientInfo(final String clientId) {
// code to populate the map and return it
return null; // FIXME
}
}

How to make builder pattern thread safe in the multithreading environment?

I am working on a project in which I need synchronous and asynchronous methods in my Java client. Some customers will call the synchronous method and some will call the asynchronous method of my Java client, depending on their requirements.
Below is my Java client, which has the synchronous and asynchronous methods -
public class TestingClient implements IClient {
private ExecutorService service = Executors.newFixedThreadPool(10);
private RestTemplate restTemplate = new RestTemplate();
// for synchronous
@Override
public String executeSync(ClientKey keys) {
String response = null;
try {
Future<String> handle = executeAsync(keys);
response = handle.get(keys.getTimeout(), TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
} catch (Exception e) {
}
return response;
}
// for asynchronous
@Override
public Future<String> executeAsync(ClientKey keys) {
Future<String> future = null;
try {
ClientTask ClientTask = new ClientTask(keys, restTemplate);
future = service.submit(ClientTask);
} catch (Exception ex) {
}
return future;
}
}
And now below is my ClientTask class, which implements the Callable interface; I pass its dependencies in through the constructor (dependency injection). In the call method, I build a URL based on machineIPAddress and the ClientKey passed to the ClientTask class, then hit the server using RestTemplate and get the response back -
class ClientTask implements Callable<String> {
private ClientKey cKeys;
private RestTemplate restTemplate;
public ClientTask(ClientKey cKeys, RestTemplate restTemplate) {
this.restTemplate = restTemplate;
this.cKeys = cKeys;
}
@Override
public String call() throws Exception {
// .. some code here
String url = generateURL("machineIPAddress");
String response = restTemplate.getForObject(url, String.class);
return response;
}
// is this method thread safe, and is my use of the cKeys variable here also thread safe?
private String generateURL(final String hostIPAdress) throws Exception {
StringBuffer url = new StringBuffer();
url.append("http://" + hostIPAdress + ":8087/user?user_id=" + cKeys.getUserId() + "&client_id="
+ cKeys.getClientId());
final Map<String, String> paramMap = cKeys.getParameterMap();
Set<Entry<String, String>> params = paramMap.entrySet();
for (Entry<String, String> e : params) {
url.append("&" + e.getKey());
url.append("=" + e.getValue());
}
return url.toString();
}
}
And below is my ClientKey class using Builder patter which customer will use to make the input parameters to pass to the TestingClient -
public final class ClientKey {
private final long userId;
private final int clientId;
private final long timeout;
private final boolean testFlag;
private final boolean remoteFlag;
private final Map<String, String> parameterMap;
private ClientKey(Builder builder) {
this.userId = builder.userId;
this.clientId = builder.clientId;
this.remoteFlag = builder.remoteFlag;
this.testFlag = builder.testFlag;
this.parameterMap = builder.parameterMap;
this.timeout = builder.timeout;
}
public static class Builder {
protected final long userId;
protected final int clientId;
protected long timeout = 200L;
protected boolean remoteFlag = false;
protected boolean testFlag = true;
protected Map<String, String> parameterMap;
public Builder(long userId, int clientId) {
this.userId = userId;
this.clientId = clientId;
}
public Builder parameterMap(Map<String, String> parameterMap) {
this.parameterMap = parameterMap;
return this;
}
public Builder remoteFlag(boolean remoteFlag) {
this.remoteFlag = remoteFlag;
return this;
}
public Builder testFlag(boolean testFlag) {
this.testFlag = testFlag;
return this;
}
public Builder addTimeout(long timeout) {
this.timeout = timeout;
return this;
}
public ClientKey build() {
return new ClientKey(this);
}
}
public long getUserId() {
return userId;
}
public int getClientId() {
return clientId;
}
public long getTimeout() {
return timeout;
}
public Map<String, String> getParameterMap() {
return parameterMap;
}
public boolean isTestFlag() {
return testFlag;
}
}
Is my above code thread safe? I am using ClientKey variables in the ClientTask class in a multithreaded environment, so I am not sure what will happen if another thread builds a ClientKey while a call to the TestingClient synchronous method is in progress -
Because customers will call us with code like the one below, possibly from their own multithreaded applications as well -
IClient testClient = ClientFactory.getInstance();
Map<String, String> testMap = new LinkedHashMap<String, String>();
testMap.put("hello", "world");
ClientKey keys = new ClientKey.Builder(12345L, 200).addTimeout(2000L).parameterMap(testMap).build();
String response = testClient.executeSync(keys);
So I am just trying to understand whether my above code will be thread safe, as callers can pass multiple values to my TestingClient class from multiple threads. I have a feeling that my ClientKey class is not thread safe because of parameterMap, but I am not sure.
And also, do I need a StringBuffer here, or will StringBuilder be fine? StringBuilder is faster than StringBuffer because it is not synchronized.
Can anyone help me with this?
The parameter ClientKey keys is given, so I assume it is always different.
I don't see any synchronization issues with your code. I'll explain:
ClientTask ClientTask = new ClientTask(keys, restTemplate);
future = service.submit(ClientTask);
You are creating the ClientTask object inside the method, so it is not shared among threads.
You are using service.submit, which returns a Future object.
The ClientTask object reads the keys only inside the method generateURL but, as I said before, the ClientKey object is given as a parameter, so you are good as long as that object is not being shared.
In summary, the thread-safety of your code depends on ExecutorService and Future being thread safe.
Update: a clarification of "as long as this object is not being shared". Consider a single ClientKey instance being reused like this:
ClientKeys keys;
add entries to keys
... code
executeAsync(.., keys)   // a worker thread may now be reading keys
... code
add entries to keys      // while this thread keeps writing to it
executeAsync(.., keys)
add entries to keys
... code
executeAsync(.., keys)
This is (more or less) what I meant by sharing: keys ends up being used by several threads due to the calls to executeAsync(). Some threads are reading keys while others are writing to it, causing what is usually called a race condition.
Update 2: The StringBuffer object is local to (i.e. in the scope of) generateURL; there is no need to synchronize it, so an unsynchronized StringBuilder is fine.
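To illustrate that last point, here is a StringBuilder variant of generateURL (host, port and parameter handling are simplified placeholders): because each call creates its own local builder, the object is confined to the calling thread and needs no synchronization.

```java
import java.util.Map;

public class UrlBuilderDemo {
    // a StringBuilder local to the method is confined to the calling thread,
    // so the unsynchronized (and faster) StringBuilder is perfectly safe here
    public static String generateURL(String host, Map<String, String> params) {
        StringBuilder url = new StringBuilder("http://").append(host).append(":8087/user");
        char sep = '?';
        for (Map.Entry<String, String> e : params.entrySet()) {
            url.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = '&';
        }
        return url.toString();
    }
}
```

Thread confinement, not synchronization, is what makes this safe; StringBuffer's locking buys nothing for a local variable.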

How do I get the data from a map when the data is available?

I am using Java Callable and Future in my code. Below is my main code, which uses the future and callable -
public class TimeoutThread {
public static void main(String[] args) throws Exception {
// starting the background thread
new ScheduledCall().startScheduleTask();
ExecutorService executor = Executors.newFixedThreadPool(5);
Future<String> future = executor.submit(new Task());
try {
System.out.println("Started..");
System.out.println(future.get(3, TimeUnit.SECONDS));
System.out.println("Finished!");
} catch (TimeoutException e) {
System.out.println("Terminated!");
}
executor.shutdownNow();
}
}
Below is my Task class, which implements the Callable interface; it needs to get data from the ClientData class. I also have a background thread that sets the data in the ClientData class using the setters.
class Task implements Callable<String> {
public String call() throws Exception {
//.. some code
String hostname = ClientData.getPrimaryMapping("some_string").get(some_number);
//.. some code
}
}
Below is my background thread, which sets the value in my ClientData class by parsing the data coming from the URL; it runs every 10 minutes.
public class ScheduledCall {
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public void startScheduleTask() {
final ScheduledFuture<?> taskHandle = scheduler.scheduleAtFixedRate(
new Runnable() {
public void run() {
try {
callServers();
} catch(Exception ex) {
ex.printStackTrace();
}
}
}, 0, 10, TimeUnit.MINUTES);
}
private void callServers() {
String url = "url";
RestTemplate restTemplate = new RestTemplate();
String response = restTemplate.getForObject(url, String.class);
parseResponse(response);
}
// parse the response and set it.
private void parseResponse(String response) {
//...
ConcurrentHashMap<String, Map<Integer, String>> primaryTables = null;
//...
// store the data in ClientData class variables which can be
// used by other threads
ClientData.setPrimaryMapping(primaryTables);
}
}
And below is my ClientData class
public class ClientData {
private static final AtomicReference<Map<String, Map<Integer, String>>> primaryMapping = new AtomicReference<>();
public static Map<String, Map<Integer, String>> getPrimaryMapping() {
return primaryMapping.get();
}
public static void setPrimaryMapping(Map<String, Map<Integer, String>> map) {
primaryMapping.set(map);
}
}
PROBLEM STATEMENT:-
The only problem I am facing is: whenever I start the program for the first time, it starts the background thread, which parses the data coming from the URL. Simultaneously, execution enters the call method of my Task class, and the line below throws an exception. Why? Because my background thread is still parsing the data and hasn't set the variable yet.
String hostname = ClientData.getPrimaryMapping("some_string").get(some_number);
How do I avoid this problem? Is there a better and more efficient way to do this?
You just want to make the Task wait until the first update to the Map has happened before proceeding?
public class ClientData {
private static final AtomicReference<Map<String, Map<Integer, String>>> primaryMapping = new AtomicReference<>();
private static final CountDownLatch hasBeenInitialized = new CountDownLatch(1);
public static Map<String, Map<Integer, String>> getPrimaryMapping() {
try {
hasBeenInitialized.await();
} catch (InterruptedException e) {
throw new IllegalStateException(e);
}
return primaryMapping.get();
}
public static void setPrimaryMapping(Map<String, Map<Integer, String>> map) {
primaryMapping.set(map);
hasBeenInitialized.countDown();
}
}
A simpler and more efficient way, one that avoids the synchronization checks and doesn't make you deal with InterruptedException being a checked exception, might be to simply load initial values into the Map before firing up the multi-threaded machinery.
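One way to sketch that "load before start" idea (class name is illustrative): seed the AtomicReference with an empty map, so a getter can never observe null, even before the first scheduled refresh has completed.

```java
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class PreloadedClientData {
    // seeded with an empty map, so getPrimaryMapping never returns null,
    // even before the background refresh has run for the first time
    private static final AtomicReference<Map<String, Map<Integer, String>>> primaryMapping =
            new AtomicReference<>(Collections.emptyMap());

    public static Map<String, Map<Integer, String>> getPrimaryMapping() {
        return primaryMapping.get();
    }

    public static void setPrimaryMapping(Map<String, Map<Integer, String>> map) {
        primaryMapping.set(map);
    }
}
```

Callers still need to handle a missing key (an empty map has no entries), but the NullPointerException on startup disappears without any latch or blocking.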
