How to avoid a switch statement? - Java

I'm trying to learn client/server programming in Java, and so far I have the basics down.
Here is how I accept and serve many clients:
public class Server {
ServerSocket serverSocket;
public void startServer() throws IOException {
serverSocket= new ServerSocket(2000);
while (true){
Socket s= serverSocket.accept();
new ClientRequestUploadFile(s).start(); //here is the first option.
}
}
}
Now suppose there are many types of request the client can make. The code would then look like this:
public void startServer() throws IOException {
serverSocket= new ServerSocket(2000);
while (true){
Socket s= serverSocket.accept();
DataInputStream clientStream= new DataInputStream(s.getInputStream());
String requestName=clientStream.readUTF();
switch (requestName){
case "ClientRequestUploadFile": new ClientRequestUploadFileHandler(s).start();break;
case "clientRequestCalculator": new clientRequestCalculatorHandler(s).start();break;
case "clientRequestDownloadFile": new clientRequestDownloadFileHandler(s).start();break;
}
}
}
If there are 100 options, is there any way to avoid the switch statement (design patterns, maybe)?
Keep in mind that new options may be added in the future.

This seems like an example where something like the Command pattern would be appropriate.
Basically, you want a way to map a given command (in this case, a String), into executing the appropriate behavior.
The simplest way to do this would be to create a Command interface like so:
interface Command {
void execute();
}
Then you can create a Map<String, Command> that holds your commands and maps each incoming String into some helper class that implements Command and does the thing you want to happen when you see that command. Then you would use it something like:
commandMap.get(requestName).execute();
This will, however, require a bit of on-the-fly setup at program startup to build the Map with the command strings and the Command objects. This is a very dynamic way of setting up the mapping, which may be a good or bad thing depending on how often your command set changes.
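For example, here is a rough, self-contained sketch of that idea; the command names echo the ones from the question, and the printed actions are just placeholders:
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a command lookup map; the lambda bodies are placeholders.
public class CommandMapExample {

    interface Command {
        void execute();
    }

    public static void main(String[] args) {
        Map<String, Command> commandMap = new HashMap<>();
        commandMap.put("ClientRequestUploadFile", () -> System.out.println("uploading a file..."));
        commandMap.put("clientRequestCalculator", () -> System.out.println("calculating..."));

        String requestName = "ClientRequestUploadFile"; // would come from clientStream.readUTF()
        Command command = commandMap.get(requestName);
        if (command != null) {
            command.execute();
        } else {
            System.out.println("Unknown request: " + requestName);
        }
    }
}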
If your commands are more static, a more elegant way to set this up would be to use an enum to define the various commands and their behaviours. Here's a fairly simple and generic example of how you could do that:
public class CommandPatternExample {
public static void main(String[] args) throws Exception {
CommandEnum.valueOf("A").execute(); // run command "A"
CommandEnum.valueOf("B").execute(); // run command "B"
CommandEnum.valueOf("C").execute(); // IllegalArgumentException
}
interface Command {
void execute();
}
enum CommandEnum implements Command {
A {
@Override
public void execute() {
System.out.println("Running command A");
}
},
B {
@Override
public void execute() {
System.out.println("Running command B");
}
};
}
}
As pointed out in the comments, there's no way to get around having the command-to-handler mapping somewhere in your code. The main thing is to keep it out of your business logic, where it makes the method hard to read, and put it in its own class instead.

So what you have is a large-ish switch block that in the end start()s some piece of code. The recommended way is to use existing interfaces, so that would be a Runnable (containing a void method with no parameters, just like your start()).
If you refactor the whole block out into a method, it would have two inputs: the Socket and the requestName - so its signature would look like this:
Runnable getRequestCommand(Socket s, String request)
which would contain your switch block and returned something like
if ("ClientRequestUploadFile".equals(request)) {
return new ClientRequestUploadFileHandler(s);
}
// etc
Again using preexisting interfaces, this is a BiFunction<Socket, String, Runnable> (it requires the request string and the socket as input and returns the handler runnable).
Now you can split each individual case and create such a function:
BiFunction<Socket, String, Runnable> upload = (s, req) -> {
return "ClientRequestUploadFile".equals(req)
? new ClientRequestUploadFileHandler(s)
: null;
};
If you do the same for the other cases and store these in a Collection<BiFunction<Socket, String, Runnable>> (let's call it handlers), your getRequestCommand() method above looks like
Runnable requestHandler = null;
for (BiFunction<Socket, String, Runnable> handler : handlers) {
requestHandler = handler.apply(s, request);
if (requestHandler != null) { break; } // found the match
}
return requestHandler;
Now your switch actually also starts the created Runnable, so you can also if (requestHandler != null) { requestHandler.run(); } here and not return it back to the caller. As a single line, this is handlers.stream().map(h -> h.apply(s, request)).filter(Objects::nonNull).findFirst().ifPresent(Runnable::run).
Anyway, now you're stuck with creating all the BiFunction<>s in the original class, but you can externalize them, e.g. to an enum.
enum RequestHandler {
FILE_UPLOAD("ClientRequestUploadFile", ClientRequestUploadFileHandler::new),
CALCULATE("clientRequestCalculator", ClientRequestCalculatorHandler::new),
// ...
;
// the String that needs to match to execute this handler
private String request;
// creates the runnable if the request string matches
private Function<Socket, Runnable> createRunnable;
private RequestHandler(String r, Function<Socket, Runnable> f) {
request = r; createRunnable = f;
}
// and this is your handler method
static void runOnRequestMatch(Socket socket, String request) {
for (RequestHandler handler : values()) {
Runnable requestHandler = request.equals(handler.request)
? handler.createRunnable.apply(socket)
: null;
if (requestHandler != null) {
requestHandler.run();
break;
}
}
}
}
And in your client code, you'd get
// ...
Socket s= serverSocket.accept();
DataInputStream clientStream= new DataInputStream(s.getInputStream());
String requestName=clientStream.readUTF();
RequestHandler.runOnRequestMatch(s, requestName);
Now you've ended up with far more code than before, but the handling itself is removed from the class accepting the socket, so better single responsibility; this allows you to add functionality by adding a value to the enum without needing to touch your original code.
A simpler version would be to create the functions collection in a method by simply doing
Collection<BiFunction<Socket,String,Runnable>> createMappings() {
return Arrays.asList(
createMapping("ClientRequestUploadFile", ClientRequestUploadFileHandler::new),
createMapping("clientRequestCalculator", ClientRequestCalculatorHandler::new),
);
}
private BiFunction<Socket,String,Runnable> createMapping(String req, Function<Socket, Runnable> create) {
return (s, r) -> req.equals(r) ? create.apply(s) : null;
}
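Tying it together, the accept loop could then stay free of request-specific logic. A rough sketch, assuming the handler classes from the question exist and reusing the stream lookup shown earlier:
public void startServer() throws IOException {
    serverSocket = new ServerSocket(2000);
    // built once; each mapping only produces a Runnable for its own request string
    Collection<BiFunction<Socket, String, Runnable>> handlers = createMappings();
    while (true) {
        Socket s = serverSocket.accept();
        DataInputStream clientStream = new DataInputStream(s.getInputStream());
        String requestName = clientStream.readUTF();
        handlers.stream()
                .map(h -> h.apply(s, requestName))
                .filter(Objects::nonNull)
                .findFirst()
                .ifPresent(Runnable::run);
    }
}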

Related

Why do I get "Unidentified mapping from registry minecraft:block"?

I am learning how to write Minecraft mods (version 1.14.4) and was able to make an item. Now I am trying to make a block. I am following this tutorial video which actually covers 1.14.3, but I thought it would be close enough.
I am currently getting this error:
[20Mar2020 14:09:10.522] [Server thread/INFO] [net.minecraftforge.registries.ForgeRegistry/REGISTRIES]: Registry Block: Found a missing id from the world examplemod:examplemod
[20Mar2020 14:09:10.613] [Server thread/ERROR] [net.minecraftforge.registries.GameData/REGISTRIES]: Unidentified mapping from registry minecraft:block
examplemod:examplemod: 676
I also get presented with this at runtime:
I have tried messing around with how I'm naming the registries but I just can't seem to pin down what the issue is. Maybe I'm not formatting my JSON files correctly?
Note that ItemList and BlockList are just classes that contain static instances of each Block/Item I have created.
ExampleMod.java:
// The value here should match an entry in the META-INF/mods.toml file
@Mod(ExampleMod.MOD_ID)
public class ExampleMod
{
// Directly reference a log4j logger.
private static final Logger LOGGER = LogManager.getLogger();
public static final String MOD_ID = "examplemod";
public ExampleMod() {
// Register the setup method for modloading
FMLJavaModLoadingContext.get().getModEventBus().addListener(this::setup);
// Register the enqueueIMC method for modloading
FMLJavaModLoadingContext.get().getModEventBus().addListener(this::enqueueIMC);
// Register the processIMC method for modloading
FMLJavaModLoadingContext.get().getModEventBus().addListener(this::processIMC);
// Register the doClientStuff method for modloading
FMLJavaModLoadingContext.get().getModEventBus().addListener(this::doClientStuff);
// Register ourselves for server and other game events we are interested in
MinecraftForge.EVENT_BUS.register(this);
}
private void setup(final FMLCommonSetupEvent event)
{
// some preinit code
LOGGER.info("HELLO FROM PREINIT");
LOGGER.info("DIRT BLOCK >> {}", Blocks.DIRT.getRegistryName());
}
private void doClientStuff(final FMLClientSetupEvent event) {
// do something that can only be done on the client
LOGGER.info("Got game settings {}", event.getMinecraftSupplier().get().gameSettings);
}
private void enqueueIMC(final InterModEnqueueEvent event)
{
// some example code to dispatch IMC to another mod
InterModComms.sendTo(ExampleMod.MOD_ID, "helloworld", () -> { LOGGER.info("Hello world from the MDK"); return "Hello world";});
}
private void processIMC(final InterModProcessEvent event)
{
// some example code to receive and process InterModComms from other mods
LOGGER.info("Got IMC {}", event.getIMCStream().
map(m->m.getMessageSupplier().get()).
collect(Collectors.toList()));
}
// You can use SubscribeEvent and let the Event Bus discover methods to call
@SubscribeEvent
public void onServerStarting(FMLServerStartingEvent event) {
// do something when the server starts
LOGGER.info("HELLO from server starting");
}
// You can use EventBusSubscriber to automatically subscribe events on the contained class (this is subscribing to the MOD
// Event bus for receiving Registry Events)
@Mod.EventBusSubscriber(bus=Mod.EventBusSubscriber.Bus.MOD)
public static class RegistryEvents {
@SubscribeEvent
public static void onItemsRegistry(final RegistryEvent.Register<Item> blockItemEvent)
{
ItemList.bomb_item = new Item(new Item.Properties().group(ItemGroup.COMBAT));
ItemList.bomb_item.setRegistryName(new ResourceLocation(ExampleMod.MOD_ID, "bomb_item"));
ItemList.mystery_block = new BlockItem(BlockList.mystery_block, new Item.Properties().group(ItemGroup.MISC));
ItemList.mystery_block.setRegistryName(new ResourceLocation(ExampleMod.MOD_ID, "mystery_block"));
blockItemEvent.getRegistry().registerAll(ItemList.bomb_item, ItemList.mystery_block);
LOGGER.info("Items registered.");
}
@SubscribeEvent
public static void onBlocksRegistry(final RegistryEvent.Register<Block> blockRegistryEvent) {
BlockList.mystery_block = new Block(Block.Properties.create(Material.CAKE)
.hardnessAndResistance(2.0f, 2.0f)
.sound(SoundType.GLASS));
BlockList.mystery_block.setRegistryName(new ResourceLocation(MOD_ID, "mystery_block"));
blockRegistryEvent.getRegistry().registerAll(BlockList.mystery_block);
LOGGER.info("Blocks registered.");
}
}
}
blockstates/mystery_block.json:
{
"variants": {
"": {
"model": "examplemod:block/mystery_block"
}
}
}
models/block/mystery_block.json:
{
"parent": "block/cube_all",
"textures": {
"all": "examplemod:block/mystery_block"
}
}
models/item/mystery_block.json:
{
"parent": "examplemod:block/mystery_block"
}
All that means is that at some point you had a block registered as "examplemod:examplemod", and now you don't. You can safely ignore the message. If you start a new world instead of opening an old one, you won't get that message.
As an aside, HarryTalks is not a recommended way to learn to mod (according to the Minecraft Forge forums). Apparently he uses several bad practices (I have not used his tutorials myself).
Alternative suggestions are Cadiboo's example mod, or McJty's tutorials.

Concurrency on Vertx

I have joined the ranks of Vert.x fans. However, the single-threaded model may not work for me, because my server might receive 50 file download requests at a time. As a workaround I have created this class:
public abstract class BackgroundExecutor<T> {
public abstract T onRun() throws Exception;
public abstract void onSuccess(T result);
public abstract void onException();
private static final int poolSize = Runtime.getRuntime().availableProcessors();
private static final long maxExecuteTime = 120000;
private static WorkerExecutor mExecutor;
private static final String BG_THREAD_TAG = "BG_THREAD";
protected RoutingContext ctx;
private boolean isThreadInBackground(){
return Thread.currentThread().getName() != null && Thread.currentThread().getName().equals(BG_THREAD_TAG);
}
//on success will not be called if exception be thrown
public BackgroundExecutor(RoutingContext ctx){
this.ctx = ctx;
if(mExecutor == null){
mExecutor = MyVertxServer.vertx.createSharedWorkerExecutor("my-worker-pool",poolSize,maxExecuteTime);
}
if(!isThreadInBackground()){
/** we are unlocking the lock before res.succeeded , because it might take long and keeps any thread waiting */
mExecutor.executeBlocking(future -> {
try{
Thread.currentThread().setName(BG_THREAD_TAG);
T result = onRun();
future.complete(result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
future.fail(e);
}
/** false here means they should not be parallel , and will run without order multiple times on same context*/
},false, res -> {
if(res.succeeded()){
onSuccess((T)res.result());
}
});
}else{
GUI.display("AVOIDED DUPLICATE BACKGROUND THREADING");
System.out.println("AVOIDED DUPLICATE BACKGROUND THREADING");
try{
T result = onRun();
onSuccess((T)result);
}catch (Exception e) {
GUI.display(e);
e.printStackTrace();
onException();
}
}
}
}
allowing the handlers to extend it and use it like this
public abstract class DefaultFileHandler implements MyHttpHandler{
public abstract File getFile(String suffix);
@Override
public void Handle(RoutingContext ctx, VertxUtils utils, String suffix) {
new BackgroundExecutor<Void>(ctx) {
@Override
public Void onRun() throws Exception {
File file = getFile(URLDecoder.decode(suffix, "UTF-8"));
if(file == null || !file.exists()){
utils.sendResponseAndEnd(ctx.response(),404);
return null;
}else{
utils.sendFile(ctx, file);
}
return null;
}
@Override
public void onSuccess(Void result) {}
@Override
public void onException() {
utils.sendResponseAndEnd(ctx.response(),404);
}
};
}
}
And here is how I initialize my Vert.x server:
vertx.deployVerticle(MainDeployment.class.getCanonicalName(),res -> {
if (res.succeeded()) {
GUI.display("Deployed");
} else {
res.cause().printStackTrace();
}
});
server.requestHandler(router::accept).listen(port);
And here is my MainDeployment class:
public class MainDeployment extends AbstractVerticle{
@Override
public void start() throws Exception {
// Different ways of deploying verticles
// Deploy a verticle and don't wait for it to start
for(Entry<String, MyHttpHandler> entry : MyVertxServer.map.entrySet()){
MyVertxServer.router.route(entry.getKey()).handler(new Handler<RoutingContext>() {
@Override
public void handle(RoutingContext ctx) {
String[] handlerID = ctx.request().uri().split(ctx.currentRoute().getPath());
String suffix = handlerID.length > 1 ? handlerID[1] : null;
entry.getValue().Handle(ctx, new VertxUtils(), suffix);
}
});
}
}
}
This is working just fine when and where I need it, but I still wonder whether there is a better way to handle concurrency like this in Vert.x. If so, an example would be really appreciated. Thanks a lot.
I don't fully understand your problem and reasons for your solution. Why don't you implement one verticle to handle your http uploads and deploy it multiple times? I think that handling 50 concurrent uploads should be a piece of cake for vert.x.
When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:
DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
This is useful for scaling easily across multiple cores. For example you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to take utilise all the cores.
http://vertx.io/docs/vertx-core/java/#_specifying_number_of_verticle_instances
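For illustration, here is a minimal sketch of such a verticle (the class name, port, and file path are made up, not taken from your code); each instance gets its own event loop and Vert.x spreads incoming connections across them:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class FileDownloadVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
                // sendFile streams the file asynchronously, so the event loop
                // is not blocked while the download is in progress
                .requestHandler(req -> req.response().sendFile("downloads" + req.path()))
                .listen(8080);
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // one instance per core; all instances share port 8080
        DeploymentOptions options = new DeploymentOptions()
                .setInstances(Runtime.getRuntime().availableProcessors());
        vertx.deployVerticle(FileDownloadVerticle.class.getName(), options);
    }
}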
Vert.x is designed so that concurrency issues do not occur, and it generally does not recommend a multi-threaded model (because handling shared state is not easy). If you choose a multi-threaded model, you have to think about shared data.
Simply put, if you just want to spread the work over more event loops, first check the number of CPU cores you have, and then set the instance count accordingly:
DeploymentOptions options = new DeploymentOptions().setInstances(4);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);
But if you have 4 CPU cores, don't set up more than 4 instances. Setting the count higher than that won't improve performance.
vertx concurrency reference
http://vertx.io/docs/vertx-core/java/

Passing parameters through a HashMap to a function in Java

I am trying to build something like a command-line tool in Java. For example, if I type "dir c:/...." in the console, it should activate my Dir class and pass the "c:/...." path to it as a parameter, with a HashMap used for the lookup.
I don't know how to pass parameters through the command line and the HashMap; is it even possible?
Every command has its own class, which implements the main "Command" interface with a doCommand() function.
After running the start() function in the CLI class, it should read commands and perform the requested command.
Command Interface:
public interface Command {
public void doCommand();
}
my CLI class:
public class CLI {
BufferedReader in;
PrintWriter out;
HashMap<String, Command> hashMap;
Controller controller;
public CLI(Controller controller, BufferedReader in, PrintWriter out,
HashMap<String, Command> hashMap) {
this.in = in;
this.out = out;
this.hashMap = hashMap;
}
public void start() {
new Thread(new Runnable() {
@Override
public void run() {
try {
out.println("Enter a command please:");
String string = in.readLine();
while (!string.equals("exit")) {
Command command = hashMap.get(string);
command.doCommand();
string = in.readLine();
}
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}).start();
}
}
Let's take for example my DirCommand which, as I said before, should be looked up via the "dir" string in my hashMap configuration and should receive the next word as a String parameter for the path:
public class DirCommand implements Command {
@Override
public void doCommand() {
System.out.println("doing dir command...");
}
}
and my hashmap configuration:
hashMap.put("dir", new DirCommand());
This configuration is set up in a different class and passed to the CLI class's hashMap at the start of the project.
I would love some help, because I have no idea how to do this.
First of all, in order to pass the parameters to doCommand, what I would do is use a varargs parameter list, like:
public interface Command {
public void doCommand(String... params);
}
Second, I would split the input string on spaces as:
String[] result = command.split(" ");
Finally, the command name would be result[0], and the rest you would pass to the doCommand method.
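Putting those three steps together, a small self-contained sketch (the "dir" behaviour is only a placeholder) could look like this:
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

// Sketch: split the line, look the command up by its first word, and pass
// the remaining words to doCommand(). The DirCommand body is a placeholder.
public class CliSketch {

    interface Command {
        void doCommand(String... params);
    }

    public static void main(String[] args) {
        Map<String, Command> commands = new HashMap<>();
        commands.put("dir", params -> System.out.println("dir of: " + String.join(" ", params)));

        Scanner scanner = new Scanner(System.in);
        String line = scanner.nextLine();               // e.g. "dir c:/temp"
        String[] result = line.split(" ");
        Command command = commands.get(result[0]);      // "dir"
        if (command != null) {
            // everything after the command name becomes the parameters
            command.doCommand(Arrays.copyOfRange(result, 1, result.length));
        }
    }
}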

What security issues come from calling methods with reflection?

I'm working on a project that has hosts and clients, and where hosts can send commands to clients (via sockets).
I've determined that using JSON works best for communication.
For example:
{
"method" : "toasty",
"params" : ["hello world", true]
}
In this example, when this JSON string is sent to the client, it will be processed and a suitable method within the client will be run as such:
public abstract class ClientProcessor {
public abstract void toasty(String s, boolean bool);
public abstract void shutdown(int timer);
private Method[] methods = getClass().getDeclaredMethods();
public void process(String data) {
try {
JSONObject json = new JSONObject(data);
String methodName = (String) json.get("method");
if (methodName.equals("process"))
return;
for (int i = 0; i < methods.length; i++)
if (methods[i].getName().equals(methodName)) {
JSONArray arr = json.getJSONArray("params");
int length = arr.length();
Object[] args = new Object[length];
for (int i2 = 0; i2 < length; i2++)
args[i2] = arr.get(i2);
methods[i].invoke(this, args);
return;
}
} catch (Exception e) {}
}
}
And using the ClientProcessor:
public class Client extends ClientProcessor {
@Override
public void toasty(String s, boolean bool) {
//make toast here
}
@Override
public void shutdown(int timer) {
//shutdown system within timer
}
public void processJSON(String json) {
process(json);
}
}
The JSON is sent by the server to the client, but the server could be modified to send different JSONs.
My questions are:
Is this a safe way of running methods by processing JSON?
Is there a better way to do this? I'm thinking that using reflection is terribly slow.
There are a hundred and one ways you can process a JSON message so that some processing occurs, but they all boil down to:
parse message
map message to method
invoke method
send response
While you could use a reflective call (performance-wise it would be fine for most cases) to invoke a method, that, imho, would be a little too open - a malicious client could for example crash your system by issuing wait calls.
Reflection also opens you up to having to correctly map the parameters, which is more complicated than the code you've shown in your question.
So don't use Reflection.
What you could do is define a simple interface, implementations of which understand how to process the parameters, and have your processor (more commonly referred to as a Controller) invoke that, something like this:
public interface ServiceCall
{
public JsonObject invoke(JsonArray params) throws ServiceCallException;
}
public class ServiceProcessor
{
private static final Map<String, ServiceCall> SERVICE_CALLS = new HashMap<>();
static
{
SERVICE_CALLS.put("toasty", new ToastCall());
}
public String process(String messageStr)
{
try
{
JsonObject message = Json.createReader(new StringReader(messageStr)).readObject();
if (message.containsKey("method"))
{
String method = message.getString("method");
ServiceCall serviceCall = SERVICE_CALLS.get(method);
if (serviceCall != null)
{
return serviceCall.invoke(message.getJsonArray("params")).toString();
}
else
{
return fail("Unknown method: " + method);
}
}
else
{
return fail("Invalid message: no method specified");
}
}
catch (Exception e)
{
return fail(e.getMessage());
}
}
private String fail(String message)
{
return Json.createObjectBuilder()
.add("status", "failed")
.add("message", message)
.build()
.toString();
}
private static class ToastCall implements ServiceCall
{
public JsonObject invoke(JsonArray params) throws ServiceCallException
{
//make toast here, then return some result object
return Json.createObjectBuilder().add("status", "ok").build();
}
}
}
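A quick usage sketch of the processor above, fed the same message as in the question (purely illustrative):
// Hypothetical usage of the ServiceProcessor defined above.
ServiceProcessor processor = new ServiceProcessor();
String response = processor.process("{\"method\":\"toasty\",\"params\":[\"hello world\",true]}");
System.out.println(response); // the JSON returned by ToastCall, or a failure object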
Map method names to int constants and just switch on these constants to invoke the appropriate method:
"toasty" : 1
"shutdown": 2
switch()
case 1: toasty()
case 2: shutdown()
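A rough sketch of that idea (the constant values and the printed actions are illustrative only):
import java.util.HashMap;
import java.util.Map;

public class MethodSwitchExample {

    private static final Map<String, Integer> METHOD_IDS = new HashMap<>();
    static {
        METHOD_IDS.put("toasty", 1);
        METHOD_IDS.put("shutdown", 2);
    }

    static void dispatch(String methodName) {
        // -1 falls through to the default branch for unknown methods
        switch (METHOD_IDS.getOrDefault(methodName, -1)) {
            case 1: System.out.println("toasty()"); break;
            case 2: System.out.println("shutdown()"); break;
            default: System.out.println("unknown method: " + methodName);
        }
    }

    public static void main(String[] args) {
        dispatch("toasty");   // toasty()
        dispatch("reboot");   // unknown method: reboot
    }
}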
I believe you are trying to convert a JSON string to a Java object and vice versa... if that is the requirement, then this would not be the right approach.
Try an open source API like Gson.
It is Google's API for conversion between Java objects and JSON.
Please check:
https://google-gson.googlecode.com/svn/trunk/gson/docs/javadocs/com/google/gson/Gson.html
Let me know if you have any further questions...
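For instance, a minimal Gson round trip over the message shape from the question (the Request class here is made up for illustration):
import com.google.gson.Gson;

public class GsonExample {

    // Mirrors the {"method": ..., "params": [...]} shape from the question.
    static class Request {
        String method;
        Object[] params;
    }

    public static void main(String[] args) {
        String json = "{\"method\":\"toasty\",\"params\":[\"hello world\",true]}";
        Gson gson = new Gson();
        Request request = gson.fromJson(json, Request.class); // JSON -> Java
        System.out.println(request.method);                   // toasty
        System.out.println(gson.toJson(request));             // Java -> JSON
    }
}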

Socket-based Message Factory

I'm looking for some ideas on implementing a basic message factory that reads a header from an input stream and creates the appropriate message type based on the type defined in the message header.
So I have something like (roughly.. and I'm willing to change the design if a better paradigm is presented here)
class MessageHeader {
public String type;
}
class MessageA extends Message {
public static final String MESSAGE_TYPE = "MSGA";
public MessageA (DataInputStream din) {
var1 = din.readInt ();
var2 = din.readInt ();
// etc
}
}
and I essentially want to do something like this:
MessageHeader header = ... read in from stream.
if (MessageA.MESSAGE_TYPE.equals(header.type)) {
return new MessageA (din);
} else if (MessageB.MESSAGE_TYPE.equals(header.type)) {
return new MessageB (din);
}
Although this scheme works I feel like there could be a better method using a Map and an Interface somehow...
public interface MessageCreator {
public Message create (DataInputStream din);
}
Map<String, MessageCreator> factory = new HashMap<String, MessageCreator>();
factory.put (MessageA.MESSAGE_TYPE, new MessageCreator () {
public Message create (DataInputStream din) {
return new MessageA (din); }});
...
// Read message header
Message createdMessage = factory.get(header.type).create(din);
But then whenever I want to use the message I have to use instanceof and cast to the correct subclass.
Is there a 3rd (better?) option? Maybe there's a way to accomplish this using templates. Any help is appreciated. Thanks
Edit: I guess it's important to note I want to "dispatch" the message to a function. So essentially I really want to do this:
MessageHeader header = ... read in from stream.
if (MessageA.MESSAGE_TYPE.equals(header.type)) {
handleMessageA (new MessageA (din));
} else if (MessageB.MESSAGE_TYPE.equals(header.type)) {
handleMessageB (new MessageB (din))
}
So a pattern that incorporates the factory and a dispatch would be perfect
How about letting the guy who creates the messages actually dispatch to a handler?
So you'd add a handler interface like this:
public interface MessageHandler {
void handleTypeA(MessageA message);
void handleTypeB(MessageB message);
}
Then you'd have a dispatcher which is basically the same thing as your MessageCreator, except it calls the correct method on the handler instead of returning the message object.
public interface MessageDispatcher {
void createAndDispatch(DataInputStream input, MessageHandler handler);
}
The implementation is then almost identical to the first code snippet you posted:
public void createAndDispatch(DataInputStream input, MessageHandler handler) {
MessageHeader header = ... read in from stream.
if (MessageA.MESSAGE_TYPE.equals(header.type)) {
handler.handleTypeA(new MessageA(input));
} else if (MessageB.MESSAGE_TYPE.equals(header.type)) {
handler.handleTypeB(new MessageB(input));
}
}
Now you only have the one spot in the code where you have to do a switch or if/else if and after that everything is specifically typed and there's no more casting.
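A possible usage sketch, reusing the MessageA/MessageB types from the question; MessageDispatcherImpl stands in for whatever class implements createAndDispatch, and the handler bodies are placeholders:
// Hypothetical wiring: the dispatcher both builds the message and routes it.
MessageHandler handler = new MessageHandler() {
    @Override
    public void handleTypeA(MessageA message) {
        System.out.println("got a MessageA"); // real handling goes here
    }

    @Override
    public void handleTypeB(MessageB message) {
        System.out.println("got a MessageB");
    }
};

MessageDispatcher dispatcher = new MessageDispatcherImpl(); // your implementation
DataInputStream din = new DataInputStream(socket.getInputStream());
dispatcher.createAndDispatch(din, handler);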
