Scenario: I stored some information (e.g. an array of doubles) in a class field (say a field Measurements, an array of integers, in a class MeasureData). Now I would like to use this data to perform some calculations (e.g. compute the arithmetic mean of the array, the maximum and the minimum). At the moment, I don't know whether I'll need to perform any other operation on those data in the future (e.g. maybe I will need the standard deviation, the sum or whatever). I'll have many objects of type MeasureData.
Solution: I could write a class Calculator, declare it final, give it a private constructor, and use several static methods to perform the calculations I need. This seems to make sense, since Calculator acts as a utility class without any fields, much like the standard Math class.
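A minimal sketch of what that utility class might look like (illustrative only):

// Illustrative sketch of the utility-class approach described above.
public final class Calculator {

    private Calculator() {
        // private constructor: prevents instantiation
    }

    public static double mean(int[] data) {
        double sum = 0;
        for (int value : data) {
            sum += value;
        }
        return sum / data.length;
    }

    // max(), min(), etc. would follow the same pattern
}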
Problem: if, in a couple of months, I need to perform some other calculation, I'll have to write another static method in Calculator. Does this violate the open/closed principle (after all, I'm modifying the implementation of the class Calculator)?
The strict answer is yes; OCP states that a class should be open for extension but closed for modification. You would be modifying Calculator and, hence, violating OCP (as you've already concluded).
This leads to two points:
First, is violating OCP a big deal in this case? You're additively changing Calculator to add a new method to it. Calculator is a static helper class used to get meaningful data from your objects. Adding a new method, like calculating SD, is not going to affect any of the other operations within it. With a proper implementation, is there really a way that adding this method could compromise your project?
Second, if you feel like the OCP violation is not acceptable, then this is a textbook example of where the Strategy pattern can be applied. Consider:
Measurements.java
public class Measurements {

    private int[] data;

    public Measurements(int[] data) {
        this.data = data;
    }

    public Number performCalculation(Calculation c) {
        return c.performCalculation(data);
    }
}
Calculation.java
public interface Calculation {
    Number performCalculation(int[] data);
}
You can then write a calculation class for each different calculation you want to perform on the data (e.g. MeanCalculation, StdDevCalculation, etc.). If you want a new calculation (e.g. MedianCalculation), you can add it without modifying any of the other code in this area (closed for modification, open for extension; OCP compliant). The end result looks like:
Measurements values = ...
Number mean = values.performCalculation(new MeanCalculation());
Number SD = values.performCalculation(new StdDevCalculation());
// etc.
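For illustration, one concrete strategy might look like this (a minimal sketch; the null/empty guard is my own choice):

// A minimal concrete strategy for the Calculation interface above.
public class MeanCalculation implements Calculation {

    @Override
    public Number performCalculation(int[] data) {
        if (data == null || data.length == 0) {
            throw new IllegalArgumentException("data must be non-empty");
        }
        double sum = 0;
        for (int value : data) {
            sum += value;
        }
        return sum / data.length;
    }
}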
I'm not saying this is the best approach (or best implementation of the approach even) for your specific case; you need to answer that for yourself. But I hope this answer provides a decent external perspective on the matter.
Related
I can't find any information on this anywhere and was wondering whether such a use of a class is considered bad practice or not.
Let me explain. I have a class ToDecimalConverter which converts an integer from any base to decimal. However, I now need to add the functionality to also convert fractions. As such, I abstracted the integer conversion into a separate class and created a new class with the purpose of converting fractions. (Code isn't finished so I just added some comments to explain)
public class ToDecimalConverter {

    private IntegerToDecimalConverter integerConverter;
    private DoubleToDecimalConverter doubleConverter;

    public double convert(String number, int baseNumber) {
        this.integerConverter = new IntegerToDecimalConverter();
        this.doubleConverter = new DoubleToDecimalConverter();

        number = this.removeZerosAtBeginningOfNumber(number);

        // split the number into integer and fraction so they can be used below:
        String[] parts = number.split("\\.", 2);
        String integerNumber = parts[0];
        String fractionNumber = parts.length > 1 ? parts[1] : "0";

        int decimalInt = this.integerConverter.convert(integerNumber, baseNumber);
        double decimalDouble = this.doubleConverter.convert(fractionNumber, baseNumber);

        // add them together and return them
        return decimalInt + decimalDouble;
    }
}
Now, except for the method that removes the zeros from the start of a number and the method that splits the number into its integer and fraction parts (both of which can easily be extracted into their own classes), the ToDecimalConverter class does nothing but group the integer and fraction converters together.
When searching online, I don't see a lot of classes being used like this. Should this be avoided or not? And if so, what are the alternatives?
This is meant as a more general question; the above is just to explain what I mean.
Edit: Or should I see it as a sort of mini GoF Facade pattern?
There is nothing wrong with it by default, but I would guess that you could achieve the same result with two methods. Something like:
public int convertFromInt(String number, int baseNumber) {
    int theConvertedInt = 0;
    // Really cool conversion
    return theConvertedInt;
}

public double convertFromFraction(String number, int baseNumber) {
    double theConvertedDouble = 0;
    // Really cool conversion
    return theConvertedDouble;
}
Also, keep in mind that a lot of these conversions are already done by native Java classes like BigInteger, BigDecimal, Integer, Double, the Math class and so on.
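For example, for the integer part, the JDK already handles any radix up to 36:

// Base conversion that ships with the JDK:
int a = Integer.parseInt("1011", 2);   // 11
int b = Integer.parseInt("ff", 16);    // 255
long c = Long.parseLong("777", 8);     // 511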
Not going into the specifics of what your class is doing: there is indeed value in grouping several or many functions/classes together to form a single unified API.
This is called the Facade design pattern.
The intent is that, instead of requiring your clients to know about the various classes/objects you use internally to achieve a feature, and to look all over the place inside your implementation code, you put in place a single entry point for a given feature or set of features. It is much better for discoverability and documentation.
This way you also ensure that the public API consists of only the one or few classes that make up the facade, while the implementation remains hidden and can change at any time.
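In the converter example from the question, that means client code only ever sees the single entry point (a hypothetical usage; the input value is illustrative):

// The client never touches IntegerToDecimalConverter or
// DoubleToDecimalConverter directly; the facade is the whole API.
ToDecimalConverter converter = new ToDecimalConverter();
double value = converter.convert("101.01", 2); // 5.25 for base-2 input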
I am using the Builder pattern to make it easier to create objects. However, the standard builder pattern examples do not include error checking, which is needed in my code. For example, the accessibility and demandMean arrays in the Simulator object should have the same length. A brief framework of the code is shown below:
public class Simulator {

    double[] accessibility;
    double[] demandMean;

    // Constructor omitted for brevity

    public static class Builder {

        private double[] _accessibility;
        private double[] _demandMean;

        public Builder accessibility(double[] accessibility) {
            _accessibility = accessibility.clone();
            return this;
        }

        public Builder demandMean(double[] demandMean) {
            _demandMean = demandMean.clone();
            return this;
        }

        // build() method omitted for brevity
    }
}
As another example, in a promotion optimization problem, there are various promotional vehicles (e.g. flyers, displays) and promotion modes, which are sets of promotional vehicles (e.g. none, flyer only, display only, flyer and display). When I create the Problem, I have to define the set of vehicles available, and check that the promotion modes use a subset of these vehicles and not some other, unavailable vehicles, as well as that the promotion modes are not identical (e.g. there aren't two promo modes that are both "flyer only"). A brief framework of the code is shown below:
public class Problem {

    Set<Vehicle> vehicles;
    Set<PromoMode> promoModes;

    public static class Builder {
        Set<Vehicle> _vehicles;
        Set<PromoMode> _promoModes;
    }
}

public class PromoMode {
    Set<Vehicle> vehiclesUsed;
}
My questions are the following:
Is there a standard approach to address such a situation?
Should the error checking be done in the constructor or in the builder when the build() method is called?
Why is this the "right" approach?
When you need invariants to hold while creating an object, stop construction as soon as any parameter violates them. This is also a fail-fast approach.
The builder pattern helps you create an object when you have a large number of parameters.
That does not mean that you don't do error checking.
Just throw an appropriate RuntimeException as soon as a parameter violates the object's invariants.
You should use the constructor, since that follows the Single Responsibility Principle better. It is not the responsibility of the Builder to check invariants. Its only real job is to collect the data needed to build the object.
Also, if you decide to change the class later to have public constructors, you don't have to move that code.
You definitely shouldn't check invariants in the setter methods. Checking in the constructor has several benefits:
* You only need to do checking ONCE
* In cases such as your code, you CAN'T check your invariants earlier, since you're adding your two arrays at different times. You don't know what order your users are going to add them, so you don't know which method should run the check.
Unless a setter in your builder does some intense calculations (which is rarely the case; generally, if some sort of calculation is required, it should happen in the constructor anyway), it doesn't help very much to "fail early", especially since fluent builders like yours use only one line of code to build the object anyway, so any try block would surround that whole line either way.
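To make that concrete, here is a sketch of the Simulator example with the invariant checks in the constructor (a minimal sketch assuming the fields from the question; build() just hands the collected data over):

public class Simulator {

    private final double[] accessibility;
    private final double[] demandMean;

    // The constructor is the single place where invariants are enforced.
    private Simulator(Builder builder) {
        if (builder.accessibility == null || builder.demandMean == null) {
            throw new IllegalStateException("both arrays must be provided");
        }
        if (builder.accessibility.length != builder.demandMean.length) {
            throw new IllegalArgumentException(
                    "accessibility and demandMean must have the same length");
        }
        this.accessibility = builder.accessibility;
        this.demandMean = builder.demandMean;
    }

    public static class Builder {

        private double[] accessibility;
        private double[] demandMean;

        public Builder accessibility(double[] accessibility) {
            this.accessibility = accessibility.clone();
            return this;
        }

        public Builder demandMean(double[] demandMean) {
            this.demandMean = demandMean.clone();
            return this;
        }

        public Simulator build() {
            return new Simulator(this); // fail-fast happens here
        }
    }
}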
The "right" approach really depends on the situation - if it is invalid to construct the arrays with different sizes, i'd say it's better to do the handling in the construction, the sooner an invalid state is caught the better.
Now, if you for instance can change the arrays and put in a different one - then it might be better to do it when calling them.
I've been developing a massive Role Playing Game. The problem is that I'm having trouble engineering how I will manage the Item and Inventory system. Currently I have something similar to this:
A public abstract class Item has 5 nested classes, all abstract and static, that represent the types of items. Every nested class has unique use(), delete() (which finalizes the class instance) and sell() (which triggers delete()) methods. They also have optional getter and setter methods, like the setAll() method which fills all necessary fields.
Default: Has base price, tradeability boolean, String name, etc... Very flexible.
Weapon: Besides what the Default type has, it has integers for stat bonuses on being equipped (used in the equip() and unequip() methods). Interacts with the public class Hero.
Equipment: Similar to Weapon, except that it has an enum field called EquipSlot that determines where it is equipped.
Consumable: Similar to Default, except that it has a consume() method that lets the player apply certain effects to a Hero when using it. Consuming usually means triggering the delete() method.
Special: Usually quest-related items where the Tradeable boolean is static, final and always false.
Now, the way that I make customized items is this.
First, I make a new class (Not abstract)
Then, I make it extend Item.ItemType
Then, I make a constructor which calls setAll(info) inside.
Then, I can use this class in other classes.
It all looks like this:
package com.ep1ccraft.Classes.Items.Defaults;

import com.ep1ccraft.apis.Item.*;

public class ItemExample extends Item.Default {

    public ItemExample() { // Constructor
        this.setAll(lots of arguments here);
    }
}
Then I can do:
ItemExample something = new ItemExample();
And I have a perfect ItemExample with all the properties that I want. So I can make various instances of it and use methods like getName() and that kind of stuff.
The problem comes with naming the instances, as I do not know how to automatically give each instance a different name so they don't collide. I also want to implement an inventory system that uses slots as containers and can keep stacks (stackable items only). Its main features are that you can drag and drop items into other slots (for example, to reorganize, to move them to another inventory instance like a bank, or to place them in a hero's weapon or equipment slots, if that is allowed) and that you can click on them to display a screen that shows the name, description and possible actions of the item (which trigger the previously mentioned delete() and use() methods).
Thank you for reading all that! I know that maybe I'm asking for too much, but I'll appreciate any answers anyway!
So basically, you're asking for a unique identifier for your object. There are probably countless approaches to this, but the three different approaches that immediately come to mind are:
1: A UUID; something like:
java.util.UUID.randomUUID()
Pros: A very, very simple solution...
Cons: It does generate a large number of bytes (16 per id, plus the object itself), taking memory / disk storage, which might be an issue in an MMO.
2: A global running number; something like:
class ID {

    private static volatile long id = 0;

    public static synchronized long nextId() {
        return id++;
    }
}
Pros: Again, a simple solution...
Cons: Even though this is a very simple class, it does contain "volatile" and "synchronized", which might be an issue for an MMO, especially if it is used heavily. Also, what happens after X years of running time if you run out of numbers? Then again, a 64-bit long requires quite a lot of named objects to be created before it overflows, so it may not be an issue after all... you'll have to do the math yourself.
3: Named global running numbers; something like:
import java.util.HashMap;
import java.util.Map;

class NamedID {

    private static volatile Map<String, Long> idMap = new HashMap<String, Long>();

    public static synchronized long nextId(String name) {
        Long id = idMap.get(name);
        if (id == null) {
            id = 0L;
        } else {
            id++;
        }
        idMap.put(name, id);
        return id;
    }
}
Pros: You get ids "localized" to whatever name you're using for them.
Cons: A bit more complex solution, and worse than "2" in terms of speed, since the synchronization lasts longer.
Note: I couldn't figure out how to make this last suggestion faster; I thought of using a ConcurrentHashMap, but that won't work since it operates at a lower level; i.e. it will not guarantee that two threads do not interfere with each other between the idMap.get and the idMap.put statements.
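(A sketch of one way around that, assuming Java 8's ConcurrentHashMap.computeIfAbsent is available: since computeIfAbsent is atomic per key and AtomicLong.incrementAndGet is atomic, the get-then-put race disappears.)

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class NamedIdGenerator {

    // One counter per name; computeIfAbsent creates it atomically on first use.
    private static final ConcurrentHashMap<String, AtomicLong> counters =
            new ConcurrentHashMap<>();

    public static long nextId(String name) {
        // No explicit synchronization: the map handles the first-use race,
        // and incrementAndGet is atomic, so ids are never duplicated.
        return counters.computeIfAbsent(name, n -> new AtomicLong(-1))
                       .incrementAndGet();
    }
}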
What are some common strategies for refactoring large "state-only" objects?
I am working on a specific soft-real-time decision support system which does online modeling/simulation of the national airspace. This piece of software consumes a number of live data feeds, and produces a once-per-minute estimate of the "state" of a large number of entities in the airspace. The problem breaks down neatly until we hit what is currently the lowest-level entity.
Our mathematical model estimates/predicts upwards of 50 parameters for a timeline of several hours into the past and future for each of these entities, roughly once per minute. Currently, these records are encoded as a single Java class with a lot of fields (some get collapsed into an ArrayList). Our model is evolving, and the dependencies among the fields are not yet set in stone, so each instance wanders through a convoluted model, accumulating settings as it goes along.
Currently we have something like the following, which uses a builder pattern approach to build up the contents of the record and enforce what the known dependencies are (as a check against programmer error as we evolve the model). Once the estimate is done, we convert the below into an immutable form using a .build() type method.
final class OneMinuteEstimate {

    enum EstimateState { INFANT, HEADER, INDEPENDENT, ... };
    EstimateState state = EstimateState.INFANT;

    // "header" stuff
    DateTime estimatedAtTime = null;
    DateTime stamp = null;
    EntityId id = null;

    // independent fields
    int status1 = -1;
    ...

    // dependent/complex fields...
    // ... goes on for 40+ more fields...

    void setHeaderFields(...)
    {
        if (!EstimateState.INFANT.equals(state)) {
            throw new IllegalStateException("Must be in INFANT state to set header");
        }
        ...
    }
}
Once a very large number of these estimates are complete, they are assembled into timelines where aggregate patterns/trends are analyzed. We have looked at using an embedded database but have struggled with performance issues; we'd rather get this sorted out in terms of data modeling and then incrementally move portions of the soft-real-time code into an embedded data store.
Once the "time sensitive" pieces of this are done, the products are flushed to flat files and a database.
Problems:
It's a giant class, with way too many fields.
There is very little behavior encoded in the class; it's mostly a holder for data fields.
Maintaining the build() method is extremely cumbersome.
It feels clumsy to manually maintain a "state machine" abstraction merely for the purpose of ensuring that a large number of dependent modeling components are properly populating a data object, but it has saved us a lot of frustration as the model evolves.
There is a lot of duplication, particularly when the records described above are aggregated into very similar "rollups" which amount to rolling sums/averages or other statistical products of the above structure in time series.
While some of the fields could be clumped together, they are all logically "peers" of one another, and any breakdown we've tried has resulted in having behavior/logic artificially split and needing to reach two levels deep in indirection.
Out-of-the-box ideas are welcome, but this is something we need to evolve incrementally. Before anyone else says it, I'll note that one could suggest that our mathematical model is insufficiently crisp if the data representation for that model is this hard to get hold of. Fair point, and we're working on that, but I think that's a side-effect of an R&D environment with a lot of contributors and a lot of concurrent hypotheses in play.
(Not that it matters, but this is implemented in Java. We use HSQLDB or Postgres for output products. We don't use any persistence framework, partly out of a lack of familiarity, partly because we have enough performance trouble with just the database alone and hand-coded storage routines... we're skeptical of moving towards additional abstraction.)
I had much of the same problem you did.
At least I think I did, sounds like I did. Representation was different, but at 10,000 feet, sounds pretty much the same. Crapload of discrete, "arbitrary" variables and a bunch of ad hoc relationships among them (essentially business driven), subject to change at a moment's notice.
You also have another issue, which you sorta mentioned, and that was the performance requirement. Sounds like faster is better, and likely a slow perfect solution would be tossed out for the fast lousy one, simply because the slower one can't meet a baseline performance requirement, no matter how good it is.
To put it simply, what I did was I designed a simple domain specific rule language for my system.
The entire point of the DSL was to implicitly express relationships and package them up into modules.
Very crude, contrived example:
D = 7
C = A + B
B = A / 5
A = 10
RULE 1: IF (C < 10) ALERT "C is less than 10"
RULE 2: IF (C > 5) ALERT "C is greater than 5"
RULE 3: IF (D > 10) ALERT "D is greater than 10"
MODULE 1: RULE 1
MODULE 2: RULE 3
MODULE 3: RULE 1, RULE 2
First, this is not representative of my syntax.
But you can see from the modules that these are 3 simple rules.
The key though, is that it's obvious from this that Rule 1 depends on C, which depends on A and B, and B depends on A. Those relationships are implied.
So, for that module, all of those dependencies "come with it". You can see if I generated code for Module 1 it might look something like:
public void module_1() {
    int a = 10;
    int b = a / 5;
    int c = a + b;
    if (c < 10) {
        alert("C is less than 10");
    }
}
Whereas if I created Module 2, all I would get is:
public void module_2() {
    int d = 7;
    if (d > 10) {
        alert("D is greater than 10.");
    }
}
In Module 3 you see the "free" reuse:
public void module_3() {
    int a = 10;
    int b = a / 5;
    int c = a + b;
    if (c < 10) {
        alert("C is less than 10");
    }
    if (c > 5) {
        alert("C is greater than 5");
    }
}
So, even though I have one "soup" of rules, the Modules root the base of the dependencies, and thus filter out the stuff it doesn't care about. Grab a module, shake the tree and keep what's left hanging.
My system used the DSL to generate source code, but you can easily have it create a mini runtime interpreter as well.
Simple topological sorting handled the dependency graph for me.
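For the curious, a minimal sketch of such a dependency sort (Kahn's algorithm over a variable-to-dependencies map; the representation is illustrative, not the original system's):

import java.util.*;

class DependencySorter {

    // Orders variables so each appears after everything it depends on.
    // deps maps each variable to the variables referenced in its defining
    // expression; every variable is assumed to have an entry.
    static List<String> topoSort(Map<String, Set<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        Deque<String> ready = new ArrayDeque<>();

        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            if (e.getValue().isEmpty()) {
                ready.add(e.getKey());
            }
            for (String d : e.getValue()) {
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
            }
        }

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String v = ready.remove();
            order.add(v);
            for (String dependent : dependents.getOrDefault(v, Collections.emptyList())) {
                if (remaining.merge(dependent, -1, Integer::sum) == 0) {
                    ready.add(dependent);
                }
            }
        }
        if (order.size() != deps.size()) {
            throw new IllegalStateException("cycle in rule dependencies");
        }
        return order;
    }
}

For Module 1 above, deps of {A: {}, B: {A}, C: {A, B}} yields the order A, B, C, which is exactly the emission order in the generated module_1().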
So, the nice thing about this is that while there was inevitable duplication in the final, generated logic, at least across modules, there wasn't any duplication in the rule base. What you as a developer/knowledge worker maintain is the rule base.
What is also nice is that you can change an equation and not worry so much about the side effects. For example, if I change C to C = A / 2, then, suddenly, B drops out completely. But the rule for IF (C < 10) doesn't change at all.
With a few simple tools, you can show the entire dependency graph, you can find orphaned variables (like B), etc.
By generating source code, it's going to run as fast as you want.
In my case, it was interesting to see a rule drop a single variable and see 500 lines of source code vanish from the resulting module. That's 500 lines I didn't have to crawl through by hand and remove during maintenance and development. All I had to do was change a single rule in my rule base and let "magic" happen.
I was even able to do some simple peephole optimization and eliminate variables.
It's not that hard to do. Your rule language can be XML, or a simple expression parser. No reason to go full boat Yacc or ANTLR on it if you don't want to. I'll put a plug in for S-Expressions, no grammar needed, brain dead parsing.
Spreadsheets also make a great input tool, actually. Just be strict on the formatting. Kind of sucks for merging in SVN (so, Don't Do That), but end users love it.
You may well be able to get away with an actual rule based system. My system wasn't dynamic at runtime, and didn't really need sophisticated goal seeking and inference, so I didn't need the overhead of such a system. But if one works for you out of the box, then happy day.
Oh, and for an implementation note, for those who don't believe you can hit the 64K code limit in a Java method, well I can assure you it can be done :).
Splitting a Large Data Object is very similar to Normalizing a Large Relational Table (first and second normal form). Follow the rules to reach at least second normal form and you may have a good decomposition of the original class.
From experience working also on R&D stuff with soft real-time performance constraints (and sometimes monster fat classes), I would suggest NOT using OR mappers. In such situations, you'll be better off "touching the metal" and working directly with JDBC result sets. This is my suggestion for apps with soft real-time constraints and massive amounts of data items per package. More importantly, if the number of distinct classes (not class instances, but class definitions) that need to be persisted is large, and you also have memory constraints in your specs, you will also want to avoid ORMs like Hibernate.
Going back to your original question:
What you seem to have is a typical problem of 1) mapping multiple data items into an OO model and 2) such multiple data items not exhibiting a good way of grouping or segregation (any attempt at grouping tends simply not to feel right). Sometimes the domain model does not lend itself to such aggregation, and coming up with an artificial way of doing so typically ends up in compromises that don't satisfy all design requirements and desires.
To make matters worse, an OO model typically requires/expects you to have all the items present in a class as the class's fields. Such a class is typically without behavior, so it is just a struct-like construct, aka a data envelope or data shuttle.
But such situations beg the following questions:
Does your application need to read/write all 40, 50+ data items at once, always?
*Must all data items be always present?*
I do not know the specifics of your problem domain, but in general I've found that we rarely ever need to deal with all data items at once. This is where the relational model shines: you don't have to query all columns of a table at once; you only pull those you need as projections of the table/view in question.
In a situation where we have a potentially large number of data items, but on average the number of data items being passed down the wire is less than the maximum, you'd be better off using a Properties pattern.
Instead of defining a monster envelope class holding all items:
// java pseudocode
class envelope
{
    field1, field2, field3... field_n;
    ...
    setFields(m1, m2, m3, ...m_n){ field1 = m1; .... };
    ...
}
Define a dictionary (based on a map for example):
// java pseudocode
public enum EnvelopeField { field1, field2, field3, ... field_n };

interface Envelope // package visible
{
    // typical map-based read methods.
    Object get(EnvelopeField field);
    boolean isEmpty();

    // new methods similar to existing ones in java.util.Map, but
    // more semantically aligned with envelopes and fields.
    Iterator<EnvelopeField> fields();
    boolean hasField(EnvelopeField field);
}

// a "marker" interface:
// code that only needs to read envelopes must operate on
// this interface.
public interface ReadOnlyEnvelope extends Envelope {}

// the read-write version of envelope; notice that
// it inherits from Envelope, but not from ReadOnlyEnvelope.
// this is done to make it difficult (but not impossible,
// unfortunately) to "cast up" a read-only envelope into a
// mutable one.
public interface MutableEnvelope extends Envelope
{
    Object put(EnvelopeField field, Object value);

    // to "cast down" or "narrow" into a read-only version that
    // cannot directly be "cast up" back into a mutable one.
    ReadOnlyEnvelope readOnly();
}

// the standard interface for map-based envelopes.
public interface MapBasedEnvelope
    extends Map<EnvelopeField, Object>, MutableEnvelope
{
}

// package visible, not public
class EnvelopeImpl extends HashMap<EnvelopeField, Object>
    implements MapBasedEnvelope, ReadOnlyEnvelope
{
    // get, put, isEmpty are automatically inherited from HashMap
    ...

    public Iterator<EnvelopeField> fields() { return this.keySet().iterator(); }
    public boolean hasField(EnvelopeField field) { return this.containsKey(field); }

    // the typecast is redundant, but it makes the intention obvious in code.
    public ReadOnlyEnvelope readOnly() { return (ReadOnlyEnvelope) this; }
}

public final class EnvelopeFactory
{
    public static MapBasedEnvelope create() { return new EnvelopeImpl(); }
}
No need to set up read-only internal flags. All you need to do is narrow your envelope instances to ReadOnlyEnvelope (which only provides getters).
Code that expects to read should operate on read-only envelopes and code that expects to change fields should operate on mutable envelopes. Creation of the actual instances would be compartmentalized in factories.
That is, you use the compiler to enforce things to be read-only (or allow things to be mutable) by establishing some code conventions, rules governing what interfaces to use where and how.
You can layer your code into sections that need to write, separate from code that only needs to read. Once that's done, simple code reviews (or even grep) can identify code that is using the wrong interface.
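To make the convention concrete, a hypothetical write-then-narrow flow using the interfaces above might look like:

// The producer fills a mutable envelope, then narrows it
// before handing it to reader code.
MapBasedEnvelope envelope = EnvelopeFactory.create();
envelope.put(EnvelopeField.field1, 42);
envelope.put(EnvelopeField.field2, "some value");

ReadOnlyEnvelope view = envelope.readOnly();
if (view.hasField(EnvelopeField.field1)) {
    Object value = view.get(EnvelopeField.field1); // readers never see put()
}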
Problems:
Non-public Parent Interface:
Envelope is not declared as a public interface to prevent erroneous/malicious code from casting a read-only envelope down to a base envelope and then back to a mutable envelope. The intended flow is from mutable to read-only only - it is not intended to be bi-directional.
The problem here is that extension of Envelope is restricted to the package that contains it. Whether that is a problem will depend on the particular domain and intended usage.
Factories:
The problem is that factories can (and most likely will) be very complex. Again, the nature of the beast.
Validation:
Another problem introduced with this approach is that now you have to worry about code that expects field X to be present. Having the original monster envelope class partially frees you from that worry because, at least syntactically, all fields are there...
... whether the fields are set or not, that was another matter that still remains with this new model I'm proposing.
So if you have client code that expects to see field X, the client code has to throw some type of exception if the field is not present (or to compute or read a sensible default somehow). In such cases, you will have to:
Identify patterns of field presence. Clients that expect field X to be present might be grouped separately (layered apart) from clients that expect some other field to be present.
Associate custom validators (proxies to read-only envelope interfaces) that either throw exceptions or compute default values for missing fields according to some rules (rules provided programmatically, with an interpreter, or with a rules engine.)
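A hypothetical sketch of that second option (the wrapper and its rule are mine, for illustration):

import java.util.Set;

// Illustrative validating proxy over the read-only interface:
// reads of required-but-missing fields fail fast with an exception.
class RequiredFieldsValidator {

    private final ReadOnlyEnvelope envelope;
    private final Set<EnvelopeField> required;

    RequiredFieldsValidator(ReadOnlyEnvelope envelope, Set<EnvelopeField> required) {
        this.envelope = envelope;
        this.required = required;
    }

    Object get(EnvelopeField field) {
        if (required.contains(field) && !envelope.hasField(field)) {
            throw new IllegalStateException("missing required field: " + field);
        }
        return envelope.get(field);
    }
}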
Lack of Typing:
This might be debatable, but people used to working with static typing might feel uneasy losing its benefits by going to a loosely typed, map-based approach. The counter-argument to this is that most of the web works on a loose-typing approach, even on the Java side (JSTL, EL).
Problems aside, the larger the maximum number of possible fields and the lower the average number of fields present at any given time, the more effective this approach will be with respect to performance. It adds code complexity, but that's the nature of the beast.
That complexity doesn't go away; it will be present either in your class model or in your validation code. Serialization and transfer down the wire are much more efficient, though, especially if you expect massive numbers of individual data transfers.
Hope it helps.
Actually, this looks like a frequent problem that game developers face: bloated classes holding numerous variables and methods because of a deep inheritance tree, etc.
There's this blog post about how and why to choose composition over inheritance; maybe it would help.
One way you may be able to intelligently break up a large data class is to look at patterns of access by client classes. For example, if a set of classes only accesses fields 1-20 and another set of classes only accesses fields 25-30, maybe those groups of fields belong in separate classes.
I'm revisiting data structures and algorithms to refresh my knowledge and from time to time I stumble across this problem:
Often, several data structures need to swap some elements of the underlying array. So I implement the swap() method in ADT1, ADT2, etc. as a private non-static method. The good thing is that, being a private method, I don't need to check the parameters; the bad thing is redundancy. But if I put the swap() method in a helper class as a public static method, I need to check the indices for validity on every call, making the swap very inefficient when many swaps are done.
So what should I do? Neglect the performance degradation, or write small but redundant code?
Better design should always trump small inefficiencies. Only address a performance problem if it is actually proven to be one.
Besides, what kind of checking are you doing anyway? Aren't naturally thrown ArrayIndexOutOfBoundsException and/or NullPointerException good enough?
It's worth noting that while there's a public static Collections.swap(List<?>, int, int), java.util.Arrays keeps its overloads (for int[], long[], byte[], etc.) all private static.
I'm not sure if Josh Bloch ever explicitly addressed why he did that, but one might guess that it has something to do with Item 25 in his book Effective Java, 2nd Edition: prefer lists to arrays. Yes, there will be some "performance degradation" in using List, but it's negligible, and the many advantages more than make up for it.
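For example, the list-based helper is public and ready to use:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SwapDemo {
    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>(Arrays.asList(3, 1, 2));
        Collections.swap(values, 0, 2); // swaps the elements at indices 0 and 2
        System.out.println(values);     // [2, 1, 3]
    }
}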
If you don't need to make the checks in the private method, don't make them in the static one. This will result in a RuntimeException for invalid calls, but since all your calls are supposed to be valid, it will be as though you've used a private method.
It's always better for your code to be less efficient than to be duplicated (a few constant-time calls are negligible). At least that is what is taught at my university.
Code duplication produces bugs. So you prefer your program to work correctly rather than to work a little faster.
If you want to avoid the constraint checks, what comes to my mind is that you can either accept the naturally thrown exceptions, as polygenelubricants suggested, or create an abstract super class for all your array-based data structures. That abstract class would have a protected swap method that does not check its parameters (see the sketch below). It's not perfect, but I guess that a protected method that does not check parameters is better than a public method that doesn't.
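A minimal sketch of that idea (class names are illustrative):

// Array-backed structures share an unchecked swap through a common parent.
abstract class ArrayBackedStructure {

    protected int[] elements;

    // No bounds checking: subclass code is trusted to pass valid indices,
    // exactly as it would be with a private swap in each class.
    protected final void swap(int i, int j) {
        int tmp = elements[i];
        elements[i] = elements[j];
        elements[j] = tmp;
    }
}

class BinaryHeap extends ArrayBackedStructure {
    // ... heap operations call swap(parent, child) without re-checking indices
}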