In short: Using AmazonS3Client to connect to a local instance of MinIO results in an UnknownHostException being thrown, because the URL is resolved to http://{bucket_name}.localhost:port.
Detailed description of the problem:
I'm creating an integration test for a Java service that uses the AmazonS3Client library to retrieve content from S3. I'm using MinIO inside a Testcontainers container to play the role of Amazon S3, as follows:
@Container
static final GenericContainer<?> minioContainer = new GenericContainer<>("minio/minio:latest")
        .withCommand("server /data")
        .withEnv(
                Map.of(
                        "MINIO_ACCESS_KEY", AWS_ACCESS_KEY.getValue(),
                        "MINIO_SECRET_KEY", AWS_SECRET_KEY.getValue()
                )
        )
        .withExposedPorts(MINIO_PORT)
        .waitingFor(new HttpWaitStrategy()
                .forPath("/minio/health/ready")
                .forPort(MINIO_PORT)
                .withStartupTimeout(Duration.ofSeconds(10)));
and then I export its URL dynamically (because Testcontainers maps the exposed port to a random host port) using something like this:
String.format("http://%s:%s", minioContainer.getHost(), minioContainer.getFirstMappedPort())
which in turn results in a url like this:
http://localhost:54123
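For context, this is roughly how that dynamically built endpoint can be handed to the Spring test context; a minimal sketch assuming Spring Boot's @DynamicPropertySource, with s3.endpoint as a hypothetical property key:
@DynamicPropertySource
static void s3Properties(DynamicPropertyRegistry registry) {
    // "s3.endpoint" is a hypothetical property key; use whatever key your configuration actually reads
    registry.add("s3.endpoint", () ->
            String.format("http://%s:%s", minioContainer.getHost(), minioContainer.getFirstMappedPort()));
}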
The problem I encountered at runtime in my test lies within the actual implementation of AmazonS3Client.getObject(String, String): when creating the request it performs the following validation (class S3RequestEndpointResolver, method resolveRequestEndpoint):
...
if (shouldUseVirtualAddressing(endpoint)) {
    request.setEndpoint(convertToVirtualHostEndpoint(endpoint, bucketName));
    request.setResourcePath(SdkHttpUtils.urlEncode(getHostStyleResourcePath(), true));
} else {
    request.setEndpoint(endpoint);
    request.setResourcePath(SdkHttpUtils.urlEncode(getPathStyleResourcePath(), true));
}
}

private boolean shouldUseVirtualAddressing(final URI endpoint) {
    return !isPathStyleAccess && BucketNameUtils.isDNSBucketName(bucketName)
            && !isValidIpV4Address(endpoint.getHost());
}
This returns true for the URL http://localhost:54123, and as a result this method
private static URI convertToVirtualHostEndpoint(URI endpoint, String bucketName) {
    try {
        return new URI(String.format("%s://%s.%s", endpoint.getScheme(), bucketName, endpoint.getAuthority()));
    } catch (URISyntaxException e) {
        throw new IllegalArgumentException("Invalid bucket name: " + bucketName, e);
    }
}
prepends the bucket name to the host, resulting in http://mybucket.localhost:54123, which ultimately causes an UnknownHostException to be thrown. I can work around this by setting the host to 0.0.0.0 instead of localhost, but this is hardly a solution.
Therefore I was wondering: i) is this a bug/limitation in AmazonS3Client? or ii) am I the one missing something, e.g. a poor configuration?
Thank you for your time
I was able to find a solution. Looking at the method used by the resolver:
private boolean shouldUseVirtualAddressing(final URI endpoint) {
    return !isPathStyleAccess && BucketNameUtils.isDNSBucketName(bucketName)
            && !isValidIpV4Address(endpoint.getHost());
}
which was returning true and sending the flow down the wrong concatenation path, I found that the first variable, isPathStyleAccess, can be set when building the client. In my case, I created a bean in my test configuration to override the main one:
@Bean
@Primary
public AmazonS3 amazonS3() {
    return AmazonS3Client.builder()
            .withPathStyleAccessEnabled(true) // HERE
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(AWS_ACCESS_KEY.getValue(), AWS_SECRET_KEY.getValue())
            ))
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(s3Endpoint, region)
            )
            .build();
}
For SDK v2, the solution was pretty similar:
S3AsyncClient s3 = S3AsyncClient.builder()
        .forcePathStyle(true) // adding this one
        .endpointOverride(new URI(s3Endpoint))
        .credentialsProvider(() -> AwsBasicCredentials.create(s3Properties.getAccessKey(), s3Properties.getSecretKey()))
        .build();
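A minimal sketch of an equivalent alternative, assuming a recent SDK v2 release where S3Configuration exposes pathStyleAccessEnabled (this is an option beyond what the solution above uses):
// alternative: enable path-style access via the service configuration object
S3AsyncClient s3 = S3AsyncClient.builder()
        .serviceConfiguration(S3Configuration.builder()
                .pathStyleAccessEnabled(true)
                .build())
        .endpointOverride(new URI(s3Endpoint))
        .credentialsProvider(() -> AwsBasicCredentials.create(s3Properties.getAccessKey(), s3Properties.getSecretKey()))
        .build();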
Related
We would like to access the same RMI-server from different hosts in our network (dev-pc via ssh-tunnel, jenkins-server via direct connection). The problem is that the RMI-host is known under different names on the different client hosts.
This is not a problem when we connect to the registry, because we can set the target host name like this:
Registry registry = LocateRegistry.getRegistry("hostname", 10099, new CustomSslRMIClientSocketFactory());
But when we lookup the remote object like below, it contains the wrong hostname.
HelloRemote hello = (HelloRemote) registry.lookup(HelloRemote.class.getSimpleName());
In the debugger I can observe that the host is set as needed on the Registry object, but not on the Stub.
We get a connection timeout as soon as we call a method on the Stub. If I manually change the host value to localhost in the debugger the method invocation succeeds.
I'm aware that I can set java.rmi.server.hostname on the server side but then the connection from jenkins does not work anymore.
The simplest solution would be to force RMI to use, for all Stubs retrieved from a registry, the same host that was used for the registry itself. Is there a better way than replacing the host value in the Stub via reflection?
Unfortunately RMI has a deeply built-in assumption that the server host has a single 'most public' IP address or hostname. This explains the java.rmi.server.hostname fiasco. If your system doesn't comply you are out of luck.
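For reference, pinning that single hostname on the server side usually looks like the sketch below; it only helps when one externally reachable name exists, which is exactly what is missing in this setup:
// must run before any remote object is exported, e.g. first thing in the server's main()
System.setProperty("java.rmi.server.hostname", "the-one-public-hostname");
// or equivalently on the command line: -Djava.rmi.server.hostname=the-one-public-hostname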
As pointed out by EJP, there seems to be no elegant out-of-the-box solution.
I can think of two inelegant ones:
Changing the network configuration on every client host so that traffic to the unreachable IP is redirected to localhost instead.
Changing the host value on the "hello" object via reflection.
I went for the second option because I'm in a test environment and the code in question will not go to production anyway. I wouldn't recommend doing this otherwise, because the code might break with future versions of Java and won't work if a SecurityManager is in place.
However, here is my working code:
private static void forceRegistryHostNameOnStub(Object registry, Object stub) {
    try {
        String regHost = getReferenceToInnerObject(registry, "ref", "ref", "ep", "host").toString();
        Object stubEp = getReferenceToInnerObject(stub, "h", "ref", "ref", "ep");
        Field fStubHost = getInheritedPrivateField(stubEp, "host");
        fStubHost.setAccessible(true);
        fStubHost.set(stubEp, regHost);
    } catch (Exception e) {
        LOG.error("Applying the registry host to the Stub failed.", e);
    }
}

private static Object getReferenceToInnerObject(Object from, String... objectHierarchy) throws NoSuchFieldException, IllegalArgumentException, IllegalAccessException {
    Object ref = from;
    for (String fieldname : objectHierarchy) {
        Field f = getInheritedPrivateField(ref, fieldname);
        f.setAccessible(true);
        ref = f.get(ref);
    }
    return ref;
}

private static Field getInheritedPrivateField(Object from, String fieldname) throws NoSuchFieldException {
    Class<?> i = from.getClass();
    while (i != null && i != Object.class) {
        try {
            return i.getDeclaredField(fieldname);
        } catch (NoSuchFieldException e) {
            // ignore
        }
        i = i.getSuperclass();
    }
    return from.getClass().getDeclaredField(fieldname);
}
The method invocation on the Stub succeeds now:
Registry registry = LocateRegistry.getRegistry("hostname", 10099, new CustomSslRMIClientSocketFactory());
HelloRemote hello = (HelloRemote) registry.lookup(HelloRemote.class.getSimpleName());
forceRegistryHostNameOnStub(registry, hello); // manipulate the stub
hello.doSomething(); // succeeds
I'm having trouble writing unit tests for a method that overwrites a file in an S3 bucket. The method grabs the original metadata of the file and then overwrites the file with a new, modified version that keeps the same metadata.
What I want the test to do is verify that inner calls like getObjectMetadata and putObject are made with the right parameters.
Here is the method:
public void upload(File file, String account, String bucketName) {
    String key = "fakekey";
    ObjectMetadata objMData = client.getObjectMetadata(bucketName, key).clone();
    try {
        // cloning metadata so that the overwritten file has the same metadata as the original file
        client.putObject(new PutObjectRequest(bucketName, key, file).withMetadata(objMData));
    } catch (AmazonClientException e) {
        e.printStackTrace();
    }
}
Here is my test method:
@Mock
private AmazonS3 client = new AmazonS3Client();

public void testUpload() {
    S3Uploader uploader = new S3Uploader(client);
    File testFile = new File("file.txt");
    String filename = "file.txt";
    String bucketname = "buckettest";
    String account = "account";
    String key = account + filename;
    ObjectMetadata objMetadata = Mockito.mock(ObjectMetadata.class);

    when(client.getObjectMetadata(bucketname, key).clone()).thenReturn(objectMetadata);
    // can I make this line do nothing? doNothing()??
    doNothing.when(client.putObject(Matchers.eq(new PutObjectRequest(bucketName, key, file).withMetadata(objMData))));
    uploader.upload(aFile, anAccount, bucketName);
    // how do I verify that methods were called correctly??
    // what can I assert here?
}
I'm getting a NullPointerException at this line in my test:
when(client.getObjectMetadata(bucketname, key).clone()).thenReturn(objectMetadata);
I'm not even able to reach the method call. Honestly, what I'm pretty much asking is, how do I verify that this upload() method is correct?
The method you showed in your question uses a client instance to talk to S3. The client instance in the class to which this method belongs is either injected (at construction time, for example) or created (via a factory, perhaps). Assuming it is injected when the containing class is created then your test case might look like this:
@Test
public void testSomething() {
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    S3Uploader uploader = new S3Uploader(client);
    String bucketName = "aBucketName";

    // ensures that the getObjectMetadata call fails, thereby throwing the exception which your method catches
    Mockito.when(client.getObjectMetadata(Matchers.eq(bucketName), Matchers.eq("fakekey")))
            .thenThrow(new AmazonServiceException("boom"));

    uploader.upload(aFile, anAccount, bucketName);

    // at this stage you would typically assert that the response
    // from the upload invocation is valid but as things stand
    // upload() swallows the exception so there's nothing to assert against
}
@Test
public void testSomethingElse() {
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    S3Uploader uploader = new S3Uploader(client);
    String bucketName = "aBucketName";
    String key = "fakekey";
    File aFile = ...;
    ObjectMetadata objMData = ...;

    // ensures that the getObjectMetadata call succeeds, thereby allowing the call to continue to the subsequent putObject invocation
    Mockito.when(client.getObjectMetadata(Matchers.eq(bucketName), Matchers.eq(key)))
            .thenReturn(objMData);

    // ensures that the putObject call fails, thereby throwing the exception which your method catches
    Mockito.when(client.putObject(Matchers.eq(new PutObjectRequest(bucketName, key, aFile).withMetadata(objMData))))
            .thenThrow(new AmazonServiceException("boom"));

    uploader.upload(aFile, anAccount, bucketName);

    // at this stage you would typically assert that the response
    // from the upload invocation is valid but as things stand
    // upload() swallows the exception so there's nothing to assert against
}
The above code uses Mockito to mock the AmazonS3 client; this allows you to tweak the behaviour of the client instance so that your test invocations go down the 'throw exception' paths.
On a side note, the catch clause looks a little odd, since AmazonS3.putObject and AmazonS3.getObjectMetadata are both declared to throw AmazonServiceException, and AmazonServiceException extends AmazonClientException.
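As for the original "how do I verify?" question, here is a minimal sketch using Mockito's verify together with an ArgumentCaptor; the S3Uploader and upload names are taken from the question, while the rest of the set-up is illustrative:
@Test
public void testUploadKeepsOriginalMetadata() {
    AmazonS3 client = Mockito.mock(AmazonS3.class);
    S3Uploader uploader = new S3Uploader(client);
    File aFile = new File("file.txt");
    String bucketName = "aBucketName";
    ObjectMetadata objMData = new ObjectMetadata();

    // stub the metadata lookup so the method under test can proceed to putObject
    Mockito.when(client.getObjectMetadata(bucketName, "fakekey")).thenReturn(objMData);

    uploader.upload(aFile, "anAccount", bucketName);

    // verify getObjectMetadata was called with the expected parameters
    Mockito.verify(client).getObjectMetadata(bucketName, "fakekey");

    // capture the PutObjectRequest that was sent and assert on its contents
    ArgumentCaptor<PutObjectRequest> captor = ArgumentCaptor.forClass(PutObjectRequest.class);
    Mockito.verify(client).putObject(captor.capture());
    Assert.assertEquals(bucketName, captor.getValue().getBucketName());
    Assert.assertEquals("fakekey", captor.getValue().getKey());
    Assert.assertNotNull(captor.getValue().getMetadata());
}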
I would suggest you use this project: https://github.com/findify/s3mock.
It creates a mock of an S3 bucket, and then you can test what happens when the bucket you look for exists or not.
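From memory, wiring it up looks roughly like the sketch below; treat the builder method names as assumptions to double-check against the project's README:
// start the in-memory S3 mock on a fixed port
S3Mock api = new S3Mock.Builder().withPort(8001).withInMemoryBackend().build();
api.start();

// point a plain AmazonS3 client at the mock, again with path-style access enabled
AmazonS3 client = AmazonS3ClientBuilder.standard()
        .withPathStyleAccessEnabled(true)
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8001", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
        .build();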
I am trying to use Google Cloud Endpoints for Java from Android, as follows:
client:
Core.Builder coreBuilder = new Core.Builder(
        AndroidHttp.newCompatibleTransport(), new GsonFactory(), null);
coreBuilder.setApplicationName("myapp");
if (MainActivity.ENDPOINTS_URL != null) {
    coreBuilder.setRootUrl(MainActivity.ENDPOINTS_URL);
    coreBuilder.setGoogleClientRequestInitializer(new GoogleClientRequestInitializer() {
        public void initialize(AbstractGoogleClientRequest<?> request)
                throws IOException {
            request.setDisableGZipContent(true);
        }
    });
}
Core core = coreBuilder.build();
myList = core.asdf("x=&+x", myObject);
server:
@ApiMethod(name = "asdf")
public List<String> asdf(@Named("param1") String param1, MyObject myObject) {
    if (param1.equals("x=&+x")) {
        // should go here, but never does
    }
    ...
While it mostly works, the param1 string does not get transmitted correctly: "x=&+x" arrives at the server as "x=&%2Bx". Is this a known bug? Do arguments have to be manually encoded somehow? Or is this somehow particular to my environment?
App Engine SDK 1.8.8 for Java, Google API client 1.17.0-rc, using the dev environment.
Cheers,
Andres
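As a purely illustrative JDK-level check (an assumption about where the extra encoding creeps in, not a statement about what the Endpoints framework does internally), decoding the value observed on the server once more does recover the original string:
String received = "x=&%2Bx";                                    // what arrives at the server
String decoded = java.net.URLDecoder.decode(received, "UTF-8"); // "%2B" becomes "+", giving "x=&+x"
System.out.println(decoded.equals("x=&+x"));                    // prints true
// caveat: URLDecoder also turns a literal '+' into a space, so this is a diagnostic, not a general fix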
I'm writing a Play 2.0 Java application that allows users to upload files. Those files are stored on a third-party service that I access using a Java library; the method I use in this API has the following signature:
void store(InputStream stream, String path, String contentType)
I've managed to get uploads working using the following simple controller:
public static Result uploadFile(String path) throws FileNotFoundException {
    MultipartFormData body = request().body().asMultipartFormData();
    FilePart filePart = body.getFile("files[]");
    InputStream is = new FileInputStream(filePart.getFile());
    myApi.store(is, path, filePart.getContentType());
    return ok();
}
My concern is that this solution is not efficient, because by default the Play framework stores all the data uploaded by the client in a temporary file on the server and then calls my uploadFile() method in the controller.
In a traditional servlet application I would have written a servlet behaving this way:
myApi.store(request.getInputStream(), ...)
I have been searching everywhere and didn't find any solution. The closest example I found is "Why makes calling error or done in a BodyParser's Iteratee the request hang in Play Framework 2.0?", but I couldn't figure out how to modify it to fit my needs.
Is there a way in Play 2 to achieve this behaviour, i.e. having the data uploaded by the client go "through" the web application directly to another system?
Thanks.
I've been able to stream data to my third-party API using the following Scala controller code:
def uploadFile() =
    Action(parse.multipartFormData(myPartHandler)) {
        request => Ok("Done")
    }

def myPartHandler: BodyParsers.parse.Multipart.PartHandler[MultipartFormData.FilePart[Result]] = {
    parse.Multipart.handleFilePart {
        case parse.Multipart.FileInfo(partName, filename, contentType) =>
            // Still dirty: the path of the file is in the partName...
            val path = partName

            // Set up the PipedOutputStream here, give the input stream to a worker thread
            val pos: PipedOutputStream = new PipedOutputStream()
            val pis: PipedInputStream = new PipedInputStream(pos)
            val worker: UploadFileWorker = new UploadFileWorker(path, pis)
            worker.contentType = contentType.get
            worker.start()

            // Read content to the POS
            Iteratee.fold[Array[Byte], PipedOutputStream](pos) { (os, data) =>
                os.write(data)
                os
            }.mapDone { os =>
                os.close()
                Ok("upload done")
            }
    }
}
The UploadFileWorker is a really simple Java class that contains the call to the third-party API.
public class UploadFileWorker extends Thread {
    String path;
    PipedInputStream pis;
    public String contentType = "";

    public UploadFileWorker(String path, PipedInputStream pis) {
        super();
        this.path = path;
        this.pis = pis;
    }

    public void run() {
        try {
            myApi.store(pis, path, contentType);
            pis.close();
        } catch (Exception ex) {
            ex.printStackTrace();
            try { pis.close(); } catch (Exception ex2) {}
        }
    }
}
It's not completely perfect because I would have preferred to receive the path as a parameter to the Action, but I haven't been able to do so. I have therefore added a piece of JavaScript that updates the name of the input field (and thus the partName), and it does the trick.
I'm trying to run a Gemfire client app but I'm getting an IllegalStateException when running the following code:
// clientPool is the name of the pool from the client
DynamicRegionFactory.Config config = new DynamicRegionFactory.Config(null, (String) "clientPool", false, true);
dynRegFact = DynamicRegionFactory.get();
dynRegFact.open(config);

_cache = new ClientCacheFactory().set("locators", "")
        .set("mcast-port", "0").set("log-level", "error")
        .set("cache-xml-file", xmlFileName)
        .create();
Exception in thread "main" java.lang.IllegalStateException: The client pool of a DynamicRegionFactory must be configured with queue-enabled set to true.
I can't figure out how to set queue-enabled to true. I would appreciate some code, not answers like "check this part of the documentation"; I've already looked everywhere.
You should enable subscriptions in your pool. Just add the subscription-enabled="true" attribute to your pool configuration.
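For example, in the client's cache XML the pool declaration would look roughly like this (the pool name and locator address are placeholders):
<pool name="clientPool" subscription-enabled="true">
    <locator host="localhost" port="10334"/>
</pool>
If the pool is created programmatically instead, the equivalent call is ClientCacheFactory.setPoolSubscriptionEnabled(true).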
Note: your client should support transactions. It's better to create dynamic regions on the cache servers and, from the client, call a remote function.
Example:
Function:
public class CreateRegionFunction extends FunctionAdapter {

    @Override
    public void execute(FunctionContext fc) {
        String name = (String) fc.getArguments();
        Region reg = DynamicRegionFactory.get().createDynamicRegion("/parent", name);
        if (reg == null) {
            fc.getResultSender().lastResult("ERROR");
        } else {
            fc.getResultSender().lastResult("DONE");
        }
    }

    @Override
    public String getId() {
        return "create-region-function";
    }
}
Server side:
CreateRegionFunction creatRegFun = new CreateRegionFunction();
FunctionService.registerFunction(creatRegFun);
Add dynamic-region-factory in your server cache:
<dynamic-region-factory />
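In the server's cache.xml that declaration sits inside the cache element, roughly like this (the cache-server port is a placeholder, and the element ordering follows the cache DTD):
<cache>
    <dynamic-region-factory/>
    <cache-server port="40404"/>
    ...
</cache>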
Client side:
FunctionService.onServer(PoolManager.find("poolName"))
        .withArgs("child")
        .execute("create-region-function")
        .getResult();
In this case it's not obligatory to use DynamicRegionFactory; you can use RegionFactory and create root regions.
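A minimal sketch of that RegionFactory alternative on the server side (the region shortcut and the name variable are placeholders):
// inside the function executing on the server
Cache cache = CacheFactory.getAnyInstance();
Region<String, Object> region = cache
        .<String, Object>createRegionFactory(RegionShortcut.PARTITION)
        .create(name);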